AI transcript
0:00:20 What do you think about when you think about AI?
0:00:25 Maybe chatbots giving you new lasagna recipes,
0:00:29 research assistants helping you finish that paper.
0:00:33 Do you think about machines taking your job?
0:00:37 Maybe you think of something even more ominous,
0:00:41 like Skynet robots wiping out humanity.
0:00:46 If you’re like me, you probably think of all those things,
0:00:47 depending on the day.
0:00:50 And that’s sort of the point.
0:00:55 AI is not well understood, even by the people creating it.
0:00:58 And even though we all know it’s a technology
0:01:00 that’s going to change our lives,
0:01:03 that’s really all we know at this point.
0:01:10 So how do we confront this uncertainty?
0:01:13 How do we navigate the current moment?
0:01:17 And how do we, the people who have been told
0:01:19 that we will be impacted by AI,
0:01:21 but don’t seem to have much of a say
0:01:23 in how the AI is being built,
0:01:26 engage in the conversation?
0:01:31 I’m Sean Illing, and this is The Gray Area.
0:01:45 Today’s guest is Jaron Lanier.
0:01:48 He’s a virtual reality pioneer,
0:01:50 a digital philosopher,
0:01:54 and the author of several best-selling books on technology.
0:01:57 He’s also one of the most profound critics
0:02:01 of Silicon Valley and the business model driving it.
0:02:04 I wanted to bring Jaron on the show
0:02:07 for the first episode of this special series on AI
0:02:10 because I think he’s uniquely positioned
0:02:14 to speak both to the technological side of AI,
0:02:16 what’s happening, where it’s going,
0:02:20 and also to the human side.
0:02:24 Jaron’s a computer scientist who loves technology.
0:02:29 But at his core, he’s a humanist
0:02:32 who’s always thinking about what technologies are doing to us
0:02:36 and how our understanding of these tools
0:02:39 will inevitably determine how they’re used.
0:02:43 Maybe what Jaron does the best, though,
0:02:45 is offer a different lens
0:02:47 through which to view these technologies.
0:02:51 We’re encouraged to treat these machines
0:02:54 as though they’re godlike,
0:02:56 as though they’re thinking for themselves.
0:03:01 Indeed, they’re designed to make you feel that way
0:03:04 because it adds to the mystique around them
0:03:07 and obscures the truth about how they really work.
0:03:12 But Jaron’s plea is to be careful
0:03:15 about thoughtlessly adopting the language
0:03:17 that the AI creators give us
0:03:18 to describe their creation
0:03:21 because that language structures
0:03:25 not only how we think about these technologies,
0:03:27 but what we do with them.
0:03:35 Jaron Lanier, welcome to the show.
0:03:36 That’s me. Hey.
0:03:39 So look, I have heard
0:03:43 so many of these big picture conversations about AI
0:03:48 and they often begin with a question
0:03:52 about how or whether AI is going to take over the world.
0:03:55 But I discovered very quickly
0:03:57 that you don’t accept the terms of that question,
0:03:59 which is why I’m not going to ask it.
0:04:01 But I thought it would be useful
0:04:03 as a beginning to ask you
0:04:05 why you find questions like that
0:04:07 or claims like that ridiculous.
0:04:10 Oh, well, you know,
0:04:12 when it comes to AI,
0:04:15 the whole technical field
0:04:16 is kind of defined
0:04:19 by an almost metaphysical assertion,
0:04:22 which is we are creating intelligence.
0:04:23 Well, what is intelligence?
0:04:26 Something human.
0:04:28 The whole field was founded
0:04:31 by Alan Turing’s thought experiment
0:04:32 called the Turing test,
0:04:37 where if you can fool a human
0:04:38 into thinking you’ve made a human,
0:04:40 then you might as well have made a human
0:04:42 because what other tests could there be?
0:04:45 Which in a way is fair enough.
0:04:45 On the other hand,
0:04:47 what other scientific field
0:04:50 other than maybe supporting stage magicians
0:04:53 is entirely based on being able to fool people?
0:04:53 I mean, it’s stupid.
0:04:56 Fooling people in itself accomplishes nothing.
0:04:58 There’s no productivity.
0:04:59 There’s no insight
0:05:01 unless you’re studying
0:05:03 the cognition of being fooled, of course.
0:05:06 So there’s an alternative way
0:05:07 to think about what we do
0:05:09 with what we call AI,
0:05:12 which is that there’s no new entity.
0:05:14 There’s nothing intelligent there.
0:05:16 What there is is a new
0:05:17 and in my opinion,
0:05:18 sometimes quite useful
0:05:21 form of collaboration between people.
0:05:23 If you look at something like the Wikipedia,
0:05:25 where people mash up
0:05:27 a lot of their communications into one thing,
0:05:30 you can think of that as a step on the way
0:05:32 to what we call large model AI,
0:05:34 where we take all the data that we have
0:05:35 and we put it together
0:05:39 in a way that allows more interpolation
0:05:43 and more commingling than previous methods.
0:05:47 And I think that can be of great use,
0:05:49 but I don’t think there’s any requirement
0:05:52 that we perceive that as a new entity.
0:05:53 Now, you might say,
0:05:54 well, what’s the harm if we do?
0:05:56 That’s a fair question.
0:05:57 Like, who cares?
0:05:58 If somebody wants to think of it
0:06:00 as a new type of person
0:06:02 or even a new type of God or whatever,
0:06:03 what’s wrong with that?
0:06:06 Potentially nothing.
0:06:08 People believe all kinds of things all the time.
0:06:12 But, in the case of our technology,
0:06:15 let me put it this way.
0:06:19 If you’re a mathematician or a scientist,
0:06:25 you can do what you do
0:06:27 in a kind of an abstract way.
0:06:28 Like, you can say,
0:06:30 I’m furthering math.
0:06:33 And, in a way, that’ll be true
0:06:35 even if nobody else ever even perceives
0:06:36 that I’ve done it.
0:06:37 I’ve written down this proof.
0:06:40 But that’s not true for technologists.
0:06:43 Technologists only make sense
0:06:46 if there’s a designated beneficiary.
0:06:49 Like, you have to make technology for someone.
0:06:52 And, as soon as you say
0:06:56 the technology itself is a new someone,
0:07:00 you stop making sense as a technologist.
0:07:01 Right?
0:07:03 Let me actually take up that question
0:07:04 that you just posed a second ago
0:07:05 with a thought,
0:07:07 I’ve heard from you,
0:07:09 which is something to the effect of,
0:07:11 I think the way you put it is
0:07:13 the easiest way to mismanage a technology
0:07:15 is to misunderstand it.
0:07:17 So, to answer your question…
0:07:18 Sounds like me, I guess.
0:07:19 Yeah. Okay.
0:07:22 If we make the mistake,
0:07:23 which is now common,
0:07:26 to insist that AI is, in fact,
0:07:28 some kind of god or creature
0:07:30 or entity or oracle,
0:07:31 whatever term you prefer,
0:07:33 instead of a tool as you define it,
0:07:34 the implication is that
0:07:37 that would be a consequential mistake, right?
0:07:39 That we will mismanage the technology
0:07:40 by misunderstanding it.
0:07:41 So, is that not quite right?
0:07:42 Am I not quite understanding?
0:07:43 No, I think that’s right.
0:07:46 I think when you treat the technology
0:07:47 as its own beneficiary,
0:07:49 you miss a lot of opportunities
0:07:50 to make it better.
0:07:52 Like, I see this in AI all the time.
0:07:53 I see people saying,
0:07:55 well, if we did this,
0:07:56 it would pass the Turing test better,
0:07:57 and if we did that,
0:07:58 it would seem more like
0:07:59 it was an independent mind.
0:08:01 But those are all goals
0:08:01 that are different
0:08:04 from it being economically useful.
0:08:05 They’re different from it
0:08:08 being useful to any particular user.
0:08:09 They’re just these weird,
0:08:12 to me, almost religious ritual goals
0:08:13 or something.
0:05:15 And so every time
0:05:16 you’re devoting yourself to that,
0:08:18 it means you’re not devoting yourself
0:08:20 to making it better.
0:08:22 Like, an example is,
0:08:25 we have, in my view,
0:08:28 deliberately designed large model AI
0:08:32 to obscure the original human sources
0:08:34 of the data that the AI is trained on
0:08:36 to help create this illusion
0:08:37 of the new entity.
0:08:38 But when we do that,
0:08:41 we make it harder to do quality control.
0:08:43 We make it harder to do authentication
0:08:48 and to detect malicious uses of the model
0:08:52 because we can’t tell what the intent is,
0:08:54 what data it’s drawing upon.
0:08:56 We’re sort of willfully making ourselves
0:08:58 kind of blind in a way
0:09:00 that we probably don’t really need to.
0:09:01 And I really want to emphasize
0:09:03 from a metaphysical point of view,
0:09:05 I can’t prove,
0:09:06 and neither can anyone else,
0:09:08 that a computer is alive or not
0:09:09 or conscious or not or whatever.
0:09:11 I mean, all that stuff
0:09:13 is always going to be a matter of faith.
0:09:15 That’s just the way it is.
0:09:17 That’s what we got around here.
0:09:19 But what I can say
0:09:21 is that this emphasis
0:09:22 on trying to make the models
0:09:25 seem like they’re freestanding new entities
0:09:27 does blind us
0:09:29 to some ways we could make them better.
0:09:30 And so I think, like, why bother?
0:09:32 What do we get out of that?
0:09:32 Not a lot.
0:09:34 So do you think maybe
0:09:35 the cardinal mistake
0:09:37 with a lot of this kind of thinking
0:09:38 is to assume
0:09:42 that artificial intelligence
0:09:43 is something that’s in competition
0:09:45 with human intelligence
0:09:46 and human abilities,
0:09:47 that that kind of misunderstanding
0:09:48 sets us off on a course
0:09:50 for a lot of other kinds
0:09:51 of misunderstandings?
0:09:53 I wouldn’t choose that language
0:09:54 because then the natural thing
0:09:55 somebody’s going to say
0:09:56 who’s a true believer
0:09:57 that the AI is coming alive,
0:09:58 they’re going to say,
0:09:59 yeah, you’re right.
0:10:00 It’s not competition.
0:10:01 We’re going to align them
0:10:02 and they’re going to be
0:10:03 our collaborators
0:10:05 or whatever.
0:10:06 So that, to me,
0:10:07 doesn’t go far enough.
0:10:09 My own way of thinking
0:10:11 is that I’m able
0:10:12 to improve the models
0:10:13 when I say
0:10:14 there’s no new entity there.
0:10:15 I just say they don’t,
0:10:15 they’re not there.
0:10:16 They don’t exist
0:10:17 as separate entities.
0:10:18 They’re just collaborations
0:10:19 of people.
0:10:20 I have to go that far
0:10:22 to get the clarity
0:10:23 to improve them.
0:10:26 It might be a little late
0:10:27 in the language game
0:10:29 to replace a term
0:10:30 like artificial intelligence,
0:10:30 but if you could,
0:10:31 do you have a better one?
0:10:34 I have had the experience
0:10:35 of coming up with terms
0:10:37 that were widely adopted
0:10:37 in society.
0:10:38 I came up with
0:10:39 virtual reality
0:10:40 and some other things
0:10:41 when I was young
0:10:44 and I have seen that
0:10:45 even when you get
0:10:46 to coin the term,
0:10:47 you don’t get to define it
0:10:50 and I don’t love
0:10:51 the way people think
0:10:52 of virtual reality
0:10:53 typically today.
0:10:54 It’s lost a little bit
0:10:55 of its old humanism,
0:10:56 I would say.
0:10:59 So that experience
0:11:00 has led me to feel
0:11:01 that it’s really
0:11:02 younger generations
0:11:03 who should come up
0:11:03 with their own terms.
0:11:04 So what I would prefer
0:11:06 to see is younger people
0:11:07 reject our terms
0:11:09 and come up
0:11:09 with their own.
0:11:11 Fair enough.
0:11:14 I’ve read a lot
0:11:14 of your work
0:11:15 on AI
0:11:17 and I’ve listened
0:11:19 to a lot of your interviews
0:11:21 and I take your point
0:11:22 that AI
0:11:25 is a distillation
0:11:26 of all these human inputs
0:11:27 fundamentally.
0:11:30 But for you, at what point
0:11:32 does or can complexity
0:11:35 start looking like autonomy
0:11:37 and what would autonomy
0:11:38 even mean
0:11:39 that the thing starts
0:11:40 making its own decisions
0:11:41 and is that the simple
0:11:42 definition of that?
0:11:43 This is an obsession
0:11:44 that people have
0:11:45 but you have to understand
0:11:46 it’s a religious
0:11:48 and entirely subjective
0:11:50 or sort of cultural obsession
0:11:51 not a scientific one.
0:11:52 It’s your judgment
0:11:54 of how you want to see
0:11:55 the start of autonomy.
0:11:58 So I love complex systems
0:11:59 and I love different levels
0:12:00 of description
0:12:01 and I love the independence
0:12:03 of different levels
0:12:03 of granularity
0:12:04 in physics,
0:12:06 so I’m utterly
0:12:07 as obsessed
0:12:07 as anyone
0:12:08 with that
0:12:10 but it’s important
0:12:10 to distinguish
0:12:12 that fascination
0:12:12 which is a scientific
0:12:13 fascination
0:12:14 with the question
0:12:16 of does crossing
0:12:17 some threshold
0:12:18 make something
0:12:19 human or not?
0:12:21 because the question
0:12:22 of humanness
0:12:24 or of becoming
0:12:24 an entity
0:12:26 that we care about
0:12:27 in our planning,
0:12:28 of creating something
0:12:29 that itself
0:12:30 is a beneficiary
0:12:31 of our technology
0:12:32 that question
0:12:33 has to be
0:12:34 a matter of faith
0:12:36 we just have
0:12:36 to accept
0:12:38 that our culture
0:12:39 our law
0:12:40 our ability
0:12:41 to be technologists
0:12:42 ultimately rests
0:12:43 on values
0:12:45 that in a sense
0:12:45 we pull out
0:12:46 of our asses
0:12:47 or if you like
0:12:48 we have to be
0:12:49 a little bit mystical
0:12:50 in order to create
0:12:51 the ground layer
0:12:52 in order to be
0:12:52 then rational
0:12:53 as technologists
0:12:54 in a way
0:12:55 I wish it wasn’t so
0:12:56 it sort of sucks
0:12:57 but it’s just the truth
0:12:57 and the sooner
0:12:58 we accept that
0:12:59 the better off
0:13:00 we’ll be
0:13:00 and the more honest
0:13:01 we’ll be
0:13:02 and I’m okay with it
0:13:03 why?
0:13:05 because
0:13:06 if I’m designing
0:13:07 AI for AI’s sake
0:13:08 I’m talking nonsense
0:13:09 you know
0:13:10 like
0:13:11 right now
0:13:13 it’s very expensive
0:13:13 to compute AI
0:13:14 so what percentage
0:13:16 of that expense
0:13:17 goes into
0:13:18 creating the illusion
0:13:19 so that you can believe
0:13:20 it’s sort of
0:13:21 another person
0:13:22 when you use chat
0:13:23 how much electricity
0:13:24 is being spent
0:13:25 so that the way
0:13:26 it talks to you
0:13:27 feels like it’s a person
0:13:28 a lot
0:13:28 you know
0:13:29 and it’s a waste
0:13:30 like why are we doing that
0:13:31 why are we doing
0:13:32 why are we creating
0:13:34 a carbon footprint
0:13:36 for the benefit
0:13:38 of some non-entity
0:13:39 in order to fool humans
0:13:40 like it’s
0:13:40 it’s ridiculous
0:13:42 but we don’t see that
0:13:43 because we have this
0:13:45 religious imperative
0:13:46 in the tech
0:13:48 cultural world
0:13:49 to create
0:13:50 this new life
0:13:52 but it’s entirely
0:13:53 a matter of
0:13:54 our own perception
0:13:55 there’s no test
0:13:55 for it
0:13:56 other than the
0:13:56 Turing test
0:13:57 which is no test
0:13:57 at all
0:13:58 I mean
0:13:59 we still don’t even
0:14:01 have a real
0:14:01 definition
0:14:03 of consciousness
0:14:05 and I hear all
0:14:05 these discussions
0:14:07 about machine learning
0:14:09 and human intelligence
0:14:09 and the differences
0:14:11 and I continue
0:14:12 to have no idea
0:14:13 when something
0:14:14 stops being a
0:14:15 simulacrum of intelligence
0:14:16 and becomes the real thing
0:14:17 I still don’t quite know
0:14:18 when something can
0:14:19 reasonably be called
0:14:20 sentient
0:14:21 or intelligent
0:14:22 but maybe the question
0:14:22 doesn’t even matter
0:14:24 maybe it’s enough
0:14:25 for us to think it does
0:14:26 right
0:14:27 so the problem
0:14:28 in what you just
0:14:29 said is the word
0:14:29 still
0:14:33 This
0:14:35 lack of knowledge
0:14:36 is structural.
0:14:37 you’re not going
0:14:38 to overcome it
0:14:39 you can pretend
0:14:40 you have
0:14:40 but you’re not going
0:14:41 to
0:14:42 this is genuinely
0:14:43 a matter of faith
0:14:43 you know
0:14:44 and
0:14:46 it’s a very
0:14:46 old discussion
0:14:47 when it comes
0:14:48 to God
0:14:49 but
0:14:50 it’s a new
0:14:50 discussion
0:14:51 when it comes
0:14:52 to each other
0:14:53 or to AIs
0:14:54 and
0:14:54 you know
0:14:55 like
0:14:56 faith is okay
0:14:56 we can live
0:14:57 with faith
0:14:57 we just have
0:14:58 to be honest
0:14:59 about it
0:14:59 and I think
0:15:01 being dishonest
0:15:01 and saying,
0:15:02 oh,
0:15:03 it’s not faith,
0:15:04 I have this
0:15:04 rational proof
0:15:05 of something,
0:15:07 that
0:15:08 dishonesty
0:15:08 is probably
0:15:09 not good,
0:15:10 especially
0:15:10 if you’re
0:15:10 trying to do
0:15:11 science or technology
0:15:15 maybe we just,
0:15:18 hold on,
0:15:20 I’m going to
0:15:21 say this,
0:15:23 we probably
0:15:23 just have to
0:15:24 hold on to
0:15:24 some notion
0:15:25 that there’s
0:15:26 something
0:15:26 fundamentally
0:15:27 special
0:15:28 about human
0:15:29 consciousness
0:15:30 and that even
0:15:30 if on some
0:15:31 purely empirical
0:15:31 level
0:15:32 that’s not
0:15:32 even true
0:15:33 maybe believing
0:15:34 that it is
0:15:34 is essential
0:15:36 to our
0:15:36 survival
0:15:37 I don’t
0:15:37 think you
0:15:38 can rationally
0:15:40 proceed
0:15:41 as an
0:15:42 acting
0:15:42 technologist
0:15:44 without
0:15:45 an
0:15:46 irrational
0:15:47 belief
0:15:48 that people
0:15:49 are special
0:15:50 because once again
0:15:50 then you have
0:15:51 no recipient
0:15:52 and if you
0:15:53 say well
0:15:53 there’s going
0:15:54 to be
0:15:54 no belief
0:15:55 all the way
0:15:55 to the bottom
0:15:56 it’s just
0:15:56 going to be
0:15:57 rationality
0:15:57 forever
0:15:58 I mean
0:15:59 it doesn’t
0:15:59 work
0:16:00 rationality
0:16:01 never creates
0:16:01 a total
0:16:02 enclosed
0:16:02 system
0:16:04 we kind
0:16:05 of float
0:16:05 in a sea
0:16:05 of mystery
0:16:06 and we
0:16:06 have like
0:16:07 this belief
0:16:07 that lets
0:16:08 us have
0:16:08 a footing
0:16:09 and it’s
0:16:10 our job
0:16:11 to acknowledge
0:16:11 that even
0:16:12 if we’re
0:16:12 uncomfortable
0:16:13 with it
0:16:15 can I try
0:16:15 another angle
0:16:16 on you
0:16:16 yeah
0:16:17 do you know my...
0:16:17 okay,
0:16:18 so there’s
0:16:18 another
0:16:19 argument
0:16:19 about the
0:16:20 turing test
0:16:20 right
0:16:21 turing test
0:16:22 you have a
0:16:23 person on a
0:16:23 computer
0:16:23 they’re each
0:16:24 trying to fool
0:16:24 a judge
0:16:25 and at the
0:16:26 moment the
0:16:26 judge can’t
0:16:26 tell them
0:16:27 apart
0:16:27 you say
0:16:28 well we
0:16:28 might as
0:16:29 well call
0:16:30 the computer
0:16:31 human because
0:16:31 what other
0:16:31 tests can
0:16:32 there be
0:16:32 that’s the
0:16:32 best we’ll
0:16:33 get
0:16:33 okay
0:16:35 so the
0:16:36 problem with
0:16:36 the test
0:16:37 is that it
0:16:38 measures whether
0:16:38 there’s a
0:16:38 differential
0:16:39 but it
0:16:40 doesn’t tell
0:16:40 you whether
0:16:41 the computer
0:16:42 got smarter
0:16:42 or the
0:16:42 human got
0:16:43 stupider
0:16:44 it doesn’t
0:16:45 tell you if
0:16:45 the computer
0:16:46 became more
0:16:47 human or if
0:16:47 the human
0:16:48 became less
0:16:48 human in
0:16:49 any sense
0:16:49 whatever that
0:16:50 might be
0:16:51 so there’s
0:16:52 two humans
0:16:52 the contestant
0:16:53 and the judge
0:16:53 and one
0:16:54 computer
0:16:54 therefore
0:16:56 and this is
0:16:56 meant to be
0:16:57 funny but it’s
0:16:57 also kind of
0:16:57 real
0:16:58 there’s a
0:16:58 two-thirds
0:16:59 chance that
0:16:59 it was a
0:17:00 human that
0:17:00 got stupider
0:17:01 rather than
0:17:01 a computer
0:17:01 that got
0:17:02 smarter
0:17:04 and I
0:17:04 see that
0:17:05 borne out
0:17:05 like when I
0:17:06 look at
0:17:06 social media
0:17:07 and I see
0:17:08 people interacting
0:17:08 with the AI
0:17:09 algorithms that
0:17:10 are supposed to
0:17:10 guide their
0:17:11 attention
0:17:12 I see them
0:17:13 getting stupider
0:17:13 two-thirds
0:17:14 of the time
0:17:14 but then you
0:17:15 know sometimes
0:17:16 really good
0:17:16 stuff happens
0:17:17 so I think
0:17:18 this general
0:17:19 spread of most
0:17:20 of the time
0:17:20 things get
0:17:21 worse but then
0:17:21 there’s some
0:17:22 stuff that’s
0:17:22 really cool
0:17:24 tends to be
0:17:24 true when you
0:17:25 believe in AI
0:17:26 and so
0:17:27 I would
0:17:28 say don’t
0:17:28 believe in
0:17:28 it and
0:17:30 some people
0:17:30 are still
0:17:31 getting
0:17:31 stupider
0:17:31 because that’s
0:17:32 how we are
0:17:33 but I think
0:17:33 we can get to
0:17:34 the point where
0:17:34 the majority
0:17:35 gets better
0:17:36 instead of
0:17:37 stupider but
0:17:37 right now I
0:17:37 think we’re
0:17:38 at two-thirds
0:17:39 get stupider
0:17:40 yeah that
0:17:41 math checks out
0:17:41 to me
0:17:42 great I
0:17:43 think that’s
0:17:43 a rigorous
0:17:44 argument that’s
0:17:44 what you call
0:17:45 a rigorous
0:17:46 quantitative
0:17:47 theoretically and
0:17:48 empirically supported
0:17:49 argument right
0:17:49 there
0:17:50 so do you
0:17:51 think all
0:17:53 the anxieties
0:17:54 including from
0:17:55 serious people
0:17:56 in in the
0:17:57 world of AI
0:17:58 all the worries
0:18:00 about human
0:18:01 extinction and
0:18:01 mitigating the
0:18:02 risks thereof
0:18:04 does that is
0:18:04 that religious
0:18:06 hysteria to
0:18:06 you or does
0:18:07 that feel
0:18:09 what drives me
0:18:09 crazy about
0:18:10 this I this
0:18:11 is my world
0:18:11 you know so I
0:18:12 talk to the
0:18:12 people who
0:18:13 believe that
0:18:14 stuff all the
0:18:15 time and
0:18:16 increasingly a
0:18:16 lot of them
0:18:17 believe that it
0:18:17 would be good to
0:18:18 wipe out people
0:18:19 and that the AI
0:18:19 future would be a
0:18:20 better one and
0:18:21 that we are
0:18:22 a disposable,
0:18:24 temporary container
0:18:25 for the birth of
0:18:26 AI I hear that
0:18:27 opinion quite a lot
0:18:27 that’s a real
0:18:28 opinion held by
0:18:29 real people
0:18:32 many many I
0:18:33 mean like the
0:18:34 other day I was
0:18:35 at a lunch in
0:18:36 Palo Alto and
0:18:36 there were some
0:18:37 young AI
0:18:38 scientists there
0:18:39 who were saying
0:18:41 that they would
0:18:42 never have a
0:18:43 bio baby because
0:18:43 as soon as you
0:18:44 have a bio baby
0:18:44 you get the
0:18:46 mind virus of
0:18:48 the bio world
0:18:48 and that when
0:18:49 you have the
0:18:50 bio mind virus
0:18:50 you become
0:18:51 committed to
0:18:52 your human baby
0:18:52 but it’s much
0:18:53 more important to
0:18:54 be committed to
0:18:54 the AI of the
0:18:56 future and so
0:18:57 to have human
0:18:58 babies is
0:18:58 fundamentally
0:18:59 unethical
0:19:01 now okay in
0:19:01 this particular
0:19:03 case this was
0:19:03 a young man
0:19:04 with a female
0:19:05 partner who
0:19:06 wanted a kid
0:19:06 and what I’m
0:19:07 thinking is this
0:19:07 is just another
0:19:08 variation of the
0:19:09 very very old
0:19:10 story of young
0:19:11 men attempting to
0:19:12 put off the baby
0:19:13 thing with their
0:19:14 sexual partner as
0:19:15 long as possible
0:19:16 because I’ve been
0:19:16 there and many of
0:19:16 us have been
0:19:17 there so in a
0:19:18 way I think it’s
0:19:19 not anything new
0:19:19 and it’s just the
0:19:20 old thing but
0:19:21 it’s a very
0:19:23 common attitude
0:19:25 not the dominant
0:19:25 one I would say
0:19:26 the dominant one
0:19:27 is that the
0:19:28 super AI will
0:19:29 turn into this
0:19:30 god thing that’ll
0:19:31 save us and
0:19:32 will either upload
0:19:33 us to be immortal
0:19:34 or solve all our
0:19:34 problems at the
0:19:35 very least, or
0:19:36 something, create
0:19:37 super abundance at
0:19:38 the very, very, very
0:19:41 least. And I
0:19:45 I have to say
0:19:45 there’s a bit of
0:19:46 an inverse
0:19:47 proportion here
0:19:48 between the people
0:19:49 who directly work
0:19:50 in making AI
0:19:51 systems and then
0:19:51 the people who
0:19:52 are adjacent to
0:19:54 them who have
0:19:54 these various
0:19:57 beliefs my own
0:19:58 opinion is that
0:19:59 the people,
0:20:00 how can I put
0:20:02 this, the people
0:20:03 who are able to
0:20:04 be skeptical and
0:20:05 a little bored and
0:20:06 dismissive of the
0:20:07 technology they’re
0:20:08 working on tend to
0:20:09 improve it more than
0:20:09 the people who kind of
0:20:10 worship it too much.
0:20:13 like I’ve seen that
0:20:14 a lot in a lot of
0:20:15 different things not
0:20:16 not just computer
0:20:17 science. And I think
0:20:18 you have to
0:20:19 have a kind of,
0:20:20 like, you can’t drink
0:20:21 your own whiskey too
0:20:22 much when you’re a
0:20:24 technologist you have
0:20:25 to kind of be ready
0:20:26 to say oh maybe
0:20:27 this thing’s a bit
0:20:28 overhyped I’m not
0:20:29 going to tell that
0:20:30 to the people buying
0:20:31 shares in my company
0:20:31 but you know what
0:20:32 like just between us
0:20:35 you know. But
0:20:35 that attitude is
0:20:37 exactly the one that
0:20:38 puts you over the
0:20:38 threshold to then
0:20:39 start improving it
0:20:40 more and that’s one
0:20:41 of the dangers of
0:20:42 this kind of
0:20:43 mythologizing of it
0:20:44 oh it’s about to
0:20:45 become this god
0:20:45 that’ll take over
0:20:46 everything. But
0:20:48 what follows
0:20:49 from that is this
0:20:50 very curious thing
0:20:51 which is that the
0:20:52 way of thinking
0:20:53 about it where it’s
0:20:54 about to turn into
0:20:55 this god that’ll
0:20:56 run everything and
0:20:57 either kill us all
0:20:57 or fix all our
0:20:58 problems that
0:21:00 attitude in itself
0:21:02 makes you not
0:21:04 only a little bit
0:21:05 of a lesser
0:21:06 improver of the
0:21:07 technology by any
0:21:08 like real measurable
0:21:10 metric but it
0:21:11 also makes you a
0:21:12 bad steward of it
0:21:15 part of part of
0:21:15 what makes this
0:21:16 very confusing
0:21:17 especially to you
0:21:19 know non-technical
0:21:20 normie outsiders
0:21:21 like me and like
0:21:22 most people frankly
0:21:24 is that it is it’s
0:21:25 just moving and
0:21:26 changing and evolving
0:21:27 really quickly and
0:21:28 the terms and
0:21:29 concepts are very
0:21:30 slippery if you’re
0:21:32 not deep in it and
0:21:32 you know you’re
0:21:33 talking about
0:21:34 super AI and godlike
0:21:36 powers one example
0:21:37 is and you’ll bear
0:21:38 with me for a second
0:21:39 so I can bring people
0:21:41 along we have this
0:21:42 dichotomy between
0:21:44 AI versus AGI
0:21:45 artificial intelligence
0:21:46 versus artificial
0:21:47 general intelligence and
0:21:48 my understanding is
0:21:50 that AI is a term for
0:21:51 the general set of
0:21:52 tools that people
0:21:53 are building chat
0:21:54 bots and that sort
0:21:54 of thing and that
0:21:56 AGI is still sort of
0:21:57 a theoretical thing
0:21:58 where this tech is
0:22:00 basically as good at
0:22:01 everything as a
0:22:03 normal regular person
0:22:03 is and it can also
0:22:04 learn and grow and
0:22:05 apply that knowledge
0:22:07 just like we can and
0:22:08 we’ve got AI now
0:22:09 clearly but we don’t
0:22:11 have AGI yet and if
0:22:13 we get it and there
0:22:13 are people who think
0:22:14 we’re maybe closer
0:22:15 than we thought
0:22:16 recently that it’ll be
0:22:18 a real Rubicon
0:22:20 crossing moment for
0:22:21 us what’s your
0:22:22 feeling on that do
0:22:23 you think AGI is
0:22:24 even possible in the
0:22:25 way most people...
0:22:26 Have you not
0:22:26 listened to a word
0:22:28 I said? That’s a
0:22:28 religious question.
0:22:30 that’s like asking
0:22:30 if I think the
0:22:31 rapture is coming
0:22:33 soon. I mean, it’s...
0:22:33 Yeah, but you can
0:22:34 have an opinion
0:22:34 about religious
0:22:35 questions, I guess.
0:22:38 that’s true I mean
0:22:40 there are those who
0:22:41 say we have AGI
0:22:42 already and their
0:22:43 opinion is as
0:22:44 legitimate as
0:22:45 anybody else’s I
0:22:46 mean I just think
0:22:47 the moment you’ve
0:22:48 put the question
0:22:48 that way you’ve
0:22:49 already confused
0:22:50 yourself and made
0:22:50 yourself kind of
0:22:51 useless in talking
0:22:52 about what to do
0:22:53 with the technology
0:22:54 so I have to reject
0:22:55 your question as
0:22:56 being like poorly
0:22:56 framed and
0:22:57 ill-informed I’m
0:22:59 sorry I was hoping
0:22:59 to get through this
0:23:00 fucking conversation
0:23:01 without you having
0:23:02 to beat back at
0:23:03 one of my ill-informed
0:23:04 questions and I
0:23:05 did make it I made
0:23:06 it almost 20 minutes
0:23:07 in yeah good luck
0:23:08 with that my friend
0:23:12 all right sir
0:23:13 it was a valiant
0:23:14 effort. You win that.
0:23:17 You really... I mean,
0:23:19 look, this
0:23:20 is silly. This is,
0:23:21 like, I’m also
0:23:21 trying to speak for
0:23:22 concerns that I know
0:23:23 a lot of
0:23:24 people have.
0:23:25 Because we broadcast
0:23:26 that way of thinking
0:23:27 about it. So, yeah,
0:23:31 look, there’s a
0:23:31 thing. All right,
0:23:33 look, I
0:23:35 benefit from people
0:23:36 believing in AI
0:23:37 professionally and
0:23:39 there’s a way that
0:23:39 the whole economy
0:23:40 runs on attention
0:23:42 getting and in a
0:23:44 funny way the way
0:23:45 digital attention
0:23:46 economy works
0:23:51 is it rewards
0:23:52 anxieties and
0:23:54 terror as much
0:23:54 or maybe a
0:23:56 little more than
0:23:59 optimism or you
0:24:01 know goodwill and
0:24:02 so you have this
0:24:03 weird situation where
0:24:05 somebody can play
0:24:06 the villain on
0:24:06 social media and
0:24:08 do very well and
0:24:09 similar things
0:24:10 happening in the
0:24:11 rhetoric of computer
0:24:12 science so when we
0:24:13 say oh our stuff
0:24:14 might be about to
0:24:15 come alive and
0:24:16 it’s about to get
0:24:17 smarter than you
0:24:18 it generates this
0:24:19 little anxiety in
0:24:20 people and then that
0:24:21 actually benefits us
0:24:22 because it
0:24:24 keeps the attention
0:24:27 on us and so
0:24:28 there’s a funny way
0:24:29 that we’re
0:24:30 incentivized to put
0:24:31 things in the most
0:24:33 alarming way what I
0:24:34 what I will say is
0:24:36 that I like the
0:24:37 idea of models being
0:24:38 useful so I think
0:24:40 of the models that
0:24:41 we’re building as
0:24:42 being wonderful
0:24:43 mashup models so
0:24:44 like for instance
0:24:46 I love being able
0:24:47 to use large models
0:24:48 to go through the
0:24:48 scientific literature
0:24:51 and find correlations
0:24:51 between different
0:24:52 papers that might not
0:24:53 use the same
0:24:54 terminology that would
0:24:54 have been a pain in
0:24:55 the butt to detect
0:24:57 before that’s great
0:24:58 if you present that
0:24:59 with a chat
0:25:00 interface it seems
0:25:01 like a smart
0:25:02 scientist if people
0:25:03 like that I mean I
0:25:04 guess whatever it’s
0:25:05 not my job to judge
0:25:06 everybody but the
0:25:08 thing is you don’t
0:25:09 need to present it
0:25:09 that way you’d
0:25:10 still get the
0:25:11 same value but
0:25:11 that’s the way we
0:25:13 do it. We add
0:25:14 in personhood
0:25:16 fooling to what
0:25:17 would otherwise be
0:25:19 really in a way
0:25:20 more clear
0:25:21 freestanding value I
0:25:23 think but we like
0:25:24 to present the
0:25:24 fantasy
0:26:16 All right, let me try to pull away a little bit from religious questions.
0:26:22 Okay, so look, I'm not worried about The Matrix and The Terminator. I am worried about a much more boring and unsexy, but I think equally bad, possibility: that these emergent technologies will accelerate a trend that I think digital tech in general, and social media in particular, has already started, which is to pull us away more and more from the physical world and encourage us to perform versions of ourselves in the virtual world.
0:26:56 And because of how it's designed, it has this habit of reducing other people to crude avatars, which is why it's so easy to be cruel and vicious online, and why people who are on social media too much start to become mutually unintelligible to each other.
0:27:13 And I worry about AI supercharging some of this stuff. I mean, do you even accept that framing? Am I right to be thinking of AI as a potential accelerant of these trends?
0:27:26 Yeah, I mean, I think you are correct.
0:27:36 So it's arguable, and actually consistent with the way the community speaks internally, to say that the algorithms that have been driving social media up to now are a form of AI, if you, unlike me, wish to use the term AI.
0:27:55 And what the algorithms do is they attempt to predict human behavior based on the stimulus given to the human, and by putting that in an adaptive loop, they hope to drive attention and sort of an obsessive attachment to a platform.
0:28:18 Because these algorithms can't tell whether something's being driven by things that we might think are positive or things that we might think are negative. So I call this the life of the parity, this notion that you can't tell. Like, whether a bit is one or zero doesn't matter, because it's an arbitrary designation in a digital system.
0:28:38 So if somebody's getting attention by being a dick, that works just as well as if they're offering life-saving information or helping people improve themselves.
0:28:46 But then the peaks that are good are really good, and I don't want to deny that. I love dance culture on TikTok. Science bloggers on YouTube have achieved a level that's, like, astonishingly good. And so on. Like, there's all these really, really positive good spots.
0:29:01 But then, overall, there's this loss of truth, and political paranoia, and unnecessary confrontation between arbitrarily created cultural groups, and so on, that's really doing damage. And, as is often pointed out, especially to young girls, and so on and so forth. Not great.
0:29:25 And so, yeah, could better AI algorithms make that worse? Plausibly. I mean, it's possible that it's already bottomed out, that the badness just comes from the overall structure, and if the algorithms themselves get more sophisticated, it won't really push it that much further. But I think it actually kind of can. I'm worried about it, because we so much want to pass the Turing test and make people think our programs are people.
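The engagement loop Lanier describes here, which optimizes attention while being blind to whether the attention comes from something positive or something negative, can be sketched as a toy in Python. Everything in it (the posts, the fields, the scores) is invented for illustration; this is not any platform's actual ranking code:

```python
# Toy feed ranker illustrating the "life of the parity": the optimizer
# sorts purely by predicted engagement, so a post's "valence" (whether
# it helps or harms) never enters the score at all.
posts = [
    {"text": "life-saving health tip", "valence": +1, "predicted_engagement": 0.62},
    {"text": "inflammatory hot take",  "valence": -1, "predicted_engagement": 0.62},
    {"text": "mild vacation photo",    "valence": +1, "predicted_engagement": 0.20},
]

def rank(feed):
    # Sort by predicted engagement only; nothing else is consulted.
    return sorted(feed, key=lambda p: p["predicted_engagement"], reverse=True)

ranked = rank(posts)
# The helpful post and the inflammatory one tie at the top: to this
# optimizer, a bit is a bit, whichever "side" it lands on.
assert ranked[0]["predicted_engagement"] == ranked[1]["predicted_engagement"]
assert {ranked[0]["valence"], ranked[1]["valence"]} == {+1, -1}
```

The point of the sketch is structural: as long as the objective is engagement alone, being a dick and offering life-saving information are indistinguishable inputs to the loop.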
0:29:55 We're moving to this so-called agentic era, where it's not just that you have a chat interface with the thing, but the chat interface gets to know you for years at a time, and gets a so-called personality, and all this. And then the idea is that people then fall in love with these, and we're already seeing examples of this here and there.
0:30:16 And this notion of a whole generation of young people falling in love with fake avatars... I mean, people talk about AI as if it's just, like, this yeast in the air. It's like, oh, AI will appear and people will fall in love with AI avatars. But it's not. AI is always run by companies. So, like, they're going to be falling in love with something from Google or Meta or whatever.
0:30:39 And, like, that notion that your love life becomes owned by some company, or even worse, TikTok or a Chinese thing: eek, eek, eek, eek. I think that'll create a new centralization. Or xAI: eek, eek, eek, eek. I'll add some more eeks to that.
0:30:59 And so this centralization of power and influence could be even worse, and that might be a breaking-point event. And so that kind of thing, ending civilization or ending up killing all the people, does seem plausible to me. And some of my colleagues would interpret that as AI coming alive and killing everybody, but I would just interpret it as people making terrible choices.
0:31:20 It all amounts to the same thing in the end anyway.
0:31:23 It does, at the end of the day, in terms of actual events. The same.
0:31:27 So, Jaron, from your point of view, is it even possible to have good algorithms nudging us around online, or are all algorithms bad?
0:31:35 Yes, of course it is.
0:31:37 Okay, what does that look like?
0:31:39 Of course it is. Of course it is. Yes, yes, yes, yes.
0:31:41 Give me the good stuff here. Give me the good algorithms.
0:31:44 Well, I mean, look, in the scientific community we do it. Like, okay, here's an example: Deep Research from OpenAI is a great tool. It does a literature search on some topic and assembles a little report.
0:32:03 It has unnecessary chatbot elements to try to make it seem like there's somebody there. I view that as a waste of time and a waste of energy, and I would be happy without it. But, whatever, okay, it's not terrible.
0:32:16 What it does is it saves scientists a ton of time. It makes a lot of sense. I get a lot out of it. It's great. And now there's some new competitors to it. Great. That stuff's fabulous. It's good, because the scientific literature has become impossible to use without it.
0:32:33 I do a lot of work that's pretty mathematical, and the problem is that every time somebody comes across similar math, they don't realize somebody else has done it. So they come up with their own terms for things, and then you have the same ideas, or similar ones, with different terms, in all these scattered papers in totally different communities, at different conferences and different journals.
0:32:53 Yeah.
0:32:55 But with a tool like this, you can capture all that and get it into place. It's like, what AI is, is a way of improving collaboration between people. It's a way of gathering what people have done in a more unified way that can notice multiple hops of different terms and similar structures. It's a better way of using statistics to connect what we've all done together, to get more use out of it. It's great. I love it.
0:33:25 And the amount of avatar-illusion nonsense is kept to a minimum, because our job is not to fall in love with our fake research assistant. Our job is to make progress efficiently on whatever we're doing.
0:33:37 Right.
0:33:39 And so that's great. What is wrong with that? Nothing. It's fabulous. So, yeah, there's wonderful uses. If I didn't think those things existed, I'd quit what I do professionally in the industry. Of course there's wonderful uses, and I think we need those things. I think they really matter.
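The "multiple hops of different terms" idea Lanier describes can be sketched as a small graph search over paper similarities. The papers, vectors, and threshold below are invented for illustration; a real system would use learned text embeddings rather than hand-made three-dimensional vectors:

```python
# Toy sketch: papers that never share vocabulary can still be linked
# through a chain of pairwise-similar intermediaries.
from math import sqrt

# Hypothetical papers from different communities, each reduced to a
# tiny made-up "embedding" vector.
papers = {
    "A: 'spectral gap' paper":      [1.0, 0.2, 0.0],
    "B: 'eigenvalue margin' paper": [0.8, 0.6, 0.1],
    "C: 'mixing rate' paper":       [0.4, 0.9, 0.3],
}

def cosine(u, v):
    # Standard cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def linked(a, b, threshold=0.75):
    # Two papers are "linked" if some chain of sufficiently similar
    # papers connects them, even when their direct similarity is low.
    seen, frontier = {a}, [a]
    while frontier:
        cur = frontier.pop()
        if cur == b:
            return True
        for other in papers:
            if other not in seen and cosine(papers[cur], papers[other]) >= threshold:
                seen.add(other)
                frontier.append(other)
    return False
```

With these made-up vectors, A and C sit below the direct-similarity threshold (their cosine is roughly 0.55), but `linked` still connects them through B, which is the multi-hop effect Lanier is pointing at.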
0:33:56 I guess what I'm hovering around is the business model, right? I mean, the advertising model was sort of the original sin of the internet.
0:34:02 Yeah, yeah, I think it is.
0:34:06 How do we not fuck this up? How do we not repeat those mistakes? What's a better model? I mean, you talk a lot about data dignity.
0:34:11 So you're saying we can say "fuck" on this podcast?
0:34:13 Oh, you can say whatever you want.
0:34:15 If I had known that, there would have been a lot of fuckery up to now in my speech.
0:34:18 It's not too late.
0:34:21 Anyway, it's not too late. We got plenty of time. Okay, but seriously, how do we get it right this time? How do we not make the same mistakes? What is a better model?
0:34:29 Yeah, well, this question is actually the central question of our time, in my view. Being able to scale AI more is an important question, and I get that, and most people are focused on that, and dealing with the climate is an important question. But in terms of our own survival, coming up with a business model for civilization that isn't self-destructive is, in a way, our most primary problem and challenge right now.
0:35:01 Because of the way we're doing it. We went through this thing in the earlier phase of the internet, like, information should be free, and then the only business model that's left is paying for influence. And so then all the platforms look free, or very cheap, to the user, but then actually the real customer is trying to influence the user, and you end up with what's essentially a stealthy form of manipulation being the central project of civilization.
0:35:30 And we can only get away with that for so long. At some point that bites us, and we become too crazy to survive. So we must change the business model of civilization. And exactly how to get from here to there is a bit of a mystery, but I continue to work on it.
0:35:46 Like, I think we should incentivize people to put great data into the AI programs of the future, and I'd like people to be paid for data used by AI models, and also to be celebrated and made visible and known, because I think it's just a big collaboration, and our collaborators should be valued.
0:36:01 How easy would it be to do that? Do you think we can, or will?
0:36:05 There's still some unsolved technical questions about how to do it. I'm very, very actively working on those, and I believe it's doable. And there's a whole, you know, research community devoted to exactly that, distributed around the world. And I think it'll make better models. I mean, better data makes better models.
0:36:20 And there's a lot of people who dispute that, and they say, no, it's just better algorithms, and we already have enough data for the rest of all time. But I disagree with that. I don't think we're the smartest people who will ever live, and there might be new creative things that happen in the future that we don't foresee, and the models we've currently built might not extend into those things.
0:36:39 And having some open system where people can contribute to new models in new ways is a more expansive and creative and, you know, open-minded and just, you know, kind of spiritually optimistic way of thinking about the deep future.
0:38:35 I think I'm a humanist like you, in the end, and what I want, fundamentally, is just the elevation of human agency, not the diminishment of it. And part of what that means, to borrow your language, is creating more creative classes and less dependent classes.
0:38:45 Yep.
0:38:45 You've convinced me that that's at least possible. I don't know if it's likely, but I hope it is. And maybe some kind of data-dignity-type model is the most promising thing I've heard.
0:39:06 No, I sort of feel like the human project, our survival, is simultaneously both certain and unlikely, if you know what I mean. Like, I feel like if we just follow the immediate trend lines and what we see, we're probably gonna...
0:39:16 Buck ourselves up, to use the word I'm encouraged to say here.
0:39:22 There you go. But I also just have this feeling: we've made it through a lot of stuff in the past, and I just have this feeling we're gonna rise to the occasion and figure this one out. Really. I don't know exactly how we will, but I think we will. I don't know what the alternative is. The alternative is, in 200 million years, there'll be smart cephalopods to take over the planet, and maybe they'll do that. I mean, that's the alternative. But I think we can do it. I really do. We just have to be a little less full of ourselves and not believe we're making a new god. No more golden calves. That's still really our problem.
0:40:02 Yeah, good luck with that. I mean, I'm constantly thinking more about the social and political and cultural dynamics, because that's just my background. And, you know, I guess, speaking of dependent classes, a very common concern is this fear that AI is going to create a lot of social instability by taking all of our jobs. It's a widespread fear. It's scary as hell, and it feels like the latest iteration of a very old story about new technologies, like automation, displacing workers. I mean, how do you speak to these sorts of fears when you hear them? Because surely you hear them a lot.
0:40:41 Yeah, and they concern me.
0:40:54 Look, there's not a perfect solution to that problem. I'll give you an example of one that I find tricky to think about. My mom died in a car accident, and I've always believed, from when I was very young, that cars should drive themselves, that it was manifestly obvious that we could create a digital system that would save many, many lives.
0:41:13 We have tens of thousands of people killed by cars every year, still, in the US, and I think it's over a million worldwide, or something like that. I mean, it's, like, crazy. And there are a lot of reasons for it. And a self-driving car is never going to be perfect, because it's not a task that can be done perfectly. There'll be circumstances where there's no optimal solution, you know, in the instant. But, overall, we ought to be able to save a lot of life. So I'm really supportive of that project.
0:41:42 At the same time, an incredibly large number of blue-collar people around the world get by behind a wheel, whether it's truck drivers or rideshare drivers these days, etc., you know. And so, like, how do you reconcile those two things? I don't think there's any way to do it perfectly. I think there's two things that should be true.
0:42:10 One is that we need to find an intermediate way to a social safety net that isn't all the way to universal basic income. Because the universal basic income idea gives people this idea that they're not worth anything, and they're just being supported by the tech titans as a hobby, and it doesn't feel very secure, or very dignified, or stable. There's, like, just a lot of reasons why I'm skeptical of that in the long term, and I don't think people like it or want it.
0:42:40 But, on the other hand, just telling people, well, you're thrown out into the mix, and in the US you have no health insurance, and just figure something out: that's also just too cruel, and not viable if it's a lot of people at once. So we have to find our way to a very unfashionable intermediate sense of a social safety net, to help people through transitions. And right now the accounting for that is very, very difficult to sort out, and especially in the United States there's a deep hostility to it, and I just don't see, logically, any other way.
0:43:08 But then, beyond that, I do think new roles will appear. Like, the story that, well, new things will happen and new things will be possible: I do believe that. There's a kind of a vague and uncomfortable sense that surely new things will come along, and I actually think that's true. I don't feel comfortable making that claim for all those drivers, though. Like, we're not going to retrain them to be programmers, because low-level programming itself is also getting automated, right? I don't know exactly how that'll work.
0:43:43 I have thought a great deal about it, but that's where I am for the moment. I believe that there could be all kinds of things we don't foresee, and that within that explosion of new sectors of creativity there will be enough new needs for people to do things, if only to train AIs, that it'll keep up with human needs and support some kind of a world of economics that's more distributed than just a central authority distributing income to everybody, which I think would be corrupted.
0:44:13 Yeah, yeah, I agree with that.
0:44:20 Do you think we're being sufficiently intentional about the development of this technology? Do you think we're asking the right questions, as a society?
0:44:27 Well, I mean, the questions are dominated by a certain internal technical culture, and the mainstream of technical culture is very obsessed with AI as a new god, or some kind of new entity, and so I think that does make the whole conversation go askew.
0:44:47 Like, if you go to AI conferences, there's usually talk where somebody is saying, we're going to talk about how to talk the AI into not killing us, you know. And that kind of conversation, which to me is not well grounded, and I think kind of loses itself in loops, can take up as much time and space as, like, a serious conversation of, like, how can we optimize this algorithm, you know, the actual work that we should be doing as technologists.
0:45:20 I was at one conference, which was kind of funny, where there were these different factions. There's the artificial general intelligence people, and there's the superintelligence people, and there's all these different people who have slightly different ideas about how awesome AI will be and how it might kill us all in different ways. And they were so conflicted that they got into a fistfight. A not very competent fistfight, it must be said.
0:45:48 I'm shocked.
0:45:48 It's kind of funny. Anyway, I sort of wish I had a film of that. That was really funny. But, I don't know, I mean, I love my world. I love the people. I do kind of make fun of us a little bit sometimes, because I just think it's important, too, you know.
0:46:00 Okay, so let's just set aside for the moment the more common fears about AI: the alignment problem, and taking our jobs, and flattening human creativity. All that stuff, all that is there. Is there a fear of yours, something you think we could get terribly wrong, that's not currently something we hear much about?
0:46:26 God, I don't even know where to start. Yeah, there's, like, a lot, lot, lot.
0:46:38 I mean, one of the things I worry about is we're gradually moving education into an AI model, and the motivations for that are often very good, because in a lot of places on Earth it's just been impossible to come up with an economics of supporting and training enough human teachers, and a lot of cultural issues in changing societies make it very, very hard to make schools that work, and so on. Like, there's a lot of issues. And, in theory, a sort of self-adapting AI tutor could solve a lot of problems at a low cost in a lot of situations.
0:47:13 But then the issue with that is, once again, creativity. How do you keep people who learn in a system like that... how do you train them so that they're able to step outside of what the system was trained on? You know, there's this funny way that you're always retreading and recombining the training data in any AI system, and you can address that to a degree with constant fresh input and this and that. But I am a little worried about people being trained in a closed system that makes them a little less than they might otherwise have been, and have a little less faith in themselves. I'm a little concerned about sort of defining the nature of life and education downward, you know.
0:47:53 And the thing is, the history of education is filled with doing exactly that thing. Like, education has been filled with overly reductive ideas, or overly idealistic and biased ideas of different kinds. So it's not like we're entering this perfect system and messing it up. We're entering a messed-up system and trying to figure out how to not perpetuate its messed-up-ness. I think, in the case of education, that's challenging. Really challenging.
0:48:28 I ask just because I'm curious what you would say: I have a five-year-old son, and he's already started asking questions about, you know, what kind of skills he should learn, what he should aspire to do in the world.
0:48:42 Oh, man, that's a hard one, right?
0:48:44 And I don't know what to tell him, because I have no idea what the world is going to look like by the time he's 18, or 20, or 15, hell, you know. What would you tell him, if Uncle Jaron came over and he asked you that? What would you say?
0:48:59 Well, I have a teen daughter now, and when she was younger she went to coding camp, you know, and loved it. And then when Copilot for GitHub came out, and now some of the other ones that are out, she was like, well, you know, the kinds of programs I'd write, I can just ask for now. So why did you send me to all these things? Why did I waste all my time at them? And I said, remember, you loved coding camp. Remember, you liked it.
0:49:24 It's like, well, yeah, but I could have liked spelunking camp or something, too. Like, why coding camp? And, I mean, I don't have a perfect answer for all that right now. I really don't.
0:49:44 I do think there are new things that will emerge. I have a feeling there'll be a lot of new professions related to adaptive biology and modifications, and helping people deal with weird changes to their bodies that will become possible. I think that'll become a big thing. I don't know exactly how. It's too early to say.
0:50:04 There's a subtle point here I want to make, which is: I am very far from being anti-futuristic, or disliking extreme change in the future. But what I have to insist upon is continuity.
0:50:18 So there's a term called the singularity, applied to AI sometimes: that there'll be this rush of change so fast that nobody can learn anything, nobody can know anything, and it just is beyond us, beyond us, beyond us. The problem with the singularity, whether it's in a black hole or in the Big Bang or in technology, is that, by definition, even if you don't technically lose information, you lose the ability to access the information in the original context, or with any kind of structure. So it's essentially a form of massive forgetting, and massive loss of context, and, therefore, massive loss of meaning.
0:50:54 And so, however radical we get, if in the future we're all going to evolve into massive distributed colonies of space bacteria flying around intergalactically or something, whatever we turn into, I'm all for it. I'm in, I'm in, I'm in. But the line from here to there has to have memory. It has to be continuous enough that we're learning lessons and we remember.
0:51:19 If we break that, because we want the thrill of Pol Pot's Year Zero, where from now on we're the smartest people and everybody else was wrong and we start over, if we want that break, we must resist it. We must oppose people who want that break. Year Zero never works out well. It's a really, really bad idea.
0:51:38 And so, to me, I'm, like, pro extreme futures, but anti discontinuity into the future. And that's an in-between place to be that's a little subtle and hard to get across, but I think that that's the right place to be.
0:51:48 Well, I always try to end these conversations with as much optimism as possible. So do you have any other good news, or rosy scenarios you can paint for us before we get out of here, about how things are going to be awesome in the future?
0:52:01 Right now we're in a very hard-to-parse moment. Things are strange. Things are scary. And what I keep on telling myself is: there's always hope in chaos. As much as someone driving chaos might be certain that it's under their command, it never is. And those of us who watch unfolding chaos, looking for signs of hope, looking for optimism, looking for little openings in which to do something good, we will find them if we stay alert. And so I'd urge everybody to do that during this period.
0:52:40 Jaron Lanier, I'm a fan of your work. I'm a fan of you as a human being, as well. I appreciate you coming in.
0:52:45 Oh, well, that's very kind of you. Thank you so much. And I really appreciate all the effort, and also just the goodwill and warmth you put into this interview. I really do appreciate it so much.
0:53:12 All right, I hope you enjoyed this episode. There was a lot going on in this one. Jaron is a unique mind, and I appreciate the way he thinks about all of this. This conversation did force me to reflect on the language I use to make sense of AI, and all the assumptions buried in that language. So I hope you found his insights useful. But, either way, as always, we want to know what you think. So drop us a line at thegrayarea@vox.com, or leave us a message on our new voicemail line, at 1-800-214-5749.
0:53:58 And once you're finished with that, if you have a second, please go ahead and rate and review and subscribe to the podcast.
0:54:09 This episode was produced by Beth Morrissey, edited by Jorge Just, engineered by Erica Wong, and fact-checked by Melissa Hirsch. Alex Overington wrote our theme music. New episodes of The Gray Area drop on Mondays. Listen and subscribe.
0:54:25 The show is part of Vox. Support Vox's journalism by joining our membership program today. Go to vox.com/members to sign up. And if you decide to sign up because of this show, let us know.
0:02:29 But at his core, he’s a humanist
0:02:32 who’s always thinking about what technologies are doing to us
0:02:36 and how our understanding of these tools
0:02:39 will inevitably determine how they’re used.
0:02:43 Maybe what Jaron does the best, though,
0:02:45 is offer a different lens
0:02:47 through which to view these technologies.
0:02:51 We’re encouraged to treat these machines
0:02:54 as though they’re godlike,
0:02:56 as though they’re thinking for themselves.
0:03:01 Indeed, they’re designed to make you feel that way
0:03:04 because it adds to the mystique around them
0:03:07 and obscures the truth about how they really work.
0:03:12 But Jaron’s plea is to be careful
0:03:15 about thoughtlessly adopting the language
0:03:17 that the AI creators give us
0:03:18 to describe their creation
0:03:21 because that language structures
0:03:25 not only how we think about these technologies,
0:03:27 but what we do with them.
0:03:35 Jaron Lanier, welcome to the show.
0:03:36 That’s me. Hey.
0:03:39 So look, I have heard
0:03:43 so many of these big picture conversations about AI
0:03:48 and they often begin with a question
0:03:52 about how or whether AI is going to take over the world.
0:03:55 But I discovered very quickly
0:03:57 that you don’t accept the terms of that question,
0:03:59 which is why I’m not going to ask it.
0:04:01 But I thought it would be useful
0:04:03 as a beginning to ask you
0:04:05 why you find questions like that
0:04:07 or claims like that ridiculous.
0:04:10 Oh, well, you know,
0:04:12 when it comes to AI,
0:04:15 the whole technical field
0:04:16 is kind of defined
0:04:19 by an almost metaphysical assertion,
0:04:22 which is we are creating intelligence.
0:04:23 Well, what is intelligence?
0:04:26 Something human.
0:04:28 The whole field was founded
0:04:31 by Alan Turing’s thought experiment
0:04:32 called the Turing test,
0:04:37 where if you can fool a human
0:04:38 into thinking you’ve made a human,
0:04:40 then you might as well have made a human
0:04:42 because what other tests could there be?
0:04:45 Which in a way is fair enough.
0:04:45 On the other hand,
0:04:47 what other scientific field
0:04:50 other than maybe supporting stage magicians
0:04:53 is entirely based on being able to fool people?
0:04:53 I mean, it’s stupid.
0:04:56 Fooling people in itself accomplishes nothing.
0:04:58 There’s no productivity.
0:04:59 There’s no insight
0:05:01 unless you’re studying
0:05:03 the cognition of being fooled, of course.
0:05:06 So there’s an alternative way
0:05:07 to think about what we do
0:05:09 with what we call AI,
0:05:12 which is that there’s no new entity.
0:05:14 There’s nothing intelligent there.
0:05:16 What there is is a new
0:05:17 and in my opinion,
0:05:18 sometimes quite useful
0:05:21 form of collaboration between people.
0:05:23 If you look at something like the Wikipedia,
0:05:25 where people mash up
0:05:27 a lot of their communications into one thing,
0:05:30 you can think of that as a step on the way
0:05:32 to what we call large model AI,
0:05:34 where we take all the data that we have
0:05:35 and we put it together
0:05:39 in a way that allows more interpolation
0:05:43 and more commingling than previous methods.
0:05:47 And I think that can be of great use,
0:05:49 but I don’t think there’s any requirement
0:05:52 that we perceive that as a new entity.
0:05:53 Now, you might say,
0:05:54 well, what’s the harm if we do?
0:05:56 That’s a fair question.
0:05:57 Like, who cares?
0:05:58 If somebody wants to think of it
0:06:00 as a new type of person
0:06:02 or even a new type of God or whatever,
0:06:03 what’s wrong with that?
0:06:06 Potentially nothing.
0:06:08 People believe all kinds of things all the time.
0:06:12 But, in the case of our technology,
0:06:15 let me put it this way.
0:06:19 If you’re a mathematician or a scientist,
0:06:25 you can do what you do
0:06:27 in a kind of an abstract way.
0:06:28 Like, you can say,
0:06:30 I’m furthering math.
0:06:33 And, in a way, that’ll be true
0:06:35 even if nobody else ever even perceives
0:06:36 that I’ve done it.
0:06:37 I’ve written down this proof.
0:06:40 But that’s not true for technologists.
0:06:43 Technologists only make sense
0:06:46 if there’s a designated beneficiary.
0:06:49 Like, you have to make technology for someone.
0:06:52 And, as soon as you say
0:06:56 the technology itself is a new someone,
0:07:00 you stop making sense as a technologist.
0:07:01 Right?
0:07:03 Let me actually take up that question
0:07:04 that you just posed a second ago
0:07:05 with a thought,
0:07:07 I’ve heard from you,
0:07:09 which is something to the effect of,
0:07:11 I think the way you put it is
0:07:13 the easiest way to mismanage a technology
0:07:15 is to misunderstand it.
0:07:17 So, to answer your question…
0:07:18 Sounds like me, I guess.
0:07:19 Yeah. Okay.
0:07:22 If we make the mistake,
0:07:23 which is now common,
0:07:26 to insist that AI is, in fact,
0:07:28 some kind of god or creature
0:07:30 or entity or oracle,
0:07:31 whatever term you prefer,
0:07:33 instead of a tool as you define it,
0:07:34 the implication is that
0:07:37 that would be a consequential mistake, right?
0:07:39 That we will mismanage the technology
0:07:40 by misunderstanding it.
0:07:41 So, is that not quite right?
0:07:42 Am I not quite understanding?
0:07:43 No, I think that’s right.
0:07:46 I think when you treat the technology
0:07:47 as its own beneficiary,
0:07:49 you miss a lot of opportunities
0:07:50 to make it better.
0:07:52 Like, I see this in AI all the time.
0:07:53 I see people saying,
0:07:55 well, if we did this,
0:07:56 it would pass the Turing test better,
0:07:57 and if we did that,
0:07:58 it would seem more like
0:07:59 it was an independent mind.
0:08:01 But those are all goals
0:08:01 that are different
0:08:04 from it being economically useful.
0:08:05 They’re different from it
0:08:08 being useful to any particular user.
0:08:09 They’re just these weird,
0:08:12 to me, almost religious ritual goals
0:08:13 or something.
0:08:15 And so every time
0:08:16 you’re devoting yourself to that,
0:08:18 it means you’re not devoting yourself
0:08:20 to making it better.
0:08:22 Like, an example is,
0:08:25 we have, in my view,
0:08:28 deliberately designed large model AI
0:08:32 to obscure the original human sources
0:08:34 of the data that the AI is trained on
0:08:36 to help create this illusion
0:08:37 of the new entity.
0:08:38 But when we do that,
0:08:41 we make it harder to do quality control.
0:08:43 We make it harder to do authentication
0:08:48 and to detect malicious uses of the model
0:08:52 because we can’t tell what the intent is,
0:08:54 what data it’s drawing upon.
0:08:56 We’re sort of willfully making ourselves
0:08:58 kind of blind in a way
0:09:00 that we probably don’t really need to.
0:09:01 And I really want to emphasize
0:09:03 from a metaphysical point of view,
0:09:05 I can’t prove,
0:09:06 and neither can anyone else,
0:09:08 that a computer is alive or not
0:09:09 or conscious or not or whatever.
0:09:11 I mean, all that stuff
0:09:13 is always going to be a matter of faith.
0:09:15 That’s just the way it is.
0:09:17 That’s what we got around here.
0:09:19 But what I can say
0:09:21 is that this emphasis
0:09:22 on trying to make the models
0:09:25 seem like they’re freestanding new entities
0:09:27 does blind us
0:09:29 to some ways we could make them better.
0:09:30 And so I think, like, why bother?
0:09:32 What do we get out of that?
0:09:32 Not a lot.
0:09:34 So do you think maybe
0:09:35 the cardinal mistake
0:09:37 with a lot of this kind of thinking
0:09:38 is to assume
0:09:42 that artificial intelligence
0:09:43 is something that’s in competition
0:09:45 with human intelligence
0:09:46 and human abilities,
0:09:47 that that kind of misunderstanding
0:09:48 sets us off on a course
0:09:50 for a lot of other kinds
0:09:51 of misunderstandings?
0:09:53 I wouldn’t choose that language
0:09:54 because then the natural thing
0:09:55 somebody’s going to say
0:09:56 who’s a true believer
0:09:57 that the AI is coming alive,
0:09:58 they’re going to say,
0:09:59 yeah, you’re right.
0:10:00 It’s not competition.
0:10:01 We’re going to align them
0:10:02 and they’re going to be
0:10:03 our collaborators
0:10:05 or whatever.
0:10:06 So that, to me,
0:10:07 doesn’t go far enough.
0:10:09 My own way of thinking
0:10:11 is that I’m able
0:10:12 to improve the models
0:10:13 when I say
0:10:14 there’s no new entity there.
0:10:15 I just say they don’t,
0:10:15 they’re not there.
0:10:16 They don’t exist
0:10:17 as separate entities.
0:10:18 They’re just collaborations
0:10:19 of people.
0:10:20 I have to go that far
0:10:22 to get the clarity
0:10:23 to improve them.
0:10:26 It might be a little late
0:10:27 in the language game
0:10:29 to replace a term
0:10:30 like artificial intelligence,
0:10:30 but if you could,
0:10:31 do you have a better one?
0:10:34 I have had the experience
0:10:35 of coming up with terms
0:10:37 that were widely adopted
0:10:37 in society.
0:10:38 I came up with
0:10:39 virtual reality
0:10:40 and some other things
0:10:41 when I was young
0:10:44 and I have seen that
0:10:45 even when you get
0:10:46 to coin the term,
0:10:47 you don’t get to define it
0:10:50 and I don’t love
0:10:51 the way people think
0:10:52 of virtual reality
0:10:53 typically today.
0:10:54 It’s lost a little bit
0:10:55 of its old humanism,
0:10:56 I would say.
0:10:59 So that experience
0:11:00 has led me to feel
0:11:01 that it’s really
0:11:02 younger generations
0:11:03 who should come up
0:11:03 with their own terms.
0:11:04 So what I would prefer
0:11:06 to see is younger people
0:11:07 reject our terms
0:11:09 and come up
0:11:09 with their own.
0:11:11 Fair enough.
0:11:14 I’ve read a lot
0:11:14 of your work
0:11:15 on AI
0:11:17 and I’ve listened
0:11:19 to a lot of your interviews
0:11:21 and I take your point
0:11:22 that AI
0:11:25 is a distillation
0:11:26 of all these human inputs
0:11:27 fundamentally.
0:11:30 But for you, at what point
0:11:32 does or can complexity
0:11:35 start looking like autonomy?
0:11:37 And what would autonomy
0:11:38 even mean?
0:11:39 That the thing starts
0:11:40 making its own decisions,
0:11:41 and is that the simple
0:11:42 definition of it?
0:11:43 This is an obsession
0:11:44 that people have
0:11:45 but you have to understand
0:11:46 it’s a religious
0:11:48 and entirely subjective
0:11:50 or sort of cultural obsession
0:11:51 not a scientific one.
0:11:52 It’s your judgment
0:11:54 of how you want to see
0:11:55 the start of autonomy.
0:11:58 So I love complex systems
0:11:59 and I love different levels
0:12:00 of description
0:12:01 and I love the independence
0:12:03 of different levels
0:12:03 of granularity
0:12:04 in physics
0:12:06 so I’m utterly
0:12:07 as obsessed
0:12:07 as anyone
0:12:08 with that
0:12:10 but it’s important
0:12:10 to distinguish
0:12:12 that fascination
0:12:12 which is a scientific
0:12:13 fascination
0:12:14 with the question
0:12:16 of does crossing
0:12:17 some threshold
0:12:18 make something
0:12:19 human or not?
0:12:21 because the question
0:12:22 of humanness
0:12:24 or of becoming
0:12:24 an entity
0:12:26 that we care about
0:12:27 in our planning
0:12:28 creating something
0:12:29 that itself
0:12:30 is a beneficiary
0:12:31 of our technology
0:12:32 that question
0:12:33 has to be
0:12:34 a matter of faith
0:12:36 we just have
0:12:36 to accept
0:12:38 that our culture
0:12:39 our law
0:12:40 our ability
0:12:41 to be technologists
0:12:42 ultimately rests
0:12:43 on values
0:12:45 that in a sense
0:12:45 we pull out
0:12:46 of our asses
0:12:47 or if you like
0:12:48 we have to be
0:12:49 a little bit mystical
0:12:50 in order to create
0:12:51 the ground layer
0:12:52 in order to be
0:12:52 then rational
0:12:53 as technologists
0:12:54 in a way
0:12:55 I wish it wasn’t so
0:12:56 it sort of sucks
0:12:57 but it’s just the truth
0:12:57 and the sooner
0:12:58 we accept that
0:12:59 the better off
0:13:00 we’ll be
0:13:00 and the more honest
0:13:01 we’ll be
0:13:02 and I’m okay with it
0:13:03 why?
0:13:05 because
0:13:06 if I’m designing
0:13:07 AI for AI’s sake
0:13:08 I’m talking nonsense
0:13:09 you know
0:13:10 like
0:13:11 right now
0:13:13 it’s very expensive
0:13:13 to compute AI
0:13:14 so what percentage
0:13:16 of that expense
0:13:17 goes into
0:13:18 creating the illusion
0:13:19 so that you can believe
0:13:20 it’s sort of
0:13:21 another person
0:13:22 when you use chat
0:13:23 how much electricity
0:13:24 is being spent
0:13:25 so that the way
0:13:26 it talks to you
0:13:27 feels like it’s a person
0:13:28 a lot
0:13:28 you know
0:13:29 and it’s a waste
0:13:30 like why are we doing that
0:13:31 why are we doing
0:13:32 why are we creating
0:13:34 a carbon footprint
0:13:36 for the benefit
0:13:38 of some non-entity
0:13:39 in order to fool humans
0:13:40 like it’s
0:13:40 it’s ridiculous
0:13:42 but we don’t see that
0:13:43 because we have this
0:13:45 religious imperative
0:13:46 in the tech
0:13:48 cultural world
0:13:49 to create
0:13:50 this new life
0:13:52 but it’s entirely
0:13:53 a matter of
0:13:54 our own perception
0:13:55 there’s no test
0:13:55 for it
0:13:56 other than the
0:13:56 Turing test
0:13:57 which is no test
0:13:57 at all
0:13:58 I mean
0:13:59 we still don’t even
0:14:01 have a real
0:14:01 definition
0:14:03 of consciousness
0:14:05 and I hear all
0:14:05 these discussions
0:14:07 about machine learning
0:14:09 and human intelligence
0:14:09 and the differences
0:14:11 and I continue
0:14:12 to have no idea
0:14:13 when something
0:14:14 stops being a
0:14:15 simulacrum of intelligence
0:14:16 and becomes the real thing
0:14:17 I still don’t quite know
0:14:18 when something can
0:14:19 reasonably be called
0:14:20 sentient
0:14:21 or intelligent
0:14:22 but maybe the question
0:14:22 doesn’t even matter
0:14:24 maybe it’s enough
0:14:25 for us to think it does
0:14:26 right
0:14:27 so the problem
0:14:28 in what you just
0:14:29 said is the word
0:14:29 still
0:14:32 like, this
0:14:35 lack of knowledge
0:14:36 is structural
0:14:37 you’re not going
0:14:38 to overcome it
0:14:39 you can pretend
0:14:40 you have
0:14:40 but you’re not going
0:14:41 to
0:14:42 this is genuinely
0:14:43 a matter of faith
0:14:43 you know
0:14:44 and
0:14:46 it’s a very
0:14:46 old discussion
0:14:47 when it comes
0:14:48 to God
0:14:49 but
0:14:50 it’s a new
0:14:50 discussion
0:14:51 when it comes
0:14:52 to each other
0:14:53 or to AIs
0:14:54 and
0:14:54 you know
0:14:55 like
0:14:56 faith is okay
0:14:56 we can live
0:14:57 with faith
0:14:57 we just have
0:14:58 to be honest
0:14:59 about it
0:14:59 and I think
0:15:01 being dishonest
0:15:01 and saying
0:15:02 oh
0:15:03 it’s not faith
0:15:04 I have this
0:15:04 rational proof
0:15:05 of something
0:15:07 that
0:15:08 dishonesty
0:15:08 is probably
0:15:09 not good
0:15:10 especially
0:15:10 if you’re
0:15:10 trying to do
0:15:11 science or technology
0:15:15 maybe we just
0:15:17 maybe we just
0:15:18 hold on
0:15:19 maybe
0:15:20 I’m going to
0:15:21 say this
0:15:23 we probably
0:15:23 just have to
0:15:24 hold on to
0:15:24 some notion
0:15:25 that there’s
0:15:26 something
0:15:26 fundamentally
0:15:27 special
0:15:28 about human
0:15:29 consciousness
0:15:30 and that even
0:15:30 if on some
0:15:31 purely empirical
0:15:31 level
0:15:32 that’s not
0:15:32 even true
0:15:33 maybe believing
0:15:34 that it is
0:15:34 is essential
0:15:36 to our
0:15:36 survival
0:15:37 I don’t
0:15:37 think you
0:15:38 can rationally
0:15:40 proceed
0:15:41 as an
0:15:41 as an
0:15:42 acting
0:15:42 technologist
0:15:44 without
0:15:45 an
0:15:46 irrational
0:15:47 belief
0:15:48 that people
0:15:49 are special
0:15:50 because once again
0:15:50 then you have
0:15:51 no recipient
0:15:52 and if you
0:15:53 say well
0:15:53 there’s going
0:15:54 to be
0:15:54 no belief
0:15:55 all the way
0:15:55 to the bottom
0:15:56 it’s just
0:15:56 going to be
0:15:57 rationality
0:15:57 forever
0:15:58 I mean
0:15:59 it doesn’t
0:15:59 work
0:16:00 rationality
0:16:01 never creates
0:16:01 a total
0:16:02 enclosed
0:16:02 system
0:16:04 we kind
0:16:05 of float
0:16:05 in a sea
0:16:05 of mystery
0:16:06 and we
0:16:06 have like
0:16:07 this belief
0:16:07 that lets
0:16:08 us have
0:16:08 a footing
0:16:09 and it’s
0:16:10 our job
0:16:11 to acknowledge
0:16:11 that even
0:16:12 if we’re
0:16:12 uncomfortable
0:16:13 with it
0:16:15 can I try
0:16:15 another angle
0:16:16 on you
0:16:16 yeah
0:16:17 do you know
0:16:17 my
0:16:17 okay
0:16:18 so there’s
0:16:18 another
0:16:19 argument
0:16:19 about the
0:16:20 turing test
0:16:20 right
0:16:21 turing test
0:16:22 you have a
0:16:23 person and a
0:16:23 computer
0:16:23 they’re each
0:16:24 trying to fool
0:16:24 a judge
0:16:25 and at the
0:16:26 moment the
0:16:26 judge can’t
0:16:26 tell them
0:16:27 apart
0:16:27 you say
0:16:28 well we
0:16:28 might as
0:16:29 well call
0:16:30 the computer
0:16:31 human because
0:16:31 what other
0:16:31 tests can
0:16:32 there be
0:16:32 that’s the
0:16:32 best we’ll
0:16:33 get
0:16:33 okay
0:16:35 so the
0:16:36 problem with
0:16:36 the test
0:16:37 is that it
0:16:38 measures whether
0:16:38 there’s a
0:16:38 differential
0:16:39 but it
0:16:40 doesn’t tell
0:16:40 you whether
0:16:41 the computer
0:16:42 got smarter
0:16:42 or the
0:16:42 human got
0:16:43 stupider
0:16:44 it doesn’t
0:16:45 tell you if
0:16:45 the computer
0:16:46 became more
0:16:47 human or if
0:16:47 the human
0:16:48 became less
0:16:48 human in
0:16:49 any sense
0:16:49 whatever that
0:16:50 might be
0:16:51 so there’s
0:16:52 two humans
0:16:52 the contestant
0:16:53 and the judge
0:16:53 and one
0:16:54 computer
0:16:54 therefore
0:16:56 and this is
0:16:56 meant to be
0:16:57 funny but it’s
0:16:57 also kind of
0:16:57 real
0:16:58 there’s a
0:16:58 two-thirds
0:16:59 chance that
0:16:59 it was a
0:17:00 human that
0:17:00 got stupider
0:17:01 rather than
0:17:01 a computer
0:17:01 that got
0:17:02 smarter
0:17:04 and I
0:17:04 see that
0:17:05 borne out
0:17:05 like when I
0:17:06 look at
0:17:06 social media
0:17:07 and I see
0:17:08 people interacting
0:17:08 with the AI
0:17:09 algorithms that
0:17:10 are supposed to
0:17:10 guide their
0:17:11 attention
0:17:12 I see them
0:17:13 getting stupider
0:17:13 two-thirds
0:17:14 of the time
0:17:14 but then you
0:17:15 know sometimes
0:17:16 really good
0:17:16 stuff happens
0:17:17 so I think
0:17:18 this general
0:17:19 spread of most
0:17:20 of the time
0:17:20 things get
0:17:21 worse but then
0:17:21 there’s some
0:17:22 stuff that’s
0:17:22 really cool
0:17:24 tends to be
0:17:24 true when you
0:17:25 believe in AI
0:17:26 and so
0:17:27 I would
0:17:28 say don’t
0:17:28 believe in
0:17:28 it and
0:17:30 some people
0:17:30 are still
0:17:31 getting
0:17:31 stupider
0:17:31 because that’s
0:17:32 how we are
0:17:33 but I think
0:17:33 we can get to
0:17:34 the point where
0:17:34 the majority
0:17:35 gets better
0:17:36 instead of
0:17:37 stupider but
0:17:37 right now I
0:17:37 think we’re
0:17:38 at two-thirds
0:17:39 get stupider
0:17:40 yeah that
0:17:41 math checks out
0:17:41 to me
0:17:42 great I
0:17:43 think that’s
0:17:43 a rigorous
0:17:44 argument that’s
0:17:44 what you call
0:17:45 a rigorous
0:17:46 quantitative
0:17:47 theoretically and
0:17:48 empirically supported
0:17:49 argument right
0:17:49 there
0:17:50 so do you
0:17:51 think all
0:17:53 the anxieties
0:17:54 including from
0:17:55 serious people
0:17:56 in in the
0:17:57 world of AI
0:17:58 all the worries
0:18:00 about human
0:18:01 extinction and
0:18:01 mitigating the
0:18:02 risks thereof
0:18:04 does that is
0:18:04 that religious
0:18:06 hysteria to
0:18:06 you or does
0:18:07 that feel
0:18:09 what drives me
0:18:09 crazy about
0:18:10 this I this
0:18:11 is my world
0:18:11 you know so I
0:18:12 talk to the
0:18:12 people who
0:18:13 believe that
0:18:14 stuff all the
0:18:15 time and
0:18:16 increasingly a
0:18:16 lot of them
0:18:17 believe that it
0:18:17 would be good to
0:18:18 wipe out people
0:18:19 and that the AI
0:18:19 future would be a
0:18:20 better one and
0:18:21 that we’re
0:18:22 a disposable
0:18:24 temporary container
0:18:25 for the birth of
0:18:26 AI I hear that
0:18:27 opinion quite a lot
0:18:27 that’s a real
0:18:28 opinion held by
0:18:29 real people
0:18:32 many many I
0:18:33 mean like the
0:18:34 other day I was
0:18:35 at a lunch in
0:18:36 Palo Alto and
0:18:36 there were some
0:18:37 young AI
0:18:38 scientists there
0:18:39 who were saying
0:18:41 that they would
0:18:42 never have a
0:18:43 bio baby because
0:18:43 as soon as you
0:18:44 have a bio baby
0:18:44 you get the
0:18:46 mind virus of
0:18:48 the bio world
0:18:48 and that when
0:18:49 you have the
0:18:50 bio mind virus
0:18:50 you become
0:18:51 committed to
0:18:52 your human baby
0:18:52 but it’s much
0:18:53 more important to
0:18:54 be committed to
0:18:54 the AI of the
0:18:56 future and so
0:18:57 to have human
0:18:58 babies is
0:18:58 fundamentally
0:18:59 unethical
0:19:01 now okay in
0:19:01 this particular
0:19:03 case this was
0:19:03 a young man
0:19:04 with a female
0:19:05 partner who
0:19:06 wanted a kid
0:19:06 and what I’m
0:19:07 thinking is this
0:19:07 is just another
0:19:08 variation of the
0:19:09 very very old
0:19:10 story of young
0:19:11 men attempting to
0:19:12 put off the baby
0:19:13 thing with their
0:19:14 sexual partner as
0:19:15 long as possible
0:19:16 because I’ve been
0:19:16 there and many of
0:19:16 us have been
0:19:17 there so in a
0:19:18 way I think it’s
0:19:19 not anything new
0:19:19 and it’s just the
0:19:20 old thing but
0:19:21 it’s a very
0:19:23 common attitude
0:19:25 not the dominant
0:19:25 one I would say
0:19:26 the dominant one
0:19:27 is that the
0:19:28 super AI will
0:19:29 turn into this
0:19:30 god thing that’ll
0:19:31 save us and
0:19:32 will either upload
0:19:33 us to be immortal
0:19:34 or solve all our
0:19:34 problems at the
0:19:35 very least or
0:19:36 something create
0:19:37 super abundance at
0:19:38 the very very very
0:19:41 least and I
0:19:45 I have to say
0:19:45 there’s a bit of
0:19:46 an inverse
0:19:47 proportion here
0:19:48 between the people
0:19:49 who directly work
0:19:50 in making AI
0:19:51 systems and then
0:19:51 the people who
0:19:52 are adjacent to
0:19:54 them who have
0:19:54 these various
0:19:57 beliefs my own
0:19:58 opinion is that
0:19:59 the people
0:20:00 how can I put
0:20:02 this the people
0:20:03 who are able to
0:20:04 be skeptical and
0:20:05 a little bored and
0:20:06 dismissive of the
0:20:07 technology they’re
0:20:08 working on tend to
0:20:09 improve it more than
0:20:09 the people who kind of
0:20:10 worship it too much
0:20:13 like I’ve seen that
0:20:14 a lot in a lot of
0:20:15 different things not
0:20:16 not just computer
0:20:17 science and I think
0:20:18 I think you have to
0:20:19 have a kind of
0:20:20 like you can’t drink
0:20:21 your own whiskey too
0:20:22 much when you’re a
0:20:24 technologist you have
0:20:25 to kind of be ready
0:20:26 to say oh maybe
0:20:27 this thing’s a bit
0:20:28 overhyped I’m not
0:20:29 going to tell that
0:20:30 to the people buying
0:20:31 shares in my company
0:20:31 but you know what
0:20:32 like just between us
0:20:35 you know and but
0:20:35 that attitude is
0:20:37 exactly the one that
0:20:38 puts you over the
0:20:38 threshold to then
0:20:39 start improving it
0:20:40 more and that’s one
0:20:41 of the dangers of
0:20:42 this kind of
0:20:43 mythologizing of it
0:20:44 oh it’s about to
0:20:45 become this god
0:20:45 that’ll take over
0:20:46 everything but
0:20:48 that what follows
0:20:49 from that is this
0:20:50 very curious thing
0:20:51 which is that the
0:20:52 way of thinking
0:20:53 about it where it’s
0:20:54 about to turn into
0:20:55 this god that’ll
0:20:56 run everything and
0:20:57 either kill us all
0:20:57 or fix all our
0:20:58 problems that
0:21:00 attitude in itself
0:21:02 makes you not
0:21:04 only a little bit
0:21:05 of a lesser
0:21:06 improver of the
0:21:07 technology by any
0:21:08 like real measurable
0:21:10 metric but it
0:21:11 also makes you a
0:21:12 bad steward of it
0:21:15 part of part of
0:21:15 what makes this
0:21:16 very confusing
0:21:17 especially to you
0:21:19 know non-technical
0:21:20 normie outsiders
0:21:21 like me and like
0:21:22 most people frankly
0:21:24 is that it is it’s
0:21:25 just moving and
0:21:26 changing and evolving
0:21:27 really quickly and
0:21:28 the terms and
0:21:29 concepts are very
0:21:30 slippery if you’re
0:21:32 not deep in it and
0:21:32 you know you’re
0:21:33 talking about super
0:21:34 super AI and godlike
0:21:36 powers one example
0:21:37 is and you’ll bear
0:21:38 with me for a second
0:21:39 so I can bring people
0:21:41 along we have this
0:21:42 dichotomy between
0:21:44 AI versus AGI
0:21:45 artificial intelligence
0:21:46 versus artificial
0:21:47 general intelligence and
0:21:48 my understanding is
0:21:50 that AI is a term for
0:21:51 the general set of
0:21:52 tools that people
0:21:53 are building chat
0:21:54 bots and that sort
0:21:54 of thing and that
0:21:56 AGI is still sort of
0:21:57 a theoretical thing
0:21:58 where this tech is
0:22:00 basically as good at
0:22:01 everything as a
0:22:03 normal regular person
0:22:03 is and it can also
0:22:04 learn and grow and
0:22:05 apply that knowledge
0:22:07 just like we can and
0:22:08 we’ve got AI now
0:22:09 clearly but we don’t
0:22:11 have AGI yet and if
0:22:13 we get it and there
0:22:13 are people who think
0:22:14 we’re maybe closer
0:22:15 than we thought
0:22:16 recently that it’ll be
0:22:18 a real Rubicon
0:22:20 crossing moment for
0:22:21 us. What’s your
0:22:22 feeling on that? Do
0:22:23 you think AGI is
0:22:24 even possible in the
0:22:25 way most people…
0:22:26 Have you not
0:22:26 listened to a word
0:22:28 I said? That’s a
0:22:28 religious question
0:22:30 that’s like asking
0:22:30 if I think the
0:22:31 rapture is coming
0:22:33 soon I mean it’s
0:22:33 yeah but you can
0:22:34 have an opinion
0:22:34 about religious
0:22:35 questions I guess
0:22:38 that’s true I mean
0:22:40 there are those who
0:22:41 say we have AGI
0:22:42 already and their
0:22:43 opinion is as
0:22:44 legitimate as
0:22:45 anybody else’s I
0:22:46 mean I just think
0:22:47 the moment you’ve
0:22:48 put the question
0:22:48 that way you’ve
0:22:49 already confused
0:22:50 yourself and made
0:22:50 yourself kind of
0:22:51 useless in talking
0:22:52 about what to do
0:22:53 with the technology
0:22:54 so I have to reject
0:22:55 your question as
0:22:56 being like poorly
0:22:56 framed and
0:22:57 ill-informed I’m
0:22:59 sorry I was hoping
0:22:59 to get through this
0:23:00 fucking conversation
0:23:01 without you having
0:23:02 to beat back at
0:23:03 one of my ill-informed
0:23:04 questions and I
0:23:05 did make it I made
0:23:06 it almost 20 minutes
0:23:07 in yeah good luck
0:23:08 with that my friend
0:23:12 all right sir
0:23:13 it was a valiant
0:23:14 effort you win that
0:23:17 you really I mean
0:23:19 look I mean this
0:23:20 is silly this is
0:23:21 like I’m also
0:23:21 trying to speak for
0:23:22 concerns that I
0:23:23 know a lot of
0:23:24 people have
0:23:25 because we broadcast
0:23:26 that way of thinking
0:23:27 about it so yeah
0:23:31 look there’s a
0:23:31 thing all right
0:23:33 look I’m I
0:23:35 benefit from people
0:23:36 believing in AI
0:23:37 professionally and
0:23:39 there’s a way that
0:23:39 the whole economy
0:23:40 runs on attention
0:23:42 getting and in a
0:23:44 funny way the way
0:23:45 digital attention
0:23:46 economy works
0:23:51 is it rewards
0:23:52 anxieties and
0:23:54 terror as much
0:23:54 or maybe a
0:23:56 little more than
0:23:59 optimism or you
0:24:01 know goodwill and
0:24:02 so you have this
0:24:03 weird situation where
0:24:05 somebody can play
0:24:06 the villain on
0:24:06 social media and
0:24:08 do very well and
0:24:09 similar things
0:24:10 happening in the
0:24:11 rhetoric of computer
0:24:12 science so when we
0:24:13 say oh our stuff
0:24:14 might be about to
0:24:15 come alive and
0:24:16 it’s about to get
0:24:17 smarter than you
0:24:18 it generates this
0:24:19 little anxiety in
0:24:20 people and then that
0:24:21 actually benefits us
0:24:22 because it keeps it
0:24:24 keeps the attention
0:24:27 on us and so
0:24:28 there’s a funny way
0:24:29 that we’re
0:24:30 incentivized to put
0:24:31 things in the most
0:24:33 alarming way what I
0:24:34 what I will say is
0:24:36 that I like the
0:24:37 idea of models being
0:24:38 useful so I think
0:24:40 of the models that
0:24:41 we’re building as
0:24:42 being wonderful
0:24:43 mashup models so
0:24:44 like for instance
0:24:46 I love being able
0:24:47 to use large models
0:24:48 to go through the
0:24:48 scientific literature
0:24:51 and find correlations
0:24:51 between different
0:24:52 papers that might not
0:24:53 use the same
0:24:54 terminology that would
0:24:54 have been a pain in
0:24:55 the butt to detect
0:24:57 before that’s great
0:24:58 if you present that
0:24:59 with a chat
0:25:00 interface it seems
0:25:01 like a smart
0:25:02 scientist if people
0:25:03 like that I mean I
0:25:04 guess whatever it’s
0:25:05 not my job to judge
0:25:06 everybody but the
0:25:08 thing is you don’t
0:25:09 need to present it
0:25:09 that way you’d
0:25:10 still get the
0:25:11 same value but
0:25:11 that’s the way we
0:25:13 do it. We add
0:25:14 in personhood
0:25:16 fooling to what
0:25:17 would otherwise be
0:25:19 really in a way
0:25:20 more clear
0:25:21 freestanding value I
0:25:23 think but we like
0:25:24 to present the
0:25:24 fantasy
0:26:16 All right, let me try to
0:26:17 pull away a little bit
0:26:18 from religious questions.
0:26:22 Okay, so look, I’m
0:26:23 not worried about The
0:26:23 Matrix and The
0:26:25 Terminator. I am
0:26:27 worried about a much
0:26:28 more boring and
0:26:30 unsexy scenario, but I
0:26:31 think an equally bad
0:26:34 possibility is that these
0:26:37 emergent technologies will
0:26:39 accelerate a trend that
0:26:41 I think digital tech in
0:26:42 general and social media
0:26:43 in particular has already
0:26:47 started, which is to pull
0:26:49 us away more and more
0:26:50 from the physical world
0:26:52 and encourage us to
0:26:54 perform versions of
0:26:55 ourselves in the virtual
0:26:56 world. And because of how
0:26:58 it’s designed, it has this
0:27:00 habit of reducing other
0:27:02 people to crude avatars,
0:27:03 which is why it’s so easy
0:27:05 to be cruel and vicious
0:27:07 online, and why people who
0:27:08 are on social media too
0:27:10 much start to become
0:27:12 mutually unintelligible
0:27:13 to each other. And I
0:27:16 worry about AI super-
0:27:17 charging some of this
0:27:18 stuff. I mean, do you even
0:27:19 accept that framing? Am I
0:27:20 right to be thinking of AI
0:27:23 as a potential accelerant of
0:27:26 these trends?
0:27:29 Yeah, I mean, I think you are correct.
0:27:36 so it’s arguable and
0:27:37 actually consistent with the
0:27:38 way the community speaks
0:27:41 internally to say that the
0:27:43 algorithms that have been
0:27:44 driving social media up to
0:27:49 now are a form of ai if you
0:27:52 if you unlike me wish to use
0:27:55 the term ai and what the
0:27:59 algorithms do is they
0:28:01 attempt to predict human
0:28:03 behavior based on the
0:28:05 stimulus given to the
0:28:07 human and by putting that
0:28:08 in an adaptive loop they
0:28:11 hope to drive attention and
0:28:13 sort of an obsessive
0:28:15 attachment to a platform
0:28:18 because these algorithms
0:28:21 can’t tell whether
0:28:23 something’s being driven
0:28:25 because of things that we
0:28:25 might think are positive
0:28:26 or things that we might
0:28:28 think are negative so i
0:28:29 call this the life of the
0:28:30 parity this notion
0:28:32 that you can’t tell like
0:28:33 if a bit is one or zero
0:28:34 doesn’t matter because it’s
0:28:36 an arbitrary designation in
0:28:38 a digital system so if
0:28:39 somebody’s getting
0:28:40 attention by being a dick
0:28:42 that works just as well as
0:28:43 if they’re offering
0:28:44 life-saving information or
0:28:45 helping people improve
0:28:46 themselves but then the
0:28:47 peaks that are good are
0:28:48 really good and i don’t
0:28:49 want to deny that i love
0:28:50 dance culture on tiktok
0:28:53 science bloggers on
0:28:54 youtube have achieved a
0:28:55 level that’s like
0:28:57 astonishingly good and so
0:28:58 on like there’s all these
0:29:00 really really positive good
0:29:01 spots but then overall
0:29:03 there’s this loss of truth
0:29:06 and political paranoia and
0:29:09 unnecessary confrontation
0:29:11 between arbitrarily created
0:29:13 cultural groups and so on
0:29:15 that’s really doing damage
0:29:18 um and as is often pointed
0:29:20 out especially to young
0:29:21 girls and so on and so
0:29:22 forth uh not not great
0:29:25 and so uh yeah could
0:29:27 better ai algorithms make
0:29:27 that worse
0:29:31 plausibly i mean it’s
0:29:32 possible that it’s already
0:29:34 bottomed out that
0:29:37 the badness just
0:29:37 comes from the overall
0:29:38 structure and if the
0:29:39 algorithms themselves get
0:29:41 more sophisticated it won’t
0:29:42 really push it that much
0:29:43 further but i think
0:29:45 actually it kind of can i’m
0:29:46 worried about it
0:29:48 because we so much want to
0:29:49 pass the turing test and
0:29:50 make people think our
0:29:51 programs are people
0:29:55 we’re moving to this um
0:29:56 so-called agentic era where
0:29:59 it’s not just that you have a
0:30:00 chat interface with with the
0:30:01 thing but the chat interface
0:30:04 gets to know you for years at
0:30:06 a time and gets a so-called
0:30:08 personality and all
0:30:09 this and then the idea is that
0:30:10 people then fall in love with
0:30:11 these and we’re already
0:30:13 seeing examples of this
0:30:15 here and there um and this
0:30:16 notion of a whole generation
0:30:17 of young people falling in
0:30:20 love with fake avatars i mean
0:30:24 people talk about ai as
0:30:25 if it’s just like this yeast in
0:30:26 the air it’s like oh ai will
0:30:27 appear and people will fall in
0:30:29 love with ai avatars but it’s
0:30:30 not ai is always run by
0:30:32 companies so like they’re going
0:30:33 to be falling in love with
0:30:35 something from google or meta or
0:30:39 whatever and like that notion
0:30:41 that your love life becomes
0:30:44 owned by some company or even
0:30:45 worse tiktok or a chinese thing
0:30:49 eek eek eek eek i think that’ll
0:30:51 create a new centralization
0:30:56 or xai eek eek eek eek i’ll
0:30:57 add some more eeks to that and so
0:30:59 this centralization of power and
0:31:02 influence could be even worse and
0:31:04 that might be a breaking point
0:31:06 event and so that kind of thing
0:31:07 ending civilization or ending up
0:31:09 killing all the people does seem
0:31:11 plausible to me and some of my
0:31:12 colleagues would interpret that as
0:31:15 ai becoming alive and killing
0:31:16 everybody but i would just
0:31:17 interpret it as people
0:31:20 making terrible choices it all
0:31:21 amounts to the same thing in the
0:31:23 end anyway it does at the end of
0:31:25 the day in terms of actual events
0:31:27 the same so jaron from your point
0:31:29 of view is it even possible to have
0:31:33 good algorithms nudging us around
0:31:35 online or are all algorithms bad yes
0:31:37 of course it is okay what does that
0:31:39 look like of course it
0:31:41 is yes give me the good
0:31:42 stuff here give me the good
0:31:44 algorithms well i mean look in the
0:31:49 scientific community we do it
0:31:51 like okay here’s an example um
0:31:55 deep research from open ai is a great
0:31:57 tool it does a literature search on some
0:31:59 topic and assembles a little report
0:32:03 it has unnecessary chatbot elements
0:32:05 to try to make it seem like there’s
0:32:07 somebody there i view that as a waste
0:32:10 of time and a waste of energy and i
0:32:11 would be happy without it but but
0:32:13 whatever okay it’s it’s not terrible
0:32:16 though what it does is it saves
0:32:18 scientists a ton of time it makes a lot
0:32:20 of sense i get a lot out of it it’s
0:32:21 great and now there’s some new
0:32:25 competitors to it great that stuff’s
0:32:27 fabulous i really really really it’s
0:32:28 good because the scientific literature
0:32:30 has become impossible to use without
0:32:33 it i do a lot of work that’s pretty
0:32:35 mathematical and the problem is that
0:32:37 every time somebody comes across
0:32:38 similar math they don’t realize
0:32:39 somebody else has done it so they come
0:32:41 up with their own terms for things and
0:32:43 then you have the same ideas or
0:32:45 similar ones with different terms and
0:32:46 all these scattered papers in totally
0:32:47 different communities at different
0:32:48 conferences and different journals
0:32:53 yeah but with a tool like this you
0:32:55 can capture all that and get it into
0:32:59 place it’s like what ai is is
0:33:01 it’s a way of improving collaboration
0:33:03 between people it’s a way of gathering
0:33:06 what people have done in a more unified
0:33:09 way that can notice multiple hops of
0:33:12 different terms and similar structures it’s
0:33:15 it’s a better way of using statistics to
0:33:17 connect what we’ve all done together to
0:33:21 get more use out of it it’s great i love
0:33:25 it and the amount of avatar illusion
0:33:27 nonsense is kept to a minimum because
0:33:29 our job is not to fall in love with
0:33:31 our fake research assistant our
0:33:35 job is to make progress efficiently on
0:33:37 whatever we’re doing right and so
0:33:39 that’s great what is wrong with that
0:33:41 nothing it’s fabulous so yeah there’s
0:33:43 wonderful uses if i didn’t think those
0:33:46 things existed i’d quit what i do
0:33:49 professionally in the industry of course
0:33:51 there’s wonderful uses and i think we
0:33:52 need those things i think they really
0:33:53 matter
0:33:56 i guess what i’m hovering around is the
0:33:58 business model right i mean uh the
0:34:00 advertising model was sort of the
0:34:02 original sin of the internet yeah yeah i
0:34:02 think it is
0:34:06 um how do we not fuck this up how do we
0:34:07 not repeat those mistakes what’s a better
0:34:09 model i mean you talk a lot about data
0:34:11 dignity so you’re saying we can say fuck
0:34:13 on this podcast oh you can say whatever
0:34:15 you want if i had known that there would
0:34:17 have been a lot of fuckery up to now in my
0:34:18 speech it’s not too late anyway it’s not
0:34:23 too late we got plenty of time okay but no
0:34:25 but seriously what how do we get it right
0:34:26 this time how do we not make the same
0:34:29 mistakes what is a better model yeah well
0:34:32 um this is actually more important this
0:34:34 question is the central question of our
0:34:36 time in my view like the central
0:34:39 question of our time isn’t um scaling ai being able
0:34:42 to scale ai more is an important
0:34:45 question and i get that and most people
0:34:47 are focused on that and dealing with the
0:34:49 climate is an important question but in
0:34:51 terms of our own survival coming up with
0:34:53 a business model for civilization that
0:34:56 isn’t self-destructive is in a way our
0:34:59 most primary problem and challenge right
0:35:01 now because of the way we’re doing it
0:35:04 we kind of went through this thing in
0:35:06 the earlier phase of the internet like
0:35:08 information should be free and then the
0:35:09 only business model that’s left is paying
0:35:12 for influence uh and so then all the
0:35:16 platforms look free or very cheap to the
0:35:17 user but then actually the real customer
0:35:19 is someone trying to influence the user and you end
0:35:23 up with what’s essentially a stealthy form
0:35:26 of um manipulation being the central
0:35:30 project of civilization and we can only
0:35:31 get away with that for so long at some
0:35:33 point that bites us and we become too
0:35:36 crazy to survive so we must change the
0:35:38 business model of civilization and so
0:35:41 exactly how to get from here to there is
0:35:44 a bit of a mystery but i continue to work
0:35:46 on it like i think we should incentivize
0:35:48 people to put great data into the ai
0:35:51 programs of the future uh and i’d like
0:35:53 people to be paid for data used in
0:35:55 ai models and also to be celebrated and
0:35:56 made visible and known because i think
0:35:58 it’s just a big collaboration and our
0:36:01 collaborators should be valued how easy
0:36:02 would it be to do that do you think we
0:36:05 can or will there’s still some unsolved
0:36:07 technical questions about how to do it
0:36:09 i’m very very actively working on those
0:36:10 and i believe it’s doable and there’s a
0:36:12 whole you know research community devoted
0:36:14 to exactly that distributed around the
0:36:16 world and i think it’ll make better
0:36:18 models i mean better data makes better
0:36:20 models and there’s a lot of people who
0:36:21 dispute that and they say no it’s just
0:36:22 better algorithms and we already have
0:36:25 enough data for the rest of all time but
0:36:28 i disagree with that i don’t
0:36:29 think we’re the smartest people who will
0:36:31 ever live and there might be new creative
0:36:33 things that happen in the future that we
0:36:35 don’t foresee and the models we’ve
0:36:37 currently built might not extend into
0:36:39 those things and having some open system
0:36:41 where people can contribute to new models
0:36:44 in new ways is a more expansive and
0:36:47 creative and you know open-minded
0:36:51 and just you know kind of spiritually
0:36:53 optimistic way of thinking about the deep
0:36:53 future
0:37:15 today explained here with eric levitz senior
0:37:17 correspondent at vox.com to talk about the
0:37:21 2024 election that can’t be right eric i thought
0:37:22 we were done with that i feel like i’m pacino
0:37:24 in godfather three just when i thought i was out
0:37:28 they pull me back in why are we talking about
0:37:30 the 2024 election again the reason why we’re
0:37:33 still looking back is that it takes a while
0:37:36 after an election to get all of the most high
0:37:40 quality data on what exactly happened so the
0:37:42 full picture is starting to just come into view
0:37:45 now and you wrote a piece about the full
0:37:49 picture for vox recently and it did bonkers business
0:37:53 on the internet what did it say what struck a
0:37:56 chord yeah so this was my interview with
0:38:00 david shor of blue rose research he’s one of
0:38:04 the biggest sort of democratic data gurus in
0:38:08 the party and basically the big picture headline
0:38:12 takeaways are on today explained you’ll have to go listen
0:38:15 to them there find the show wherever you listen to shows bro
0:38:35 i think i’m a humanist like you in the end and what i want fundamentally is just the
0:38:39 elevation of human agency not the diminishment of it and part of what that means to borrow your
0:38:45 language is creating more creative classes and less dependent classes yep uh you’ve convinced me
0:38:50 that that’s at least possible i don’t know if it’s likely but i hope it is and maybe
0:38:55 some kind of data dignity type model is the most promising thing i’ve heard
0:39:06 no i sort of feel like the human project our our survival is simultaneously both certain and
0:39:11 unlikely if you know what i mean like i i feel like if we just follow the immediate trend lines
0:39:13 and what we see we’re probably gonna
0:39:16 fuck ourselves up to use the word i’m
0:39:22 encouraged to say here there you go but i also just have this feeling we’ve made it through a lot of
0:39:26 stuff in the past and i just have this feeling we’re gonna rise to the occasion and figure this
0:39:32 one out really i don’t know exactly how we will but i think we will i don’t know what the
0:39:40 alternative is the alternative is in 200 million years there’ll be smart cephalopods to take over
0:39:45 the planet uh and maybe they’ll do that i mean that’s the alternative but i think we can do it i
0:39:56 really do we just have to be a little less full of ourselves and not believe we’re
0:40:02 making a new god no more golden calves that’s really our problem still yeah good luck with
0:40:09 that i mean i’m constantly thinking more about the social and political and cultural
0:40:14 dynamics because that’s just my background um and you know i guess speaking of dependent
0:40:23 classes a very common concern is this fear that ai is going to create a lot of social instability by
0:40:29 taking all of our jobs it’s a widespread fear it’s scary as hell and it feels like
0:40:35 the latest iteration of a very old story about new technologies like automation displacing workers
0:40:39 i mean how do you speak to these sorts of fears when you hear them because surely you hear them a lot
0:40:41 yeah and they concern me i mean
0:40:54 look um there’s not a perfect solution to that problem i’ll give you an example of one
0:41:00 that i find tricky to think about uh my mom died in a car accident and i’ve always believed from when
0:41:05 i was very young that cars should drive themselves that it was manifestly obvious that we could create
0:41:13 a digital system that would save many many lives so we have tens of thousands of people killed by cars
0:41:16 every year still in the us and i think it’s over a million worldwide or something like that i mean
0:41:24 it’s crazy and so um there are a lot of reasons for it and a self-driving car is never
0:41:28 going to be perfect because it’s not a task that can be done perfectly there’ll be circumstances where
0:41:34 there’s no optimal solution you know in the instant but overall we ought to be able to save a lot of
0:41:42 lives so i’m really supportive of that project at the same time an incredibly large number of blue collar
0:41:49 people around the world get by behind a wheel whether it’s truck drivers or rideshare drivers these days
0:42:00 or etc you know and so like how do you reconcile those two things uh and i don’t think there’s any way to do it perfectly i think
0:42:10 there’s two things that should be true one is that we need to find an intermediate form of
0:42:15 a social safety net that isn’t all the way to universal basic income because the universal basic
0:42:21 income idea gives people this idea that they’re not worth anything and they’re just being supported by
0:42:27 the tech titans as a hobby and it doesn’t feel very secure or very dignified or stable there’s like
0:42:33 just a lot of reasons why i’m skeptical of that uh in the long term and i don’t think people like it
0:42:40 or want it but on the other hand um just telling people well you’re thrown out into the mix and in
0:42:45 the u.s you have no health insurance and just figure something out that’s also just too cruel and not viable
0:42:51 if it’s a lot of people at once so we have to find our way to a very unfashionable intermediate
0:43:00 sense of social safety net to help people through transitions and right now the accounting for
0:43:04 that is very very difficult to sort out and especially in the united states there’s a deep
0:43:13 hostility to it and i just don’t see logically any other way but then beyond that um i do think new roles
0:43:18 will appear like the story that well new things will happen and new things will be possible
0:43:25 i do believe that like there’s a kind of a vague and uncomfortable sense that surely new things will
0:43:31 come along and i i actually think that’s true i don’t feel comfortable making that claim for all
0:43:35 those drivers like we’re not going to retrain them to be programmers because low-level programming
0:43:43 itself is also getting automated right i don’t know exactly how that’ll work um i have thought a
0:43:49 great deal about it but that’s where i am for the moment i believe that there could be all kinds of
0:43:57 things we don’t foresee and that within that explosion of new sectors of creativity there will be enough new
0:44:04 needs for people to do things if only to train ai’s that it’ll keep up with human needs and support
0:44:08 some kind of a world of economics that’s more distributed than just a central authority
0:44:13 distributing income to everybody which i think would be corrupted yeah yeah i agree with that
0:44:20 do you think we’re being sufficiently intentional about the development of this technology do you
0:44:27 think we’re asking the right questions as a society now well i mean the questions are dominated by
0:44:33 a certain internal technical culture and the mainstream of technical culture is very
0:44:39 obsessed with ai as a new god or some kind of new entity and so i think that does make the
0:44:47 whole conversation go askew and that said if you go to ai conferences
0:44:54 there’s usually more talk where somebody is saying we’re going to talk about how to talk the
0:45:01 ai into not killing us you know and that kind of conversation which to me is not well grounded and
0:45:09 i think it kind of loses itself in loops but that kind of conversation can take up as much time and space
0:45:14 as like a serious conversation of like how can we optimize this algorithm or how can we you know like the
0:45:20 the actual work that we should be doing as technologists um i was at one conference it was
0:45:25 kind of funny i forget where there were these different factions there’s the artificial general
0:45:30 intelligence and there’s the super intelligence and there’s all these different people who have
0:45:34 slightly different ideas about how awesome ai will be and how it might kill us all in different ways
0:45:42 and they were so conflicted that they got into a fist fight um a not very competent fist fight it must be
0:45:48 said i was shocked it was kind of funny anyway i sort of wish i had a film of that that was really funny but
0:45:54 i don’t know i mean i love my world i love the people i do kind of make fun of us a little bit
0:46:00 sometimes because i just think it’s important too you know okay so let’s just set aside for
0:46:06 the moment the more common fears about ai the alignment problem and taking our jobs and
0:46:12 flattening human creativity all that stuff is there a fear of
0:46:18 yours something you think we could get terribly wrong that’s not currently something we hear much about
0:46:26 uh god i don’t even know where to start yeah there’s like a lot
0:46:38 i mean one of the things i worry about is we’re gradually moving education into an ai model
0:46:46 and the motivations for that are often very good because in a lot of places on earth it’s just been
0:46:50 impossible to come up with an economics of supporting and training enough human teachers
0:46:58 and a lot of cultural issues in changing societies make it very very hard to make schools that work
0:47:06 and so on like there’s a lot of issues and in theory a sort of self-adapting ai tutor
0:47:13 could solve a lot of problems at a low cost in a lot of situations but then the issue with that is
0:47:19 once again creativity how do you keep people who learn in a system like that
0:47:24 how do you train them so that they’re able to step outside of what the system was trained on
0:47:29 you know like there’s this funny way that you’re always retreading and recombining the training data
0:47:35 in any ai system and you can address that to a degree with constant fresh input and this and that but
0:47:40 i am a little worried about people being trained in a closed system that makes them a little less than
0:47:46 they might otherwise have been and have a little less faith in themselves i’m a little concerned about
0:47:53 sort of defining the nature of life and education downward you know and the thing is the history
0:47:59 of education is filled with doing exactly that thing like education has been filled with overly
0:48:08 reductive ideas or overly idealistic and and um biased ideas of different kinds i mean so it’s not like
0:48:14 we’re entering this perfect system messing it up we’re entering a messed up system and trying to figure out
0:48:22 how to not perpetuate its messed-up-ness i think the case of education is um challenging really
0:48:28 challenging i think can i just ask just because i’m curious what you would say i have a five-year-old
0:48:37 son and he’s already started asking questions about you know like what kind of skills should he learn
0:48:42 what should he aspire to do in the world oh man that’s a hard one right and i don’t know
0:48:49 what to tell him because i have no idea what the world is going to look like by the time he’s 18 or 20 or 15
0:48:54 hell you know what would you tell him if uncle jaron came over oh yeah and he asked
0:48:59 you that what would you say well i have a teen daughter now and when she was younger uh she went
0:49:07 to coding camp you know and loved it and then when uh github copilot came out and now some of the
0:49:12 other ones that are out she was like well you know the kinds of programs i’d write i can just ask for now
0:49:17 so why did you send me to all these things why did i waste all my time at them and i said uh remember
0:49:24 you loved coding camp remember you liked it it’s like well yeah but
0:49:30 i could have liked spelunking camp or something too like why coding camp and um i mean
0:49:36 i don’t have a perfect answer for all that right now i really don’t
0:49:44 i do think there are new things that will emerge i have a feeling there’ll be a lot of new professions
0:49:51 related to adaptive biology and modifications and helping people deal with weird changes to their
0:49:56 bodies that will become possible i think that’ll become a big thing i don’t know exactly how it’s
0:50:04 too early to say there’s a subtle point here i want to make which is um i am very far from being
0:50:12 anti-futuristic or disliking extreme change in the future but what i what i have to insist upon
0:50:18 is continuity so in this idea there’s a term called the singularity uh applied to ai sometimes that
0:50:23 there’ll be this rush of change so fast that nobody can learn anything nobody can know anything and it
0:50:29 just is beyond us beyond us beyond us the problem with the singularity whether it’s in a black hole or in
0:50:35 the big bang or in technology is that you know like by definition even if
0:50:41 you don’t technically lose information you lose the ability to access the information in the
0:50:46 original context or with any kind of structure so it’s essentially a form of massive forgetting and
0:50:54 massive loss of context and massive loss of meaning therefore and so however radical we get if in the future
0:51:00 we’re all going to evolve into massive distributed colonies of space bacteria flying around
0:51:07 intergalactically or something whatever we turn into i’m all for it i’m in i’m in i’m in but
0:51:12 the line from here to there has to have memory it has to be continuous enough that we’re learning
0:51:19 lessons and we remember if we break that because we want the thrill of pol pot’s year zero where from
0:51:24 now on we’re the smartest people and everybody else was wrong and we start over if we want that break
0:51:29 we must resist it we must oppose people who want that break year zero never works out well it’s a
0:51:38 really really bad idea and so that to me i’m like pro extreme futures but anti discontinuity into the
0:51:43 future and so that’s an in-between place to be that’s a little subtle and hard to get across but
0:51:48 i think that that’s the right place to be well i always try to end these conversations with as much
0:51:55 optimism as possible so do you have any other good news or uh rosy scenarios you can paint for
0:52:01 us uh before we get out of here about how things are going to be awesome in the future right now we’re
0:52:09 in a very hard to parse moment things are strange things are scary and what i keep on telling myself
0:52:16 there’s always hope in chaos as much as someone driving chaos might be certain that
0:52:25 it’s under their command but it never is and those of us who watch unfolding chaos looking for signs of
0:52:33 hope looking for optimism looking for little openings in which to do something good we will find them if we
0:52:40 stay alert and so i’d urge everybody to do that during this period jaron lanier i’m a fan of your
0:52:45 work i’m a fan of you as a human being as well i appreciate you coming in oh well that’s very kind
0:52:52 of you thank you so much and i really appreciate all the effort and also just the goodwill and warmth
0:53:03 you put into this interview i really do appreciate it so much
0:53:12 all right i hope you enjoyed this episode there was a lot going on in this one jaron is a unique mind
0:53:22 and i appreciate the way he thinks about all of this this conversation did force me to reflect on the
0:53:31 language i use to make sense of ai and all the assumptions buried in that language so i hope you
0:53:39 found his insights useful but either way as always we want to know what you think so drop us a line
0:53:52 at the gray area at vox.com or leave us a message on our new voicemail line at 1-800-214-5749
0:53:58 and once you’re finished with that if you have a second please go ahead and rate and review and
0:54:09 subscribe to the podcast this episode was produced by beth morrissey edited by jorge just engineered by erica
0:54:17 wong fact-checked by melissa hirsch and alex overington wrote our theme music new episodes of the gray area
0:54:25 drop on mondays listen and subscribe the show is part of vox support vox’s journalism by joining our
0:54:33 membership program today go to vox.com slash members to sign up and if you decide to sign up because of this show
0:54:51 let us know
Why do we keep comparing AI to humans?
Jaron Lanier — virtual reality pioneer, digital philosopher, and the author of several best-selling books on technology — thinks that we should stop. In his view, technology is only valuable if it has beneficiaries. So instead of asking “What can AI do?,” we should be asking, “What can AI do for us?”
In today’s episode, Jaron and Sean discuss a humanist approach to AI and how changing our understanding of AI tools could change how we use, develop, and improve them.
Host: Sean Illing (@SeanIlling)
Guest: Jaron Lanier, computer scientist, artist, and writer.
Learn more about your ad choices. Visit podcastchoices.com/adchoices