Reid Hoffman on AI, Consciousness, and the Future of Humanity

AI transcript
0:00:03 This is actually one of the things I think people don’t realize about Silicon Valley.
0:00:07 You start with, what’s the amazing thing that you can suddenly create?
0:00:10 Lots of these companies, you go, what’s your business model?
0:00:10 You go, I don’t know.
0:00:13 They’re like, yeah, we’re going to try to work it out.
0:00:15 I can create something amazing here.
0:00:19 And that’s actually one of the fundamental, call it the religion of Silicon Valley,
0:00:23 and the knowledge of Silicon Valley that I so much love and admire and embody.
0:00:29 Reid Hoffman has spent decades helping shape how we connect, work, and build online
0:00:31 from PayPal and LinkedIn to OpenAI and beyond.
0:00:36 In this episode, I’m joined by Reid and A16Z general partner Alex Rampell
0:00:41 to talk about how AI is reshaping not just work, but what it means to be human.
0:00:45 We discuss how far current AI models can go, what’s holding them back,
0:00:50 and why the next breakthroughs will likely come from places Silicon Valley isn’t even looking.
0:00:55 We also talk about friendship, meaning, and how to stay grounded in an era of exponential change
0:00:59 when our tools might soon think, reason, and even feel alongside us.
0:01:01 Let’s get into it.
0:01:04 Reid, welcome to the A16Z Podcast.
0:01:05 It’s great to be here.
0:01:10 So, Reid, you’re one of the most successful Web2 investors of that era.
0:01:13 Facebook, LinkedIn, obviously, which you co-created, Airbnb, many, many others.
0:01:16 And you had several frameworks to help you do that, one of which was the seven deadly sins,
0:01:18 which we talk about often and love.
0:01:23 As you’re thinking about AI investing, what’s a framework, worldview that you take to your AI investing?
0:01:30 So, obviously, we’re all looking through a glass darkly, looking through a fog with strobe lights
0:01:32 that, you know, make it hard to understand what’s going on.
0:01:34 So, we’re all navigating this new universe.
0:01:38 So, I don’t know if I have as crisp of a framework, but seven deadly sins still work
0:01:45 because that’s a question of what is psychological infrastructure across all 8 billion plus human beings.
0:01:48 But, I’d say there’s a couple things.
0:01:55 So, first is, there is going to be a set of things that are the kind of the obvious line of sight.
0:02:00 Obvious line of sight, a bunch of stuff with chatbots, a bunch of stuff with productivity, coding assistants, da-da-da-da-da-da.
0:02:07 And, by the way, that’s still worth investing in, but, obviously, obvious line of sight means it’s obvious to everybody.
0:02:11 And so, doing a differential investment is harder.
0:02:16 The second area is, well, what does this mean?
0:02:21 Because, too often, people say in an area of disruption that everything changes as opposed to significant things change.
0:02:23 So, like you were mentioning Web 2.0 and LinkedIn.
0:02:31 And, obviously, part of this with a platform change, you go, okay, well, are there now new LinkedIns that are possible because of AI or something like that?
0:02:35 And, of course, obviously, given my own heritage, I would love LinkedIn to be that.
0:02:39 But, you know, I’m always pro-innovation, pro-entrepreneurship, the best possible thing for humanity.
0:02:44 But what are the kind of more traditional kind of things that haven’t changed?
0:02:45 Network effects.
0:02:46 Enterprise integration.
0:02:55 Other kinds of things that the new platform upsets the apple cart, but you’re still going to be putting that apple cart kind of back together in some way.
0:03:04 And then the third, which is probably where I’ve been putting most of my time, has been what I think of as Silicon Valley blind spots.
0:03:08 Because Silicon Valley is one of the most amazing places in the world.
0:03:17 There’s a network of intense coopetition, learning, invention, building new things, et cetera, which is just great.
0:03:20 But we also have our canons.
0:03:21 We have our kind of blind spots.
0:03:26 And a classic one for us tends to be, well, everything should be done in CS.
0:03:27 Everything should be done in software.
0:03:28 Everything should be done in bits.
0:03:31 And that’s the most relevant thing because, by the way, it’s a great area to invest.
0:03:41 But it was like, okay, what are the areas where the AI revolution will be magical but won’t be within the Silicon Valley blind spots?
0:03:51 And that’s probably where I’ve been putting the majority of my co-founding time, invention time, kind of investment time, et cetera.
0:04:05 Because I think usually the blind spot on something that’s very, very big is precisely the kinds of things that you go, okay, you have a long runway to create something that could be like another one of the iconic companies.
0:04:06 Yeah.
0:04:14 Let’s go deeper on that because we were also talking just before this about how people focus so much on the productivity and set the workflow sides, but they’re missing other elements.
0:04:17 Say more about other things that you find more interesting there.
0:04:28 So one of the things I kind of told my partners back at Greylock in 2015, so it’s 10 years ago, was I said, look, there’s going to be a bunch of things on productivity around AI.
0:04:29 I’ll help.
0:04:32 You have companies you want me to work with that you’re doing.
0:04:32 Great.
0:04:33 That’s awesome.
0:04:37 Enterprise productivity, et cetera, things that Greylock tends to specialize on.
0:04:54 But I said, actually, in fact, what I think that’s here, getting at the blind spots, is also going to be some things like, you know, what, as you guys both know, Manas AI, which is how do we create a drug discovery factory that works at the speed of software?
0:05:03 Now, obviously, there’s regulatory, obviously, there’s biological bits, obviously, and so it won’t be purely at the speed of software, but how do we do this?
0:05:06 And they said, oh, well, what do you know about biology?
0:05:08 And the answer is zero.
0:05:13 Well, maybe not quite zero, but I’ve been on the board of Biohub for 10 years, I’m on the board of Arc, et cetera.
0:05:21 Like, I’ve been thinking about the intersection of the worlds of atoms and the worlds of bits, and you have biological bits, which are kind of halfway between atoms and bits in various ways.
0:05:32 I’ve been thinking about this a lot and kind of what the things are, not so much with a specific company focus, as much as a what are things that elevate human life kind of focus.
0:05:35 Part of the reason why Biohub, part of the reason why Arc.
0:05:37 But then I was like, well, wait a minute.
0:05:42 Actually, now with AI, and you have the acceleration, because like, for example, this detour will be fun.
0:05:50 So, roughly also around 10 years ago, I was asked to give a talk to the Stanford Long-Term Planning Commission.
0:06:01 And what I told them was that they should basically divert and put all of their energy into AI tools for every single discipline.
0:06:04 And this was well before ChatGPT and all the rest.
0:06:12 And the metaphor I used was a search metaphor, because think if you had a custom search productivity tool in every single discipline.
0:06:19 Now, back then, I could imagine it, I could build one for every discipline other than theoretical math or theoretical physics.
0:06:22 Today, you might even be able to do theoretical math and theoretical physics.
0:06:24 Right, exactly.
0:06:26 And so, do that.
0:06:30 Like, transform knowledge generation, knowledge communication, knowledge analysis.
0:06:36 Well, that kind of same thing, now thinking, well, the biological system is still too complex to simulate.
0:06:39 We’ve got all these amazing things with LLMs.
0:06:46 But like, the classic Silicon Valley blind spot is, oh, we’ll just put it all in simulation and drugs will fall out.
0:06:47 Right.
0:06:50 That simulation is difficult.
0:06:59 Now, part of the insight that you begin to see from, like, the work with AlphaGo and AlphaZero is, because, like, people just think, ah, physical materials are going to take quantum computing.
0:07:02 Now, quantum computing could do really amazing things.
0:07:07 But actually, simply doing prediction and getting that prediction right.
0:07:09 And by the way, it doesn’t have to be right 100% of the time.
0:07:11 It has to be right, like, 1% of the time.
0:07:14 Because you can validate the other 99% weren’t right.
0:07:16 And then finding that one thing.
0:07:19 And so, literally, it’s not a needle in a haystack.
0:07:22 It’s like a needle in a solar system.
0:07:23 Right.
0:07:25 But you can possibly do that.
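To make that screening logic concrete, here is a minimal Python sketch with purely hypothetical numbers (not anything from the conversation): even a predictor that is wrong most of the time pays off when validating its top picks is cheap, because validation filters out the misses.

```python
# A weak predictor plus cheap validation concentrates hits out of a huge pool.
# All rates below are hypothetical placeholders, just to show the shape of it.
import random

random.seed(0)

POOL_SIZE = 1_000_000       # candidate molecules ("needle in a solar system")
TRUE_HIT_RATE = 1e-4        # fraction of candidates that actually work
PREDICTOR_RECALL = 0.30     # the model only flags 30% of true hits...
FALSE_POSITIVE_RATE = 0.01  # ...and wrongly flags 1% of the non-hits

candidates = [random.random() < TRUE_HIT_RATE for _ in range(POOL_SIZE)]

def model_flags(is_hit: bool) -> bool:
    """Noisy predictor: imperfect recall, nonzero false-positive rate."""
    p = PREDICTOR_RECALL if is_hit else FALSE_POSITIVE_RATE
    return random.random() < p

flagged = [is_hit for is_hit in candidates if model_flags(is_hit)]
hits_in_flagged = sum(flagged)

print(f"validate {len(flagged):,} flagged candidates instead of {POOL_SIZE:,}")
print(f"hit rate among flagged: {hits_in_flagged / len(flagged):.2%} "
      f"vs baseline {TRUE_HIT_RATE:.2%}")
```

With these made-up numbers, the flagged set is roughly a hundred times smaller than the pool and dozens of times richer in true hits; the prediction only has to surface candidates worth validating, not be right.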
0:07:31 And that’s part of what led to, okay, Silicon Valley will classically go, we’ll put it all in simulation and that will solve it.
0:07:33 Nope, that’s not going to work.
0:07:38 Or, oh, no, we’re going to have a super intelligent drug researcher and that will be two years down the road.
0:07:41 Look, maybe someday, not soon.
0:07:42 Right.
0:07:46 So, anyway, that was the kind of thing that was in other different areas.
0:07:50 Now, part of it’s also kind of what a lot of people don’t realize.
0:07:56 Actually, if I’m not going too long, I’ll go to the other example that I gave because you’ll love this.
0:08:00 This will echo some of our conversations from 10, 15 years ago.
0:08:11 So, I am prepping for a debate on Sunday, this week, on whether or not AIs will replace all doctors in a small number of years.
0:08:16 Now, the pro case is very easy, which is we have massively increasing capabilities.
0:08:23 If you look at ChatGPT today, you’d go, for example, advice to everyone who’s listening to this.
0:08:29 If you’re not using ChatGPT or equivalent as a second opinion, you’re out of your mind.
0:08:29 You’re ignorant.
0:08:31 You get a serious result.
0:08:33 Check it as a second opinion.
0:08:36 And, by the way, if it diverges, then go get a third.
0:08:44 And so, the diagnostic capabilities, these are much better knowledge stores than any human being on the planet.
0:08:49 So, you go, well, if a doctor is just a knowledge store, yeah, that’s going away.
0:08:59 However, the question is, I actually think there are things that really do make a doctor, and it’s not just like, oh, someone will hold your hand and say, oh, it’s okay, et cetera.
0:09:05 I actually think there will be a position for a doctor 10 years from now, 20 years from now.
0:09:07 It won’t be as the knowledge store.
0:09:12 It will be as an expert user of the knowledge store.
0:09:19 But it’s not going to be, oh, because I went to med school for 10 years and I memorized things intensely.
0:09:20 That’s why I’m a doctor.
0:09:22 That’s all going away.
0:09:22 Great.
0:09:24 But there’s a lot of other parts to being a doctor.
0:09:30 Now, so I went to ChatGPT Pro, using deep research.
0:09:34 I went to Claude Opus 4.5, deep research.
0:09:36 I went to Gemini Ultra.
0:09:39 I went to Copilot, deep research.
0:09:45 In all of these things, I was doing everything I knew about prompting to give me the best possible arguments for my position.
0:09:48 Because I thought, well, I’m about to debate on AI.
0:09:49 Of course I should be using AI to debate.
0:09:56 The answers were B minus or B, despite absolutely top-tier prompting.
0:10:05 And I’m not, like, maybe there’s probably better prompters in the world, but I’ve been doing this since I got access to GPT-4 six months before the public did.
0:10:06 Right.
0:10:08 So I’ve got some experience in the whole prompting thing.
0:10:10 It’s not like I’m an amateur prompter.
0:10:18 And so I looked at this and I went, oh, this is very interesting and telling of where current LLMs are limited in their reasoning capabilities.
0:10:35 What it did is basically 10 to 15 minutes of 32-GPU compute clusters doing inference, bringing back amazing work; work that an analyst would have produced in three days was produced in 10 minutes.
0:10:43 And, of course, I set it up all in parallel with different browser tabs all going into the different systems and then ran the comparisons across them and everything.
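As a rough illustration of that setup, here is a minimal Python sketch of fanning one prompt out to several providers in parallel and collecting the answers for comparison; the query functions are hypothetical placeholders, since each provider’s real SDK, model names, and deep-research options differ.

```python
from concurrent.futures import ThreadPoolExecutor

PROMPT = "Strongest arguments that AI will NOT replace all doctors within a few years."

# Hypothetical placeholder clients: swap in real API calls for each service.
def query_chatgpt(prompt: str) -> str:  return "[ChatGPT deep-research answer]"
def query_claude(prompt: str) -> str:   return "[Claude answer]"
def query_gemini(prompt: str) -> str:   return "[Gemini answer]"
def query_copilot(prompt: str) -> str:  return "[Copilot answer]"

providers = {
    "ChatGPT": query_chatgpt,
    "Claude": query_claude,
    "Gemini": query_gemini,
    "Copilot": query_copilot,
}

# Fan the same prompt out in parallel (the "different tabs" step), then collect
# the answers to compare side by side, or feed them back into one model and ask
# where they agree, where they disagree, and what the consensus is missing.
with ThreadPoolExecutor(max_workers=len(providers)) as pool:
    futures = {name: pool.submit(fn, PROMPT) for name, fn in providers.items()}
    answers = {name: f.result() for name, f in futures.items()}

for name, answer in answers.items():
    print(f"--- {name} ---\n{answer}\n")
```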
0:10:55 But its flaw was that it was giving me a consensus opinion about how articles in good magazines, good things, are arguing for that position today.
0:11:01 And all of that was weak because it was kind of like, oh, you need to have humans cross-check the diagnosis.
0:11:02 Right.
0:11:03 It was a common theme across this.
0:11:13 And I’m like, well, by the way, very clearly we know as technologists that human cross-checking the diagnosis, we’re going to have AIs cross-checking the diagnosis.
0:11:16 We’re going to have AIs cross-checking the AIs or cross-checking the diagnosis.
0:11:24 And sure, there’ll be humans around here somewhere, but that’s not going to be the central place to say in 20 years doctors are going to be cross-checking the diagnosis.
0:11:37 Because, by the way, what doctors should be learning very quickly is if you believe something different than the consensus opinion that an AI gives you, you’d better have a very good reason and you’re going to go do some investigation.
0:11:39 It doesn’t mean the AI is always right.
0:11:45 That’s actually part of what you’re, like, what we’re going to need in all of our professions is more sideways thinking, more lateral thinking.
0:11:48 The, okay, this is good consensus opinion.
0:11:50 Now, what if it’s not consensus opinion?
0:11:53 That’s what doctors need to be doing.
0:11:54 That’s what lawyers will need to be doing.
0:11:55 That’s what coders will need to be doing.
0:11:57 That’s what it is.
0:11:59 And LLMs are still pretty structurally limited there.
0:12:00 That’s funny.
0:12:04 My favorite saying is by Richard Feynman, science is the belief in the ignorance of experts.
0:12:05 Yes.
0:12:10 And there are so many professions where the credentialism is the expertness.
0:12:10 Yes.
0:12:12 It’s like it’s if this, then that.
0:12:14 And it’s like I have MD, therefore I know.
0:12:16 I have JD, therefore I know.
0:12:21 And that’s why coding is actually a little bit ahead of it because it’s like I don’t care where you got your degree.
0:12:23 This is a, it’s kind of ahead of the rest of society.
0:12:25 Now, it’s funny.
0:12:30 Milton Friedman one time got asked, because he was a famous libertarian, don’t you think that brain surgeons should be credentialed?
0:12:32 And it’s like, yeah, the market will figure that out.
0:12:33 It seems kind of crazy, right?
0:12:37 But that’s how we now do coding when you’re in the world of bits.
0:12:45 But it feels like a lot of the reasons why you have this not very advanced thinking is because so much of it is built upon layers of credentialism.
0:12:47 And that’s a very good heuristic.
0:12:48 Historically, it has been.
0:12:53 If you have a doctor that graduated at the top of their class from Harvard Medical School, it’s like probably a good doctor.
0:12:55 By the way, you critically wanted that.
0:12:55 Yes.
0:12:56 Three years ago.
0:12:57 Right.
0:12:57 Right.
0:12:59 It’s like, no, no, I need someone who has the knowledge base.
0:13:00 You have it?
0:13:00 Great.
0:13:00 Right.
0:13:02 But now we have a knowledge base.
0:13:02 Yeah.
0:13:03 I totally agree.
0:13:07 That was the reason I was saying you would love this, because it echoes our earlier conversations.
0:13:15 I thought you were going to get into bits versus atoms, where it’s kind of interesting right now, where it’s like all this high-value work, like Goldman Sachs sell-side analyst, that’s deep research.
0:13:16 Right.
0:13:20 Whereas fold my laundry, that’s $100,000 of capex.
0:13:23 So it doesn’t work as well as somebody that you could pay $10 an hour to.
0:13:23 Yes.
0:13:27 And it’s like the atoms stuff is so hard to actually disrupt.
0:13:27 Yes.
0:13:28 And we’re going to get there eventually.
0:13:31 But that’s where Silicon Valley certainly has a blind spot.
0:13:34 But it’s like a capex versus opex or bits versus atoms.
0:13:35 The atoms is another part.
0:13:40 But that’s also the reason why bio, because bios are the bitty atoms.
0:13:40 Yes.
0:13:40 Yes.
0:13:41 Yes.
0:13:41 Right.
0:13:46 And what’s the best explanation for why it’s so hard to figure out, folding laundry, but so easy to figure out?
0:13:50 Well, it’s actually not that hard to figure out.
0:13:54 Or why it’s taken us much longer, much more expensive, because we couldn’t, it would have been hard to foresee that in advance.
0:13:56 Well, I remember I talked to Ilya about this a few years ago.
0:14:05 And it’s like, why is it that if you read an Asimov novel where it talked about like how, you know, people cook for you and fold your laundry, like why have none of these things happened?
0:14:07 And it’s like, well, you just never had a brain that was smart enough.
0:14:12 This was part of the problem is that you could, I mean, yes, you have things like, you know, how do you actually pick up this water bottle?
0:14:17 And it turns out your hands are very, very well, like why are humans more advanced than every other species?
0:14:19 So there are two reasons.
0:14:20 Number one is we have opposable thumbs.
0:14:26 And then number two is we’ve come up with a language system that we could pass down from generation to generation, which is writing.
0:14:27 Dolphins are very smart.
0:14:31 Like there was actually a whole theory, which is it wasn’t just brain size.
0:14:33 It was brain to body size.
0:14:35 So humans were the highest.
0:14:36 Nope, not true.
0:14:41 And now that we’ve actually measured every single animal, there are a lot of animals that have a higher brain-to-body ratio.
0:14:50 Like that ratio is in favor of an elephant or of a dolphin, or I forgot the numbers, but there are a bunch that are actually more advanced than humans, but they don’t have opposable thumbs.
0:14:52 And because of that, they never developed writing.
0:14:55 So they can’t actually iterate from generation to generation.
0:14:56 And humans did.
0:15:00 And then, of course, like the human condition was like it was this and then the industrial revolution.
0:15:01 Then it went like that.
0:15:02 And now it’s continued like this.
0:15:12 But this is the reason why in the last four or five years, one of the things I realized is, you know, the classic classification of human beings is Homo sapiens.
0:15:16 I actually think we’re Homo techne because it’s that iteration through technology.
0:15:17 Yes, yes, exactly.
0:15:22 Whatever version, writing, typing, you know, but it’s we iterate through technology.
0:15:28 That’s the actual thing goes to future generations, builds on science, you know, all the rest of it.
0:15:29 And that’s what I think is really key.
0:15:37 A couple of other explanations could be that we have more training data on white collar work than sort of, you know, picking things up.
0:15:43 Or some people make this evolutionary argument that we’ve been using our opposable thumbs for way longer than we’ve been, say, you know, reading.
0:15:45 Well, yeah, it’s the lizard brain.
0:15:46 Like most of your brain is not the neocortex.
0:15:51 And like that’s the part that does draw and paint and everything else, which is actually very, very hard.
0:15:53 You can’t find a dolphin that can draw or paint.
0:15:54 And that’s probably because they don’t have opposable thumbs.
0:15:57 But it’s also like maybe that part of the brain hasn’t developed.
0:16:04 But you have like billions of years of evolution for these somewhat autonomous responses like fight or flight.
0:16:07 That’s been around for a long, long time, well before drawing and painting.
0:16:11 But I think the main issue is just like you have battery chemistry problems.
0:16:19 Like I can’t, like it turns out like a lithium ion battery is pretty cool, but the energy density of that is terrible relative to ATP with cells, right?
0:16:22 Like you have all of these reasons why robotics don’t work.
0:16:26 But first and foremost is the brain was never very good.
0:16:29 So you had robotics like Fanuc, which makes assembly line robots.
0:16:33 Those work really well, but it’s like very deterministic or highly deterministic.
0:16:37 But once you go into like, you know, multiple degrees of freedom, you have to get so many things to work.
0:16:41 And the capex, it’s like I need $100,000 to have a robot fold my laundry.
0:16:44 And we have so many extra people that will do that work.
0:16:46 The economics never made sense.
0:16:50 But this is why Japan is a leader in robotics because they can’t hire anybody.
0:16:53 So therefore, I might as well build robots. True story:
0:16:59 I went bowling in Japan and they had a robot, like a vending machine robot, that would give you your bowling shoes.
0:17:01 And then it would clean the bowling shoes.
0:17:04 And it’s like you would never build that here.
0:17:06 So you’d hire some guy from the local high school.
0:17:06 Yes.
0:17:07 And he’d go do that.
0:17:07 Yeah.
0:17:09 And much cheaper and actually more effective.
0:17:15 But it’s this capex, like the capex line and the opex line when they cross, then it’s like, ooh, I should build robots.
0:17:16 So that’s the other thing that you probably need.
0:17:20 But if the cost goes down, then of course it goes in favor of capex versus opex.
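That crossover is just a break-even comparison between amortized capex and the wage bill for the same work. A tiny Python sketch, with purely hypothetical numbers:

```python
# Back-of-the-envelope version of the capex-vs-opex crossover described above.
# All figures are hypothetical placeholders.
def robot_annual_cost(capex: float, lifetime_years: float, upkeep_per_year: float) -> float:
    """Amortized yearly cost of owning and running the robot."""
    return capex / lifetime_years + upkeep_per_year

def human_annual_cost(hourly_wage: float, hours_per_year: float) -> float:
    """Yearly cost of paying someone to do the same work."""
    return hourly_wage * hours_per_year

robot = robot_annual_cost(capex=100_000, lifetime_years=5, upkeep_per_year=5_000)  # $25k/yr
human = human_annual_cost(hourly_wage=10, hours_per_year=2_000)                    # $20k/yr

print(f"robot: ${robot:,.0f}/yr  vs  human: ${human:,.0f}/yr")
print("build robots" if robot < human else "hire the high schooler")
```

Halve the robot’s capex, or raise the wage, and the lines cross in favor of the robot.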
0:17:24 I think there’s a couple of things to go deeper on the robot side.
0:17:30 So one is the density, the bits to value, right?
0:17:36 So like in language, when we encapsulated all these things, even into like romance novels, there’s a high bits to value.
0:17:44 Whereas when you’re kind of in the whole world, there’s a lot of like, how do you, we abstract from all those bits and how do you abstract them?
0:17:47 There’s another part of it, which is kind of common sense awareness.
0:17:56 Like this is one of the things that, like when I look at, you know, GBD2, 3, 4, 5, it’s a progression of savants, right?
0:17:57 And the savants are amazing.
0:18:02 It doesn’t mean the savants, but like when it makes mistakes, like as a classic thing.
0:18:10 So Microsoft has had running for years now agents talking to each other long form, like just like, let’s go for a year and do that and see what happens.
0:18:13 And so often they get into like, oh, thank you.
0:18:14 No, thank you.
0:18:15 No, thank you.
0:18:16 One month later.
0:18:17 Thank you.
0:18:18 No, thank you.
0:18:21 Which human beings are like, stop, right?
0:18:26 Like just like, and that’s like a, that’s a simple way of putting the context awareness thing of like, no, no, no, no.
0:18:29 Let’s, let’s stay very context aware.
0:18:40 And even as magical as the progression has been, like much, much better data, much, much better reasoning, much, much better personalization, et cetera, et cetera.
0:18:43 Context awareness only is a proxy of that.
0:18:44 Yeah.
0:18:45 Yeah.
0:18:48 I want to go deeper on your point about doctors, Reid.
0:18:52 And because Alex, we just released one of your talks around, you know, software eating labor.
0:19:01 And I’m curious where you, how you, what sort of frameworks you have for thinking about what spaces are going to have more of this co-pilot model versus what spaces it’s going to be sort of replacing the work entirely.
0:19:05 I have, I wish I could, I’m going to use an LLM to go predict the future, but I’m going to get a B minus.
0:19:06 Yes.
0:19:08 Maybe I’ll answer when I get a B plus.
0:19:14 I think a lot of it is like the natural, like there’s the skeuomorphic version, which is, okay, well, I trust the doctor.
0:19:15 Everybody trusts the doctor.
0:19:17 The heuristic is where did you go to medical school?
0:19:25 Apparently two thirds of doctors now use OpenEvidence, which is like ChatGPT, but it ingests the New England Journal of Medicine and has, like, a license to that.
0:19:26 So.
0:19:27 Daniel Nadler.
0:19:28 Good guy.
0:19:30 Kensho, right.
0:19:30 So, yeah.
0:19:33 So, so that seems like there’s no reason not to do that.
0:19:38 Like my, my seven deadly sins version, I’ll simplify it, which is like everybody wants to be lazier and richer.
0:19:43 So this is a way that I can like get more patients and do less work.
0:19:45 Of course, people are going to use this.
0:19:46 There’s no reason not to.
0:19:49 But does it replace that particular thing?
0:19:53 And actually most of like the, the software eats labor thing, it doesn’t actually eat labor right now.
0:19:57 The thing that’s working the best is not like, hey, I have a product where everybody’s going to lose their job.
0:19:58 Nobody’s going to buy that product.
0:20:04 It’s very, very hard to get that distributed as opposed to, I will give you this magic product that allows you to be lazier.
0:20:11 Obviously, it’s not framed this way, like lazy and rich, it sounds kind of, you know, not, not great, but I’m going to let you work fewer hours and make more money.
0:20:13 And that’s, that’s a very killer combo.
0:20:24 And if you have a product like that and it’s delivered by somebody that already has that heuristic of expertise, these are just going to go one after another and get adopted, adopted, adopted.
0:20:31 And then eventually you’re going to have cases like the one that you mentioned, where if you don’t use ChatGPT when you get a medical diagnosis, you’re insane.
0:20:33 But that is not fully diffused across the population.
0:20:36 Well, it’s barely diffused.
0:20:36 No, I know.
0:20:36 Yes, yes.
0:20:38 No, no, but you were saying not fully.
0:20:40 I mean, part of the reason, everyone, start doing it.
0:20:42 Yes, 100%.
0:20:45 Well, it’s because it’s the fastest growing product of all time and yet it’s barely, you know.
0:20:47 Well, that’s why I’m convinced that AI is massively underhyped.
0:20:50 Because in Silicon Valley, you might not make that claim.
0:20:52 Maybe it’s overhyped, maybe valuation, whatever.
0:20:53 No, we all don’t think it’s overhyped.
0:20:58 But I think once I meet somebody in the real world and I show them this stuff, they have no idea.
0:21:02 And part of it is like they see the IBM Watson commercials and like, oh, that’s AI.
0:21:03 No, that’s not AI, right?
0:21:05 Or they see the fake AI.
0:21:07 They’ve seen ChatGPT two years ago.
0:21:08 It didn’t solve a problem.
0:21:11 And it’s funny, I made this blog post.
0:21:15 Back when you were my investor at TrialPay, I called it never judge people on the present.
0:21:16 And this is a mistake.
0:21:19 It’s a category error that a lot of big company people make.
0:21:21 But I mean that almost metaphorically.
0:21:24 And the way that I wrote this blog post was I found a video of Tiger Woods.
0:21:25 He was two and a half years old.
0:21:27 He hit a perfectly straight drive.
0:21:31 And he was on, you know, not the, I think the Tonight Show or something.
0:21:33 And there are two ways of watching that video.
0:21:34 You could say, well, I’m 44.
0:21:37 I can hit a drive much further than that kid, which is correct.
0:21:41 Or you could say, wow, if that two and a half year old kid keeps that up, he could be really, really good.
0:21:43 And most people judge things on the present.
0:21:45 And that’s why it’s underhyped.
0:21:47 Because it’s like they tried it at some point in time.
0:21:50 There’s a distribution of when they tried it.
0:21:52 Like probabilistically, it’s in the past.
0:21:53 And they’re like, oh, that didn’t work for my use case.
0:21:54 It doesn’t work.
0:21:56 And that’s bad.
0:22:01 So I think it’s going to diffuse largely around this, like, lazy, rich, like, concept.
0:22:03 And that’s where a lot of these things have taken off.
0:22:06 And I see it less at the very, very big companies.
0:22:09 Because you have a principal agent problem at the very big companies.
0:22:12 Like, okay, my company made money or saved money.
0:22:13 I’m a director of XYZ.
0:22:17 Like, all I know is that I want to leave earlier and get promoted.
0:22:19 And how does that actually help me?
0:22:21 It helps the ethereal being of the corporation.
0:22:25 Whereas at a smaller business or a sole proprietor or an individual doctor,
0:22:29 where I run a dermatology clinic and somehow I can have five times as many patients.
0:22:30 Or I’m a plaintiff’s attorney.
0:22:32 I can have five times as many settlements.
0:22:34 It’s like, of course I’m going to use that.
0:22:36 Because I get to be lazier and richer.
0:22:36 Yeah.
0:22:37 Yeah.
0:22:37 A hundred percent.
0:22:39 That seems to be a great model.
0:22:41 By the way, the other one, you’re reminding me.
0:22:44 Ethan Mullick has a quote here that I use often.
0:22:45 He’s great.
0:22:45 Yes.
0:22:48 The worst AI you’re ever going to use is the AI you’re using today.
0:22:48 Correct.
0:22:50 Because it’s to remind you, use it tomorrow.
0:22:52 Yeah.
0:22:54 And a lot of the skeptics, it’s exactly this.
0:22:56 It’s like, well, I tried it two months ago and it didn’t solve this problem.
0:22:57 Therefore, it’s bad.
0:22:58 It’s like, so you’re judging it on the present.
0:22:59 Like, you have to extrapolate.
0:23:00 Yes.
0:23:02 And you don’t want to get, like, too extrapolatory.
0:23:04 I’m like, you know, oh, LLMs have this.
0:23:09 Like, you actually have, I feel like the two types of people that are under hyping AI are
0:23:11 people that know nothing and people that know everything.
0:23:13 It’s really interesting.
0:23:15 It’s like the meme where it’s like, you know, the idiot meme, right?
0:23:19 It’s like the people, but it’s, yeah, it’s like the people in this part of the distribution
0:23:19 are correct.
0:23:20 Normally, the meme is the opposite.
0:23:23 It’s like, these people are smart, even though they’re dumb.
0:23:24 These people are smart, even though they’re smart.
0:23:27 Everybody here, like, this part of the curve is actually correct.
0:23:30 Because they’re the ones that are using it to get richer and be lazier.
0:23:37 The other thing I also tell people is, if you haven’t found a use of AI that helps you on
0:23:42 something serious today, not just write a sonnet for your kid’s birthday, or, you know, I’ve
0:23:43 got these ingredients in my fridge, what should I make?
0:23:44 Do those too.
0:23:49 But if you haven’t for something like work, for, like, something that is serious about what
0:23:51 you’re doing, you’re not trying hard enough.
0:23:51 Yeah.
0:23:53 It doesn’t mean that it does everything.
0:23:57 Like, for example, I still think if I put in, like, how should Reid Hoffman make money
0:24:00 investing in AI, and I’ll go try that again.
0:24:05 I suspect I will still get what I think is the bozo business professor answer versus the
0:24:07 actual name of the game.
0:24:12 But everyone should be trying.
0:24:17 And I, you know, like, for example, we put, when we get decks, we put them in and say, give
0:24:18 me a due diligence plan, right?
0:24:22 If not everybody here is doing that, that’s a mistake.
0:24:26 Because five minutes, you get one, and you go, oh, no, not two, not five.
0:24:28 Oh, but three is good.
0:24:31 And it would have taken me a day to getting to about three.
0:24:31 Yeah.
0:24:32 Yeah.
0:24:35 In terms of, let’s go back to extrapolation.
0:24:37 Obviously, the last few years have had incredible growth.
0:24:41 You were involved, of course, with OpenAI since the beginning.
0:24:45 When we look for the next few years, there’s a broader question as to whether scaling laws
0:24:50 will hold, whether it’s sort of the limitations or how far we can get with LLMs.
0:24:53 Do we need another breakthrough of a different kind?
0:24:54 What is your view on some of these questions?
0:25:02 So, one of the things we, you know, we all swim in this universe of extrapolating the future.
0:25:06 One of the things that’s great about Silicon Valley, and so you get such things as, you
0:25:10 know, theories of singularity, theories of superintelligence, theory of exponential getting
0:25:12 to superintelligence soon.
0:25:20 And what I find is usually the mistake in that is not the act of extrapolating the future.
0:25:24 That’s smart, and people need to do that, and far too few people do.
0:25:27 I think I remember liking your post and helping promote it, if I recall.
0:25:33 But it’s the notion of, well, what curve is that?
0:25:40 Like, if it’s a savant curve, that’s different than, oh, my gosh, it’s an apotheosis, and now
0:25:41 it’s God, you know?
0:25:46 You know, it’s like, no, no, it’ll be an even more amazing savant than we have.
0:25:49 But by the way, if it’s only savant, there’s always room for us.
0:25:53 There’s always rooms for the generalist and the cross-checker and the context awareness.
0:25:55 And all the rest of it.
0:25:58 Now, maybe it’ll cross over a threshold or not.
0:25:59 Maybe it won’t.
0:26:01 You know, like, I think there’s a bunch of different questions there.
0:26:05 But that extrapolation too often goes, well, it’s exponential.
0:26:08 So in two and a half years, magic.
0:26:13 And you’re like, well, look, it is magic, but it’s not all magic, is kind of the way to see it.
0:26:23 Now, so my own personal belief is that, look, so the critics of LLMs make a mistake in that.
0:26:25 And, you know, we can go through all the different critics.
0:26:27 Oh, not knowledge representation.
0:26:30 It screws up on, you know, prime numbers.
0:26:31 And, you know, blah, blah, blah, blah, blah.
0:26:32 We’ve already.
0:26:33 How many R’s in strawberry.
0:26:33 Yes, exactly.
0:26:34 Yeah, exactly.
0:26:35 You know.
0:26:36 And they go, wow, see?
0:26:36 It’s broken.
0:26:40 And you’re like, you’re missing the magic, right?
0:26:45 Like, yes, maybe there’s some structural things that over time, even in three to five years,
0:26:47 will continue to be a difficult problem for LLMs.
0:26:52 But AI is not just the one LLM to rule them all.
0:26:53 It’s a combination of models.
0:26:54 We already have combination of models.
0:26:58 We use diffusion models for various image and video tasks.
0:27:03 Now, by the way, they wouldn’t work without also having LLMs in order to have the ontology
0:27:11 to say, create me an Erik Torenberg as a Star Trek captain, you know, going out to, you know,
0:27:15 explore the universe and meeting and making first contact with the Vulcans and so forth,
0:27:19 which, you know, now with our phone, we could do that, right?
0:27:21 And it would be there, courtesy OpenAI.
0:27:25 And, you know, VO, because Google’s model is also very good.
0:27:27 But it needs the LLMs for that.
0:27:32 But the thing that people don’t track is, it’s going to be LLMs and diffusion models,
0:27:36 and I think other things with a fabric across them.
0:27:40 Now, one of the interesting questions is, is the fabric fundamentally LLMs?
0:27:41 Is the fabric other things?
0:27:43 I think that’s a TBD on this.
0:27:46 And the degree to which it gets to intelligence is an interesting question.
0:27:52 Now, one of the things I think is a, you know, I talk to all the critics intensely,
0:27:55 not because I necessarily agree with the criticism, but I’m trying to get to the,
0:27:57 what’s the kernel of insight?
0:28:02 And like one of the things that I loved about, you know, kind of a set of recent conversations
0:28:07 with Stuart Russell was saying, hey, if we could actually get the fabric of these models
0:28:16 to be more predictable, that would greatly allay the fears of what happens if something goes amok.
0:28:19 Well, okay, let’s try to do that.
0:28:23 Now, I don’t think the whole verification of outputs, like logical verification,
0:28:25 like we can’t even do verification of coding, right?
0:28:27 Like verification strikes me as very hard.
0:28:30 Now, brilliant man, maybe we’ll figure it out.
0:28:35 But on the other hand, hey, this is a good goal.
0:28:39 Can we make that more programmable, reliable?
0:28:44 I think that is a good goal that people, that very smart people should be working on.
0:28:46 And by the way, smart AIs.
0:28:51 Well, that’s some of the math side is like, if you think about the foundation of the world,
0:28:53 I mean, philosophy is the basis of everything.
0:28:55 Actually, math comes from philosophy.
0:28:57 It’s called the Cartesian plane after Descartes.
0:28:58 You know, you’re a philosophy major, you know this, right?
0:29:05 So you have, you have philosophy, math, physics, like why did Newton build calculus to understand
0:29:06 the real world?
0:29:11 So math, physics, physics gets you chemistry, chemistry gets you biology, and then biology
0:29:12 gets you psychology.
0:29:13 So that’s kind of the stack.
0:29:19 So if you solve math, that’s actually quite interesting because there’s a professor at Rutgers,
0:29:21 Kontorovich, who’s written about this a lot.
0:29:27 And I find this part fascinating just as a former mathematician, because there are some very,
0:29:28 very hard problems.
0:29:33 There’s a rumor that the Navier-Stokes equation is going to be solved by DeepMind, which would
0:29:34 be huge.
0:29:35 That’s one of the Clay Math problems.
0:29:40 But, you know, the Riemann hypothesis, like this is not like, there’s no eval, right?
0:29:45 If it’s like, this is why if you look at the progression of AI, there is the AIME, the American
0:29:50 Invitational Mathematics Examination, where the answers are all just
0:29:51 integers, right?
0:29:53 It’s like zero to 999 is the answer.
0:29:56 And then, of course, you can keep trying different things, and then you either get the right answer
0:29:57 or you don’t, and it’s very, very easy to do that.
0:30:00 Whereas once you get to proofs, very, very hard.
0:30:00 Yes.
0:30:04 And if you solve that, I mean, is that AGI?
0:30:06 No, because the goalposts keep changing out of AGI.
0:30:07 Yes.
0:30:09 But math is just so interesting.
0:30:10 AGI is the AI we haven’t invented.
0:30:10 Exactly.
0:30:11 That’s just the piece.
0:30:12 It’s the corollary to it.
0:30:16 It’s like, you know, if the worst AI you’re going to try is today, well, AGI is what you’re
0:30:17 going to have tomorrow.
0:30:17 Right.
0:30:19 It’s the same kind of thing.
0:30:23 But math is a very, very interesting one as well, because you have these things.
0:30:25 It’s not like solving high school math.
0:30:25 Right.
0:30:29 This is like, if you’re able to actually logically construct a proof for something and then validate
0:30:30 it.
0:30:30 Yeah.
0:30:32 There’s a whole programming language called Lean, which is for that.
0:30:34 Like, that stuff is also fascinating.
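For readers who haven’t seen it, here is the flavor of Lean, the proof language just mentioned: a statement and its proof are code, and the checker either accepts the proof or rejects it. This is a generic illustrative snippet, not anything from the conversation.

```lean
-- Two tiny theorems: the compiler, not a human referee, validates each proof.
theorem two_plus_two : 2 + 2 = 4 := rfl

theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```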
0:30:38 So there’s so many different vectors of attack, which is the other way of thinking about it.
0:30:39 It’s fascinating.
0:30:44 So as you just mentioned, Alex, Reid, you’re a philosophy major, but you’re also deeply interested in neuroscience.
0:30:49 And some people say that, hey, we’ll never create AI with its own sort of consciousness
0:30:51 because we don’t understand our own consciousness.
0:30:52 We don’t understand how our own brain works.
0:30:57 And then there’s a broader question as, oh, will AI have its own goals or will it have its
0:30:58 own agency?
0:31:02 What is sort of your view on some of these questions surrounding consciousness and the way
0:31:02 it relates to AI?
0:31:08 Well, consciousness is its own tarball, which I will say a few things about.
0:31:14 I think agency and goals is almost certain.
0:31:21 There is a question, I think this is one of the areas where we want to have some clarity
0:31:21 and control.
0:31:25 That was a little bit like the kind of question of what kind of compute fabric holds it together
0:31:32 because you can’t get complex problem solving without it being able to set its own minimum
0:31:33 sub-goals and other kinds of things.
0:31:38 And so goal setting and behavior and inference from it, and that’s where you get the classic
0:31:43 kind of like, whoa, you tell it to maximize paperclips, and it tries to convert the entire
0:31:45 planet into paperclips.
0:31:51 And that’s one thing that’s definitely old-computer thinking, which is no context awareness, something
0:31:53 I even worry about with modern AI systems.
0:31:57 But on the other hand, it’s like, look, if you’re actually creating an intelligence, they
0:32:02 don’t go, oh, let me just go try to convert everything into paperclips.
0:32:06 So it’s like, it’s actually, in fact, not that simple in terms of how it plays.
0:32:10 Now, consciousness is an interesting question because you’ve got some very smart people, Roger
0:32:16 Penrose, who I actually interviewed way back when on Emperor’s New Mind, speaking of mathematicians,
0:32:25 who are like, look, actually, in fact, there’s something about our form of intelligence, our
0:32:32 form of computational intelligence that’s quantum-based, that has to do with how our physics work, that
0:32:34 has to do with things like tubulars and so forth.
0:32:35 And by the way, it’s not impossible.
0:32:42 Like, that’s a, it’s a coherent theory from a very smart mathematician, like one of the
0:32:44 world’s smartest, right?
0:32:47 Like, it’s kind of in the category of there’s other people that are smart, but there’s no
0:32:50 one smarter, right, in that kind of vector.
0:32:51 And so, so that’s possible.
0:32:59 I don’t think you need consciousness for goal setting or reasoning.
0:33:04 I’m not even sure you need consciousness for certain forms of self-awareness.
0:33:08 There may be some forms of self-awareness that consciousness is necessary for.
0:33:09 It’s a tricky thing.
0:33:15 Philosophers have been trying to address this not very well for as long as we’ve got records
0:33:17 of philosophy, right?
0:33:18 And philosophers agree.
0:33:21 I’m not, philosophers wouldn’t think I was throwing them under the bus with this.
0:33:25 They’re like, yeah, this is a hard problem because it ties to agency and free will and
0:33:25 a bunch of other things.
0:33:29 And, and I think that the right thing to do is keep an open mind.
0:33:33 Now, part of keeping an open mind, I think Mustafa Suleyman wrote a very good piece in the
0:33:38 last month or two on, like, seeming consciousness, which is, we make too many mistakes, a la
0:33:43 the Turing test, a piece of brilliance, which is, well, it talks to us.
0:33:46 So therefore it’s fully intelligent and all the rest.
0:33:51 And so similarly, you had that kind of, you know, kind of nutty event from that Google
0:33:54 engineer who said, I asked this earlier model, was it conscious?
0:33:55 And it said, yes.
0:33:56 So therefore it is.
0:33:56 QED.
0:33:57 Yes, QED.
0:33:58 You’re like, no, no, no, no.
0:34:02 Like you have to be not misled by that kind of thing.
0:34:07 And like, for example, you know, the kind of thing that, you know, what, what I actually
0:34:12 think most people obsess about the wrong things when it comes to AI.
0:34:16 They obsess about the climate change stuff because actually, in fact, if you apply intelligence
0:34:20 at the scale and availability of electricity, you’re going to help climate change.
0:34:23 You’re going to solve grids and appliances and a bunch of other stuff.
0:34:26 It’s just like, no, this will be net super positive.
0:34:28 And by the way, you already see elements of it.
0:34:35 Google applied its algorithms to its own data centers, which are some of the best tuned grid
0:34:37 systems in the world, 40% energy savings.
0:34:40 I mean, just, you know, just that, and just applying it.
0:34:41 So that’s the mistake.
0:34:48 But one of the areas I think is this question around like, what is the way that we want children
0:34:49 growing up with AIs?
0:34:50 What is their epistemology?
0:34:52 What is their learning curves?
0:34:55 You know, what are the things that kind of play to this?
0:35:00 Because that kind of question is something that we want to be very intentional about in terms
0:35:01 of how we’re doing it.
0:35:05 And I think that’s like, like, if you want to go ask a good question that you should be
0:35:09 trying to get good answers to, and that you could contribute good answers
0:35:11 to, that’s a good one.
0:35:11 Yeah.
0:35:16 Well, the most cogent argument that I’ve heard against free will is just that we are biochemical
0:35:17 machines.
0:35:21 So if you want to test somebody’s free will, get them very hungry, very angry, like all
0:35:22 of these things where it’s just, there’s a hormone.
0:35:24 It’s like norepinephrine.
0:35:26 It’s just like that makes you act a particular way.
0:35:27 It’s like an override.
0:35:27 Yeah.
0:35:32 So you have this like free will thing, but then you just insert a certain chemical and then
0:35:33 like, boom, it changes.
0:35:35 Are you saying you’re not a Cartesian?
0:35:38 You don’t have a little pineal gland that connects the two substances?
0:35:38 Yeah.
0:35:39 I don’t know.
0:35:40 So, but it’s true.
0:35:43 I mean, it’s like, like hanger is, yeah, I’m hangry.
0:35:44 Like that’s a thing.
0:35:44 Yes.
0:35:49 And, you know, what is the, like, do you actually want, if you’re developing super
0:35:52 intelligence, do you want to have this like kind of silly override?
0:35:56 I mean, the reason why people go to jail sometimes that are perfectly normal is they get very angry.
0:36:00 They do things that are kind of like out of character, but it’s actually not out of character
0:36:04 if you think about this free will override of just like chemicals going through your bloodstream,
0:36:05 which is kind of crazy to think about.
0:36:09 Look, since we’re on a geeky, nerdy podcast, I’m going to say two geeky, nerdy things.
0:36:14 One, the classic one is people say, yes, we are biochemical machines, but let’s not be overly
0:36:15 simplistic on what a biochemical machine is.
0:36:18 That’s like the Penrose quantum computing, et cetera.
0:36:25 And you get to this weird stuff in quantum, which is, well, it’s in a probabilistic
0:36:27 superposition until it’s measured.
0:36:30 Why is there magic in measurement?
0:36:33 And is that magic in measurement, something that’s conscious, you know, blah, blah, blah.
0:36:35 So there’s a bunch of stuff there.
0:36:42 The other thing that I think is interesting that we’re seeing as a resurgence in philosophy
0:36:44 a little bit is idealism, right?
0:36:48 We would have thought as physical materialists that we go, no, no, idealists were
0:36:52 disproven, they’re gone, but actually beginning to say, no, actually, in fact, what exists
0:36:59 is thinking and that all of the physical things around us come from that thinking.
0:37:05 And obviously we see versions of this because, you know, I find myself entertained frequently
0:37:07 here in Silicon Valley by people saying, we’re living in a simulation.
0:37:08 I know it, you know it.
0:37:13 And you’re like, well, your simulation theory is very much like Christian intelligent design
0:37:14 theory.
0:37:17 It’s the, I have things that I can’t explain.
0:37:22 So therefore creator, no, therefore simulation, no, therefore creator of simulation.
0:37:27 You’re like, no, no, no, but I, you know, so clearly I’m not an idealist, but that’s
0:37:29 why I see some resurgence of idealism happening.
0:37:36 Well, I suspect we’ll solve for various definitions of AGI before we solve for the hard
0:37:39 problem of consciousness.
0:37:39 Yes.
0:37:45 Um, I want to return to, uh, LinkedIn, how we began the conversation, uh, because we
0:37:47 were lucky to, or I was lucky to work many years with you.
0:37:51 We would get pitches, uh, every week about a LinkedIn disruptor.
0:37:53 It was the last 20 years, right?
0:37:53 Yes.
0:37:56 And so, and nothing’s come even close.
0:37:58 And so it’s fascinating.
0:38:02 I’m curious why people sort of underrated how hard it was.
0:38:06 And people have this about Twitter too, or other things that kind of look simple perhaps, but
0:38:10 are actually very, very difficult to unseat and have a lot of staying power.
0:38:13 And, and it’s interesting, you know, OpenAI, they said they’re coming out with a job service
0:38:17 to quote, use AI to help find the perfect matches between what companies need and what workers
0:38:18 can offer.
0:38:21 I’m curious how you think about sort of LinkedIn’s durability.
0:38:26 So look, I obviously think LinkedIn is durable, but first and foremost, I kind of look at this
0:38:28 as humanity, society, industry.
0:38:32 So first and foremost is what are the things that are good for humanity?
0:38:33 Then what’s good for society?
0:38:34 Then what’s good for industry?
0:38:37 And by the way, we do industry to be good for society and humanity.
0:38:39 It’s not, it’s not oppositional.
0:38:42 It’s just a, you know, how you’re making these decisions and what you’re thinking about.
0:38:49 So I would be delighted if there were new, amazing things that helped people, you know, kind
0:38:54 of make productive work, find productive work, and make them do them.
0:38:58 We’re having, we’re going to have all this job transition coming from technological disruption
0:38:59 with AI.
0:39:00 Like it would be awesome.
0:39:06 It, of course, would be extra awesome if it was LinkedIn bringing it, just given the personal
0:39:10 craft of my own hands and pride at what we built and all the rest.
0:39:16 Now, the thing with LinkedIn and, you know, Alex was with me on a lot of this journey, you
0:39:18 know, as I sought his advice on various things.
0:39:26 The, the, LinkedIn was one of those things where it’s where the turtle eventually actually, in
0:39:29 fact, like grows into something huge.
0:39:34 Because for many, many years, the general scuttlebutt in Silicon Valley was LinkedIn was
0:39:39 the, was the, the, the dull, boring, useless thing, et cetera.
0:39:41 And it was going to be Friendster.
0:39:44 Probably most people listening to this don’t know what Friendster is.
0:39:46 Then MySpace, maybe a few people have heard of that.
0:39:47 Right.
0:39:51 You know, and then of course we’ve got, you know, Facebook and Meta and, you know, TikTok and
0:39:51 all the rest.
0:39:57 And part of the thing for LinkedIn is it’s built a network that’s hard to build.
0:39:58 Right.
0:40:03 Because it doesn’t have the same sizzle and pizzazz that photo sharing has.
0:40:08 It doesn’t have the same sizzle and pizzazz that, you know, you know, like one of the things
0:40:12 that, you know, you were referencing the seven deadly sins comment.
0:40:16 And back when I started doing that, 2002, yes, I left my walker at the door.
0:40:21 The, the thing that I used to say was Twitter was identity.
0:40:22 I actually mistook it.
0:40:23 It’s wrath.
0:40:24 Right.
0:40:24 Yeah.
0:40:27 And so it doesn’t have the wrath, you know, kind of component of it.
0:40:34 And so, and so the, you know, the thing that, and you said with LinkedIn, LinkedIn’s greed.
0:40:35 Great.
0:40:39 You know, cause seven deadly sins kind of, you know, cause, cause that’s, you know, a motivation
0:40:41 that’s very common across a lot of human beings.
0:40:42 Rich and lazy.
0:40:42 Yes, exactly.
0:40:49 And so, or, you know, you’re, you’re putting it in the punchy way, but simply being productive.
0:40:49 Yeah.
0:40:51 More value creation.
0:40:51 Right.
0:40:53 And accruing some of that value to yourself.
0:41:01 And so, and so I think the reason why it’s been difficult to create a, a disruptor to LinkedIn
0:41:03 is it’s a very hard network to build.
0:41:05 It’s actually not easy.
0:41:11 And, and by staying really true to it, you end up getting a lot of people going, well, this
0:41:13 is, this is where I am for that.
0:41:17 And now I have a network of people with this, and we are here together collaborating and
0:41:18 doing stuff together.
0:41:21 And that’s the thing that a new thing would have to be.
0:41:34 And, you know, when I saw GPT-4 and knew that Microsoft had access to this,
0:41:40 I called the LinkedIn people and said, you guys have got to get in the room to see this, right?
0:41:44 Because you need to start thinking about what are the ways we help people more with that?
0:41:48 Because you start with, this is actually one of the things that I think people don’t realize
0:41:51 about Silicon Valley, because, you know, the general discussion is, oh, you’re trying to
0:41:53 make all this money through equity and all this revenue.
0:41:54 Of course, you know, business people are trying to do that.
0:42:00 But they don’t realize as you start with, what’s the amazing thing that you can suddenly create?
0:42:04 And part of it is like lots of these companies, like it started with, and you go, what’s your
0:42:05 business model?
0:42:06 And you go, I don’t know.
0:42:10 Like, yeah, we’re going to try to work it out, but I can create something amazing here.
0:42:16 And that’s actually one of the fundamental, like, places of what the, you know, call it the
0:42:20 religion of Silicon Valley and the knowledge of Silicon Valley that I so much, you know,
0:42:22 love and admire and embody.
0:42:24 That’s actually a question that I have.
0:42:25 So I’ll say one thing.
0:42:26 It’s a huge compliment to LinkedIn.
0:42:27 It’s anti-fragile.
0:42:28 Yes.
0:42:30 And that, like, Facebook, oh, nobody goes there anymore.
0:42:31 It’s like the Yogi Bear.
0:42:31 It’s like the Yogi Berra line.
0:42:32 Nobody goes there anymore.
0:42:34 It’s, oh, there are too many parents there.
0:42:35 And there’s always been a new one.
0:42:36 Like, where did Snap?
0:42:37 Like, how did Snap start?
0:42:40 Like, all these other networks started because people didn’t want to hang out with their boomer
0:42:40 parents.
0:42:43 My kid won’t let me follow him on Instagram, right?
0:42:45 It’s like, he doesn’t want to use Facebook.
0:42:47 So LinkedIn has survived through all of that.
0:42:52 But you referenced something that I think is a very interesting point, which is back in,
0:42:57 like, Web 2, it was like, get lots of traffic, get amazing retention, you know, smile
0:42:59 curve, and then you will figure out monetization.
0:42:59 Yes.
0:43:01 And, like, that isn’t happening right now.
0:43:02 Yeah.
0:43:04 It’s not like, yes, it happened with ChatGPT, but it’s like, it’s $20 a month.
0:43:05 Yes.
0:43:06 Right?
0:43:10 Like, the monetization was kind of built in very, very clear subscription versus, like,
0:43:10 become giant.
0:43:11 Yes.
0:43:11 Build a giant.
0:43:15 Like, do you think there will be new ones of those with AI?
0:43:15 Yes.
0:43:17 And there will be new kind of freemium.
0:43:18 It’s part of our tool chest.
0:43:22 Now, part of the reason why it’s more tricky, especially when you’re doing open AI, is because
0:43:26 the, like, the cogs are changing a little off.
0:43:26 Yes.
0:43:27 For now.
0:43:27 Yes.
0:43:27 No, no.
0:43:32 But, like, and so you just can’t, this is one of the reasons why at PayPal we had to change
0:43:36 to, like, we, as you know, because you were close to us there, like, we had to change to
0:43:41 a paid model because we’re like, oh, look, we have exponentiating volume, which means exponentiating
0:43:45 cost curve, which means despite having raised hundreds of millions of dollars, we could literally
0:43:48 count the, we could point to the hour that we’d go out of business.
0:43:49 Right?
0:43:51 Because, you know, no, you can’t have an exponentiating cost curve.
0:43:55 So, I think that’s one of the reasons why some of it has been different in AI because you,
0:44:00 like, you can’t have an exponentiating cost curve without at least a following revenue
0:44:00 curve.
0:44:02 But it’s almost no fun.
0:44:03 It’s like Pinterest.
0:44:04 It’s like, how are they going to make money now?
0:44:04 Yes.
0:44:05 It’s a big public company.
0:44:09 It’s like there were a lot of these during that era, and now it’s like they’re burning
0:44:12 lots of money, they’re raising lots of money, but the subscription revenue is baked
0:44:12 in from day zero.
0:44:13 Yeah.
0:44:14 And that’s the fundamental difference.
0:44:15 But they have to because of the cost curve.
0:44:16 They have to, exactly.
0:44:16 Yeah.
0:44:20 So, I’m waiting for, like, one of these, like, you know, net new companies that appeals
0:44:23 to probably one of the seven deadly sins that is the new counterpart.
0:44:23 Yeah.
0:44:25 Well, I’d be happy to work on it.
0:44:26 Yes.
0:44:28 Well, it is fascinating.
0:44:31 Some people, many people have tried sort of different angles on LinkedIn.
0:44:35 One that I was curious about a few years ago was sort of this idea of could you get, what’s
0:44:37 on LinkedIn is resumes, but not necessarily references.
0:44:42 But the same way that resumes are viral, references are, like, anti-viral, or anti-memetic, and
0:44:43 people don’t want them on the internet.
0:44:47 If there was a data set that people wanted on the internet, LinkedIn would have done it
0:44:48 to some degree.
0:44:53 But, yeah, I think most people who try these attempts don’t kind of appreciate sort of the
0:44:55 subtleties of a…
0:44:59 And I’ve actually, I mean, we do have the equivalent of book blurb references on it.
0:44:59 Right, you have endorsements.
0:45:00 Yes, endorsements.
0:45:01 But you don’t have a negative reference.
0:45:05 Well, but, by the way, part of the reason why negative references are tricky is you have complexity
0:45:06 in social relationships.
0:45:09 That’s the negative virality point that you were just making.
0:45:15 And then you also have complexity on, like, you know, kind of not just legal liability,
0:45:16 but social relationships and a bunch of other stuff.
0:45:20 Now, LinkedIn is still the best way to find a negative reference.
0:45:25 I mean, that’s actually one of the things that I use LinkedIn to figure out who might know
0:45:25 a person.
0:45:25 Yeah.
0:45:28 And I have a standard email.
0:45:29 You’ve probably gotten a bunch of these from me.
0:45:35 where I email people saying, um, could you rate this person for me from
0:45:37 one to 10 or, or reply, call me?
0:45:40 It’s a negative one.
0:45:40 Yes.
0:45:41 Yes.
0:45:41 Right.
0:45:44 And when you get a call me, you’re like, okay.
0:45:45 Don’t even need to take the call.
0:45:45 Yeah.
0:45:46 It’s amazing.
0:45:47 I understand.
0:45:47 Right.
0:45:52 And by the way, sometimes you go, when a person writes back 10, you’re like, really?
0:45:55 Like, best person you know?
0:45:56 Right.
0:45:58 But what you’re looking for is like a set of eights and nines.
0:45:58 Yeah.
0:46:01 And if you get a set of eights and nines, well, you may still call and get some, get
0:46:03 some information, but you’re like, okay, I got it.
0:46:05 I got quick referential information.
0:46:09 Whereas, by the way, more often than not, when you’re checking someone, you really,
0:46:11 you know, you get a couple of call me’s.
0:46:11 Yeah.
0:46:16 ’Cause, and it’s just that quick: because it’s email, a one-sentence thing, you get back, call
0:46:16 me.
0:46:17 You’re like, okay, I understand.
0:46:18 Yeah.
0:46:22 Um, we have about 10 minutes left just logistics check.
0:46:24 Um, a couple of last things we’ll get into.
0:46:26 Um, is there anything you wanted to make sure?
0:46:27 But we can do this again.
0:46:27 This is always fun.
0:46:28 Yeah.
0:46:28 That’s great.
0:46:34 The, um, I’m curious, Reid, as you’ve sort of continued to up-level in your career and
0:46:37 have more opportunities and they seem to compound, especially, you know, post-selling LinkedIn,
0:46:42 how have you decided where is the highest-leverage use of your time, or where can
0:46:44 you have the biggest impact?
0:46:45 What’s your mental framework?
0:46:52 So, I mean, one of the things that I’m sure I speak for all three of us is an amazing time
0:46:53 to be alive.
0:46:59 I mean, this AI and the transformation of what it means for evolving homo techne and what,
0:47:04 what is possible in life and in society and work and all the rest is just amazing.
0:47:10 So, I stay as, uh, involved with that as I possibly can.
0:47:14 Like, it has to be something that’s so important that I will stop doing that.
0:47:15 Yeah.
0:47:21 Now, within that, you know, part of that was, you know, co-founding, um, Manas AI with
0:47:27 Siddhartha Mukherjee, who’s the CEO, uh, author of The Emperor of All Maladies, um, inventor
0:47:29 of, um, some T-cell therapies.
0:47:33 So, it was like, for example, getting instruction from him on the FDA process, you
0:47:37 know, that’s the kind of thing that makes us all run screaming for the hills, right?
0:47:38 As a, as an instance.
0:47:45 Um, and so, uh, you know, that kind of stuff, but also, um, you know, like one of the things
0:47:50 I think is really important is as technology drives more and more of everything that’s going
0:47:53 on in society, how do we make government more intelligent on technology?
0:47:59 So, you know, every kind of, um, you know, kind of well-ordered Western democracy, um, I’ve
0:48:02 been doing this for at least 20 to 25 years.
0:48:09 If, if a minister, you know, or kind of senior person from a, from a democracy comes and asks
0:48:11 for advice, I give it to them.
0:48:15 So, you know, just last week I was in France talking with Macron because he’s trying to figure
0:48:19 out, like, how do I help French industry, French society, French people?
0:48:20 What are the things I need to be doing?
0:48:26 You know, if all the frontier models are going to be built in the U.S. and maybe China, what
0:48:30 does that mean for how I help, you know, our people and so forth?
0:48:34 And, and he’s doing the exact right thing, which is, I understand that I have a potential
0:48:35 challenge.
0:48:37 What do I do to help my people?
0:48:37 Yeah.
0:48:38 How do I reach out?
0:48:39 How do I talk?
0:48:39 Sure.
0:48:44 They’ve got Mistral, they’ve got some other things, but, like, how do I maximally help what
0:48:44 I’m doing?
0:48:46 And so putting a bunch of time into that as well.
0:48:46 Yeah.
0:48:51 I remember seeing your calendar and it was, what seemed like, seven days a week of
0:48:53 meetings, absolutely stacked.
0:48:55 And one of the ways in which.
0:48:56 I’ve gone to six and a half days.
0:48:56 Okay.
0:48:58 I’m glad you’ve calmed down.
0:49:01 One of the ways in which you’re able to do that, one, it’s important problems, but two,
0:49:05 you, you work on projects with friends, sometimes over, over decades.
0:49:07 And you, you, maybe we’ll close here.
0:49:08 You’ve thought a lot about friendship.
0:49:09 You’ve, you’ve, you’ve written about it.
0:49:10 You’ve spoken about it.
0:49:16 I’m curious what you’ve found most remarkable or most surprising about friendship or what
0:49:19 you think more people should appreciate about it, especially as we enter this AI era where
0:49:22 people sort of are questioning, you know, for the next generation, what’s their relationship
0:49:23 to friendship going to be?
0:49:28 I actually, I’m going to write a bunch about this specifically because AI is now bringing
0:49:32 some very important things that people need to understand, which is friendship is a joint
0:49:33 relationship.
0:49:37 It’s not a, Oh, you’re just loyal to me or, Oh, you just do things for me.
0:49:38 Oh, this person does things for me.
0:49:40 Well, there’s a lot of people who do things for you.
0:49:45 Your bus driver does things for you, you know, like, like, but that doesn’t mean that you’re
0:49:45 friends.
0:49:49 Friends, like, for example, like a classic way of putting it is like, Oh, I had a really bad
0:49:52 day and I show up to my friend, Alex, and I want to talk to him.
0:49:54 And then Alex is like, Oh my God, here’s my day.
0:49:55 I’m like, Oh, your day is much worse.
0:49:57 We’re going to talk about your day versus my day.
0:50:02 You know, that’s the kind of thing that happens because what I think fundamentally happens
0:50:07 with friends is two people agree to help each other become the best possible versions of
0:50:07 themselves.
0:50:13 And by the way, sometimes that leads to friendship conversations that are tough love.
0:50:16 They’re like, yeah, you’re fucking this up and I need to talk to you about it.
0:50:17 Right.
0:50:22 It’s not, I tell you, like, you know, the, the whole sycophancy phase in AI thing.
0:50:26 It’s not that it’s like the, how do, how do I help you?
0:50:31 But also, as part of that, I, uh, I gave the, um, commencement speech at Vanderbilt
0:50:33 a few years back and it was on friendship.
0:50:41 And part of it was to say, look, part of friends is not just does, does Alex help me, but Alex
0:50:42 allows me to help him.
0:50:43 Right.
0:50:46 And as part of that, that’s part of how I become a deeper friend.
0:50:51 I learn things from it, it’s not just that I’m helping Alex; that joint relationship’s
0:50:52 really important.
0:50:56 And you’re going to see all kinds of nutty people saying, oh, I have your AI friend right
0:50:57 here.
0:50:58 And it’s like, no, you don’t.
0:50:59 It’s not a bi-directional relationship.
0:51:04 Maybe awesome companion, like just spectacular, but it’s not a friend.
0:51:08 And you need to understand like part of friend is part of when we begin to realize that life’s
0:51:11 not just about us, that we, that it’s a team sport.
0:51:12 We go into it together.
0:51:18 Um, that sometimes, you know, friendship conversations are wonderful and difficult, you know, and that
0:51:18 kind of thing.
0:51:20 And I think that’s, what’s really important.
0:51:25 And now that, you know, we’ve got this blurriness that AI has created, it’s like, shoot, I have
0:51:30 to go write some of this very soon so that people understand how to navigate it and why
0:51:35 they should not think about AI anytime soon as friends.
0:51:39 Well, one thing I’ve always appreciated about you as well is you’re able to be friends with
0:51:44 people you have disagreements with, or people who, you know, you are not close
0:51:48 to for a few years, but you can reconnect and sort of, uh, yeah, that ability is, um,
0:51:52 Yeah, it’s about us making each other the better versions of ourselves.
0:51:55 And, and sometimes that, you know, those, sometimes those go through rough patches.
0:51:56 Yeah.
0:51:57 I think it’s a great place to close.
0:51:58 Reid, thanks so much for coming on the podcast.
0:51:59 My pleasure.
0:52:01 And I hope we do this again.
0:52:01 Yeah.
0:52:06 Thanks for listening to this episode of the A16Z podcast.
0:52:11 If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review
0:52:13 and share it with your friends and family.
0:52:17 For more episodes, go to YouTube, Apple Podcasts, and Spotify.
0:52:24 Follow us on X at A16Z and subscribe to our Substack at a16z.substack.com.
0:52:25 Thanks again for listening.
0:52:27 And I’ll see you in the next episode.
0:52:32 As a reminder, the content here is for informational purposes only, should not be taken as legal,
0:52:37 business, tax, or investment advice, or be used to evaluate any investment or security, and is
0:52:41 not directed at any investors or potential investors in any A16Z fund.
0:52:46 Please note that A16Z and its affiliates may also maintain investments in the companies discussed
0:52:46 in this podcast.
0:52:54 For more details, including a link to our investments, please see A16Z.com forward slash disclosures.

Reid Hoffman has been at the center of every major tech shift, from co-founding LinkedIn and helping build PayPal to investing early in OpenAI. In this conversation, he looks ahead to the next transformation: how artificial intelligence will reshape work, science, and what it means to be human.

In this episode, Reid joins Erik Torenberg and Alex Rampell to talk about what AI means for human progress, where Silicon Valley’s blind spots lie, and why the biggest breakthroughs will come from outside the obvious productivity apps. They discuss why reasoning still limits today’s AI, whether consciousness is required for true intelligence, and how to design systems that augment, not replace, people.

Reid also reflects on LinkedIn’s durability, the next generation of AI-native companies, and what friendship and purpose mean in an era where machines can simulate almost anything. This is a sweeping, high-level conversation at the intersection of technology, philosophy, and humanity.

 

Resources:

Follow Reid on X: x.com/reidhoffman
Follow Alex on X: x.com/arampell

 

Stay Updated: 

If you enjoyed this episode, be sure to like, subscribe, and share with your friends!

Find a16z on X: https://x.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX

Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.


