AI transcript
0:00:12 There’s one obvious historical analogy, which is, you know, the personal computer from sort of invention in 1975 through to, you know, basically 1992 was a text prompt system.
0:00:16 17 years in, you know, the whole industry took a left turn into GUIs and never looked back.
0:00:21 And then, by the way, you know, five years after that, the industry took a left turn into web browsers and never looked back, right?
0:00:29 And, you know, look, I’m sure there will be chatbots 20 years from now, but I’m pretty confident that both the current chatbot companies and many new companies are going to figure out many kinds of user experiences
0:00:31 that are radically different, that we don’t even know yet.
0:00:39 Every major technology shift brings new capabilities, new pressures, and new questions about how progress unfolds.
0:00:53 At a16z’s Runtime conference, I sat down with Marc Andreessen and Ben Horowitz to discuss the current state of AI, how reasoning and creativity are evolving, how markets adjust to new technology, and what this moment means for founders and institutions shaping what comes next.
0:00:55 Now, to Marc and Ben.
0:01:03 Please join me in welcoming Marc Andreessen and Ben Horowitz with general partner Erik Torenberg.
0:01:17 Thank you for the Rakim. Who did that?
0:01:18 Ben picked the music.
0:01:31 Marc, there’s been a lot of talk lately about the limitations of LLMs, that they can’t do true invention of, say, new science, that they can’t do true creative genius, that it’s just combining or packaging.
0:01:34 You have thoughts here, what say you?
0:01:36 Yeah, so you get all these questions.
0:01:44 And they usually come in either sort of, are language models intelligent, in the sense of, can they actually process information and have sort of conceptual breakthroughs the way that people can?
0:01:46 And then there’s, are language models or video models creative?
0:01:49 Can they create new art, actually have genuine creative breakthroughs?
0:01:51 And of course, my answer to both of those is, well, can people do those things?
0:02:04 And I think there’s two questions there, which is, okay, even if some people are, quote unquote, intelligent as in having original conceptual breakthroughs and not just, let’s just say, regurgitating the training set or following scripts, what percentage of people can actually do that?
0:02:08 I’ve only met a few, some of them are here in the room, but not that many, most people never do.
0:02:11 And then creativity, I mean, how many people are actually genuinely creative, right?
0:02:14 And so you kind of point to a Beethoven or a Van Gogh or something like that.
0:02:15 You’re like, okay, that’s creativity.
0:02:16 And yeah, that’s creativity.
0:02:18 And then how many Beethovens and Van Goghs are there?
0:02:19 Obviously not very many.
0:02:27 So one is just like, okay, like if these things clear the bar of 99.99% of humanity, then that’s pretty interesting just in and of itself.
0:02:36 But then you dig into it further and you’re like, okay, like how many actual real conceptual breakthroughs have there ever been actually ever in human history as compared to sort of remixing ideas?
0:02:44 If you look at the history of technology, it’s almost always the case that the big breakthroughs are the result of usually at least 40 years of sort of work ahead of time, four decades.
0:02:49 Right, in fact, language models themselves are the culmination of eight decades, right, of previous work.
0:02:50 And so there’s remixing.
0:02:53 And then in the arts, it’s the exact same thing, you know, novels and music and everything.
0:02:58 There are clearly creative leaps, but there’s just tremendous amounts of influence that came in from people who came before.
0:03:05 And even if you think about, like, somebody with the creativity of a Beethoven, like, there was a lot of Mozart and Haydn and the composers that came before in Beethoven.
0:03:07 And so there’s just tremendous amounts of remixing and combination.
0:03:18 And so it’s a little bit of an angels-dancing-on-the-head-of-a-pin question, which is, like, if you can get within, you know, 0.01% of kind of world-beating, generational creativity and intelligence, like, you’re probably all the way there.
0:03:24 So emotionally, I want to like hold out hope that there is still something special about human creativity.
0:03:25 And I certainly believe that.
0:03:26 And I very much want to believe that.
0:03:27 But I don’t know.
0:03:30 When I use these things, I’m like, wow, they seem to be awfully smart and awfully creative.
0:03:32 So I’m pretty convinced that they’re going to clear the bar.
0:03:37 Yeah, I think that seems to be a common theme in your analysis when people talk about the limitations of LLMs.
0:03:39 Can they do transfer learning or just learning in general?
0:03:42 You seem to ask, can people do this?
0:03:43 Yes, can people do these things?
0:03:44 Well, it’s like lateral thinking, right?
0:03:46 So yeah, so it’s like reasoning in or out of distribution, right?
0:03:47 And so it’s okay.
0:03:49 I know a lot of people who are very good at reasoning inside distribution.
0:03:53 How many people do I actually know who are good at reasoning outside of distribution and doing transfer learning?
0:03:55 And the answer is like, I know a handful.
0:03:59 I know a few people where whenever you ask them a question, you get an extremely original answer.
0:04:04 And usually that answer involves bringing in some idea from some adjacent space and basically being able to bridge domains.
0:04:08 And so you’ll ask them a question about, I don’t know, finance, and they’ll bring you an answer from psychology.
0:04:12 Or you ask them a question about psychology, and they’ll bring you an answer from biology, right?
0:04:13 Or whatever it is.
0:04:16 And so I know, I don’t know, sitting here today, probably three.
0:04:18 I probably know three people who can do that reliably.
0:04:20 I’ve got 10,000 in my address book.
0:04:23 And so three out of 10,000 is not that high a percentage.
0:04:25 By the way, I find this very encouraging.
0:04:28 Yeah, immediately the mood in the room has gone completely to hell.
0:04:32 I find this very encouraging because look at what humanity has been able to build, right?
0:04:34 Despite all of our limitations, right?
0:04:42 I look at all the creativity that we’ve been able to exhibit and all the amazing art and all the amazing movies and all the amazing novels and all the amazing technical inventions and scientific breakthroughs.
0:04:46 And so we’ve been able to do everything we’ve been able to do with the limitations that we have.
0:04:51 And so I think that, do you need to get to the point where you are 100% positive that it’s actually doing original thinking?
0:04:53 I don’t think so.
0:04:54 I think it’d be great if you did.
0:04:56 And I think ultimately we’ll probably conclude that that’s what’s happening.
0:04:59 But it’s not even necessary for just tremendous amounts of improvement.
0:05:04 Ben, we were just celebrating some hip-hop legends at your paid-in-full event last week.
0:05:06 And so you think a lot about creative genius.
0:05:07 How do you think about this question?
0:07:15 Yeah, I mean, I think that I agree with Marc that, whatever it is, it’s very useful, even if it isn’t all the way at that level.
0:05:36 I think that there’s something about the actual, like, real-time human experience that humans are very into, at least in art, where, you know, with the current state of the technology, kind of the pre-training doesn’t have quite the right data to get to what you really want to see.
0:05:39 But, you know, it’s pretty good.
0:05:47 One of Ben’s nonprofit activities is something called the Paid-in-Full Foundation, which is honoring and actually providing essentially a pension for sort of the great innovators in rap and hip-hop.
0:05:54 And so he knows many of them; we were just at the event, and he had many of the kind of leading lights of that field from the last 50 years perform.
0:05:55 And it’s really fun to meet them and talk to them.
0:06:02 But, like, how many people in that entire field over the course of the last 50 years would you classify as, like, a true conceptual innovator?
0:06:05 Yeah, well, you know, it’s interesting.
0:06:07 It depends how broadly you define it.
0:06:10 But there were several of them there on Saturday.
0:06:13 So, Rakim, I think, yeah, Rakim, you’d certainly put in that category.
0:06:15 Dr. Dre, you’d certainly put in that category.
0:06:18 George Clinton, you’d certainly put in that category.
0:06:23 In a narrower sense, like, Kool G Rap certainly had a new idea.
0:06:25 But, you know, it depends.
0:06:32 Like, a fundamental kind of musical breakthrough, you’d probably just say Rakim and George Clinton.
0:06:35 So two out of?
0:06:37 Well, I mean, those are just the guys who were there.
0:06:38 Oh, yeah, yeah, yeah.
0:06:40 Yeah, but, yeah, it’s a tiny percentage.
0:06:42 Tiny, tiny, tiny, tiny, tiny.
0:06:44 We had the fireside last night with Jared Leto.
0:06:48 He was talking about how many people in Hollywood are really scared of,
0:06:49 or against, what’s happening here.
0:06:52 What do you see in, you know, when you talk to the Dr. Dre’s, the Nas, the Kanye’s?
0:06:53 Are they excited?
0:06:54 Are they using it?
0:06:54 Are they?
0:07:00 So, everybody who I speak to, there are definitely people who are scared in music,
0:07:03 but there are a lot of people who are very, very interested in it.
0:07:08 And particularly the hip-hop guys are interested because it’s almost like a replay of what they did, right?
0:07:13 They just took other music and they kind of built new music out of it.
0:07:17 And I think that AI is a fantastic creative tool for them.
0:07:19 It, like, way opens up the palette.
0:07:27 And then, a lot of what hip-hop is, is kind of telling a very specific story of a specific time and place,
0:07:33 which having intimate knowledge and being trained just on that thing is actually an advantage
0:07:37 as opposed to being, like, a generally smart music model.
0:07:44 At the same time, people also use the same logic of, hey, whatever is more intelligent will rule whatever is less intelligent.
0:07:45 And Marc, you recently…
0:07:47 Not said by anybody who owns a cat.
0:07:56 Marc, you recently tweeted, a supreme shape rotator can only rotate shapes, but a supreme wordcel can rotate shape rotators.
0:07:58 And also…
0:07:59 Someone’s clapping here.
0:08:04 And also, high IQ experts work for mid-IQ generalists.
0:08:05 What means?
0:08:08 Yeah, so the PhDs all work for MBAs, right?
0:08:09 So, it’s…
0:08:10 Okay.
0:08:11 So, yeah.
0:08:12 Well, I just take it up a level.
0:08:15 It’s just like, when you look at the world today, do you think we’re being ruled by the smart ones?
0:08:18 Right?
0:08:21 Is that your big conclusion from, like, current events, current affairs?
0:08:22 Right?
0:08:23 Okay, we put the geniuses in charge?
0:08:25 You mean Kamala and Trump aren’t the best?
0:08:27 Well, let’s not even be specific towards the U.S.
0:08:28 Let’s just look all over the world.
0:08:29 Yeah.
0:08:31 And so, I think two things are true.
0:08:33 One is, we probably all kind of underrate the importance of intelligence.
0:08:43 And actually, there’s a whole kind of backstory here, which is that intelligence actually turns out to be this, like, incredibly inflammatory kind of topic for lots of reasons over the last hundred years, which we could talk about in great detail.
0:08:48 And even just the very idea that, like, some people are smarter than other people, like, really freaks people out, and people don’t like to talk about it.
0:08:50 We really struggle with that as a society.
0:08:56 And then it is true that, in humans, intelligence is correlated to almost every kind of positive life outcome, right?
0:09:01 And so, generally in the social sciences, what they’ll tell you is that what they call fluid intelligence,
0:09:06 the g factor, or IQ, is sort of 0.4 correlated to basically everything.
0:09:12 And so, it has 0.4 correlation to, like, educational outcomes and professional outcomes and income.
0:09:14 And, by the way, also, like, life satisfaction.
0:09:18 And, by the way, nonviolence, being able to solve problems without physical violence and so forth.
0:09:21 And so, like, on the one hand, like, we probably all underrate intelligence.
0:09:26 On the other hand, the people who are in the fields that involve intelligence probably overrate intelligence.
0:09:34 And you might even coin a term, like, maybe, like, intelligence supremacist or something like that, where it’s just like, oh, like, intelligence is very important.
0:09:37 And so, therefore, maybe it’s, like, the most important thing or the only thing.
0:09:40 But then you look at reality and you’re like, okay, that’s clearly not the case.
0:09:42 Yeah, it’s still only 0.4, right?
0:09:43 Yeah.
0:09:44 Well, so, to start with, it’s only 0.4.
0:09:48 And, you know, in the social sciences, 0.4 is a giant correlation factor, right?
0:09:55 Like, most things where you can correlate, whether it’s, you know, genes or observed behavior or whatever, to anything in the social sciences, the correlations are much smaller than that.
0:09:56 So, by that standard, 0.4 is huge.
0:09:58 But it’s still only 0.4.
0:10:10 So, even if you’re, like, a full-on genetic determinist and you’re just, like, you know, genetic IQ just, like, drives all these outcomes, like, it still doesn’t explain, you know, the other 0.6 of the correlation.
0:10:10 And so, that leaves a lot.
0:10:12 But that’s just on the individual level.
0:10:13 Then you just look at the collective level.
0:10:15 Well, you just look at the collective level.
0:10:23 And, like, a famous, famous observation is, you take any group of people, you put them in a mob, and the mob is dumber, right, than the average.
0:10:29 And you put a bunch of smart people in a mob, and they definitely turn dumber, like, and you see that all the time, right?
0:10:33 And so, you put people in groups, and they behave very differently.
0:10:40 And then you create these, you know, questions around, like, who’s in charge, whether it’s who’s in charge at a company or who’s in charge of a country.
0:10:48 And, like, whatever the filtration process is, it’s certainly not only on IQ, and it may not even be primarily on IQ.
0:10:56 And so, therefore, it’s just, like, this assumption that you kind of hear in some of the AI circles, which is, like, inevitably, the smart, you know, kind of thing is going to govern the dumb thing.
0:10:58 Like, I just think that’s just sort of very easily and obviously falsified.
0:11:02 Like, intelligence isn’t sufficient.
0:11:05 And then you can just observe it.
0:11:08 You know, we’re all in this room lucky enough to know a lot of smart people, and you just kind of observe smart people.
0:11:14 Like, some smart people, you know, really figure out how to have their stuff together and become very successful, and a lot of smart people never do.
0:11:25 And so, there obviously are, and in fact must be, many other factors that have to do with success and with, like, who’s in charge than just raw intelligence.
0:11:36 It begs the follow-up question of what are some examples of what that might be, you know, skills sort of outside of intelligence, and, more specifically, why couldn’t AI systems, you know, learn them?
0:11:46 Yeah, so, Ben, like, what, other than intelligence, what, in your experience, determines, for example, success in leadership, or in entrepreneurship, or in solving complex problems, or organizing people?
0:11:49 Yeah, well, there are many things.
0:11:56 You know, like, a lot of it is being able to have a confrontation in the correct way.
0:12:23 And, like, there’s some intelligence in that, but a lot of it is just really understanding who you’re talking to, you know, being able to interpret everything about how they’re thinking about it, and just kind of generally seeing decisions through the eyes of the people working in the company, not through your eyes, which is a skill that, you know, you develop by talking to people all the time, understanding what they’re saying, and so forth, these kinds of things.
0:12:38 And it’s, you know, certainly not an IQ thing. Now, like, I could imagine an AI training on any individual, and, like, figuring it all out, and knowing what to say, and so forth.
0:12:45 But then you also need that integrated with, you know, like, whatever the business ought to be doing.
0:12:53 So you’re not trying to do what’s popular, you’re trying to get people to do what’s correct, even if they don’t like it.
0:12:55 And, you know, that’s a lot of management.
0:13:01 So it’s not a problem anybody’s working on currently, but maybe they will.
0:13:10 Right, some combination of, like, courage, some combination of motivation, some combination of emotional understanding, theory of mind.
0:13:12 Yeah, you know, what do people want?
0:13:20 Like, you know, married to, you know, what needs to be done, and then, like, how talented are they?
0:13:22 Like, which ones can you afford to lose?
0:13:24 Like, if they jump out the window, it’s fine.
0:13:26 You know, which one’s not fine, you know, this kind of thing.
0:13:32 There’s a lot of, like, weird subtleties to it, and it’s very situational.
0:13:38 I think the hardest thing about it, and why management books are so bad, is because it’s situational.
0:13:45 You know, like, your company, your product, your people, your org chart is very, very different than, you know,
0:13:48 here are the five steps to building a strategy.
0:13:53 It’s like, well, that’s the most useless fucking thing I ever read, because it has nothing to do with you.
0:13:59 So, one of the interesting things on this, like, the concept of theory of mind is really important, right?
0:14:03 So, the theory of mind is, can you, in your head, model what’s happening in the other person’s head, right?
0:14:07 And you would think that, you know, obviously, people who are smarter should be better at that.
0:14:11 It turns out that that may not be true, and there’s a reason to believe that that’s not true, which is as follows.
0:14:19 So, the U.S. military was the early adopter and has continued to be sort of the leading adopter in U.S. society of actually IQ testing.
0:14:25 And they basically launder it through something called the ASVAB, the Armed Services Vocational Aptitude Battery.
0:14:28 But it’s essentially an IQ test.
0:14:40 And so, they still use basically explicit IQ tests, and they slot people into different specialties and roles, you know, in part according to IQ, including into leadership roles.
0:14:44 And so, they know what everybody’s IQ is, and they kind of organize around that.
0:14:51 And one of the things that they’ve found over the years is, if the leader is more than one standard deviation of IQ away from the followers, it’s a real problem.
0:14:54 And that’s true in both directions, right?
0:15:07 If the leader is not smart enough, that’s a problem: for somebody who is less smart to model the mental behavior of somebody who is more smart is, of course, inherently very challenging and maybe impossible.
0:15:16 But it turns out the reverse is also true, which is if the leader is two standard deviations above the norm of the organization that he’s running, he also loses theory of mind, right?
0:15:23 It’s actually very hard for very smart people to model the internal thought processes of even moderately smart people.
0:15:31 And so, there’s actually a real need to have a level of connection there that’s not just raw intelligence, right?
0:15:38 And therefore, by inference, if you had a person or a machine that had, you know, a thousand IQ or something like it, it may just be that it would be so alien.
0:15:46 Its understanding of reality would be so alien to the people or the things that it was managing that it wouldn’t even be able to connect in any sort of realistic way.
0:15:53 So, again, this is a very good argument that, like, yeah, the world is going to be far from organized by IQ for centuries to come.
0:15:57 Yeah, and Zuckerberg had a great line, which is, intelligence is not life.
0:16:08 And life has a lot of dimensionality to it that is independent of intelligence, I think, that, you know, if you spend all your time working on intelligence, you lose track of that.
0:16:21 We sometimes say about some specific people that they’re too smart to properly model other people, or, you know, they sort of assume too much rationality in other people, or they just overthink things or over-rationalize them.
0:16:24 Yeah, just to your point that it’s not everything.
0:16:25 Yeah, yeah.
0:16:30 People seldom do what’s in their best interest, I should say.
0:16:48 You know, I also suspect, and this kind of gets more into the biology side of things, there’s more and more scientific evidence that human cognition, or, I don’t know, whatever you want to call it, self-awareness, information processing, decision-making, sort of experience, is not purely a brain phenomenon.
0:16:52 Like, basically, the sort of mind-body dualism is just not correct.
0:17:01 And again, this is an argument against sort of IQ supremacism or intelligence supremacism: you know, human beings don’t experience existence just through rational thought.
0:17:06 And specifically not through just the rational thought of the brain, but rather it’s a whole-body experience, right?
0:17:20 And there’s aspects of our nervous system, and there’s aspects of everything from our gut biome to, you know, smells, the olfactory senses, and hormones, and all kinds of, like, biochemical aspects to life.
0:17:28 Like, if you just kind of track the research, I suspect we’re going to find that human cognition is a full-body experience, much, much more than people thought.
0:17:43 And this is, you know, one of the kind of big fundamental challenges in the AI field right now, which is, the form of AI that we have working is the fully mind-body dual version of it, which is, it’s just a disembodied brain.
0:17:56 You know, the robotics revolution for sure is coming, and when that happens, when we put AI in physical objects that move around the world, you know, you’re going to be able to get closer to having that kind of, you know, integrated intellectual and physical experience.
0:18:00 You’re going to have sensors in the robots; they’re going to be able to, you know, gather a lot more real-world data.
0:18:10 And so maybe you can start to actually think about, you know, synthesizing a more advanced model of cognition, and, you know, maybe we’re going to actually discover more both about how the human version of that works and also how the machine version of that works.
0:18:16 But it’s just, to me, at least reading the research like that, all those ideas feel very nascent and we have a lot of work to do to try to figure that out.
0:18:20 Do you have a sense for how good they are at theory of mind today?
0:18:23 Or do you have a sense where the limitations are?
0:18:24 You like to talk to them a lot.
0:18:27 Are there any particular things that are particularly surprising to you as you do?
0:18:29 Yeah, I would say generally they’re really good.
0:18:36 Yeah, and so, like, I find one of the more fascinating ways, you know, to work with language models is actually to have them create personas.
0:18:43 And then, you know, basically, the way I like to do it is, I like Socratic dialogues.
0:18:45 I like when things are argued out, like in a Socratic dialogue.
0:18:50 And so, you know, tell any advanced LLM today to create a Socratic dialogue, and it’ll either make up the personas or you can tell it what they are.
0:18:52 It does a good job.
0:18:55 It has this very, very annoying property, which is it wants everybody to be happy.
0:18:58 And so it wants all of its personas to agree.
0:19:08 And so, by default, it will have a briefly interesting discussion, and then, basically, it’s like you’re watching, I don’t know, a PBS special or something.
0:19:11 It’ll kind of figure out how to bring everybody into agreement, and everybody’s happy at the end of the discussion.
0:19:12 And of course, I fucking hate that.
0:19:13 Like it drives me nuts.
0:19:14 I don’t want that.
0:19:19 So instead I tell it, I’m like, make the conversation more tense, right?
0:19:26 And, like, fraught with, like, anger, and, you know, people are going to get, like, increasingly upset throughout the conversation.
0:19:27 And then it starts to get really interesting.
0:19:34 And then I tell it, you know, introduce a lot more cursing, you know, really have them go at it.
0:19:35 Like all the gloves come off.
0:19:38 They’re going for full, you know, reputational destruction of each other.
0:19:39 You do a lot of these skits.
0:19:40 Yeah, skits.
0:19:43 And then I get carried away and then I’m like, it turns out they’re all like secret ninjas.
0:19:48 And then they’ll start fighting, and you’ve got Einstein, you know, hitting Niels Bohr with nunchucks.
0:19:50 And by the way, it’s happy to do that too.
0:19:55 So you do have to control yourself, but it is very good at theory of mind.
0:19:56 And then I’ll give you another example.
0:20:00 There’s a startup, actually, in the UK, in the world of politics.
0:20:05 And what they’ve found is that language models now are good enough.
0:20:10 Specifically for politics, which is sort of a subcategory where this idea matters.
0:20:15 So, you know, in politics, people do focus group, you do focus groups of voters all the time.
0:20:17 And by the way, many businesses also do that.
0:20:23 You know, so you get a bunch of people together from different backgrounds in a room and you kind of guide them through discussion and try to get their, their points of view on things.
0:20:28 And focus groups are often surprising; like, if you talk to politicians who do focus groups, they’re often surprised.
0:20:32 They’re often surprised that the things they thought voters cared about are actually not the things that voters care about.
0:20:34 And so you can actually learn a lot by doing this.
0:20:36 But focus groups are very expensive to run.
0:20:42 And then there’s a long lag time because they have to be actually physically organized and you have to recruit people and vet people and so forth.
0:20:53 And so it turns out that the state-of-the-art models now are good enough at this that they can correctly, accurately reproduce a focus group of real people inside the model.
0:20:55 So they clear that bar.
0:21:00 In other words, you can basically have a focus group actually happening in the model where you create personas in the model.
0:21:09 And then it actually accurately represents, you know, a college student from, you know, Kentucky, as contrasted to a housewife from Tennessee, as contrasted to, you know, whatever it is; you just, like, specify this.
0:21:14 And so, you know, they’re good enough to clear that bar, and, you know, we’ll see how far they get.
0:21:18 I want to segue to the bubble conversation.
0:21:25 Amin and G2, Jensen and Matt spoke about the enormous scale of physical infrastructure being built out.
0:21:28 AI CapEx is 1% of GDP.
0:21:31 How should we understand and think about this bubble question?
0:21:35 Well, I think the fact that it’s a question means we’re not in a bubble.
0:21:37 That’s the first thing to understand.
0:21:42 I mean, a bubble is a psychological phenomenon as much as anything.
0:21:45 And in order to get to a bubble, everybody has to believe it’s not a bubble.
0:21:49 That’s sort of the core mechanic of it.
0:21:52 And, you know, we call that capitulation.
0:21:55 Everybody just gives up like, okay, I’m not going to short these stocks anymore.
0:21:57 I’m tired of losing all my money.
0:21:58 I’m going to go long.
0:22:00 And we saw that actually.
0:22:05 And, you know, there’s a little bit of a question, like, really, what was the tech bubble?
0:22:14 But in the kind of dot-com era, right as the prices went through the roof, Warren Buffett started investing in tech.
0:22:18 And, like, he had sworn he would never invest in tech because he didn’t understand it.
0:22:24 And so if he capitulated, nobody was saying it was a bubble when it became like a quote unquote bubble.
0:22:32 Now, if you look at that phenomenon, the internet clearly was not a bubble, you know, it was a real thing.
0:22:46 In the short term, there was a kind of price dislocation that happened because, you know, there were just not enough people on the network to make those products go at the time.
0:22:50 And then the prices kind of outran the market.
0:22:56 You know, in AI, it’s much harder to see that because there’s so much demand in the short term, right?
0:22:58 Well, like we don’t have a demand problem right now.
0:23:04 And like the idea that we’re going to have a demand problem five years from now, to me, seems quite absurd.
0:23:14 You know, could there be like weird bottlenecks that appear, you know, like we just, at some point, we just don’t have enough cooling or something like that?
0:23:29 You know, maybe, but like, like right now, if you look at demand and supply and what’s going on and multiples against growth, it doesn’t look like a bubble at all to me.
0:23:32 But I don’t know.
0:23:33 Do you think it’s a bubble, Marc?
0:23:35 Yeah, look, I would just say this.
0:23:41 Yeah, so nobody knows, in the sense of, like, the experts; like, if you’re talking to anybody at, like, a hedge fund or a bank or whatever, like, they definitely don’t know.
0:23:44 Generally, the CEOs don’t know.
0:23:46 By the way, a lot of VCs don’t know.
0:23:47 They just get upset.
0:23:52 Like VCs get like emotionally upset when you guys have higher valuations.
0:23:55 Like it makes them like angry.
0:23:58 And, you know, and I get it all the time.
0:24:00 And I’m like, what are you mad about?
0:24:03 Like the shit is working, man.
0:24:03 Be happy.
0:24:03 Come on.
0:24:10 But so like there’s a lot of emotion around like people wanting it to be a bubble.
0:24:11 Yeah.
0:24:14 Nothing’s worse than passing on a deal and then having the company become a great success.
0:24:17 Like that’s just, that valuation is outrageous.
0:24:19 You can be furious about that for 30 years in our business.
0:24:20 It’s amazing.
0:24:25 And you can find, yeah, you come up with all kinds of reasons to cope and explain why it wasn’t your mistake.
0:24:27 But it’s, you know, it’s the world.
0:24:28 It’s the world that’s wrong, not me.
0:24:29 Right.
0:24:31 So there’s a lot of that.
0:24:31 Yeah.
0:24:32 Yeah.
0:24:36 So I would just say, like, I would always say, bring the conversation back to ground-truth fundamentals.
0:24:41 And the two big ground truth fundamentals are, number one, does the technology actually work?
0:24:43 You know, can it deliver on its promise?
0:24:45 And then number two is, are customers paying for it?
0:24:55 And if those two things are true, then as long as those two things stay grounded, you know, generally, I think things are going to be on track.
0:25:03 When Gavin was up here with DG, he said, ChatGPT was a Pearl Harbor moment for Google, the moment when the giant wakes up.
0:25:09 When we look at history and platform shifts, what determine whether the incumbent actually wins the next wave versus new entrants?
0:25:12 Or how should we think about that in AI?
0:25:23 Well, you know, reacting to it is important, but that doesn’t mean, like, it’s a Pearl Harbor moment.
0:25:28 And I think Google got their head out of their ass, so there’s that.
0:25:39 So, you know, they’re not going to get completely run over, but nonetheless, like, I don’t think OpenAI is going away.
0:25:41 So, like, they definitely let that happen.
0:25:44 Yeah, some of it to speed.
0:25:48 And then just, look, it’s execution over a long period of time.
0:25:55 And, you know, some of these very large companies, to varying degrees, have lost their ability to execute.
0:26:05 And so, if you’re talking about a brand new platform, and you’re talking about, you know, kind of building for a long time, it’s like, you know, Microsoft got caught with their pants down on Google.
0:26:12 Microsoft is still, like, very strong, but they missed that whole opportunity.
0:26:16 They also missed the opportunity in mobile, you know, back when Apple was nothing.
0:26:19 And Microsoft fully believed that they were going to own mobile computing.
0:26:21 They completely missed that one.
0:26:25 But they were still so big from their Windows monopoly, they could build into other things.
0:26:30 So, you know, I think generally the new companies have won the new markets.
0:26:40 And that doesn’t mean the biggest companies go away; the biggest monopolies from the prior generation just last a long time, is the way I would look at it.
0:26:44 Yeah, I also think we don’t quite know, like, it’s all happened so fast.
0:26:49 We actually don’t, I think we don’t yet know the shape and form of the ultimate products.
0:26:49 Yeah.
0:27:04 Right, and so, like, because it’s tempting, and this is kind of what always happens, I’m not saying that’s what these guys did on stage, but sometimes you hear the kind of reductive version of this, which is basically, like, oh, there’s either going to be a chat bot or a search engine.
0:27:06 Right, the competition is between a chat bot and a search engine.
0:27:17 And the problem Google has is the classic problem of, you know, disruption, are you going to disrupt the 10 blue links model and swap in, you know, at, you know, sort of AI answers and, you know, potentially disrupt the advertising model.
0:27:25 And then the problem OpenAI has is they have the full, you know, the full chat product, but, you know, they don’t have the advertising yet, and they don’t have the distribution, Google scale distribution.
0:27:36 And so, you know, you kind of say, okay, that’d be straight out of, like, you know, the innovator’s dilemma business textbook; like, this is just a very clear, you know, one-versus-one kind of dynamic.
0:27:48 But that assumes, or the mistake that you could make in thinking that way is to assume, that the forms of the product in 5, 10, 15, 20 years that are going to be the main things that people use are going to be either a search engine or a chat bot, right?
0:27:53 And, you know, there’s just, you know, there’s just obvious historical analogies.
0:28:04 One just obvious historical analogy is, you know, the personal computer from sort of invention in 1975 through to, you know, basically 1992, you know, was a text prompt system, right?
0:28:11 You know, and at the time, by the way, an interactive text prompt was a big advance over the previous generation of, like, punch card systems, time sharing systems.
0:28:19 And then, you know, it was, you know, 1992, so it was, what, 17 years in, you know, the whole industry took a left turn into GUIs and never looked back, you know?
0:28:24 And then, by the way, you know, five years after that, the industry took a left turn into web browsers and never looked back, right?
0:28:32 And so the very shape and form and nature of the user experience and how it fits into our lives, you know, is, I think, still unformed.
0:28:45 And so, like, you know, like, I’m sure there will be chatbots 20 years from now, but I’m pretty confident that, you know, both the current chatbot companies and many new companies are going to figure out many kinds of user experiences that are radically different that we don’t even know yet.
0:28:55 And by the way, that’s one of the things, of course, that keeps the tech industry fun, which is, you know, especially on the software side, you know, it’s not obvious what the shape and form of the products are.
0:28:57 And there’s just, I think there’s just tremendous headroom for invention.
0:29:11 As you’re coaching entrepreneurs, and the entrepreneurs in this room, what else feels different about this era, or what other advice do you find yourself dispensing, whether it’s around sort of the talent wars that are going on or other aspects that feel unique to this era?
0:29:15 What other advice do you want to be leaving our entrepreneurs with?
0:29:17 That’s unique to this era?
0:29:22 Well, like, I actually think you said the right thing, which is this is a unique era.
0:29:40 And so, trying to learn the organizational design lessons of the past or trying to learn kind of too much from the last generation can be deceptive because things really are different.
0:29:46 Like, the way your companies are getting built is quite different in many aspects.
0:30:06 And, you know, just our observation on, like, PhD AI researchers is that they’re just very different than, like, a traditional full-stack engineer or something like that.
0:30:14 So, you know, I think you do have to think through a lot of things from first principles because it is different.
0:30:17 And like, you know, observing from the outside, it’s really different.
0:30:19 Yeah.
0:30:21 And I would just offer, like, I do think things are going to change.
0:30:23 So, I already talked about, I think the shape and form of products is going to change.
0:30:27 And so, like, I think there’s still a lot of creativity there.
0:30:34 I also think, and I, let’s say, I think that, like, you know, in a world of supply and demand, the thing that creates gluts is shortages.
0:30:35 Right.
0:30:41 So, like, when something becomes too scarce, there becomes a massive economic incentive to figure out how to unlock new supply.
0:30:48 And so, the current generation of AI companies are really struggling with particular shortages of the really talented AI researchers and engineers.
0:30:54 And then they’re really, you know, challenged with a shortage of infrastructure capacity, chips, and data centers, and power.
0:30:57 I don’t want to call timing on this.
0:30:59 There will come a time when both of those things become gluts.
0:31:05 And so, I don’t know that we can plan for that, although I would just say the following.
0:31:18 Number one, the researcher-engineer side of things, it is striking to the degree to which there are excellent, you know, outstanding models coming out of China now, you know, for multiple companies.
0:31:21 And, you know, specifically, you know, DeepSeek and Qwen and Kimi.
0:31:30 It is striking how the teams that are making those are, for the most part, not, like, the name-brand people with their names on all the papers.
0:31:36 And so, like, China is successfully decoding how to, like, basically take young people and train them up in the field.
0:31:38 Well, and xAI to a large extent, too.
0:31:38 Yeah.
0:31:48 And so, I think that, look, it makes sense that for a while it’s going to be this super esoteric skill set and people are going to pay through the nose for it.
0:31:49 But, like, you know, there’s no question.
0:31:52 The information is, right, being transferred into the environment.
0:31:53 People are learning how to do this.
0:31:55 You know, college kids are figuring it out.
0:32:04 And so, you know, I don’t know that there’s ever going to be a talent glut, per se, but, like, I think for sure there’s going to be a lot more people in the future who, of course, know how to build these things.
0:32:08 And then, by the way, also, of course, you know, AI building AI, right?
0:32:12 So, the tools themselves are going to be better at contributing to that.
0:32:19 And so, and I think this is good because I think that, you know, the current level of shortage of engineers and researchers is too constraining.
0:32:25 And then on the chip side, I’m not a chip guy and I don’t want to call it specifically.
0:32:38 But every shortage in the chip industry has always resulted in a glut, because the profit pool of a shortage gets too big, the margins get too big, the incentive for other people to come in and figure out how to commoditize the function gets too big.
0:32:47 And so, you know, NVIDIA has, like, you know, the best position probably anybody’s ever had in chips, but notwithstanding that, I find it hard to believe that there’s going to be this level of pressure on infrastructure in five years.
0:32:57 Yeah, and even if the bottleneck within the infrastructure moves, so if it becomes power, if it becomes cooling or anything else, then you’ll have a chip glut for sure, yeah.
0:32:58 Right.
0:33:04 So, I think over the, I would just say this, it’s likely the challenges that we all have in five years from now are going to be different challenges.
0:33:05 Yeah.
0:33:05 Yeah, yeah.
0:33:15 Like, definitely, in this industry of all industries, don’t look at things as static; like, you know, the positions could change very, very fast.
0:33:19 Let’s actually close on more of this macro note.
0:33:20 Marc, you mentioned China.
0:33:28 Last month, we were in D.C., and one of the big questions the senators have is, how should we make sense of sort of the state of the AI race vis-a-vis China?
0:33:32 Do you want to share just the high-level summary of what you shared with them?
0:33:40 Yeah, so my sense of things, if you just observe currently, specifically, like, DeepSeek, Qwen, and Kimi, and these models coming out of China,
0:33:48 my sense basically is, like, I would say the U.S. specifically, and the West generally, but, you know, more and more specifically the U.S.,
0:33:56 like, the conceptual innovations have been, you know, coming out of the U.S., coming out of the West, you know, kind of the big conceptual breakthroughs.
0:34:06 China is extremely good at picking up ideas and implementing them and scaling them and commoditizing them, and, you know, they do that, obviously, throughout the manufacturing world,
0:34:10 and they’re doing it now very, I think, successfully sort of in AI.
0:34:14 And so I would say they’re running the catch-up game, like, really well.
0:34:24 You know, and then there’s sort of always this question of, like, how much of that is, like, being done, let’s just say, like, authentically, you know, through hard work and smart people,
0:34:31 and then how much is being done with maybe a little bit of help, maybe a little USB stick in the middle of the night, you know, kind of help.
0:34:38 So, you know, there’s always a little bit of a question, but, like, either way, you know, they’re doing a great job.
0:34:43 Obviously, they aspire to, you know, more than that, and there are many very smart and creative people in China.
0:34:49 And so, you know, it will be interesting now to see, you know, the level to which the conceptual breakthroughs start to come from there and whether they pull ahead.
0:34:56 And so, like, what we tell people in Washington is, like, look, this is now a full-on race.
0:34:57 It’s a foot race.
0:34:58 It’s a game of inches.
0:35:00 Like, we’re not going to have a five-year lead.
0:35:02 We’re going to have, like, maybe a six-month lead.
0:35:03 Like, we have to run fast.
0:35:04 We have to win.
0:35:06 Like, we have to do this.
0:35:11 We can’t, and then we can’t put constraints on our companies that the Chinese government isn’t putting on their own companies.
0:35:13 And so, you know, we’ll just lose.
0:35:18 And, you know, do you really want to wake up in the morning and live in a world, you know, really controlled and run by Chinese AI?
0:35:21 Most of us would say, no, we don’t want to live in that world.
0:35:25 And so, there’s that.
0:35:29 And I would say I feel moderately good about that just because I think we’re really good at software.
0:35:35 You know, the minute this goes into, you know, embodied AI in the form of robotics, I think things get a lot scarier.
0:35:40 And, you know, this is the thing I’m now spending time in D.C. trying to really educate people on, which is, you know,
0:35:46 because the U.S. and the West have chosen to de-industrialize to the extent that we have over the last 40 years,
0:35:52 you know, China specifically now has this giant industrial ecosystem for building, you know,
0:35:58 sort of mechanical, electrical, and semiconductor, and now software, you know, devices of all kinds,
0:36:02 including phones and drones and cars and robots.
0:36:06 And so, you know, there’s going to be a phase two to the AI revolution.
0:36:07 It’s going to be robotics.
0:36:09 It’s going to happen, you know, pretty quickly here, I think.
0:36:15 And when it does, like, even if the U.S. stays ahead in software, like, the robot’s got to get built,
0:36:16 and that’s not an easy thing.
0:36:18 And it’s not just, like, a company that does that.
0:36:20 It’s got to be an entire ecosystem.
0:36:24 And, you know, I mean,
0:36:25 the car industry was not three car companies.
0:36:28 It was thousands and thousands of component suppliers building all the parts.
0:36:32 And it’s been the same thing for airplanes and the same thing for computers and everything else.
0:36:34 It’s going to be the same thing for robotics.
0:36:37 And, you know, by default, sitting here today, that’s all going to happen in China.
0:36:42 And so, even if they never quite catch us in software, they might just, like, lap us in hardware,
0:36:43 and that’ll be that.
0:36:48 You know, the good news is I think there’s a growing awareness,
0:36:51 I would say, across the political spectrum in the U.S.
0:36:52 that, like, de-industrialization went too far.
0:36:55 And there’s a growing desire to kind of figure out how to reverse that.
0:37:00 And, you know, I’d say I’m guardedly optimistic that we’ll be making progress on that.
0:37:01 But I think there’s a lot of work to be done.
0:37:05 On that call to arms, let’s wrap.
0:37:06 Thank you, Marc and Ben.
0:37:08 To wrap up, I’d like to welcome you.
0:37:08 Thank you.
0:37:09 Thank you, everybody.
0:37:15 Thanks for listening to this episode of the A16Z podcast.
0:37:18 If you liked this episode, be sure to like, comment, subscribe,
0:37:22 leave us a rating or a review, and share it with your friends and family.
0:37:26 For more episodes, go to YouTube, Apple Podcasts, and Spotify.
0:37:33 Follow us on X at A16Z, and subscribe to our Substack at a16z.substack.com.
0:37:35 Thanks again for listening, and I’ll see you in the next episode.
0:37:40 As a reminder, the content here is for informational purposes only.
0:37:43 It should not be taken as legal, business, tax, or investment advice,
0:37:45 or be used to evaluate any investment or security,
0:37:50 and is not directed at any investors or potential investors in any A16Z fund.
0:37:54 Please note that A16Z and its affiliates may also maintain investments
0:37:55 in the companies discussed in this podcast.
0:37:58 For more details, including a link to our investments,
0:38:03 please see a16z.com/disclosures.
In this closing keynote from a16z’s Runtime conference, General Partner Erik Torenberg speaks with our firm’s cofounders, Marc Andreessen and Ben Horowitz, about highlights from throughout the conference, the current state of LLM capabilities, and why, despite huge capex, AI is not a bubble.
Resources:
Follow Marc on X: https://x.com/pmarca
Follow Ben on X: https://x.com/bhorowitz
Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.