AI transcript
0:00:10 Why is that the case? And I think it’s just like, AIs are lacking these capabilities, humans have these capabilities.
0:00:16 You are a natural general intelligence, but we cannot easily do each other’s jobs, even though our jobs are fairly similar.
0:00:21 The reason humans are so valuable is not just their raw intellect.
0:00:28 It’s their ability to build up context, it’s to interrogate their own failures, and pick up small efficiencies and improvements as they practice a task.
0:00:35 Whereas with an AI model, its understanding of your problem, your business, will be expunged by the end of a session.
0:00:42 Every other technological tool is a complement to humans, and yet when people talk about AI and think about AI, they essentially never seem to think in these terms.
0:00:44 They always seem to think in terms of perfect substitutability.
0:00:50 What happens when AI can do almost every white-collar job but still can’t remember what you told it yesterday?
0:00:54 What does that mean for AGI, the future of work, and the shape of the global economy?
0:01:03 I sat down with Noah Smith, author of Noahpinion, and Dwarkesh Patel, host of the Dwarkesh Podcast, to unpack what’s real and what’s hype in the race against AGI.
0:01:11 We talk about continual learning, economic substitution, galaxy-scale growth, and whether humanity’s biggest challenge is technological or political.
0:01:13 Let’s get into it.
0:01:20 As a reminder, the content here is for informational purposes only.
0:01:25 It should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security,
0:01:30 and is not directed at any investors or potential investors in any A16Z fund.
0:01:35 Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
0:01:42 For more details, including a link to our investments, please see a16z.com/disclosures.
0:01:49 Dwarkesh, Noah, welcome.
0:01:50 Our first podcast ever as a trio.
0:01:51 Yes.
0:01:52 Excited.
0:01:52 I’m very excited.
0:01:55 So, Dwarkesh, you came up with The Scaling Era.
0:01:57 It’s almost like you’re a future historian.
0:02:05 You’re sort of telling the history as it’s being written, and so it’s only appropriate to ask you, what is your definition of AGI, and how has that evolved over time?
0:02:07 I feel like I’m like five decades too young to be a historian.
0:02:08 It’s got to be like in your 80s or something.
0:02:10 But we’re living in history right now.
0:02:11 Right.
0:02:19 So, the ultimate definition is: can do almost any job, say like 98% of jobs at least, as well, as fast, and as cheaply as a human.
0:02:31 I think the definition that’s often useful for near-term debates is can automate 95% of white-collar work, because there’s a clear path to get to that, whereas robotics, there’s like a long tail of things you have to do in the physical world, and robotics is slower.
0:02:33 So, automate white-collar work.
0:02:35 That’s interesting because it’s an economic definition.
0:02:38 It’s not a definition about like how it thinks, how it reasons, et cetera.
0:02:39 It’s about what it can do.
0:02:40 Yeah.
0:02:45 I mean, we’ve been surprised by what capabilities have come first in AIs.
0:02:54 Like, they can reason already, and yet they seem to lack the economic value we would have assumed would correspond to that level of capability.
0:03:00 This thing can reason, but it’s making OpenAI $10 billion a year, and McDonald’s and Kohl’s make more than $10 billion a year, right?
0:03:06 So, clearly, there’s more things relevant to automating entire jobs than we previously assumed.
0:03:11 So, then it’s just useful to say: who knows what all those things are, but once they can automate it, then it’s AGI.
0:03:15 And so, when Ilya or Meta is using the word superintelligence, what do they mean?
0:03:17 Do they mean the same thing, or is it something totally different?
0:03:20 I’m not sure what they mean.
0:03:25 There’s a spectrum between God and just something that thinks like a human, but much faster.
0:03:27 Do you have some sense of what you think they mean?
0:03:28 God.
0:03:31 I think probably they mean something they would worship as a god.
0:03:36 And so, when Tyler says we’ve achieved AGI and you differ from him, where is the tangible difference there?
0:03:43 I’m just noticing that if there was a human who was working for me, they could do things for me that these models cannot do, right?
0:03:44 And I’m not talking about something super advanced.
0:03:47 I’m just saying, I have transcripts for my podcast.
0:03:49 I want you to rewrite them the way a human would.
0:03:51 And then I’ll give you feedback about what you messed up.
0:03:54 And I want you to integrate that feedback as you get better over time.
0:03:55 You learn my preferences.
0:03:56 You learn my content.
0:04:02 And they can’t learn, over the course of six months, how to become a better editor for me or how to become a better transcriber for me.
0:04:05 And a human hire would be able to do this; the models can’t.
0:04:06 So, therefore, it’s not AGI.
0:04:07 Now, I have a question.
0:04:09 I am a natural general intelligence.
0:04:11 You are a natural general intelligence.
0:04:15 But we cannot easily do each other’s jobs, even though our jobs are fairly similar.
0:04:15 Right.
0:04:19 Put me on the Dwarkesh Podcast, and I could not interview people nearly so well.
0:04:24 If you had to write Substack articles, like, several times a week on economics, you might not do as well.
0:04:27 But we are general intelligences, and we’re not exactly substitutable.
0:04:30 So, why should we use substitutability as the criterion for AGI?
0:04:32 What else is it that we want them to do?
0:04:38 I think with humans, we have more of a sense of there is some other human who theoretically could do what you would do.
0:04:42 An individual copy of a model might be, say, fine-tuned to do a particular job.
0:04:46 And it would be fair to say, then, why expect this particular fine-tune to be able to do any job in the economy?
0:04:49 But then there’s a question of, well, there’s many different models in the world.
0:04:52 And each model might have many different fine-tunes or many different instances.
0:04:58 Any one of them should be able to do a particular white-collar job for it to count as AGI.
0:05:01 It’s not that, like, any AGI should be able to do every single job.
0:05:06 It’s that, like, some artificial intelligence should be able to do this job for the model to count as AGI.
0:05:06 I see. Okay.
0:05:08 But so, let’s take another similar example.
0:05:09 Let’s take Star Trek.
0:05:09 Yeah.
0:05:10 Okay, you got Spock.
0:05:11 He’s very logical.
0:05:13 He can do stuff that Kirk and whoever can’t do.
0:05:16 But then those guys can do stuff that Spock can’t do.
0:05:16 Right.
0:05:18 Get in touch with their emotions, intuition, stuff like that.
0:05:21 They’re both general intelligences, but they’re alien to each other.
0:05:23 So, AI feels alien to me.
0:05:25 Sometimes it talks just like us.
0:05:26 It was built off of our thoughts, obviously.
0:05:30 But then sometimes it talks just like us, and sometimes it’s just, like, very alien.
0:05:36 And so, should we ever expect that to change such that it’s no longer an alien intelligence?
0:05:51 I think it’ll continue to be alien, but I think eventually we will gain capabilities which are necessary to unlock the trillions of dollars of economic value that are implied by automating human labor, which these models are clearly not generating right now.
0:05:57 So, you could say, like, if we substituted jobs right now, immediately there’d be a huge productivity dip.
0:05:59 But over time, we would learn to start doing them better.
0:06:03 I mean, maybe a better example is just that, like, you hire people to do things for you.
0:06:05 I don’t know if you actually hire people, but I assume.
0:06:05 Okay.
0:06:09 But you’re still having to do that rather than hiring an AI.
0:06:16 And I have, like, many roles where it’s, like, an AI might be generating hundreds of dollars of value for me a month, but, like, humans are generating thousands of dollars or tens of thousands of dollars of value for me a month.
0:06:18 Why is that the case?
0:06:20 And I think it’s just, like, AIs are lacking these capabilities.
0:06:21 Humans have these capabilities.
0:06:24 And is the main thing missing, in your view, sort of continual learning?
0:06:26 What is the bottleneck?
0:06:30 The reason humans are so valuable is not just their raw intellect.
0:06:32 It’s not mainly their raw intellect, although that’s not unimportant.
0:06:35 It’s their ability to build up context.
0:06:40 It’s to interrogate their own failures and pick up small efficiencies and improvements as they practice a task.
0:06:47 Whereas with an AI model, its understanding of your problem, your business, will be expunged by the end of a session.
0:06:49 And then you’re starting off at the baseline of the model.
0:06:52 Like, with a human, you’ve got to train them over many months to make them useful employees.
0:06:53 Yeah.
0:06:57 And what will need to change in order for AI to develop a capability?
0:07:01 I mean, I probably wouldn’t be a podcaster if I knew the answer to that question.
0:07:09 It just seems to me that, like, a lot of the modalities that we have today to teach LLMs stuff do not constitute this kind of continual learning.
0:07:16 For example, making the system prompt better is not the kind of continual learning or on-the-job training that my human employees experience.
0:07:18 Or RL fine-tuning is not this.
0:07:24 But, like, what does the solution to this look like? It’s precisely because I don’t have an obvious solution that I think we’re many years away.
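A toy sketch of the gap being described here, purely illustrative: the class names are made up and no real model API is involved. The point is that feedback which lives only in a prompt has to be re-supplied every session, while a human hire’s on-the-job learning persists on its own.

```python
# Toy illustration (hypothetical names, no real API) of the distinction above:
# prompt-carried "memory" versus learning that changes persistent state.

class PromptOnlyAssistant:
    """All task-specific knowledge has to ride along in the context each time."""

    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt  # fixed instructions, never updated by experience

    def start_session(self, accumulated_feedback: list[str]) -> str:
        # The caller must re-pack every past correction into the context;
        # once the session ends, nothing about this client or task remains.
        return self.system_prompt + "\n" + "\n".join(accumulated_feedback)


class OnTheJobLearner:
    """Caricature of a human hire: corrections change persistent internal state."""

    def __init__(self):
        self.skills: list[str] = []

    def receive_feedback(self, note: str) -> None:
        self.skills.append(note)  # carries over to every future task automatically
```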
0:07:27 Okay, so here’s my question about replacing jobs.
0:07:29 It seems to me that it’s partly about demand.
0:07:34 So, for example, suppose that AI has already replaced my job or can replace my job.
0:07:46 So suppose that anyone can fire up ChatGPT or whatever model and say, search the web, find the most interesting topics that people are talking about in economics, and write me an insightful post telling me some cool new thing I should think about.
0:07:49 And then they get a better blog than Noahpinion.
0:07:51 I don’t know if that’s happened yet.
0:07:53 I mean, I’ve tried that and I don’t like it as much.
0:07:55 But suppose that most people will like it as much.
0:07:57 And so my job is automated and people just don’t realize it.
0:08:01 Or people have this sort of idea in their mind of like, well, is it really a human and blah, blah, blah.
0:08:05 And then as generational turnover happens, young people won’t care about reading a human.
0:08:06 They’ll care about reading an AI.
0:08:08 But in terms of functional capabilities, it’s already there.
0:08:10 But in terms of demand, it’s not there.
0:08:12 How much of that could there be?
0:08:16 I expect there will be much less of that than people assume.
0:08:18 If you just look at the example of Waymo versus Uber.
0:08:23 I think previously you could have had this thing about people will hesitate to take automated rides.
0:08:29 And in fact, in the cities where it’s been deployed, people love this product despite the fact that you had to wait 20 minutes because the demand is so high.
0:08:32 And it still, like, has some glitches to iron out.
0:08:39 But just the seamlessness of using machines to do things for you, the fact that it can be personalized to you, it can happen immediately.
0:08:42 One thing people will be like, okay, well, doctors and lawyers will set up guilds.
0:08:43 And so you won’t be able to consult.
0:08:46 I think there might be guilds on who can call themselves a doctor or a lawyer.
0:08:56 But I just think if genuinely ChatGPT can give me as good medical advice as a real doctor, the experience of just talking to a chatbot rather than spending three hours in a waiting room is so much better.
0:09:00 I think a lot of sectors in the economy look like this where we’re assuming people will care about having a human.
0:09:05 But in fact, they will not if you assume that they will genuinely have the capabilities that the human brings to bear.
0:09:11 Right. So it’s interesting. AI is better for diagnosis on a lot of things than humans.
0:09:18 Right. But then something about having humans to follow up with makes me also want to check with a human after I’ve gotten diagnoses from an AI on something.
0:09:24 And so that might vary by job. Like cars may be one thing, but maybe it is about capabilities. I can’t say.
0:09:30 I’m just saying like everybody seems to think that AI is a perfect substitute for humans and that’s what it should be and that’s what it will be.
0:09:32 And everyone seems to think of it in those terms.
0:09:36 However, every other tool that’s ever been made, every other technological tool was a complement to humans.
0:09:38 It could do some things humans could do.
0:09:43 Maybe even it could do anything humans could do, but at different relative costs, different relative prices.
0:09:50 So that you’d have humans do something and the tool do other things and you’d have this complementarity between the two.
0:09:54 And yet when people talk about AI and think about AI, they essentially never seem to think in these terms.
0:09:56 They always seem to think in terms of perfect substitutability.
0:10:00 And so I’m trying to get to the bottom of like why people insist on always thinking in terms of perfect substitutability
0:10:03 when every other tool has been complementary in the end.
0:10:07 Well, human labor is also complementary to other human labor, right?
0:10:08 There’s increasing returns to scale.
0:10:12 But that doesn’t mean that Microsoft has to hire some number of software engineers.
0:10:17 And, like, it will care about what the software engineers cost.
0:10:22 Like it will go to markets where they can get the highest performance for the relative value the software engineers are bringing in.
0:10:25 I think it will be a similar story with AI labor and human labor.
0:10:29 And AI labor just has the benefit of having extremely low subsistence wages.
0:10:35 Like the marginal cost of keeping an H-100 running is much lower than the cost of keeping a human alive for a year.
0:10:40 Noah, would you say you’re AGI-pilled in the sense that Dwarkesh described the term?
0:10:42 And we’ve talked a little bit about AI’s effect on labor.
0:10:42 Why don’t you share it?
0:10:47 Why you’re perhaps a little bullish that there’ll be plenty for humans to do and that’ll be more complementary?
0:10:49 What is AGI-pilled?
0:10:54 We just believe that it will automate a huge swath of the economy or labor.
0:10:58 I mean, I am very unwilling to say like, here’s something technology will never be able to do.
0:11:01 I mean, that always seems like a bad bet.
0:11:05 Here’s two things people have been saying since the beginning of the Industrial Revolution,
0:11:10 neither of which has ever remotely come close to being true, even in specific subdomains.
0:11:14 The first one is, here’s a thing technology will never be able to do.
0:11:19 And the second one is, human labor will be made obsolete.
0:11:23 Those people have been saying those two things and you can just go, you can read it,
0:11:26 you can even ask AI to go search and find, I have done this.
0:11:28 And then find you examples of people saying those two things.
0:11:31 People have been saying those two things over and over and over and over and over and it’s never been true.
0:11:32 That doesn’t mean it could never be true.
0:11:36 Sometimes something happens that never happened before, such as the Industrial Revolution itself.
0:11:40 You have this hockey stick where suddenly like, oh, we’ll never get rich, we’ll never get rich.
0:11:40 Oh, we’re rich.
0:11:42 And so sometimes that happens.
0:11:44 The unprecedented can happen.
0:11:47 However, I’m always wary because I’ve seen it said so many times.
0:11:53 And so within just the last 10 years or whatever, I’ve seen a couple predictions just spectacularly fail.
0:11:59 So for example, in 2015, 10 years ago, I was sitting in the Bloomberg office in New York and my colleague, I won’t name him,
0:12:06 he was physically yelling at me that truck drivers were in trouble and that truck drivers were all going to be put out of a job by self-driving trucks.
0:12:09 And he said, this is going to just devastate a sector of the economy.
0:12:12 It’s going to devastate the working class, it’s going to devastate blue-collar labor, blah, blah, blah.
0:12:17 And at the same time, I was reading like, I always read the sci-fi top stories of the year or whatever.
0:12:22 And so there were two stories in the same year about truckers being mass unemployed by self-driving trucks.
0:12:27 And then 10 years later, there’s a trucker shortage and the number of truckers we hire is higher than ever.
0:12:30 I’m not saying truckers will never be automated, they may.
0:12:32 However, I’m saying that was a spectacularly wrong prediction.
0:12:36 And you also got Geoffrey Hinton’s prediction that radiologists would be unemployed within a certain time frame.
0:12:39 And by that time, radiologists’ wages were higher than ever and employment was higher than ever.
0:12:41 I’m not saying this can’t happen.
0:12:46 I’m not smugly sitting here and saying there’s a law of the universe that says you’ll never see this kind of mass unemployment, blah, blah, blah.
0:12:49 I mean, there were encyclopedia salespeople were mass unemployed by the internet.
0:12:51 We’ve seen it happen in real life.
0:12:53 But these predictions keep coming wrong and keep coming wrong.
0:12:55 I’m trying to figure out why is that true?
0:12:56 Why do they keep coming wrong?
0:13:00 Is it simply that people overestimate progress in technical capabilities?
0:13:07 Or are there complementarities that people can’t imagine from sort of like the O*NET division of tasks or the standard mental division of tasks?
0:13:14 I think the problem has been that people underestimate how many things are truly needed to automate human labor.
0:13:17 And so they think like we’ve got reasoning.
0:13:21 And now that we’ve got reasoning, like this is what it takes to take over a job.
0:13:25 In fact, there’s much more to a job than is assumed.
0:13:28 That’s why I wrote this blog post where I’m like, look, it’s not a couple years away.
0:13:29 It might be longer than that.
0:13:35 Then there’s another question of like by 2100, will there be jobs that humans are doing?
0:13:44 If you just like zoom out long enough, will we ever be able to make machines that can think and do physical labor at least as cheaply and as well as humans can?
0:13:47 And fundamentally, the big advantage they have is like we can keep building more of them.
0:13:53 Right. So we make as many of those machines as the value they generate equals the cost of producing them.
0:13:54 And the cost will continue to go down.
0:13:54 Right. Yeah.
0:13:57 And it will be lower than the cost of keeping a human alive.
0:14:03 So even if a human could do the exact same labor, a human needs like a lot of stuff to stay alive, let alone to grow a human everything.
0:14:06 An H100 costs $40,000 today.
0:14:09 The yearly cost of running it is like thousands of dollars.
0:14:11 We can just buy more H100s.
0:14:16 Like if we currently had the algorithm for AGI, we could run it on an H100, and yeah.
0:14:22 So however big the demand is, the latent demand that’s unlocked, we just increase the supply basically to meet that demand.
0:14:26 So first, when AGI is here, what does the world look like?
0:14:30 Because Sam Altman was reflecting on his podcast with Jack Altman the other week.
0:14:36 He was saying, if you told me 10 years ago that we would have PhD level AI, I would think the world looks a lot different.
0:14:38 But in fact, it doesn’t look that different.
0:14:43 And so is there a potential where we have much more increased capabilities, but actually the world doesn’t look that different?
0:14:46 It’s like what Peter Thiel called the 1973 test or something.
0:14:48 We have these phones, but the world just looks the same.
0:14:49 We just have phones in our pockets.
0:14:56 Yeah, I think if we have like chatbots that can answer hard math questions, I don’t expect the world to look that different.
0:15:00 Because the fraction of economic value that is generated by math is like extremely small.
0:15:06 But there’s like other jobs that are much more mundane than quote unquote PhD intelligence, which a chatbot just cannot do, right?
0:15:08 A chatbot cannot edit videos for me.
0:15:11 And once those are automated, I actually expect a pretty crazy world.
0:15:21 Because the big bottleneck to growth has been that human population can only increase at this slow clip.
0:15:26 And in fact, one of the reasons that growth has slowed since the 70s is that in developing countries, the population has plateaued.
0:15:31 With AI, the capital and the labor are functionally equivalent, right?
0:15:34 You can just build more data centers or build more robot factories.
0:15:38 And they can do real work or they can build more robot factories.
0:15:39 And so you can have this explosive dynamic.
0:15:43 And once we get like that loop closed, I think it would just be like 20% growth plus.
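For a sense of scale on the growth rates being contrasted in this exchange, here is the standard doubling-time arithmetic; the 2.5% figure is only an illustrative rich-country baseline, not a number quoted in the conversation.

```python
import math

# Doubling time at a constant annual growth rate g is ln(2) / ln(1 + g).
for g in (0.025, 0.20):
    years_to_double = math.log(2) / math.log(1 + g)
    print(f"{g:.1%} annual growth -> economy doubles in about {years_to_double:.1f} years")

# Roughly 2.5% growth doubles the economy in about 28 years;
# 20% growth doubles it in under 4 years.
```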
0:15:45 Do you see that feasible, possible?
0:15:46 20% growth?
0:15:48 Tyler, I believe, said 5%, right?
0:15:50 0.5% more than the steady state.
0:15:51 0.5%.
0:15:52 0.5%.
0:15:52 What is the argument for that?
0:15:54 For Tyler’s argument?
0:15:55 Bottlenecks.
0:15:58 I think the problem with that argument is that there’s always bottlenecks, right?
0:16:03 So you could have said before the Industrial Revolution, well, we will never 10x the rate of growth because there will be bottlenecks.
0:16:10 And that doesn’t tell you, like, you empirically have to just look at the fraction of the economy that will be bottlenecked and what is the fraction that’s not and then like actually derive the rate of growth.
0:16:12 The fact that there’s bottlenecks, this doesn’t tell you, yeah, okay.
0:16:15 Is he mostly referring to regulation or?
0:16:21 Yeah, and just that like we live in a fallen world and people will have to use the AIs and yeah, things like that.
0:16:23 Who will be buying all the stuff?
0:16:28 So background, in economics, GDP is what people are willing to pay for.
0:16:28 Right.
0:16:31 Who will be buying the stuff in a world where we get 20% growth?
0:16:32 First of all, I don’t know.
0:16:38 So you could have said in 10,000 BC, like, the economy is going to be a billion times bigger in 10,000 years.
0:16:41 What does it mean to produce a billion times more stuff than we’re producing right now?
0:16:42 Who is buying all this stuff?
0:16:44 You can’t, like, predict that in advance.
0:16:47 In the 1700s, I could tell you exactly who was buying stuff.
0:16:47 It was everybody.
0:16:48 Peasants.
0:16:53 In fact, people wrote these things around 1900 about what the world would look like in 100 years.
0:16:54 You know, what we’ll have.
0:17:01 They didn’t get exactly the right things right that we’ll have, but they correctly identified that it would be regular consumers who would be buying all these things, regular people.
0:17:03 And so that came true.
0:17:04 It was obvious.
0:17:04 But here’s my point.
0:17:14 Suppose that 99% of people do not have a job and are not getting paid an income, and all the money is going to sort of Sam Altman, Elon Musk, and five other guys.
0:17:15 Okay?
0:17:19 And their captive AIs that they own, because, for some reason, our property rights system still exists.
0:17:19 But okay.
0:17:22 Suppose that that’s what the future we’re contemplating, right?
0:17:26 And so 99% of people or more don’t have any job.
0:17:27 They don’t have any income.
0:17:27 They’re out on the street.
0:17:30 And yet, you’re saying 20% growth a year.
0:17:36 That growth is defined by people, consumers, paying for things and saying, here is my money.
0:17:37 I wouldn’t define it just as people.
0:17:38 Okay.
0:17:40 I would just define it as, like, the raw—
0:17:41 I mean, I assume the AIs are catering to each other.
0:17:42 Yeah.
0:17:43 No, no.
0:17:44 That doesn’t count as GDP.
0:17:45 Only final goods—
0:17:45 Okay.
0:17:47 So we’re, like, launching the Dyson spheres.
0:17:49 We’re not allowed to count that because the AIs are doing it.
0:17:51 I mean, like, I want to know what the solar system will look like.
0:17:53 I don’t care, like, what, like, the semantics of that are.
0:17:58 And I think the better way to capture what is physically happening is to just include the AIs in the GDP numbers.
0:17:59 Why will they do any of that?
0:18:05 One argument is simply that if there’s any agent, AI or human, who cares about colonizing the galaxy,
0:18:10 even if 99% of agents don’t care about that, if one agent cares, they can go do it,
0:18:13 colonizing the galaxy is a lot of growth because the galaxy is really big, right?
0:18:17 So it’s very easy for me to imagine if, like, Sam Altman decides to launch the probes,
0:18:21 how, like, you know, breaking down Mars and sending out the von Neumann probes, like, generates 20% growth.
0:18:24 I think what you’re getting at here is that AI will have to have property rights.
0:18:29 AI agents will have to be able to have autonomous control of resources.
0:18:31 I guess it depends on what you mean by autonomous.
0:18:35 Today we already have computer programs that have autonomous use of resources, right?
0:18:37 Okay, but the program goes off and colonizes the solar system.
0:18:42 It’s not like a dude telling it, colonize the solar system now and doing all this stuff.
0:18:46 It’s like the AI has made the decision to do it, and Sam Altman’s sitting back there saying,
0:18:47 Oh, well, I’m just saying this is not a crux.
0:18:50 Sam Altman could say it, or the AI could say it.
0:18:54 If some agent cares about this, and they’re not stopped from doing it, like, this is just, like,
0:18:56 physically you can easily see where the 20% growth is coming from.
0:18:57 Let me make this a little more concrete.
0:19:04 Suppose that AI is going to produce a bounty of the things that humans desire, and that’s going to be what growth is.
0:19:07 How will it get to the humans if the humans don’t have a job?
0:19:10 And if the humans don’t have a job, why will AI be building anything?
0:19:15 So, in other words, if there’s no consumers to buy my cars, why am I building cars?
0:19:17 You might be assuming there’s some UBI or some sort of…
0:19:18 No, no, I don’t need to assume that.
0:19:19 Although, I mean…
0:19:20 Let’s assume there’s not that.
0:19:21 Yes, I don’t need to assume that.
0:19:26 It seems like you’re saying, look, if 99% of consumers are no longer consumers, where’s this economic
0:19:27 activity coming from?
0:19:27 Yeah.
0:19:33 And I’m just saying, okay, if one person cares about colonizing the galaxy, that’s generating
0:19:34 a lot of demand.
0:19:35 It takes a lot of stuff to colonize the galaxy.
0:19:39 So, this world where, like, even if there’s not an egalitarian world where everybody is, like,
0:19:43 roughly contributing equivalent amounts of demand, the potential for one person alone to generate
0:19:45 this demand is high enough that, like…
0:19:48 So, Sam Altman tells his infinite army of robots to go out to colonize the galaxy, we count that
0:19:50 as consumption, we put a value on it, and that’s GDP.
0:19:52 Yeah, or it might be investment.
0:19:56 Maybe he’s going to defer his consumption until after he’s, like, colonized the galaxy.
0:19:56 Yeah, yeah.
0:19:58 And I’m not saying this is the world I want.
0:19:59 I’m just saying, think about it physically.
0:20:03 If you’re colonizing the galaxy, which you can do potentially after AGI, I’m not saying, like,
0:20:04 it’ll happen tomorrow after AGI, right?
0:20:05 But there’s a thing that’s physically possible.
0:20:07 Is that growth?
0:20:08 Like, something’s happening that’s, like, explosive.
0:20:09 Right.
0:20:10 Maybe.
0:20:12 The thing is that it’s a very weird world.
0:20:14 It doesn’t look like the kind of economy we’ve ever had.
0:20:15 Right.
0:20:20 And we created the notion of GDP to represent people exchanging money for goods and services,
0:20:24 people, like, basically exchanging their labor for goods, exchanging the value
0:20:25 of their labor for goods and services.
0:20:27 At a fundamental level, that’s what GDP is.
0:20:33 We’re envisioning a radical shift of what GDP means to a sort of internal pricing that a
0:20:36 few overlords set for the things that their AI agents want to do.
0:20:39 And that’s incredibly different than what we’ve called GDP in the past.
0:20:42 I think the economy will be incredibly different from what it was in the past.
0:20:43 I’m not saying this is, like, the modal world.
0:20:46 There’s a couple of reasons why this might not end up happening.
0:20:52 One is, even if your labor is not worth that much, the property you own is potentially worth
0:20:53 a lot, right?
0:20:58 If you own the S&P 500 and there’s been explosive growth, you’re, like, a multi-multi-millionaire
0:21:01 or the land you have is, like, worth a lot.
0:21:05 If the AI can make such good use of that land to build the space probes, assuming that our
0:21:07 system of property rights continues into this regime.
0:21:14 And second, so, I mean, in many cases, it’s hard to ascribe how much economic growth there
0:21:15 has been over very long periods of time.
0:21:19 For example, if you’re comparing the basket of goods that we can produce as an economy
0:21:24 today versus, like, 500 years ago, it’s not clear how you compare, we have antibiotics today.
0:21:28 I wouldn’t want to go back 500 years for any amount of money because they don’t have antibiotics
0:21:30 and I might die and it’ll just suck.
0:21:34 So there’s actually, like, no amount of money to live in 1500 that we would rather have than
0:21:35 live today.
0:21:39 And so if we have those quality of goods for normal people, just, like, you can live
0:21:43 forever, you have, like, euphoria, drugs, whatever, these are things we can imagine now.
0:21:44 Hopefully, it’ll be even more compelling than that.
0:21:49 Then it’s easy to imagine, like, okay, it makes sense why this stuff is worth way more than
0:21:51 the stuff that the world economy can produce for even for normal people today.
0:21:52 Right.
0:21:56 And so I guess I’m just thinking about, this is a thing that economists really struggled
0:21:57 with in the early 20th century.
0:22:01 It’s this idea that we had this capacity to expand production, expand production, expand production.
0:22:05 And then the thing is that companies competed their profits to zero and the profits
0:22:08 crashed and nobody wanted to expand production anymore because they weren’t making any profit.
0:22:10 We’re seeing this happen again in China right now with overproduction.
0:22:15 We’re seeing BYD having to take loans from its suppliers just to stay financially afloat,
0:22:19 even though it’s the best car company in the world because the Chinese government has paid
0:22:22 a million other car companies to compete with BYD.
0:22:23 And so you overproduce.
0:22:24 So you have this overproduction.
0:22:26 So the solution was to expand consumption.
0:22:30 This is the solution people are recommending for China now to expand consumption so that you
0:22:33 can refloat the profit margins of all these companies and have them continue as companies.
0:22:37 And so the idea is if AI is producing all this stuff, but it’s overproducing services, it’s
0:22:41 overproducing whatever AI can produce and the profits from this go negative, that makes the
0:22:43 GDP contribution go to zero.
0:22:46 And basically OpenAI and Anthropic and XAI and whatever will just be sitting there saying,
0:22:48 why am I doing this again?
0:22:48 Why am I?
0:22:49 No one’s buying this shit.
0:22:56 And so at that point, it seems like there will be corporate pressure on the government
0:23:01 to do something to redistribute purchasing power so that they don’t compete their profits
0:23:02 to negative.
0:23:06 And so they have some reason to create more economic activity so they can take a slice of
0:23:08 it, which is essentially what happened in the early 20th century.
0:23:09 Yeah, I disagree with this.
0:23:10 I think…
0:23:11 Oh, I’m not saying this will happen.
0:23:13 I’m saying like that would be the analogous thing.
0:23:13 I disagree.
0:23:18 Even as a libertarian, I would prefer significant
0:23:21 amounts of redistribution in this world because the libertarian argument doesn’t make sense
0:23:23 if there’s no way you could physically pick yourself up by the bootstraps.
0:23:28 Like your labor is not worth anything or your labor is worth less than subsistence calories
0:23:29 or whatever, which is the more relevant thing.
0:23:32 But I don’t think this is analogous to the situation in China.
0:23:36 I think what’s happening in China is more due to the fact that you have the system of financial
0:23:41 repression, which redistributes money and also currency manipulation, which basically
0:23:46 redistributes ordinary people’s money to basically producing one EV maker in every
0:23:47 single province.
0:23:51 So it is the market distortion that the government is creating that causes this overproduction.
0:23:54 We can go into what like the analogous thing in the AI case looks like.
0:23:59 But I think if there isn’t some market distortion, I just think like people will use AI where it
0:24:00 has the highest rate of return.
0:24:03 If it’s not space colonization, there will be like longevity drugs or whatever.
0:24:07 I’m just asking like, why would I invest all this money into AI producing stuff?
0:24:11 Why would I just invest the massive hundreds of billions or trillions or whatever of
0:24:15 dollars into producing stuff for people who are all going to be out of a job and won’t
0:24:15 be able to buy this?
0:24:17 But again, I don’t think you’ll be producing it for them.
0:24:18 I think you’d be producing it for whoever does that.
0:24:20 Like there’s stuff in the world.
0:24:21 So somebody will have stuff.
0:24:22 Maybe it’s the AIs.
0:24:23 Maybe it’s Sam Altman.
0:24:26 You’re producing it for whoever has the capability to buy your stuff.
0:24:28 And will they want AI?
0:24:31 And I’m just saying AI can do so many things, not least of which is colonizing the galaxy.
0:24:33 People are willing to pay a lot for that.
0:24:37 I’m just trying to get this straight in my head of what this economy looks like.
0:24:41 And I’m seeing a picture of the trillions of dollars needed to build out all these data
0:24:46 centers will be done not for profit, not to make money from a consumer economy for the
0:24:50 creators of the AI, but to satisfy the whims of a few robot lords to colonize the galaxy.
0:24:54 I think you’re making two different points and they’re getting wrapped into one.
0:24:57 I’m saying, yes, important word.
0:25:04 So there’s one about, do you expect it to be the case that the robot overlord world happens?
0:25:07 And I’m saying, no, actually, even without redistribution, first of all, I expect redistribution
0:25:08 to happen.
0:25:08 I hope it happens.
0:25:12 But even if it doesn’t, and I don’t think it’ll happen because, like, corporations
0:25:13 want the redistribution to happen.
0:25:16 I think it’ll be good to happen for independent reasons, but I don’t buy this argument that
0:25:18 the corporations will be like, we need somebody to buy our AI, therefore we need to give the
0:25:19 money to the ordinary consumers.
0:25:24 You believe broad-based asset ownership will create a whole lot of broad-based consumer demand,
0:25:26 even in the absence of labor income.
0:25:29 Honestly, I don’t have like a super strong opinion, but I think that’s like plausible.
0:25:34 But independent of that, I’m like, okay, even if that demand doesn’t exist, just like the
0:25:39 things you can do with a new frontier of technology, as long as one person wants it, there’s
0:25:40 so much room to do things.
0:25:42 Space colonization is an obvious example.
0:25:43 That’s a lot of money.
0:25:44 Right.
0:25:47 There’s like obvious demand for the things that AI will be able to produce, right?
0:25:49 Like one of the things that AI can produce is colonizing the galaxy.
0:25:50 Right, exactly.
0:25:55 So, but the question is like, I can see a paperclip maximizing autonomous intelligence as colonizing
0:25:56 the galaxy, but in terms of-
0:25:57 That’s a lot of growth.
0:25:58 That is.
0:26:03 But in terms of, and so by the way, I would like to say that I am a paperclip maximizer.
0:26:05 I am the real paperclip maximizer.
0:26:07 I want to maximize rabbits in the galaxy.
0:26:09 I want to turn the entire galaxy into floofy rabbit.
0:26:10 That’s my goal.
0:26:15 And so my goal with AGI is to enlist the AGIs to help me in this goal, but then to
0:26:16 align them toward rabbits.
0:26:16 Yeah.
0:26:17 But anyway.
0:26:19 Get this down in front of the OpenAI board of directors.
0:26:19 I know.
0:26:22 I mean, like the social welfare function is floofiness.
0:26:28 But I guess my point here is as long as AI still doesn’t have property rights and it’s
0:26:32 humans making all the economic decisions, be it Sam Altman and Elon Musk or, you know, you
0:26:38 don’t have property rights in me, then at that point, like, that really matters for what gets
0:26:38 done.
0:26:42 Because if we’re talking about the money needed to build all these massive data centers, which
0:26:44 currently, it’s a lot of money.
0:26:46 It’s a ton of money required to build these data centers.
0:26:49 And that money need will not go away.
0:26:53 We can’t just say, oh, cost goes to zero because we can say unit cost goes to zero, but total
0:26:54 cost doesn’t go to zero.
0:26:55 Nor has it.
0:26:56 It has increased.
0:26:58 The total spend on data centers has increased.
0:27:00 And I think everyone expects it to increase for the future.
0:27:08 The question is, is that money being spent because AI companies expect to reap benefits from consumers
0:27:09 like you and me?
0:27:11 Or to what extent is it that?
0:27:17 And to what extent is it Sam Altman feels like doing some crazy stuff and Sam Altman’s just
0:27:20 godlike richer than everybody else.
0:27:23 And so Sam Altman is actually consuming when he builds those data centers.
0:27:27 He is building those data centers so that he can indulge his godlike whims.
0:27:32 I think it’s more plausible than either a single godlike person is able to direct the whole
0:27:35 economy or like there’s this broad based consumer.
0:27:36 Every person is buying.
0:27:37 These are extremes.
0:27:37 Yeah.
0:27:43 I think more plausible is like AIs will be integrated through all the firms in the economy.
0:27:45 A firm can have property.
0:27:50 Firms will be like largely run by AIs, even though there’s nominally a human board of directors.
0:27:52 And it might not even be nominal, right?
0:27:55 Like maybe the AIs are aligned and like genuinely give the board of directors an accurate summary
0:27:56 of what’s happening.
0:27:58 But like day to day, they’re being run by AIs.
0:28:00 And firms can have property rights.
0:28:01 Firms can demand things.
0:28:03 So say all you have is a board of directors and AI.
0:28:03 Yeah.
0:28:04 Okay.
0:28:06 I mean, in the ideal world.
0:28:06 Okay.
0:28:11 So then what we’re basically looking at is the labor share of income goes to zero or something
0:28:12 approaching that.
0:28:14 Depends on how you define the AI labor.
0:28:16 And capital income is distributed highly unevenly.
0:28:20 It’s distributed much more unevenly than labor income, but it’s still distributed reasonably
0:28:20 broadly.
0:28:22 Like I have capital income, you have capital income.
0:28:26 So at that point, we have just an extremely unequal society where owners get everything
0:28:28 and then workers get nothing.
0:28:30 And then so we have to figure out what to do about that.
0:28:30 A hundred percent.
0:28:32 Piketty is killing himself somewhere.
0:28:34 Piketty’s been wrong about everything.
0:28:34 Yeah, I know.
0:28:37 So let’s hope he’s wrong again.
0:28:38 I mean, he’d be happy.
0:28:40 He’d be like, see, I was right.
0:28:42 Because for an economist, being right is the most important thing.
0:28:43 Yeah, exactly.
0:28:48 I mean, the hopeful case here is the way our society currently treats retirees and old
0:28:51 people who are not generating any economic value anymore.
0:28:56 And if you just look at like the percent of your paycheck that’s basically being transferred
0:28:58 to old people, it’s like, I don’t know, 25% or something.
0:29:02 And you’re willing to do this because they have a lot of political power.
0:29:06 They’ve used that political power in order to lock in these advantages.
0:29:08 They’re not, like, so overwhelming that
0:29:09 you’re like, I’m going to go to, like, Costa Rica instead.
0:29:12 You’re like, okay, I had to pay this money.
0:29:13 I had to pay this concession.
0:29:13 I’ll do it.
0:29:18 And hopefully humans can be in a similar position to this massive AI economy that old
0:29:20 people today have in today’s economy.
0:29:20 All right.
0:29:21 What do humans do?
0:29:23 Let’s say they get some money.
0:29:23 They have enough to live.
0:29:25 How do they spend their time?
0:29:28 Is it art, religion, poetry, drugs?
0:29:29 It’s the final job.
0:29:31 Yeah, we’re out of the curve here.
0:29:34 We’re the last man of history.
0:29:37 Wait, so here’s an idea.
0:29:38 How about Sovereign Wealth Fund?
0:29:39 Okay.
0:29:43 So Sovereign Wealth Fund, we tax Sam Altman and Elon Musk.
0:29:45 We’re using Sam as a metaphor here.
0:29:45 He’s a friend of the firm.
0:29:46 Yeah, yeah, yeah.
0:29:47 We tax him.
0:29:48 We tax Mark.
0:29:49 And so then we use their money.
0:29:51 Only the friends of the show will be taxed.
0:29:56 We use that money to buy like shares in the things that those people have.
0:29:59 So they get their money back because we’re buying the shares back from them.
0:29:59 Okay.
0:30:00 So it’s okay.
0:30:01 And then we hire them.
0:30:02 Yeah.
0:30:07 Because then what we do is we hire a number of firms, including A16Z and pay them two and
0:30:12 20 or whatever, to manage the investment of AI stuff on behalf of the humans.
0:30:16 But then the humans become broad-based sort of index fund shareholders or shareholders and
0:30:17 whatever you guys choose to invest in.
0:30:18 Then you take a cut.
0:30:20 And this could be the future economy.
0:30:22 This is what my PhD advisor, Myles Kimball, has suggested.
0:30:25 This is what the socialist Matt Bruning has suggested.
0:30:28 And this is what Alaska actually does with oil.
0:30:29 Capitalists like it.
0:30:30 Socialists like it.
0:30:31 Alaska likes it.
0:30:34 I think Sovereign Wealth Funds generally have a bad track record.
0:30:37 There’s some exceptions that have managed to use their wealth well, like Norway or Alaska.
0:30:42 But there’s just like these political economy problems that come up when there’s this tight
0:30:47 connection between the investment, which should theoretically be just highest rate of return
0:30:49 and politicians.
0:30:51 So I don’t, like, have a strong alternative.
0:30:54 Ideally, you just let the market decide how the investment should happen.
0:30:55 And then you can just take a tax.
0:30:57 But then exactly where does that tax happen?
0:30:58 I haven’t thought it through, but.
0:30:59 Are you dubious of this?
0:31:00 Yeah.
0:31:03 I wouldn’t want the government influencing where that investment happens, but I want the government
0:31:06 taking a significant share of the returns of that investment.
0:31:07 Yeah.
0:31:10 Are you dubious of the trope that labor provides meaning?
0:31:14 And if people don’t have a clear sense for labor, then it will be very difficult for them
0:31:16 to obtain alternative sources of meaning?
0:31:21 Or is that kind of a capitalist sort of trope that isn’t necessarily true?
0:31:27 My suspicion is that humans have just adapted to so much, like agricultural revolution, industrial
0:31:29 revolution, the growth of states.
0:31:34 Like once in a while, like a communist or fascist regime will come around or something. So the
0:31:39 idea that being free and having millions of dollars is the thing that finally gets us.
0:31:39 Yeah.
0:31:40 I’m just suspicious of that.
0:31:43 By the way, do we not disagree about the thing?
0:31:49 I’m saying once we get AGI, humans will not have high-paying jobs.
0:31:50 Do we disagree about this?
0:31:54 I think humans may have high-paying jobs.
0:31:55 Okay.
0:31:56 Because of comparative advantage.
0:32:00 The key here is if there’s some AI-specific resource constraint that doesn’t apply to humans,
0:32:05 then the law of comparative advantage takes over and then humans get high-paying jobs,
0:32:07 even though AI would be better at any specific thing than a human.
0:32:09 Because there’s some sort of aggregate constraint.
0:32:13 The example I always use, of course, is Marc Andreessen, who is the fastest typist I have
0:32:16 ever seen in my life and yet does not do his own typing.
0:32:21 And so because there’s a Marc Andreessen-specific aggregate constraint on Marc Andreessens,
0:32:22 there is only one of him.
0:32:28 So he hasn’t taken all the secretaries’ typing jobs, because he has better things to do.
0:32:32 And so if there’s some sort of AI-specific resource constraint that hits, then humans could have high-paying jobs.
0:32:33 Now, I’m not saying there will be.
0:32:34 Yeah.
0:32:35 And I’m not saying there won’t be.
0:32:35 Yeah.
0:32:36 I’m saying I don’t know if there is.
0:32:37 Yeah.
0:32:40 The reason I find that implausible is that I think that will be true in the short term,
0:32:44 because right now there’s 10 million H-100 equivalents in the world.
0:32:45 In a couple of years, there might be 100 million.
0:32:48 Like H-100 has the same amount of flops as a human brain.
0:32:52 So theoretically, they’re like as good as a brain if you had the right algorithm.
0:32:57 So there’s like a lower population of AIs, even if you had AGI right now, than humans.
0:33:04 But the key difference is that in the long run, you can just keep increasing the supply of compute or of robots.
0:33:09 And so if it is the case, so if an H-100 costs a couple thousand dollars a year to run,
0:33:14 but the value of an extra year of intellectual work is still like $100,000.
0:33:19 So you’re like, look, we’ve saturated all the H-100s and we’re going to pay a human $100,000 because there’s still so much intellectual work to do.
0:33:24 In that world, the return on buying another H-100, like an H-100 costs $40,000.
0:33:30 Just like in a year, that H-100 will pay you over 200% return, right?
0:33:40 So you’ll just keep expanding that supply of compute until basically the H100 cost plus depreciation plus running cost is the same as the value of an extra year of labor.
0:33:43 And in that world, that’s like much lower than human subsistence.
0:33:47 So comparative advantage is totally consistent with human wages being below subsistence.
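A rough back-of-the-envelope version of the calculation being made above. The $40,000 chip price and the roughly $100,000 value of a year of intellectual work are figures quoted in the conversation; the $3,000 running cost and the four-year useful life are illustrative assumptions.

```python
# Sketch of the compute-vs-wages arithmetic described above (assumed figures noted).
h100_price = 40_000        # upfront cost of one H100 (quoted above)
running_cost = 3_000       # assumed annual running cost ("a couple thousand dollars a year")
labor_value = 100_000      # quoted value of an extra year of intellectual work
useful_life_years = 4      # assumed depreciation horizon

# First-year return if one more chip can substitute for that year of work.
first_year_return = (labor_value - running_cost) / h100_price
print(f"first-year return on the chip: {first_year_return:.0%}")  # ~240%

# Supply keeps expanding until the annualized cost of a chip (depreciation plus
# running cost) equals the value of the marginal year of work it replaces.
annualized_cost = h100_price / useful_life_years + running_cost
print(f"annualized cost per machine 'worker': ${annualized_cost:,.0f}")  # ~$13,000
# This annualized figure is what the speakers compare against the cost of keeping
# a human alive and working; their claim is that it keeps falling as hardware improves.
```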
0:33:52 It is, but that comes from the common resource consumption.
0:34:01 So if basically all of the land and energy that could be used to feed and clothe and shelter humans gets appropriated by H-100s, then that is the case.
0:34:08 However, if you pass a law that says this land is reserved for growing human food,
0:34:16 if we actually were to just pass a simple law saying that you have to use these resources, these resources are reserved for humans.
0:34:22 At that point, human labor has nothing to do with this.
0:34:24 The only reason the system works is that you are basically transferring resources.
0:34:29 You’ve come up with a sort of like intricate way to transfer resources to humans.
0:34:31 It’s just like, this resource is for you.
0:34:33 You have this land and therefore you can survive.
0:34:36 And this is just like an inefficient way to allocate resources to humans.
0:34:37 It’s true that it is an inefficient way.
0:34:41 I think people will like hear this argument of comparative advantage and be like, oh, there’s some intrinsic reason.
0:34:42 We take UBI instead.
0:34:43 Yeah.
0:34:43 Okay.
0:34:43 Yeah.
0:34:45 Yeah, I mean, sure.
0:34:52 But then again, we typically do not see the first, best, most efficient political solution implemented for things like redistribution.
0:35:01 In the real world, redistribution happens via things like the minimum wage or like letting the AMA decide how many doctors there’s going to be.
0:35:05 So redistribution in the real world is not always the most efficient thing.
0:35:11 So I’m just saying that like comparative advantage, if you’re talking about will humans actually continue to get high paid work, yes or no,
0:35:16 it depends on political decisions that may be made and it depends on physical constraints that will happen.
0:35:21 But the high paid jobs are literally because like you have said that there must be high paying jobs politically.
0:35:22 I understand.
0:35:25 In this case, you’ve said it in an indirect way, but you still said it.
0:35:25 Right.
0:35:25 Yeah.
0:35:26 You’re absolutely right.
0:35:26 Yeah.
0:35:27 You’re not wrong.
0:35:28 Yeah.
0:35:31 I guess it’s like incredibly different from what somebody might assume.
0:35:32 Right.
0:35:34 Like it has almost nothing to do with the comparative advantage argument.
0:35:35 Okay, sure.
0:35:36 But that’s true of a lot of jobs that exist now.
0:35:43 Like a lot of jobs that exist now, I’m not sure what value, like, university professors add, and there’s a lot of those jobs.
0:35:53 Or like credit rating agencies or, you know, there’s a lot of things where probably we could wring out some significant TFP growth, more or less, by eliminating those things.
0:35:56 But we don’t because our politics is a clujocracy.
0:35:57 I think this is one of Tyler’s points.
0:36:14 I mean, I do think it’s important to like point out in advance, like basically, it would be better if we just bit the bullet about AGI so that instead of doing redistribution by expanding Medicaid and then Medicaid can procure all the amazing services that AI will create.
0:36:17 It would be better if we just said, look, this is coming.
0:36:25 And I’m not saying we should do a UBI today, but like if all human wages go below subsistence, then the only way to deal with that is through some kind of UBI.
0:36:28 Rather than if you happen to sue OpenAI, you get a trillion dollar settlement.
0:36:30 Otherwise, you’re screwed, right?
0:36:34 Some people said the bear case for UBI was something around like COVID as an example.
0:36:36 You gave people a bunch of money and what do they go do?
0:36:38 Go riot in the streets.
0:36:38 I’m teasing.
0:36:41 But are people going to use that money in an effective way?
0:36:42 I mean, that was literally what happened.
0:36:44 Yeah.
0:36:49 So is UBI the form that you would think, like, is the most effective method?
0:36:59 The reason I favor UBI is like this thing where in a future world with explosive growth, we’re going to see so many new kinds of goods and services that will be possible that are not available today.
0:37:08 And so distributing just like a basket of goods is just inferior to saying, oh, if like we solve aging, here’s some fraction of GDP.
0:37:17 Go spend your tens of millions partly on buying this aging cure, whatever this new thing that AI enables, rather than here’s a food stamps equivalent of the AGI world that you can have access to.
0:37:18 Of course.
0:37:24 I mean, this discussion may be academic because I believe you said that we got phones and the world looks the same.
0:37:26 I mean, no, it doesn’t.
0:37:28 Phones have destroyed the human race.
0:37:34 Like the fertility crash that’s happening all around the world, nobody is at replacement level.
0:37:37 Fertility is going far below replacement everywhere because of technology.
0:37:39 Is that the phone or the pill?
0:37:41 Well, no, it’s the phone.
0:37:48 I mean, we know the pill and other things like women’s education, whatever, lowered fertility quite a bit.
0:37:50 But some countries are still at replacement level.
0:37:51 Some are still around replacement level.
0:37:56 But the crash we’ve seen since everybody got phones is epic and is just unbounded.
0:38:02 The human race does not have a desire, a collective desire to perpetuate itself.
0:38:09 Yes, we’re going to get lonely, but we’ll have company through AI and through the internet, social media, until there’s just a few of us and we dwindle and dwindle.
0:38:12 But yeah, I mean, like technology has already destroyed the human race.
0:38:16 And basically, UBI is just like keeping us around on life support for a little while while that plays out.
0:38:24 I do think so far there’s been a lot of negative effects from widespread TikTok use or whatever that we’re still like learning about.
0:38:29 I am somewhat optimistic that in the long run, there’s some optimistic vision here that could work.
0:38:45 Just because right now the ratio of like, it’s impossible for Steven Spielberg to make every single TikTok and direct it in a sort of really compelling way that’s like genuine content and not just video games at the bottom and some music video at the top.
0:38:57 In the future, it might genuinely be possible to give every single person their own dedicated Steven Spielberg and create incredibly compelling but long narrative arcs that include other people they know, etc.
0:38:57 Oh, yeah.
0:39:00 So in the long run, I’m like, maybe this might happen.
0:39:03 I don’t think TikTok is like the best possible medium.
0:39:03 No.
0:39:06 I also don’t think TikTok is unique in destroying human race.
0:39:09 I think that interacting online instead of interacting in person.
0:39:10 How do you make your money?
0:39:10 That’s a great filter.
0:39:12 How do you make your money?
0:39:12 Well, good.
0:39:13 I agree.
0:39:16 We’re all making money destroying our species.
0:39:19 You don’t think we get isolated to dating apps and…
0:39:21 No, I’m saying like as long as you can get your…
0:39:23 Why did humans perpetuate the human species?
0:39:25 It was not because they wanted to see the human species perpetuated.
0:39:27 It was because it’s like, oop, I had sex and there came a baby.
0:39:28 And that’s done.
0:39:29 We’ve severed that.
0:39:30 That is the end.
0:39:32 We did not evolve to want our species to continue.
0:39:33 Right.
0:39:36 But you’re saying the reasons why we’re not having babies is because we can make friends
0:39:37 on the internet.
0:39:40 But is it that dating apps have created just a much more efficient market and thus there
0:39:42 is the same amount of pair bonding?
0:39:42 I don’t know.
0:39:44 I mean, like people are having less sex.
0:39:49 If Elon gets his way, everybody will just sit there gooning to some sort of grok companion.
0:39:50 The goonpocalypse seems upon us.
0:39:52 Is this available right now?
0:39:53 What’s the website?
0:39:54 Oh, no.
0:39:58 This podcast got silly.
0:40:05 Anyway, I guess the point is that the idea of a humanity that just keeps increasing in
0:40:10 numbers and spreading out to the galaxy, I don’t see a lot of evidence that is in our
0:40:14 future and that we have to go to great lengths to make sure that future is compatible with
0:40:16 AGI because I don’t think it’s happening in any case.
0:40:17 AGI or none.
0:40:23 By the way, not to cope too hard, but in a world where AGI happens, how important is increasing
0:40:23 population?
0:40:31 I mean, population has so far been the decisive factor in terms of which countries are powerful.
0:40:34 Like the reason China, if the U.S. was not involved, the reason China could take over
0:40:39 Taiwan is just that there’s 1.4 billion Chinese people and there’s 20 million Taiwanese people.
0:40:46 Now, if in future your population is, your effective labor supply is like largely AIs,
0:40:50 then this dynamic just means that like your inference capacity is literally your geopolitical
0:40:51 power, right?
0:40:54 I want to shift to short term a bit.
0:40:58 You’ve had some people on the podcast, you’re the AI 2027 folks who believe that AGI is perhaps
0:40:59 two years away.
0:41:00 I think they updated to three years away.
0:41:04 And then you’ve also had some folks on who said it’s not for 30 something years.
0:41:07 Maybe you could steel man both arguments and then share where you netted out.
0:41:08 Yeah.
0:41:12 So two years, if I’m steel manning them, is that, look, if you just look at the progress
0:41:15 over the last few years, it’s reasoning.
0:41:17 Aristotle said the thing that makes humans human is reasoning.
0:41:18 It was not that hard, right?
0:41:23 Like train on math and code problems and have it like think for a second and you get reasoning.
0:41:24 Like it’s crazy.
0:41:27 So what is a secret thing that we won’t get?
0:41:29 Can I ask a stupid question?
0:41:32 Why is it that stuff like O3-type models,
0:41:37 why are those called reasoning models, but GPT-4o is not called reasoning?
0:41:39 What are they doing different that’s reasoning?
0:41:45 One, I think it’s that GPT-3 can technically do a lot of things GPT-4 can, but GPT-4 just does
0:41:45 them way more reliably.
0:41:52 And I think this is even more true of reasoning models relative to GPT-4o, where 4o can’t solve
0:41:53 math problems.
0:41:57 And in fact, like modern day 4o has been probably trained a lot on math and code, but the original
0:42:00 GPT-4 just wasn’t trained that much on math and code problems.
0:42:05 So like it didn’t have whatever meta circuits there exist for like, how do you backtrack?
0:42:06 How do you be like, wait, but I’m on the wrong track.
0:42:07 I got to go back.
0:42:08 I got to pursue the solution this way.
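A minimal sketch of the distinction being described here, with `propose_step` and `check` as hypothetical stand-ins for a model call and a verifier; this is not any lab’s actual recipe, just the shape of “generate, check, backtrack” versus a single committed pass:

```python
# Toy illustration: a one-shot chain that commits to every step vs. a
# reasoning-style loop that checks each step and discards it if it looks wrong.
import random

def propose_step(partial, problem):
    # Hypothetical stand-in for sampling the next step of a solution.
    return partial + [random.choice(problem["moves"])]

def check(partial, problem):
    # Hypothetical verifier: e.g. run the code, or sanity-check a partial proof.
    return problem["is_promising"](partial)

def one_shot(problem, depth=5):
    """GPT-4-style: commit to a single chain of steps, never revisit."""
    partial = []
    for _ in range(depth):
        partial = propose_step(partial, problem)
    return partial

def reason_with_backtracking(problem, depth=5, budget=50):
    """Reasoning-style: if a step looks wrong, drop it and try again."""
    partial, spent = [], 0
    while len(partial) < depth and spent < budget:
        candidate = propose_step(partial, problem)
        spent += 1
        if check(candidate, problem):
            partial = candidate          # keep the step
        # else: "wait, I'm on the wrong track" -- discard and re-sample
    return partial

if __name__ == "__main__":
    toy = {"moves": ["a", "b"], "is_promising": lambda p: p[-1] == "a"}
    print(one_shot(toy), reason_with_backtracking(toy))
```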
0:42:13 Algorithmically, I have an okay idea of what a reasoning model does that the non-reasoning
0:42:14 models don’t.
0:42:18 But in terms of how does that map to a thing that we call reasoning?
0:42:22 What is the definition of what it means to reason that these people are using, the operational
0:42:23 definition here?
0:42:25 Because I don’t understand that myself.
0:42:28 I mean, 4o can’t get a gold at the IMO.
0:42:31 Okay, but I can reason and I can’t get a gold at the IMO.
0:42:32 But I can reason.
0:42:37 Yeah, I can’t get a gold either, but I don’t think I can reason as well as a math Olympian,
0:42:38 at least in the relevant domain.
0:42:43 I agree that reasoning is not just about mathematics, but this is true of any word you come up with.
0:42:43 Like the zebra.
0:42:46 What about the thing that like is a mixture of a zebra and a giraffe and they have a baby?
0:42:47 Is that a zebra still?
0:42:48 I agree there’s edge cases to everything.
0:42:50 But there’s a general conceptual category of zebra.
0:42:53 And I think there’s like a general conceptual category of reasoning.
0:42:55 Okay, I’m just wondering what it is.
0:42:57 Like when you have a checkout clerk, right?
0:43:00 That checkout clerk would look at an IMO problem and balk.
0:43:03 But then like you have a checkout clerk and the checkout clerk, you’re like,
0:43:09 okay, so you put the thing on this shelf and therefore someone has looked for it and didn’t
0:43:10 find it.
0:43:11 So something else must have happened.
0:43:17 So I think a reasoning model will be more reliable and be better at solving that kind of
0:43:17 problem than 4o.
0:43:20 So you were steel manning the AI 2027 view.
0:43:21 Yes.
0:43:24 So a lot of things we previously thought were hard have just been incredibly easy.
0:43:28 So whatever additional bottlenecks that you are anticipating, whether it’s this continual
0:43:33 learning, on the job training thing, whether it’s computer use, this is just going to be
0:43:35 the kind of thing where in advance, it’s like, how would we solve this?
0:43:39 And then deep learning just works so well that we like, I don’t know, try to train it to do
0:43:40 that and then it’ll work.
0:43:44 The long timeline people will say, I don’t know, there’s a sort of longer argument.
0:43:45 I don’t know how much to bore you with this.
0:43:50 But basically, the things we think of as very difficult and requiring intelligence have
0:43:52 been some of the things that machines have gotten first.
0:43:55 So just adding numbers together, we got in the 40s and 50s.
0:43:59 Reasoning might be another one of those things where we think of it as the apogee of like
0:44:00 human abilities.
0:44:04 But in fact, it’s only been recently optimized by evolution over the last few million years,
0:44:08 whereas things like just moving about in the world and having common sense and so forth
0:44:13 and having this long term memory, evolution spent hundreds of millions, if not billions
0:44:14 of years optimizing those kinds of things.
0:44:17 So those might be much harder to build into these AI models.
0:44:21 I mean, the reasoning models still go off on these crazy hallucinations that they’ll never
0:44:26 admit were wrong and will just gaslight you infinitely on some crap they made up.
0:44:28 Like just knowing truth from falsehood.
0:44:28 Yeah.
0:44:31 I’ve met a couple of humans who don’t seem to be able to know truth from falsehood.
0:44:32 They’re weird.
0:44:32 Yeah.
0:44:35 And so, but O3 sometimes does this.
0:44:36 I mean, I think it’s an interesting question.
0:44:38 Do they hallucinate more than the average person?
0:44:39 I think they’ll know less.
0:44:43 Again, hallucinate meaning like getting something wrong and then when they push them on it,
0:44:44 they’re like, no, whatever.
0:44:47 And eventually they’ll like concede if they’re clearly wrong.
0:44:50 I think like they’re actually more reliable than the average human.
0:44:53 But so the thing about the average human is you can get the average human to not do that
0:44:55 with the right consequences.
0:44:59 And maybe AI, we haven’t found the right like reinforcement learning function or whatever
0:45:01 to get them to not do that.
0:45:02 Okay.
0:45:03 Now let’s get to the view that it’s 30 years away.
0:45:04 What’s that view?
0:45:09 Just this thing of reasoning is relatively easy in comparison to, forget about robotics,
0:45:13 which is just going to be, evolution spent billions of years trying to get like robotics
0:45:19 to work, but there’s like other things involved with like tracking long-run state of, you know,
0:45:24 a lion can follow prey for a month or something, but these models can’t do a job for a month.
0:45:28 And these kinds of things are actually much more complicated than even reasoning.
0:45:32 And where you’ve netted out is it’s either going to happen in a few years or not for quite
0:45:33 some time.
0:45:39 Yeah, basically the progress in AI that we’ve seen over the last decade has been largely driven
0:45:42 by stupendous increases in compute.
0:45:49 So the compute used on training a frontier system has grown 4x a year for I think like the last
0:45:49 decade.
0:45:52 And that just over four years is 160x, right?
0:45:55 So that’s over the course of a decade, that’s hundreds of thousands of times more compute.
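A rough check on the compounding, with the caveat that the exact multiplier depends on the growth rate assumed:

$$4^{4} = 256 \;\text{over four years}, \qquad 4^{10} \approx 1.05 \times 10^{6} \;\text{over a decade},$$

while the roughly 160x figure quoted corresponds to a slightly lower annual rate, since $3.6^{4} \approx 168$.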
0:46:00 That physically cannot continue. Just think: right now we’re
0:46:02 spending 1.2% of GDP or something on data centers.
0:46:07 Not all of that is returning, of course, but what would it mean to continue this for another
0:46:08 decade?
0:46:13 For maybe five more years, you could keep increasing the share of energy that we’re spending on training
0:46:19 data centers or the fraction of TSMC’s leading edge nodes, wafers that we dedicate to making
0:46:23 AI chips or even the fraction of GDP that we can dedicate to AI training.
0:46:27 But at some point, you can’t keep this like 4x trend going a year.
0:46:30 And after that point, then it has to just like come from new ideas of like, here’s a new way
0:46:31 we could train a model.
0:46:36 And by the way, when I was writing that comparative advantage post, and I was thinking about AI
0:46:40 specific aggregate constraints, resource constraints, that’s what I was thinking of, actually.
0:46:40 Yeah.
0:46:43 That that expansion of compute has to slow down.
0:46:44 But I don’t know how much that matters.
0:46:48 That’s for training, and yeah, for the labor, the inference will also use
0:46:50 the same bucket of compute.
0:46:55 It is the case that for the amount of compute it costs to train a system, if you like set
0:47:01 up a cluster to train a system, you can usually run 100,000 copies of that model at typical
0:47:03 token speeds on that same cluster.
0:47:04 That’s still obviously not like billions.
0:47:09 But if we’ve got all this compute to be training these huge systems in the future, it would still
0:47:12 allow us to sustain a population of hundreds of millions, if not billions of AIs.
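A back-of-envelope sketch of where a figure like 100,000 concurrent copies could come from, using the common approximations of roughly 6·N·D FLOPs to train an N-parameter model on D tokens and roughly 2·N FLOPs per generated token; every number below is an illustrative assumption, not a measured figure:

```python
# Rough back-of-envelope: training cluster throughput -> concurrent inference copies.
# Ignores utilization differences between training and inference, memory-bandwidth
# limits, and serving overheads.

params = 1e12          # assumed model size: 1T parameters
tokens_trained = 2e13  # assumed training set: 20T tokens
train_days = 90        # assumed length of the training run on the cluster

train_flops = 6 * params * tokens_trained
cluster_flops_per_s = train_flops / (train_days * 86400)   # implied sustained FLOP/s

tokens_per_s_per_copy = 30                                  # assumed "typical token speed"
flops_per_s_per_copy = 2 * params * tokens_per_s_per_copy

concurrent_copies = cluster_flops_per_s / flops_per_s_per_copy
print(f"{concurrent_copies:,.0f} concurrent copies")        # a few hundred thousand with these numbers
```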
0:47:15 At that point, obviously we will still need more AIs.
0:47:17 What does a single AI mean in this instance?
0:47:20 Oh, like when you’re talking to Claude, it’s like a single instance.
0:47:20 Okay.
0:47:21 That’s talking to you.
0:47:22 So instances.
0:47:23 Yeah.
0:47:23 Okay.
0:47:26 So what’s going to determine whether it’s in a few years or?
0:47:29 Right now, we’re basically riding the wave of this extra compute.
0:47:31 That’s why AI is getting better every year, mostly.
0:47:35 In terms of the contribution of new algorithms, it’s a smaller fraction of the progress that’s
0:47:36 explained by that.
0:47:40 So if we’ve just got this like rocket, like how high will it take us?
0:47:41 And does it get us to space or not?
0:47:44 And if it doesn’t, then we just have to rely on the algorithmic progress, which has been
0:47:44 this.
0:47:45 Slower.
0:47:45 Yeah.
0:47:47 But you think it might get us to space?
0:47:48 Yeah.
0:47:51 I think there’s like a chance that like, oh, continual learning is also like, you know,
0:47:53 I had this whole theory about, oh, it’s so hard.
0:47:54 And how do you slot it in?
0:47:55 And they’re like, I fucking trained it to do this.
0:47:57 Like, what are we talking about here?
0:48:03 That leads into another thing that I’ve thought about, which is how poor our track record
0:48:06 for making predictions about the future of AI has been.
0:48:09 The first time you and I hung out, I don’t know if you remember this, was with Leopold.
0:48:10 Yeah.
0:48:10 Oh, really?
0:48:11 Yeah.
0:48:11 I remember this.
0:48:12 It was at your old house.
0:48:12 Yes.
0:48:16 And Leopold is just pronouncing a whole bunch of pronouncements from the couch.
0:48:17 Yeah.
0:48:20 And he released this big Situational Awareness thing.
0:48:21 How long ago was that?
0:48:21 A year and a half?
0:48:22 Yeah.
0:48:22 Yeah.
0:48:28 I would say that already most of the things he predicted have been invalidated or made
0:48:30 irrelevant in the last year and a half.
0:48:34 And especially in terms of like all the stuff about competition with China, like it turns
0:48:37 out distillation was able to get them a whole lot of things that he never predicted.
0:48:41 It turns out that so many of the things other than just the idea that AI would keep getting
0:48:43 better, which he predicts and a lot of people predict.
0:48:47 But then I feel like a lot of the specific predictions about US capabilities and Chinese capabilities
0:48:50 and what would be the bottlenecks and what would be the things that, you know, here’s how
0:48:53 we can compete with China that has all been proven wrong since.
0:48:57 I think this is actually an interesting trend in the history of science where like some
0:49:02 of the scientists who are the smartest in thinking about the progression of the atom bomb or progression
0:49:05 of physics just had these like ideas about the only way we can sustain this is if we have
0:49:06 one world government.
0:49:07 I’m talking about after World War II.
0:49:10 There’s no other way we can deal with this new technology.
0:49:15 I do think, relative to the technological predictions, Leo, I think the main way in which he’s been
0:49:19 wrong is that it didn’t take somebody breaking into the servers in order to learn how
0:49:21 O3 or something works.
0:49:24 It was just public, just you being able to use the model.
0:49:26 You can talk to it and learn what it knows.
0:49:30 Just knowing a reasoning model works and then you can like use it and you see like, oh,
0:49:30 what is the latency?
0:49:32 Like how fast is it outputting tokens?
0:49:33 That will teach you like how big is the model?
0:49:36 Like you learn a lot just from publicly using a model and knowing a thing is possible.
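A crude sketch of that inference, under the assumption that single-stream decoding is memory-bandwidth bound (every active parameter streamed from memory once per token) and ignoring batching, speculative decoding, and mixture-of-experts sparsity, all of which loosen the bound considerably; the bandwidth and speed numbers are illustrative, not measurements of any particular model:

```python
# "Token speed hints at model size": observed decode speed implies a rough
# upper bound on active parameter count, if decode is memory-bandwidth bound.

def implied_active_params(tokens_per_s, hbm_bandwidth_bytes_per_s, bytes_per_param=2, num_gpus=8):
    """Upper bound on active parameters consistent with an observed decode speed."""
    total_bandwidth = hbm_bandwidth_bytes_per_s * num_gpus
    bytes_per_token = total_bandwidth / tokens_per_s   # bytes that can be streamed per token
    return bytes_per_token / bytes_per_param

# Illustrative: ~100 tokens/s observed on an assumed 8-GPU node with ~3 TB/s HBM each.
print(f"{implied_active_params(100, 3e12):.2e} active parameters (rough upper bound)")
```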
0:49:42 He has been right in one big way, which is like he identified three key things that would
0:49:47 be required to get us from GPT-4 to AGI, kind of thing, which was being able to think.
0:49:49 So test time compute, onboarding.
0:49:50 Would you count test time compute in that?
0:49:51 Yeah, yeah.
0:49:53 It was like one of his three big unhobblings.
0:49:56 Then like onboarding in terms of the workplace.
0:49:58 And then I think the final one was computer use.
0:50:00 Look, one out of three.
0:50:01 And it was a big deal.
0:50:04 So I think he got some things right, some things wrong, but yeah.
0:50:09 And what’s your take on the model of automating AI research as the path to AGI?
0:50:09 The METR uplift paper, contrary to expectations, they found that whenever senior developers
0:50:19 working in repositories that they understood well used AI, they were actually slowed down
0:50:20 by 20%.
0:50:21 Yeah, I did see that.
0:50:21 Yeah.
0:50:24 Whereas they themselves thought that they were sped up 20%.
0:50:25 And right.
0:50:26 So there’s a bunch of things.
0:50:27 I’m getting things done.
0:50:31 This was back to your theory about how the phones are destroying us.
0:50:38 That is an update towards the idea that AI is not on this trend to be this super useful
0:50:43 assistant that’s helping us already make the process of training AI much faster.
0:50:45 And this will just be the feedback loop and exponential.
0:50:47 I have other independent reasons.
0:50:48 I’m like, I don’t know.
0:50:51 I’m like 20% that like we’ll have some sort of intelligence explosion.
0:50:54 One of the other Leopold predictions was nationalization.
0:50:56 Is that something you could potentially foresee in the next few years?
0:51:01 I don’t think it’s politically plausible, especially given this administration.
0:51:03 I don’t think it’s desirable.
0:51:08 First, I think it would like just drastically slow down AI progress because look, this is not
0:51:09 1945 America.
0:51:13 And also building an atom bomb is like a way easier project than building AGI.
0:51:16 But China’s quasi-nationalized most of it.
0:51:19 I mean, China doesn’t control BYD’s day-to-day decisions about what to build.
0:51:23 But then if China says, do this, BYD does it, as does every Chinese company.
0:51:26 I mean, that’s kind of the relationship American companies have with the U.S. government as well.
0:51:26 You think so?
0:51:28 I mean, somewhat.
0:51:30 Also, the big difference is, what do we mean by nationalization?
0:51:34 There’s one thing which is like, there’s a party cadre who is…
0:51:35 In your company.
0:51:35 Exactly.
0:51:40 There’s another, which is that each province is like just pouring a bunch of money into
0:51:44 building their own competitor to BYD in this potentially wasteful way.
0:51:49 That like distributed competitive process seems like the opposite of nationalization to me.
0:51:52 Like when people imagine AGI nationalization, I don’t think they’re saying like Montana will
0:51:56 have their AGI and Wyoming will have their AGI and they’ll all compete against each other.
0:51:59 I think they imagine that all the labs will merge, which is actually the opposite of how
0:52:00 China does industrial policy.
0:52:05 But then you do think that the American government basically, if it says, do this, then like
0:52:07 XAI and OpenAI will do it.
0:52:08 No.
0:52:10 Actually, I think in that way, obviously, the Chinese system and the U.S. is more different.
0:52:15 Although it has been interesting to see that whenever, I don’t know, we’ve noticed the way that
0:52:18 different lab leaders have changed their tweets in the aftermath of the election.
0:52:18 I mean, also…
0:52:18 Yeah.
0:52:20 More bullish on open source.
0:52:25 And didn’t Sam have a thing where, I think previously he said that AI will take jobs.
0:52:26 How do we deal with this?
0:52:30 And then, didn’t he recently say something at a panel where I think President Trump is correct
0:52:32 that AI will like create jobs or something.
0:52:34 I don’t think in the long run he believes this.
0:52:37 But the reason why humans should be excited about even their jobs being taken is just
0:52:40 they’ll be so rich that, why do they even need it?
0:52:40 Yeah.
0:52:40 Yeah.
0:52:41 Much richer than they are now.
0:52:42 Right.
0:52:46 Modulo, this redistribution slash not fucking it over with some guild-like thing.
0:52:47 Yeah.
0:52:51 You mentioned the atomic bomb and we also mentioned off camera that you don’t think the nuke is
0:52:55 a good comparison for what happens, how does it play out when a lab figures out AGI?
0:52:57 What then happens?
0:53:00 Is there a huge advantage if one country has it first or if one lab has it first, do they dominate?
0:53:06 I think it’s less like the nuclear bomb where there’s a self-contained technology that is
0:53:10 so obviously relevant to specifically this like offensive capability.
0:53:13 And you can say, well, like there’s nuclear power as well, but like neither of those, like
0:53:14 nuclear power is just like this very self-contained thing.
0:53:20 Whereas I think intelligence is much more like the industrial revolution where there’s not
0:53:22 like this one machine that is the industrial revolution.
0:53:28 It is just this like broader process of growth and automation and so forth.
0:53:31 So Brad DeLong’s right and Robert Gordon is wrong.
0:53:34 Robert Gordon said there’s only four things, just four big things.
0:53:34 Oh, really?
0:53:37 And Brad DeLong is like, no, it’s a process of discovering things.
0:53:37 Anyway.
0:53:37 Interesting.
0:53:39 What were Rob’s four things again?
0:53:40 Do you remember?
0:53:41 Oh, I mean, electricity.
0:53:42 Test time compute.
0:53:42 Not kidding.
0:53:43 Test time compute.
0:53:46 The internal combustion engine, steam power.
0:53:47 And then what was the fourth one?
0:53:50 Like maybe like plumbing?
0:53:50 Right.
0:53:51 I think was the fourth one.
0:53:52 Yeah.
0:53:55 Or even in that case, maybe that actually is, maybe that’s closer to how I think about it.
0:53:58 But then you need so many complementary innovations.
0:54:01 So internal combustion engines, I think, invented in the 1870s.
0:54:05 Drake finds the oil well in Pennsylvania in the 1850s.
0:54:08 Obviously, it takes like a bunch of complementary innovations before like these two things can
0:54:08 merge.
0:54:11 Before that, they’re just using the oil for kerosene to light lamps.
0:54:16 But regardless, so if it’s this kind of process, it was the case that many countries achieved
0:54:17 industrialization before other countries.
0:54:24 And like China was dismembered and went through a terrible century because the Qing dynasty wasn’t
0:54:26 up to date on the industrialization stuff.
0:54:28 And much smaller countries were able to dominate it.
0:54:33 But that is not like we developed the atom bomb first and now we have decisive advantage.
0:54:38 If that had been Nazi Germany or the Soviet Union, it would have gone differently.
0:54:42 How do you see the U.S.-China competition playing out in terms of AI?
0:54:46 I genuinely don’t know.
0:54:47 Yeah.
0:54:50 I think it’s like possible that there could be some positive-sum dynamic, not like a nuclear
0:54:53 weapon, where both countries can just adopt AI.
0:54:58 And there is this dynamic where if you have higher inference capacity, not only can you deploy
0:55:01 the AIs faster and you have more economic value that’s generated, but you can have a
0:55:07 single model learn from the experience of all of its copies and you can have this basically
0:55:08 broadly deployed intelligence explosion.
0:55:13 So I think it really matters to get to that discontinuity first.
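A toy sketch of the “single model learning from the experience of all of its copies” idea: deployed instances log their interactions into one shared pool, and a common checkpoint is periodically updated from it. This is hypothetical pseudocode for the concept, not any lab’s pipeline; the point is that more inference capacity means more copies, which means more pooled experience per update.

```python
# Toy illustration of pooled experience across deployed copies of one model.
from dataclasses import dataclass, field

@dataclass
class SharedModel:
    version: int = 0
    experience: list = field(default_factory=list)

    def update(self, pooled_logs):
        # Stand-in for a fine-tuning / RL step on the pooled experience.
        self.experience.extend(pooled_logs)
        self.version += 1

def deployment_round(model, num_copies=1000):
    # Each copy works on its own tasks but contributes to one shared pool.
    pooled = [{"copy": i, "task_log": f"trajectory-{model.version}-{i}"} for i in range(num_copies)]
    model.update(pooled)

model = SharedModel()
for _ in range(3):   # more inference capacity -> more copies -> more experience per round
    deployment_round(model)
print(model.version, len(model.experience))
```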
0:55:18 I don’t have a sense of at what point, if ever, is it treated like the main geopolitical
0:55:21 issue that countries are prioritizing.
0:55:26 Also, from the misalignment stuff, the main thing I worry about is the AI playing us off each
0:55:30 other rather than us playing the AIs off each other.
0:55:35 You mean AI just like telling us all to hate each other the way like Russian trolls currently
0:55:36 tell us all to hate each other?
0:55:40 More so like the way that the East India Company was able to play different provinces in India
0:55:41 off of each other.
0:55:44 And ultimately, at some point you realize, okay, like they control India.
0:55:47 And so you could have a scenario like, okay, think about the conquistadors, right?
0:55:52 A couple hundred people show up to your border and they take over an empire of 10 million
0:55:52 people.
0:55:55 And this happened not like once, it happened two to three times.
0:55:56 Okay, so why was this possible?
0:56:02 Well, it’s that the Aztecs, the Incas, weren’t communicating with each other.
0:56:04 Like they didn’t even know the other empire existed.
0:56:09 Whereas Cortes learns from the subjugation of Cuba and then he takes over the Aztecs.
0:56:13 Pizarro learns from the subjugation of the Aztecs and takes over the Incas.
0:56:16 And so they’re able to like just learn about, okay, you take the emperor hostage and then
0:56:18 this is the strategy you employ, et cetera.
0:56:21 It’s interesting that the Aztecs and Incas never met each other and that worked both
0:56:23 times, sort of.
0:56:24 Yeah.
0:56:28 That’s interesting that these totally disconnected civilizations both had the similar vulnerabilities.
0:56:29 Yeah.
0:56:31 I mean, it was like literally the exact same playbook.
0:56:36 The crucial thing that went wrong is that at this point in the 1500s, we actually don’t
0:56:37 have modern guns.
0:56:38 We have arquebuses.
0:56:42 But the main advantage that the Spanish had was they had horses and then secondly, they
0:56:43 had armor.
0:56:45 And it was just incredibly, you’d have thousands of warriors.
0:56:49 If you’re fighting on an open plain, the horses with armor will just like trounce all of them.
0:56:54 Eventually, the Incas had this rebellion and they learned they can like roll rocks downhill,
0:56:57 and the rebellion was moderately successful, even though eventually, we know what happened.
0:57:01 You could say that the Spanish on their side had guns, germs, and steel.
0:57:07 So how could this have turned out differently if the Aztecs had learned this and then had like
0:57:08 told the Incas?
0:57:10 I mean, they weren’t in contact, but if there’s some way for them to communicate, like, here’s
0:57:11 how you take down a horse.
0:57:15 I think what I would like to see happen between the U.S. and China, basically, is like the
0:57:19 equivalent of some red telephone during the Cold War where you can communicate, look, we
0:57:23 notice this, especially when AI becomes more integrated with the economy and government,
0:57:24 et cetera.
0:57:29 We notice this crazy attempt to do some sabotage, like be aware that this is a thing they can
0:57:30 do, like train against it, et cetera.
0:57:32 AI is trying to trick you into doing this.
0:57:33 Watch out.
0:57:34 Yeah, exactly.
0:57:37 Though it would require a level of trust, I’m not sure it’s plausible, but.
0:57:38 That’s the optimal thing that would happen.
0:57:43 At the lab level, do you think it’s multipolar or is there consolidation, and who’s your bet
0:57:44 to win?
0:57:45 I’ve been surprised.
0:57:48 So you would expect over time as the cost of competing at the frontier has increased,
0:57:51 you would expect there to be fewer players at the frontier.
0:57:53 This is what we’ve seen in semiconductor companies, right?
0:57:55 That it gets more expensive over time.
0:57:58 There’s now maybe one company that’s at the frontier in terms of like global semiconductor
0:57:58 manufacturing.
0:58:02 We’ve seen the opposite trend in AI where there’s like more competitors today than there were a
0:58:04 year ago, even though it’s gotten more expensive.
0:58:09 I don’t know where the equilibrium here is because the cost of training these models
0:58:13 is still much less than the value they generate.
0:58:16 So I think it would still make sense for somebody new
0:58:19 to come into this field and 10x the amount of investment.
0:58:21 Do you have a take on where the equilibrium is?
0:58:25 Well, I mean, it has to do with entry barriers.
0:58:27 Basically, it’s all about entry barriers.
0:58:30 It’s the question of if I just decide to plunk down this amount of money.
0:58:35 So if the only entry barrier is fixed costs, I’d say we have such a good system for like
0:58:39 just loaning people money that that’s not going to be that big a deal.
0:58:43 But if there’s entry barriers that have to do with if you make the best AI, it gets even
0:58:44 better.
0:58:45 So, you know, why enter?
0:58:47 That’s the big question.
0:58:48 I don’t actually know the answer to the question.
0:58:51 There’s a broad question we ask in general is like, what are the network effects here?
0:58:51 Right.
0:58:52 And what is the defensibility?
0:58:54 And it seems, obviously, to be brand.
0:58:55 Yeah.
0:58:57 I mean, I’m not sure it’s a network effect.
0:59:03 But brand, like OpenAI, ChatGPT is the Kleenex of AI in that Kleenex is actually called a
0:59:04 tissue.
0:59:06 But we call it a Kleenex because there was a company called Kleenex.
0:59:06 Where are you going with this?
0:59:08 Are we back to the Grok thing?
0:59:10 Oh, no.
0:59:13 Well, what’s another example?
0:59:14 Xerox.
0:59:14 Yeah.
0:59:15 You Xerox this thing.
0:59:17 Xerox is just one company that makes a copier, right?
0:59:18 Not even the biggest.
0:59:20 But everybody knows that it’s Xeroxing.
0:59:20 Right.
0:59:25 And so ChatGPT gets massive rents from the fact that everyone just says, I’ll use AI.
0:59:26 What’s an AI?
0:59:27 ChatGPT.
0:59:28 I’ll use it.
0:59:31 And so like brand is the most important thing.
0:59:35 But I think that’s mostly due to the fact that this key capability of learning on the
0:59:36 job has not been unlocked.
0:59:37 And so.
0:59:37 Right.
0:59:41 And I was saying that could be a technological network effect that could supersede the brand
0:59:41 effect.
0:59:42 Right.
0:59:42 Yeah.
0:59:43 Yeah.
0:59:47 And I think that that will have to be unlocked before most of the economic value of these
0:59:48 models can be unlocked.
0:59:52 And so by the point they’re generating hundreds of billions of dollars a year or maybe trillions
0:59:56 of dollars a year, they will have had to come up with this thing, which will be a bigger
0:59:59 advantage, in my opinion, than brand network effects.
1:00:04 Is Zuck throwing away money, wasting it on hiring all these guys?
1:00:07 You know, people have been saying, like, look, the messaging could have been better or whatever.
1:00:12 I mean, I think it’s just much better to have worse messaging or something, but then not
1:00:14 sleepwalk towards losing.
1:00:18 Also, if you just think about, like, if you pay an employee a hundred million dollars and
1:00:22 they’re a great AI researcher and they make your compute, your training or your inference
1:00:22 one percent more efficient.
1:00:27 Zuck is spending on the order of like 80 billion dollars a year on compute.
1:00:29 That’s made one percent more efficient.
1:00:31 That’s easily worth a hundred million dollars.
1:00:34 Like, a hundred million dollars is below the break-even point for this extra researcher.
1:00:38 So the real question is, like, why haven’t we hit that break-even point yet?
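The back-of-envelope behind that claim, using the round numbers quoted above:

$$0.01 \times \$80\text{B/year} = \$800\text{M/year} \gg \$100\text{M},$$

so under these assumptions even a sustained efficiency gain of a fraction of a percent would cover the salary.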
1:00:44 And if we, as podcasters, encourage one researcher to join Meta, I mean, what’s the, how do you
1:00:44 put a price on that?
1:00:45 Yes.
1:00:48 And this has been a phenomenal conversation.
1:00:50 Noah and Dwarkesh, thank you so much for coming on.
1:00:50 It’s been great.
1:00:50 Awesome.
1:00:51 Thanks, Eric.
1:00:56 Thanks for listening to the A16Z podcast.
1:01:01 If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com slash
1:01:01 A16Z.
1:01:04 We’ve got more great conversations coming your way.
1:01:05 See you next time.
In this episode, Erik Torenberg is joined in the studio by Dwarkesh Patel and Noah Smith to explore one of the biggest questions in tech: what exactly is artificial general intelligence (AGI), and how close are we to achieving it?
They break down:
- Competing definitions of AGI — economic vs. cognitive vs. “godlike”
- Why reasoning alone isn’t enough — and what capabilities models still lack
- The debate over substitution vs. complementarity between AI and human labor
- What an AI-saturated economy might look like — from growth projections to UBI, sovereign wealth funds, and galaxy-colonizing robots
- How AGI could reshape global power, geopolitics, and the future of work
Along the way, they tackle failed predictions, surprising AI limitations, and the philosophical and economic consequences of building machines that think, and perhaps one day, act, like us.
Timecodes:
0:00 Intro
0:33 Defining AGI and General Intelligence
2:38 Human and AI Capabilities Compared
7:00 AI Replacing Jobs and Shifting Employment
15:00 Economic Growth Trajectories After AGI
17:15 Consumer Demand in an AI-Driven Economy
31:00 Redistribution, UBI, and the Future of Income
31:58 Human Roles and the Evolving Meaning of Work
41:21 Technology, Society, and the Human Future
45:43 AGI Timelines and Forecasting Horizons
54:04 The Challenge of Predicting AI’s Path
57:37 Nationalization, Geopolitics, and the Global AI Race
1:07:10 Brand and Network Effects in AI Dominance
1:09:31 Final Thoughts
Resources:
Find Dwarkesh on X: https://x.com/dwarkesh_sp
Find Dwarkesh on YT: https://www.youtube.com/c/DwarkeshPatel
Subscribe to Dwarkesh’s Substack: https://www.dwarkesh.com/
Find Noah on X: https://x.com/noahpinion
Subscribe to Noah’s Substack: https://www.noahpinion.blog/
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.