The 2045 Superintelligence Timeline: Epoch AI’s Data-Driven Forecast

AI transcript
0:00:03 People are spending a lot on these models.
0:00:06 They’re presumably doing this because they’re getting value from them.
0:00:10 You can maybe argue like, oh, well, I don’t think that value is real.
0:00:11 I think people are just playing around, whatever.
0:00:14 But like, whatever, they’re paying for it.
0:00:15 That’s a pretty solid sign.
0:00:18 We’re almost giving you here the useful answer of like,
0:00:20 I don’t think it’s a bubble because it’s not burst yet.
0:00:22 Once it’s burst, then you’ll know it’s a bubble.
0:00:25 People often make the case, oh, AI hasn’t been profitable yet
0:00:27 and they’re spending more to make it profitable.
0:00:30 In reality, they’ll have paid off the cost of all of the development
0:00:32 they’ve done in the past very soon.
0:00:35 It’s just that they’re doing more development for the future.
0:00:37 Will they regret that spending?
0:00:38 How much are they spending?
0:00:42 You can look at NVIDIA and how much they’re selling each year
0:00:44 and you can see whether it keeps on growing
0:00:47 and you can see whether stuff is kind of looking good to continue.
0:00:50 Math seems unusually easy for AI.
0:00:51 I’m going to be honest.
0:00:54 People often make claims about it being like this, you know,
0:00:57 intuitive deep thing that it would mean that AI has achieved
0:01:00 something, some huge level of intelligence for it to solve.
0:01:04 I think in practice, this is just like, you know, making a piece of art.
0:01:06 It turns out to be farther down the capabilities tree
0:01:07 than people might have guessed.
0:01:10 We sort of had this with chess decades ago, right?
0:01:13 Like computers solved chess very well
0:01:17 and everyone was thinking of this as the pinnacle of reasoning
0:01:19 and everyone as a result kind of concluded,
0:01:21 oh, well, of course computers can do chess.
0:01:23 The like interesting scenario to think about,
0:01:26 you know, 20% chance, 30% chance something like this will happen.
0:01:30 The next decade is like, you know, a 5% increase in unemployment
0:01:34 over a very short period of time, like six months due to AI.
0:01:36 The public’s reaction to this will determine a lot.
0:01:39 There will be very, very strong feelings about AI once this happens.
0:01:43 I think there will be a bunch of very strong consensus on what to do
0:01:46 on things that we don’t normally think of as things that people are considering.
0:01:51 I know, when this happened with COVID, there was a several-trillion-dollar stimulus package
0:01:53 in a matter of weeks to days.
0:01:55 It was breakneck speed.
0:01:57 I don’t know what that will look like for AI,
0:02:00 but I think it’s like everything else in AI.
0:02:05 It’s exponential, which means it will pass from the point of people sort of caring about it
0:02:07 to people really caring about it quite fast.
0:02:11 I just expect wherever we end up, there will be this certain thing,
0:02:14 which we would have considered unimaginable a year ago.
0:02:23 Right now, AI labs are burning billions on compute.
0:02:27 Anthropic just built a data center that uses as much power as Indiana’s state capital
0:02:30 and Microsoft’s planning one that rivals New York City.
0:02:31 The bet?
0:02:35 That AI will eliminate entire categories of work before the money runs out.
0:02:40 David Owen and Yafa Edelman from Epoch AI have done something unusual.
0:02:42 They’ve actually measured what’s happening.
0:02:45 They tracked down permits, analyzed satellite imagery,
0:02:48 and calculated exactly how fast these data centers are scaling.
0:02:52 Their conclusion challenges both the skeptics and the true believers.
0:02:54 They don’t see a bubble.
0:02:57 They see revenue doubling every year with inference already profitable.
0:03:02 But they also don’t see the software-only singularity that some predict,
0:03:04 where AI recursively improves itself overnight.
0:03:07 Instead, they forecast something stranger,
0:03:12 a world where AI solves the Riemann hypothesis before it can reliably fold your laundry.
0:03:16 Where 10% of current jobs vanish, but unemployment might barely budge.
0:03:20 Where we hit artificial general intelligence, not with a bang,
0:03:24 but through a series of increasingly surreal milestones that keep moving the goalposts.
0:03:27 Along with A16Z partner Marco Mascorro,
0:03:29 we cover their timeline predictions,
0:03:31 what stops or doesn’t stop the scaling,
0:03:35 and why the political response might happen faster than anyone expects.
0:03:40 Guys, there’s a lot of conversation about the macro.
0:03:41 Are we in a bubble?
0:03:43 How should we even think about this question?
0:03:45 We’re going to get into forecasting later on,
0:03:49 but why don’t you just take your first stab at how you approach such a big general question?
0:03:56 For me, at least, the way that I thought about this a little bit is I look at the big indicator being
0:04:00 how much people are spending on stuff like compute,
0:04:03 and I guess maybe some sense of will they regret that spending?
0:04:04 That’s relevant.
0:04:06 But the how-much-are-they-spending thing?
0:04:07 That, you can see.
0:04:10 You can look at NVIDIA and how much they’re selling each year,
0:04:12 and you can see whether it keeps on growing,
0:04:16 and you can see whether stuff is kind of looking good to continue.
0:04:17 The will-they-regret-it side?
0:04:19 I mean, that’s just TBD, right?
0:04:21 Like, we’ll actually have to wait and see.
0:04:25 It does seem as if most compute gets spent on inference
0:04:29 that companies don’t so far regret, like, using to offer their products.
0:04:35 So, I mean, on that side, I’m, like, thinking not too bubbly yet.
0:04:39 But, yeah, I’m low confidence, and there’s other stuff to think about.
0:04:40 Yeah.
0:04:43 Right now, the amount of money companies are actually earning in profit,
0:04:45 not including the cost to develop the models initially,
0:04:47 seems to be, like, very positive,
0:04:50 such that if they stop developing bigger and bigger models
0:04:52 and just stick with the ones they’ve had,
0:04:55 they’d have earned a profit pretty quickly at the current margins.
0:04:56 And in this sense, it doesn’t seem bubbly.
0:04:59 On the other hand, at any given time,
0:05:01 they’re investing in building even larger and larger models,
0:05:04 and if that goes well, then they’ll earn more money,
0:05:06 and if that doesn’t go well,
0:05:08 then no matter how profitable they are right now,
0:05:12 it’ll be a small amount of money compared to how much they would have spent.
0:05:15 So, I think right now there are not financial signs that there’s a bubble.
0:05:20 A lot of people worrying about bubbles just aren’t necessarily used to the level of spending
0:05:24 and just, like, the level of success that has sort of happened with scaling.
0:05:29 But if there is a bubble, it could happen very suddenly and be pretty bad.
0:05:33 Yeah, I think we’re almost giving you here the useful answer of, like,
0:05:35 I don’t think it’s a bubble because it’s not burst yet.
0:05:37 Once it’s burst, then you’ll know it’s a bubble.
0:05:38 Yeah, yeah.
0:05:41 I do think, like, you could imagine a world in which there’s all the spending
0:05:45 and the current level of success does not, like,
0:05:48 people often make the case, oh, AI hasn’t been profitable yet,
0:05:50 and they’re spending more to make it profitable,
0:05:52 but right now it’s not making anything.
0:05:53 And in reality,
0:05:56 they’ll have paid off the cost of all of the development
0:05:58 they’ve done in the past very soon.
0:06:01 It’s just that they’re doing more development for the future.
0:06:04 So I think there is this underlying financial success so far
0:06:08 that I wouldn’t expect to see if there were, at the very least, an obvious bubble.
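As a rough illustration of the payback logic described above, here is a toy calculation in Python. Every number in it (development cost, revenue, margin, growth rate) is an illustrative assumption, not a figure from Epoch or the labs; the only structural point taken from the conversation is that inference gross profit is positive while revenue roughly doubles each year.

    # Toy payback model (all figures are illustrative assumptions, not real numbers).
    def years_to_payback(dev_cost_busd, revenue_busd, gross_margin, revenue_growth=2.0):
        """Years until cumulative inference gross profit covers past development cost."""
        cumulative_profit, years = 0.0, 0
        while cumulative_profit < dev_cost_busd:
            cumulative_profit += revenue_busd * gross_margin
            revenue_busd *= revenue_growth  # revenue roughly doubling yearly, per the discussion
            years += 1
        return years

    # E.g. $10B of past development, $5B/yr revenue at a 50% inference gross margin:
    print(years_to_payback(dev_cost_busd=10, revenue_busd=5, gross_margin=0.5))  # -> 3

Under these made-up inputs, the past development cost is recovered in about three years, which is the shape of the "they'd have earned a profit pretty quickly at current margins" claim.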
0:06:11 Yeah, that does seem very relevant.
0:06:15 People are spending a lot on these models.
0:06:17 There are presumably, like, you know, users who use them.
0:06:21 They’re presumably doing this because they’re getting value from them.
0:06:24 You can maybe argue, like, oh, well, I don’t think that value’s real.
0:06:26 I think people are just playing around.
0:06:28 Whatever, but, like, whatever, they’re paying for it.
0:06:30 That’s a pretty solid sign.
0:06:32 I guess one quick question related to this is, like,
0:06:35 you talked in the AI in 2030 report.
0:06:39 Basically, you haven’t seen signs of these models kind of plateauing;
0:06:41 like, the capabilities keep increasing.
0:06:44 And you have the benchmarks, you have the amount of data that is going in,
0:06:45 the amount of compute.
0:06:49 Do you think phases or parts of the models are plateauing, though?
0:06:53 Like, for instance, pre-training, are we seeing some sort of plateauing in that?
0:06:56 Or do you think people are still exploring some innovations in that stage?
0:06:58 I’m curious what you think about that.
0:07:02 Yeah, I think this gets a bit harder to look at.
0:07:08 Like, we get to an area where there isn’t as much public data to say a lot, right?
0:07:13 It seems as if pre-training is comparatively less of a focus than it was before,
0:07:19 partly because, like, you have this exciting new direction of, well, newish direction of
0:07:22 post-training where they’ve done so much about reasoning, whatever.
0:07:26 But then I don’t necessarily take that as evidence of, like, oh, no, and that means pre-training,
0:07:28 you couldn’t scale further, whatever.
0:07:31 Like, it seems as if there is meaningfully more data out there.
0:07:36 It seems as if plausibly, like, even a lot of this stuff is quite synergistic.
0:07:38 You develop a better model.
0:07:41 You, like, use post-training stuff to make it better.
0:07:46 You get a load of data of the model actually being used successfully or not.
0:07:49 A lot of that can probably go into pre-training next time.
0:07:54 You are not projecting a software-only singularity where AI is able to automate AI research,
0:07:56 which is an automated feedback loop.
0:07:56 Why not?
0:08:01 Yeah, I mean, I guess, like, I’m not sure, and Yafa will have more to say.
0:08:07 And it’s like, for me, it’s like, that report, it’s no one person’s kind of,
0:08:10 oh, this is, like, the forecast.
0:08:11 This is the prediction, right?
0:08:15 This report very specifically looks at what are the current trends?
0:08:18 Are there reasons that they clearly couldn’t continue or might not?
0:08:21 And if they do continue, where do they lead?
0:08:26 I think whether you see this self-improvement thing, that’s very hard to do from a sort of
0:08:28 trend extrapolation basis, right?
0:08:35 Like, currently, AI stuff does help AI R&D at least a little in terms of stuff like coding
0:08:38 or selecting your data sets and creating those, whatever.
0:08:43 But it’s quite hard to actually measure, and it’s not really helping in some big way, like
0:08:46 this kind of self-improving thing would suggest.
0:08:49 There are reasons that you might think it could be very hard.
0:08:55 People have discussed before how possibly, you know, if stuff just depends a lot on scaling
0:09:00 up compute, then maybe automating a load of the R&D isn’t that helpful.
0:09:05 I find that somewhat compelling, but I think it’s also just, it’s pretty uncertain.
0:09:11 It’s hard to speculate about something that’s quite out of regime like that.
0:09:17 One thing that needs to happen in order for a software-only singularity to occur is you
0:09:22 need to be in this world where scaling up the amount of researcher R&D time, basically, allows
0:09:28 you to, like, improve AI enough that it makes up for the lack of being able to scale experimental
0:09:29 compute or pre-training.
0:09:34 I think that something you would expect to see if this were the case is maybe not that much
0:09:38 experimental compute being used in practice, and instead all of the money is going towards
0:09:38 researchers.
0:09:42 Now, there’s a very good case that there’s a very large amount of money going towards
0:09:42 researchers.
0:09:48 But as far as we can tell, experimental compute, which you seem to need to do research, is receiving
0:09:53 a similar amount of money; in fact, it’s receiving many times more money than the
0:09:56 final training runs of the models that are actually being released.
0:10:02 This, in my mind, is a strong update towards, oh, you need to do very large-scale
0:10:03 experiments to do research.
0:10:09 And that we don’t really have good evidence that researchers and just researchers would
0:10:11 be able to speed things up without doing more experiments.
0:10:15 However, there are, like, pretty good arguments on either side of this.
0:10:19 I tend to lean towards, no, you actually need to do more experiments, and that means you can’t
0:10:21 get this software-only singularity.
0:10:24 But I don’t think the people who claim otherwise are, like, crazy.
0:10:29 I think they’re making some, like, they have, like, very reasonable differences, and we’re
0:10:33 both speculating on something where the data is currently pretty sparse.
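To make the shape of that argument concrete, here is a minimal toy model in Python. It is my illustration, not Epoch's actual analysis; the min() form and the exponents are arbitrary assumptions. It simply encodes the claim that progress is limited by the scarcer of researcher labor and experiment compute, under which automating researchers alone buys little.

    # Toy bottleneck model (the functional form and exponents are assumptions).
    def progress_rate(researcher_labor, experiment_compute, a=0.5, b=0.5):
        """Progress limited by whichever scaled input is scarcer."""
        return min(researcher_labor ** a, experiment_compute ** b)

    print(progress_rate(1, 1))    # baseline: 1.0
    print(progress_rate(10, 1))   # 10x researchers, fixed experiment compute: still 1.0
    print(progress_rate(10, 10))  # scale experiments too: ~3.16

If experiments really are load-bearing, only the last case speeds research up, which is the reading that the spending pattern described above suggests.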
0:10:39 Actually, related to that, like, what do you think about some of the
0:10:42 exploration that researchers are trying? I mean, obviously, like, people are exploring a lot
0:10:45 with RL, trying to go beyond verifiable domains.
0:10:50 And what do you think about the argument, for instance, that gradient descent is really
0:10:53 good on learning in the current data set that you’re giving, right?
0:10:58 And if you keep training this over and over, it’s going to start forgetting things that
0:10:59 it was trained before, right?
0:11:00 Like, catastrophic forgetting.
0:11:03 And there is this argument, right?
0:11:05 Like, well, kids don’t learn that way.
0:11:07 Or, like, maybe there’s some imitation learning that kids do.
0:11:10 Maybe there’s some sort of exploration that they do.
0:11:12 And I wonder what you think about it.
0:11:16 I mean, and it sounds right, like, if kids really did just learn via imitation learning,
0:11:19 I think parents would have a great time just raising kids.
0:11:23 But it seems like the reason why they have such a hard time raising kids is because they
0:11:24 explore all these different things.
0:11:28 What do you think about it in terms of the algorithms and, like, the things we need to
0:11:31 keep improving these models over and over beyond the data and the compute?
0:11:37 I am cautious about comparing, like, how AIs learn to how humans learn.
0:11:40 Not because I don’t think they are comparable, but because I think we
0:11:44 know a lot more about how AIs learn right now than we know about how humans learn.
0:11:48 And people like making sort of assumptions about how human learning works and saying,
0:11:49 oh, yeah, it doesn’t do it that way.
0:11:50 And I don’t know.
0:11:51 Maybe that’s true.
0:11:54 Maybe human kids learn via RL.
0:12:03 I think that, yeah, I don’t have strong opinions on whether or not, like,
0:12:07 you know, you need to change to a method that’s more like what we think kids do right now.
0:12:10 I suspect people will find some method that works to use the compute available because
0:12:12 they’ve been able to do this in the past.
0:12:16 Yeah, I’m also sort of reluctant.
0:12:23 I guess as well, it’s one of those things where when we point to particular issues, like the
0:12:31 example of catastrophic forgetting, it’s sort of, well, OK, but as we’ve scaled up, we have
0:12:35 managed to do quite well at having models that remember more and more things.
0:12:38 This isn’t to say that, hence, the problem is solved.
0:12:39 Hence, we’re done.
0:12:42 Hence, no new innovations are necessary, or anything like that.
0:12:45 But I’m not exactly going to write it off.
0:12:55 Yeah, I definitely don’t think we’ve seen any slowdown yet in capabilities from any of these
0:12:56 concerns people have.
0:12:58 I think that people always have these sorts of concerns.
0:13:05 I’m reluctant to believe any given one of them until this actually shows up in numbers I can
0:13:08 see on a graph, which I just don’t think has happened yet.
0:13:16 Dario Amodei of Anthropic said in March 2025 that within six months, AI would write 90% of code.
0:13:18 And of course, that hasn’t happened yet.
0:13:22 He also said we have, you know, we could have AI systems equivalent of a country of geniuses
0:13:25 in a data center as soon as 2026 or 2027.
0:13:30 How do you evaluate why Anthropic is so bullish or what is the crux of difference between what
0:13:32 they believe and perhaps what you believe?
0:13:41 My model, at least, which I don’t know if it’s right, but what it is, is that they think
0:13:47 a bit more like the people who believe in you automate R&D and that gives you very quick
0:13:48 takeoff.
0:13:53 So they see it as like, yep, we’re working on these AIs that are great for kind of research
0:13:54 engineering type coding.
0:14:00 And at some point, they’re going to be useful and that’s going to rapidly accelerate us to
0:14:01 develop the next ones.
0:14:03 And then it’s going to be quick progress.
0:14:14 Yeah, I think it’s hard to tell. I don’t think we’ve gotten a lot of
0:14:19 evidence that these sort of views of a software-only takeoff are wrong. Insofar
0:14:22 as, like, that it’s taking a little bit longer to get to, like, the minimum level of competence
0:14:25 for AI to get you there, that definitely seems to be the case.
0:14:32 But it, I don’t know, it’s hard to tell the extent to which we’ve actually had significant
0:14:32 updates on this.
0:14:37 I know Dario often qualifies what he says by like saying as soon as or something like
0:14:38 this.
0:14:44 So these are, like, maybe more so the faster timelines he gives, although I’m not
0:14:44 sure.
0:14:50 Yeah, there has also been, I think, sort of, you know, Talmud-style commentary where people
0:14:55 are carefully looking at his exact wording, and then at the wording of other people’s discussion
0:15:01 of how many lines of code at some teams at Anthropic are generated by
0:15:04 Claude Code, and whether this does or doesn’t satisfy what he said.
0:15:06 So it gets a bit tricky.
0:15:13 Yeah, I remember there was the uplift paper that was claiming that actually
0:15:14 models would slow you down.
0:15:17 But I think like it mattered a lot what models they were using at the time because I think
0:15:20 they were pretty outdated by the time the report came out.
0:15:23 And I mean, in my personal experience, you definitely become way faster.
0:15:29 And it just does so much more for you, like, just having the whole context on your
0:15:29 code base.
0:15:33 That’s such a huge advantage that I think for a human just would be really hard to do.
0:15:39 I mean, far more than 90% of the code I write is written by AI these days.
0:15:44 But I know I’m not like the average coder at all.
0:15:49 But it’s definitely, I don’t think it’s like a wild prediction at this point
0:15:50 that 90% of code is going to be written by AI.
0:15:57 I mean, for all I know, somewhere at OpenAI, there’s someone just, you
0:16:04 know, doing AlphaCode-style evolutionary algorithms with tons and tons of trials, trying
0:16:06 to, you know, million-shot some hard problem.
0:16:12 But it’s just like, it’s really unclear how many lines of code are actually being written
0:16:12 by AI right now.
0:16:14 I don’t think it’s such a wild claim.
0:16:20 By a lot of people’s intuitive sense, in terms of, like, oh, is 90% of the job of a programmer
0:16:21 being done by AIs?
0:16:23 Definitely not.
0:16:27 But there’s this more complicated sense of like, how much is being written by AI?
0:16:31 Probably not 90%, but it’s hard to tell.
0:16:35 Yeah, and I think that is a very meaningful distinction.
0:16:40 You know, like, if you were to measure how many lines of code are being written, quote unquote,
0:16:43 by like tab completion, then it’s probably quite high.
0:16:49 But you don’t necessarily expect that that’s taking on that much of the programmer’s really
0:16:50 hard work.
0:16:55 That uplift paper that you mentioned, like, I find it really interesting and really good.
0:16:57 And it’s also surprisingly recent in a way.
0:16:59 Like, you know, you mentioned, ah, the models are outdated.
0:17:01 But I mean, this was early 2025.
0:17:04 So these were models that people actually did think were helping them.
0:17:08 And in the paper, they even got them to say ahead of time, like, how much do you think
0:17:09 this will speed you up?
0:17:11 And they said, yeah, I think however much.
0:17:15 They then asked them afterwards, how much do you think this sped you up?
0:17:16 And they’re like, yeah, yeah, it sped me up.
0:17:22 And I feel it does reveal, actually, like, it might be hard for us to judge whether we were
0:17:23 sped up or not.
0:17:27 Yeah, one thing that might be happening here is that a lot of the code that’s getting written
0:17:29 by AI is code that wouldn’t have been written otherwise.
0:17:32 So it’s not really speeding up things that would normally happen.
0:17:37 But, you know, there’s a lot of simple graphs or simulations I run that might have not gotten
0:17:38 written otherwise.
0:17:47 And so it’s hard to tell exactly what’s going on here in terms of the impacts.
0:17:51 I think at the end of the day, the most reliable indicator here is going to be how much money
0:17:55 these people are making from programmers and from, you know, subscriptions in general.
0:17:56 And it’s a lot of money.
0:18:00 I think there’s definitely indications that people are finding a use for them and probably
0:18:06 a decent amount of that use is for coding, but not exactly for the metric of doing 90%
0:18:07 of an existing coder’s job.
0:18:08 Yeah.
0:18:14 Balaji has this phrase that’s been being used a lot, which is AI isn’t end-to-end.
0:18:16 It’s middle-to-middle.
0:18:21 Which is maybe meant to imply that, you know, we’re going to need a lot more human
0:18:24 involvement than some people, you know, typically think.
0:18:32 What is your mental model of what AI is going to do for labor markets, either on the sort of
0:18:36 lower end and on the higher end in the next, you know, decade, let’s say?
0:18:43 Oh, in the next decade, like, on the higher end, I’m definitely like, you know, probably
0:18:44 I expect new jobs to be created.
0:18:46 Everyone can still be influencers.
0:18:53 But on the higher end, it’s like, there are not very good individual things that you can
0:18:58 point to where it’s very obvious that AI can’t automate that job at this point.
0:19:03 Now, you could argue, okay, but there’s some unknowns, and I think it’s, like, pretty reasonable.
0:19:09 But with those unknowns, sometimes, you know, AI gets up against its limits, and we figure out
0:19:12 what they are, and then it later surpasses them.
0:19:16 And I don’t know, at the higher end, it definitely seems plausible that it could just automate
0:19:21 all of the, basically all of existing jobs, with the exceptions of ones that require manual
0:19:23 labor, that people actually care about being done by a human.
0:19:31 It just, like, does not seem at all implausible to me that that can happen, or that that could
0:19:39 happen very fast, with the caveat there being, like, there’s probably some regulatory pushback
0:19:39 if that happens.
0:19:45 On the lower end, I don’t know, it could just, you know, could be a bubble and doesn’t have
0:19:45 any impact.
0:19:51 The thing I talk about when I’m talking about, like, the, like, interesting scenario to think
0:19:55 about, which I’m not, I don’t know, you know, 20% chance, 30% chance something like this will
0:20:00 happen in the next decade is, like, you know, a 5% increase in unemployment over a very short
0:20:07 period of time, like six months, due to AI being released, is something that I think will have
0:20:12 a very substantial impact on the world, both in terms of how people think about AI and sort
0:20:15 of how much attention it gets, and seems plausible to me.
0:20:17 But, you know, far from guaranteed.
0:20:25 Yeah, I think I strongly agree with being just highly uncertain.
0:20:31 It seems very plausible to me that you end up more or less kind of, you know, this generation
0:20:34 actually is exactly where we run out of progress.
0:20:36 It would be kind of crazy, but it could happen.
0:20:43 And then it’s like, oh, okay, everything is very much just generating more jobs for technical
0:20:48 people to try to integrate it into doing kind of useful but janky things for all of the
0:20:49 existing work people do.
0:20:57 The stuff where it kind of becomes a crazy runaway thing that you can, yeah, really automate large
0:20:59 swathes of remote work with.
0:21:04 I mean, my timelines are, I guess, a bit longer than the others’.
0:21:09 But yeah, I mean, it seems hard to rule out that something really big happens in a decade.
0:21:10 A decade’s quite a long time.
0:21:17 I think I would be surprised if there were not 5% of jobs that exist now, which AI has
0:21:20 automated away over the course of the next decade.
0:21:24 Honestly, I’d be surprised if it’s not 10% of the jobs that exist now, I think.
0:21:30 How fast that happens and, like, the extent to which those people find other jobs is something
0:21:38 which I don’t think I have seen compelling evidence for either way and probably depends on how fast
0:21:41 various things go and exactly what jobs are automated.
0:21:48 I think that 10% of current jobs seems like a pretty reasonable, it’s
0:21:51 not quite my lower bound, but, you know, a pretty reasonable number over the next decade.
0:21:55 But this might not show up in overall employment numbers, yeah.
0:21:57 This is interesting.
0:22:04 I mean, definitely, like, the kind of, to the extent there is a mainstream economics view of
0:22:10 this stuff, it would probably be that automation happens at the level of tasks rather than occupations.
0:22:14 And occupations can, as a result, you know, go down quite a bit.
0:22:20 But a lot of the time you’re automating these, like, similar tasks across lots of jobs.
0:22:22 I think this is compatible with what you’re saying.
0:22:25 It’s just that some jobs get really hit by it.
0:22:26 I don’t know.
0:22:28 I find it, yeah, quite hard to think about.
0:22:34 I’m not sure what the, even the historic base rate for kind of jobs ceasing to exist is.
0:22:38 I know there are problems with this, like the historic employment data series.
0:22:43 There is actually quite a high, I believe, base rate of just the tasks in a job changing,
0:22:47 jobs themselves changing, jobs kind of going away, coming in.
0:22:51 So, yeah, even this 5% thing, I don’t know what to think.
0:22:55 Yeah, that would be like a big effect, or, kind of, yeah, that’s actually roughly the size of
0:22:57 the effect you’ve already seen from something like software.
0:22:58 I don’t know.
0:23:03 Yeah, probably 5% of jobs that existed before software no longer exist.
0:23:06 It seems pretty reasonable.
0:23:09 But I’m not confident of this.
0:23:12 It’s definitely something which, like, I don’t know.
0:23:17 I expect, especially if revenue trends continue, I expect to know a lot more about this in a
0:23:22 couple, in a year or two, probably within the next year, because it will just be the case
0:23:29 that, okay, we will have AIs earning enough to be like a substantial part of the economy.
0:23:33 If it’s not showing up in unemployment, then we’ve learned something about what it’s doing.
0:23:36 We’ve learned that, like, it’s able to do this without showing up in unemployment numbers.
0:23:39 Or maybe it will show up in unemployment numbers and we’ll see exactly what.
0:23:43 There’s been, like, some early work looking at, like, indicators of this.
0:23:52 There’s a lot of things that complicate looking into this because interest rates also have effects
0:23:55 on, like, the sort of things you might care about or just, like, normal churn.
0:24:00 Or also it’s possible that tech companies, you know, maybe they’ll lay off a bunch of programmers
0:24:02 so that they have the capital to build data centers.
0:24:05 And are those programmers being laid off because of AI?
0:24:07 I don’t know.
0:24:08 Maybe.
0:24:14 If you had a kid that was a freshman in college and they were asking, hey, you know, what should
0:24:16 I major in if I want to have a great career?
0:24:17 You know, what might you tell them?
0:24:20 And if they asked you about, you know, computer science or math or, you know.
0:24:21 Prompt engineer.
0:24:22 Yeah, exactly.
0:24:23 Yeah, what would you say?
0:24:29 I mean, I’d probably say not prompt engineer.
0:24:34 I think in general, people get better at using AI, and AI is very easy to use.
0:24:38 Yeah, I think it’s a good question.
0:24:44 I think if they’re majoring in something like programming or computer science,
0:24:47 the thing that they should be looking for is
0:24:51 that the skills that are going to be useful are not
0:24:52 going to be knowing a programming language.
0:24:55 It’s going to be more general purpose skills.
0:25:03 Ability to, like, work with other people, communication skills, this sort of thing.
0:25:06 I don’t really know entirely if this points to a particular major.
0:25:11 Most majors are probably not majors that are, like, actually relevant for your job.
0:25:17 Yeah, I guess I’d sort of be like, well, there’s not too much that you can do to plan
0:25:19 around the super crazy futures.
0:25:24 So I guess go for something that you’re passionate about that’s useful in the world, but don’t
0:25:26 go crazy in that way.
0:25:30 I actually think that, yeah, computer science, maths, if you’re passionate about them, they’re
0:25:35 very good because you’ll learn interesting things that are valuable in many worlds.
0:25:37 But I don’t know.
0:25:41 I gave advice to a younger relative recently and they chose to study drama instead.
0:25:49 I do think that, you know, one of the things that if you have a better time in college, that’s
0:25:51 like four years of your life you’ve had a better time during.
0:25:57 And at the end of the day, like, you know, if it’s a crapshoot, which of those things is
0:25:59 actually going to give you a better time in the future?
0:26:02 Planning for the present is a lot easier.
0:26:06 I mean, it’s definitely become really hard to know, right?
0:26:09 I mean, I remember like the prompt engineer was obviously a joke because everyone believed
0:26:13 two years ago that that was sort of some sort of viable thing.
0:26:19 And obviously models are phenomenally better at like just being great prompters.
0:26:23 So obviously that’s kind of like one thing that has been happening.
0:26:27 It’s just really hard to predict what’s happening as these models keep getting better.
0:26:32 One question that I have related to this is obviously code is such a big market and it
0:26:34 has had such a big impact.
0:26:38 One that I’m very excited about, but it’s still much earlier, I think is computer use, right?
0:26:42 It’s basically automating all the digital tasks that you’re doing in your computer.
0:26:47 And there’s very few benchmarks around this, like whether it’s Web Arena or the Always World.
0:26:50 And you talk a little bit on your report about benchmarks.
0:26:53 Curious on like, what do you think is missing in that space?
0:26:58 It’s like, why haven’t we seen yet that moment, like the moment, for example, when Sonnet 3.5
0:27:04 came out, or Claude Code, or Codex, where we saw significant improvement on coding in general.
0:27:07 We haven’t had that moment for computer use.
0:27:08 What do you think is missing there?
0:27:10 Interesting.
0:27:15 I mean, there have been improvements on computer use for sure.
0:27:22 I do have, I mean, this, maybe I’m going out on a limb here slightly, but also I do think
0:27:28 that there is a sense in which models are a little bit artificially hobbled by their vision
0:27:29 capabilities.
0:27:36 Like it does seem as if a common pattern you see when you try to get models to do stuff with a GUI
0:27:39 is they kind of get a bit confused about manipulating it.
0:27:45 And, you know, in a way where it’s like, okay, this is interacting with your general propensity
0:27:50 to get confused, as you would in, like, difficult long coding problems.
0:27:54 But it’s kind of exacerbated because like, you’re not able to just easily look back on
0:27:56 the thing and see kind of, oh, I was wrong.
0:28:01 You instead go down like some awful dead end of just, I’m just going to click this again
0:28:02 and again and again.
0:28:05 So I think that’s part of it.
0:28:11 I think there is something here also probably about kind of long-context coherence stuff,
0:28:16 like those tokens to represent the GUI are pretty big.
0:28:22 And then you’re filling up your context window as you go with like, oh, yeah, well, I had all of this stuff that’s happened before.
0:28:28 And you seem to just run into a kind of spiral of increasingly less sensible outputs.
0:28:31 So I feel like these are two of the big things, but I don’t know if that answers your question.
0:28:38 I found computer use, I don’t know, this was the first year I found computer use actually useful.
0:28:46 We use ChatGPT agent in our data center research, because a lot of what we have to do is find permits,
0:28:53 which are all going to be on janky county by county databases of air permits for, you know,
0:28:55 the county that Abilene, Texas is in.
0:29:01 And I don’t know what databases exist for every county in the U.S.
0:29:04 ChatGPT does.
0:29:11 Normal ChatGPT can’t search them because it’s these, you know, these actual user interfaces.
0:29:15 You can’t just search them with, you know, URLs because they definitely don’t work that well.
0:29:21 And it’s able to navigate this such that I can just ask it to find me permits on a data center in a particular city,
0:29:28 and it will come back with air pollution permits and like tax abatement documents and all of this stuff that let me learn a huge amount.
0:29:34 And this is just like because of the improvements we’ve seen in computer use over the past year or so.
0:29:39 I’m excited to, yeah, I think it’s just going to get better from there,
0:29:42 but I’ve definitely found it starting to get to the point where it’s actually useful.
0:29:54 What’s your mental model more broadly for what is going to happen to productivity or just sort of economy statistically in general?
0:29:59 Are you, some people say GDP growth would be, you know, 5%, I think it’s the Tyler Cowen view.
0:30:07 I think some people would say, no, no, we should get up to 10% of growth or maybe even higher if we truly have AGI in terms of how we understand it.
0:30:09 What’s your model of what happens to the productivity?
0:30:27 I think my kind of baseline guess would be, you know, I forecast out: if revenue keeps growing the way it has, then in theory, for it to be worth spending that much on, you know, those chips to do that inference,
0:30:31 you should be getting something kind of similar to that value out of those chips by then.
0:30:37 So then you could just draw from that, kind of like, oh, okay, extrapolating to 2030.
0:30:45 And I think it was in the report, I don’t know, I calculated it, but I think it was on the order of, like, a percent kind of GDP increase.
0:30:46 That’s in a few years, right?
0:30:48 That’s not presuming AGI.
0:30:59 That’s presuming, like, if NVIDIA’s revenues keep, like, growing as they sort of previously have, and you assume that they make roughly as much compute from it as before, and so on.
0:31:07 If you actually get something, I mean, AGI is like, yeah, people use it to be umpteen different things.
0:31:18 I think if you actually get something that can do any task that humans can do remotely, then presumably you see a lot of growth.
0:31:24 It feels sort of difficult to guess exactly what kind of a lag you’re going to see.
0:31:28 I think there’s reasons to think, oh, well, maybe people will be slow to adopt stuff.
0:31:30 How do they learn to trust it?
0:31:30 Whatever.
0:31:33 There’s other reasons to think, well, they’re already using these technologies.
0:31:41 A lot of it might actually be quicker than most would guess, and indeed adoption has been quicker for LLMs than for many previous technologies.
0:31:47 So, yeah, I think it sort of gets hard at that point to model.
0:31:53 At some point on our site, we had some rough numbers where it was stuff like, what if you, you know, doubled the virtual labor force?
0:31:55 What if you 10x’d it?
0:31:55 Whatever.
0:31:58 Then you see these like crazy GDP boosts.
0:32:05 I don’t know whether that’s the most reasonable way to think about it.
0:32:22 I sort of, I think a lot of it comes down to whether you imagine that, like, yeah, you really get something that can do everything, versus you get something first that can do a meaningful fraction of remote tasks, but maybe can’t do, like, an entire bucket of them, and then it bottlenecks you more.
0:32:37 So I guess it’s, again, this thing of like, my best guess on current trends is this fairly well defined, you know, few percent of GDP in 2030 thing, which is already pretty crazy by economic standards.
0:32:43 But then once you go much further, it’s like, God, you know, my predictions are just going to be even crazier.
0:32:45 I’m reluctant to make them.
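As a concrete version of that extrapolation, here is a back-of-the-envelope calculation in Python. The starting revenue and the GDP figure are stand-in assumptions, not numbers from the Epoch report; the doubling comes from the "revenue doubling every year" trend discussed earlier.

    # Back-of-the-envelope: AI revenue doubling yearly vs. US GDP (illustrative numbers).
    ai_revenue_busd = 20.0     # assumed annualized AI revenue in 2025, $B
    us_gdp_busd = 28_000.0     # rough US GDP, $B (held fixed for simplicity)

    for year in range(2025, 2031):
        share = 100 * ai_revenue_busd / us_gdp_busd
        print(f"{year}: ${ai_revenue_busd:,.0f}B (~{share:.1f}% of US GDP)")
        ai_revenue_busd *= 2   # "revenue doubling every year"

Under these assumptions you land in the low single-digit percent of US GDP by 2030, the same order as the "percent kind of GDP increase" mentioned above; the takeaway is mostly how sensitive the endpoint is to the growth rate continuing.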
0:32:50 I am going to be slightly less reluctant and make some claims.
0:32:51 That’s what we’re here for.
0:32:56 Assuming in the next 10 years we get AI that is capable of doing any remote job as well as any human.
0:33:02 I think, you know, 30% GDP growth seems like a lower bound on something that’s reasonable.
0:33:09 Assuming you get that, and this is a big assumption, you know, there’s a lot going on in that assumption.
0:33:17 But assuming that happens, I think you either are going to get like 30% GDP growth or, you know, negative 100% GDP growth because everyone’s dead.
0:33:25 It’s just, like, you know, at the end of the day, it seems like you’re going to have AI that can scale.
0:33:29 And if you have AI that can scale there, you can probably have AI that scales even farther.
0:33:55 Right now, I think the, like, economic models I have seen of what happens if you get this sort of full replacement, where you can automate any job, either show this sort of extremely fast, wild takeoff, or, you know, you have some people attempting to model this who then, when you look down through the paragraphs, say something
0:34:16 like, assuming AI is as capable as GPT-3. You know, I think the smaller numbers are either nearer-term predictions or predictions that aren’t looking at the upper end of what sort of capabilities you might see in the next 10 years.
0:34:28 Yeah, I mean, it does seem hard to imagine a world where you have this supply of virtual labor that literally can do any stuff that humans can do, and then it doesn’t lead to crazy things.
0:34:29 I definitely agree with that.
0:34:34 I guess perhaps maybe some sort of a, I don’t know, a heavy regulation situation.
0:34:42 But there are, yeah, I think there exist worlds in which things don’t go crazy after that.
0:34:56 It does seem like those worlds are not in an indefinite stable state, but, you know, it’s not impossible, but it does seem like the default there is you either go crazy up or you go crazy down.
0:35:03 And it’s probably going to be one of those two if you get to a world where it’s like genuinely AI can do any job as well as any human.
0:35:11 I don’t know, it seems wild to me to claim that, you know, your default case should be, you know, not super ridiculous changes.
0:35:16 It’s just like, that’s a lot of things that your AI can do right there.
0:35:21 And that’s like, yeah, it just like seems like it should have fundamentally changed the economy in one direction or another.
0:35:24 My intuition is a lot of the disagreement.
0:35:29 I mean, probably some of it does come down to sort of cached beliefs people already have.
0:35:40 But I do also think some of it is that when people talk about like, oh, yeah, AGI, AI that can do a remote job, whatever, even though we feel like we’re talking about the same thing, maybe sometimes we’re not.
0:35:41 I don’t know.
0:35:46 I’ve certainly had examples of conversations where it’s like, yeah, AI can do any remote job.
0:35:49 And then they discuss stuff that it can’t do, and the stuff that it can’t do,
0:35:52 it’s like, well, no, that’s also a remote job.
0:35:54 Like that’s the kind of thing people currently do.
0:35:56 So I think there is some of this.
0:36:06 What do you think, I mean, you talk about benchmarks in your report, but I wonder, like, 2027, 2028, what are going to be the right benchmarks for measuring the progress?
0:36:12 Not so much economic growth, more the capabilities of the model, like the intelligence of the model.
0:36:21 Like we had in 2012, AlexNet, obviously that got solved long ago, but that was probably not a measure of AGI by any means.
0:36:24 Do you think the same would happen with the current benchmarks we have?
0:36:30 So Sweebench, MLU, let’s say we maxed out on those benchmarks.
0:36:31 What comes after that?
0:36:33 How do we measure that?
0:36:36 Is it sort of like GDP growth with these models?
0:36:38 Is it sort of breakthroughs in science?
0:36:40 How do you think is the right measure going forward?
0:36:45 Yeah, I mean, I think most of what we have is likely to be solved.
0:36:49 And indeed, the examples you gave are like pretty close already.
0:36:52 Like, I don’t know, MMLU is basically solved.
0:36:58 SWE-bench is, like, possibly close; depends a bit on how ambiguous some of the questions are.
0:37:01 There’s some details, but it’s really getting there.
0:37:04 I mean, I think some directions are obvious.
0:37:09 You kind of do similar things, but harder and a bit better and try to make them a bit more realistic.
0:37:11 And people are doing this.
0:37:18 There are harder software benchmarks that people have made more of an effort to try to curate and cover larger tasks, for example.
0:37:24 I think there’s also perhaps some question of kind of budgets involved.
0:37:30 I do think there’s this kind of thing where like, obviously, if you just burn money, it doesn’t intrinsically make the benchmark better.
0:37:37 But probably you are going to see something where you’re just going to have to devote more resources on average to them.
0:37:46 Like, if you’re trying to prove a sort of higher level of capabilities to a higher standard of proof, probably it’s going to involve kind of more effort in developing them.
0:37:55 I do also think, though, you’re going to see examples of, you know, relatively small kind of small numbers of things that are just very impressive.
0:38:07 And these are also a valuable signal, like when you see LLMs being able to do things like, oh, yeah, I just refactored this entire code base and it was really useful.
0:38:10 Then this is going to be useful.
0:38:16 And even if it’s not yet formalized into a benchmark, if you’ve seen it for yourself, it’s going to be kind of useful for you as evidence.
0:38:22 And then people are probably going to make benchmarks that cover things like this to try to systematize them.
0:38:32 I want to go back to our question on timelines, and I want to ask you about a few different sort of milestones and get your perspective on timelines there.
0:38:37 So first is, what is a rough timeline for a major unsolved math problem being solved by AI?
0:38:44 I actually wondered, yeah, because you had a few of these that you said trust to look at.
0:38:49 When you say that it solves this, I mean, is this unassisted entirely?
0:38:56 Or is it kind of a news, you know, report or someone tweets that, hey, like I dumped this at GPT and it solved it?
0:38:57 And what counts as major?
0:39:08 Something that we would all agree, like a substantive, you know, version of it, not a, you know, just an anecdotal, you know, person describing it.
0:39:11 But does it have to solve it on its own?
0:39:13 Yeah, let’s go with that.
0:39:14 Sure.
0:39:15 Yes, unassisted.
0:39:27 Oh, yeah, because I mean, there’s already cases, it seems, of LLMs, you know, like, people are debating a little bit, but mathematicians who seem trustworthy are saying, like, wow, I used this and it was really helpful during my proof.
0:39:35 I would not be surprised if AI solves, like, a major unsolved math problem, like the Riemann hypothesis or similar, in the next five years.
0:39:41 I’m not going to say that, like, that’s my, you know, median case necessarily, but I definitely wouldn’t be that surprised.
0:39:47 It’s like, right now, it doesn’t look like math is that hard for AI.
0:39:52 It’s just like, some things turn out to be hard and some things don’t.
0:39:57 And math is just like one of the domains where it’s, RL seems to work pretty well.
0:40:03 In most other domains, it’s not at the point where it’s, like, useful to a full professor
0:40:08 to the same extent I think it is for math, or getting very close to for math.
0:40:18 Yeah, and also it’s like very unclear to what extent certain capabilities that it has unusually well might actually turn out to be very, very useful.
0:40:26 Like maybe it’ll turn out that there’s like four papers out there that it knows about that have obscure results in them that when combined solve some big conjecture.
0:40:33 Which is the sort of thing that it like, might be much more feasible to figure out with AI than for a human to figure out.
0:40:35 Or something similar.
0:40:42 There’s a lot of uncertainty here, but it just like, does not currently seem like something that AI is actually going to struggle with.
0:40:51 People often make claims about it being like this, you know, intuitive deep thing that it would mean that AI has achieved something, some huge level of intelligence for it to solve.
0:40:56 I think in practice, this is just like, you know, making a piece of art, it turns out.
0:41:03 AI could just do that before it could do a lot of other, before it can, you know, remember things for more than a couple of days or whatever.
0:41:05 Yeah.
0:41:09 It turns out to be farther down the capabilities tree than people might have guessed.
0:41:24 Yeah, I think I’m also bullish, though I do think that, yeah, it’s one of those things where it’s tricky and you really probably do need to define it quite well to get a good forecast on it, to hope to get a good forecast on it.
0:41:35 Like, I don’t know, we’ve had this experience that with benchmarking mathematics, you know, we got mathematicians to come up with problems that I think aren’t as difficult as the kind of problems you’re talking about.
0:41:41 But nevertheless, they’re like, yeah, AI could solve this, it’d be like a big deal for AI progress, it would mean something to me.
0:41:43 And then AI has solved them.
0:41:47 And usually the response has been kind of like, oh, yeah, that updates me a bit.
0:41:54 Although, man, when I look at it, I just realize like, yeah, you can kind of brute force this, you can kind of choose this, you can get through.
0:42:02 And it’s a bit like, oh, okay, I mean, what if there’s a problem that for humans we consider, sort of, oh, this would be quite big.
0:42:05 And then, yeah, AI solves it, okay, ah, well, it solved it, whatever.
0:42:08 We sort of had this with chess decades ago, right?
0:42:12 Like, computers solved chess very well.
0:42:15 And everyone was thinking of this as the pinnacle of reasoning.
0:42:17 And then they did.
0:42:21 And everyone, as a result, kind of concluded, oh, well, of course, computers can do chess.
0:42:24 So, yeah, I don’t know, I…
0:42:31 I suspect that math is quite nice for AI to do.
0:42:40 I’m reluctant to go out and assert, like, oh, yeah, definitely AI is going to, like, solve some of the Millennium Prize problems in the next few years.
0:42:46 But it would not at all surprise me if it solves quite impressive-seeming things in the next few years.
0:42:50 What about a breakthrough in biology or medicine?
0:42:57 And we’ve already seen some of that with the, what’s it called, AlphaFold.
0:43:03 Math seems unusually easy for AI, I’m going to be honest.
0:43:09 So to the extent where I’m like, ah, is it going to do the same exact level of, like, oh, it on its own did this huge thing?
0:43:11 That seems to be a much bigger stretch to me.
0:43:31 It definitely seems plausible, but there’s a lot of other concerns there where it needs to, it needs to be able to, like, actually do experiments and get data and interact with the real world for a lot of these in a way that does not need to happen at all for math.
0:43:37 In particular, yeah, it’s just, they in fact seem farther off.
0:43:53 What seems more plausible to me is that we see, like, you know, it become ubiquitous to use AI tools in some sort of aspect of, like, biology or chemistry or something useful like that, such that certain aspects of it are enhanced.
0:44:02 It also is possible that AI will, you know, make incredible strides without, yeah, I think without humans, but it’s harder.
0:44:06 Yeah, I think, again, it’s a bit tricky for where you draw the line.
0:44:12 I mean, I think you’re not counting tools like AlphaFold, because if you were, then probably you’d argue that’s already happened, right?
0:44:16 Like, the inventors kind of won the shared Nobel Prize.
0:44:21 But, yeah, I mean, I guess there’s kind of different directions.
0:44:26 In biology, you could have AI being able to predict quite, you know, specific things like that.
0:44:37 Or you could have something that’s more general purpose, this so-called, like, co-scientist or whatever they want to call it approach, where it’s more about, like, oh, it was able to look through the literature and have good ideas.
0:44:40 And there’s different extents of human involvement.
0:44:45 There already seem to be some results where impressive stuff is happening.
0:44:55 I’ve not vetted them enough to really have a sense of, like, would this already count as having satisfied, yeah, the sort of level of impressiveness you’re looking for.
0:45:06 I sort of assume that finding things that end up being meaningful will happen pretty soon if it hasn’t already happened.
0:45:17 But then maybe there’s a question of kind of, okay, but is it doing as well as human researchers are actually, like, prioritizing the best few ones to work on?
0:45:23 I think most of these co-scientist results have probably had pretty involved humans prioritizing.
0:45:25 Though, again, I’ve not looked enough to say.
0:45:32 Lastly, how about for real superintelligence, for your definition of superintelligence?
0:45:43 I have, I think I am on the record as saying that the median timeline I discussed, or the modal timeline, sorry.
0:46:03 I think it’s more modal, yeah, which might be on the early side compared to where my median is, is, you know, 2045 was where, when I did the podcast with Jaime, we discussed, like, our forecasting breaking down, and everything going bananas is the terminology I have used.
0:46:22 And that, like, looks like superintelligence. I think that it’s, like, the case that if we get AI that can do every single job that a human can do, as well as any human can do that job, in the near future,
0:46:41 this, you know, means that scaling just works to get things much, much better, and probably means that you are not that many steps, that you are just a bit more scaling away from getting AI that could do things vastly better than humans.
0:46:52 Yeah, it gets hard to predict, and I think, as well, it gets to be one of these things where the predictions get a bit unmoored from the stuff that you can, like,
0:47:08 properly model. Like, my sort of, you know, guesses, my, like, judgmental forecasts, to use the fancy term, for just, kind of, can-do-any-remote-work-tasks, probably have a median of about, like, 20 to 25 years.
0:47:22 Um, I kind of struggle to imagine a world where that happens, and people are, like, deploying it, and doing research, and yet they’re not making further progress to being able to do stuff much better.
0:47:29 So, I guess it has to be, like, not too much longer after that for some definition of superintelligence.
0:47:35 But, yeah, all very uncertain, and, yeah, it seems to break down a bit.
0:47:48 You talk a lot about the progress in data centers, benchmarks, biology, and there was one interesting part that I noticed just in the field, that is robotics is making a lot of progress with, let’s say, world models, and, like, the physical space a little bit.
0:47:51 So, curious, like, what is your take here?
0:47:55 Like, what do you think? It seems like a lot of the problems in robotics can be solved purely with imitation learning.
0:47:59 You might not need, like, a lot of, sort of, like, breakthroughs in math or whatever.
0:48:02 Like, you can just basically learn it from a lot of data.
0:48:08 And I think in the last couple of years has been remarkable just in robotics and world models overall.
0:48:12 Curious for your take a little bit on this, and if you did some kind of research in this space.
0:48:19 So, we’ve looked into what sort of amount of compute is actually being used to, like, do these training runs.
0:48:34 And what we found is that the training runs that are being used for robotics are, like, 100 times smaller than the training runs that are being used for, like, frontier models.
0:48:37 And so, there’s a lot of scaling you can do there.
0:48:45 I don’t think that, plausibly until very, very recently, there have been serious attempts to gather data for robotics at a massive scale.
0:48:50 It’s just the case that you can hire a bunch of people to move around in motion capture suits if you need to.
0:48:53 And there haven’t been a lot of attempts to do that, although I think this might be changing.
0:49:21 I think of robotics as mostly a hardware problem, a hardware and, like, economics problem: if it costs $100,000 to build a robot, then, you know, it’s not necessarily better than a human who could work for $20,000 a year, or a very cheap human in certain countries, or, sorry, like, sort of minimum-wage labor in some countries that you might be able to afford.
0:49:27 It’s just not obvious to me that there is a software problem here.
0:49:32 The hardware, it does seem, like, unclear.
0:49:36 It’s very unclear to me how much of a hardware problem is left.
0:49:42 In particular, there’s certain tasks which robots might be able to do, but are they actually the tasks that you care about a robot being able to do?
0:49:50 If you want your robot to be able to, like, nimbly walk around while lifting up heavy things and moving fast and react, then that’s hard.
0:49:53 That’s a hardware problem that I don’t think they’ve seen solutions for yet.
0:49:57 Yeah, I think my impression roughly matches this.
0:50:06 It’s sort of, I don’t know, people fairly often talk about this distinction between remote work and physical work.
0:50:20 I think there’s this perception of robotics progress lagging behind a bit, and there even is some intuition that maybe, maybe this physical manipulation stuff is actually just harder.
0:50:24 But I wouldn’t conclude that with much certainty.
0:50:37 Like Yafa has said, it feels like you’d kind of also want to see, well, okay, what happens if it gets scaled up in a similar way, to even get a sense of, like, oh, okay, was it actually harder versus was it just deprioritized?
0:50:43 Is there anything we didn’t get to that you feel is important that we leave our audience with?
0:50:46 We did discuss the data center’s release we just did.
0:50:48 I’m not sure if there’s a good way to leave the audience with that.
0:50:49 Yeah, let’s get into it.
0:50:52 Okay, so you guys just released the data centers project.
0:50:56 Why don’t you talk a little bit about what you were trying to achieve there and what you hope people take from it?
0:51:01 Yeah, so we took 13 of the largest data centers we could find.
0:51:05 These include a few from each of the major labs in the U.S.
0:51:08 And we found permits.
0:51:12 We took satellite images, including new satellite images, of all these data centers.
0:51:20 We figured out how to determine how much compute is in them based on the cooling infrastructure they're building, as well as when they're coming online and their future timelines.
0:51:26 So we have this real-world data, and it's all available online on our website for free.
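To make the cooling-to-compute inference concrete, here is a minimal sketch of that style of estimate, in Python. This is not Epoch AI's actual model; every constant (the PUE, the per-accelerator power draw, and the per-accelerator FLOP/s) is an assumption for illustration.

```python
def estimate_cluster(cooling_mw: float,
                     pue: float = 1.2,          # assumed power usage effectiveness
                     gpu_watts: float = 1200.0,  # assumed all-in draw per accelerator
                     gpu_flops: float = 1e15):   # assumed dense FLOP/s per accelerator
    """Rough chain: cooling capacity -> IT power -> accelerator count -> FLOP/s.

    Cooling capacity roughly tracks total facility power, since nearly all
    power entering the building eventually leaves as heat.
    """
    it_power_w = cooling_mw * 1e6 / pue   # share of facility power reaching IT hardware
    gpus = it_power_w / gpu_watts
    return gpus, gpus * gpu_flops

# Example: a hypothetical 1 GW site
gpus, flops = estimate_cluster(cooling_mw=1000)
print(f"~{gpus:,.0f} accelerators, ~{flops:.1e} FLOP/s of raw compute")
```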
0:51:36 The goal is to give insight into this giant infrastructure buildup that's happening and the pace of it, and there are some things about it that surprised me a lot.
0:51:44 For instance, we learned that the most likely candidate to have the first gigawatt-scale data center is Anthropic, which would not have been my pick.
0:51:55 Anthropic and Amazon's New Carlisle Project Rainier development seems on track to come online in January, followed shortly thereafter by Colossus 2.
0:52:02 We also learned a lot about what the largest concrete plans are, rather than just marketing plans.
0:52:04 Some people will throw around numbers.
0:52:18 But the one we found that's actually seriously underway, has permits, and is setting up the electrical infrastructure is one by Microsoft in Mount Pleasant, which is going to be used by OpenAI, at least in part.
0:52:29 They're calling it Microsoft Fairwater, and that one's going to use not quite as much power as New York City, but I think more than half.
0:52:35 What’s stopping us from significantly increasing the cluster size?
0:52:38 Is it cost?
0:52:39 Is it supply lead times?
0:52:41 Are there any other engineering breakthroughs required?
0:52:51 I think people are approximately wrong that there's something stopping us; we're scaling up about as fast as there is money to scale up.
0:52:59 I suppose you could want all of the clusters literally today, but they're scaling up really quite fast.
0:53:11 You're seeing these data centers, like the Anthropic and Amazon one I mentioned, using nearly as much power as the capital of Indiana, the state where it's located.
0:53:23 The timelines on some of these, like Colossus 2, are two years or less, which is just an insane pace for building something that uses as much power as a city.
0:53:27 I think that, plausibly, you don't want to buy chips now.
0:53:29 You want to wait for there to be better chips.
0:53:43 There's a lot of noise about things being difficult to scale up, and I think this is because people are having to spend a little bit more than they would ordinarily have to spend.
0:53:51 You can't use the ordinary power pipeline, which is designed to deliver affordable infrastructure at a slow pace.
0:53:58 You have to buy things you wouldn't ordinarily have to buy and spend more than you ordinarily would, but not enough to slow things down.
0:54:04 All of these things pale in comparison to the cost of your GPUs.
0:54:13 So, my actual takeaway from a lot of this has been, oh, we’re not having too much trouble scaling up.
0:54:22 These plans are going really quite fast, and it’s not obvious that people would actually have the finances and desire to do them faster.
0:54:34 When people talk about energy as a major potential bottleneck, or about having to increase our energy capacity significantly, you're not worried that's going to be a durable, sustainable bottleneck?
0:54:35 That's right.
0:54:44 I think people like complaining because they can't just use the traditional plug-into-the-grid pipeline that delivers cheap, affordable power four years down the line.
0:54:51 At the end of the day, there are expensive technologies that exist right now.
0:54:54 You could pay for solar power plus batteries.
0:54:56 This has fairly short lead times.
0:55:02 It might cost twice as much as normal power, but that’s still way less than your GPUs, so you’re going to do it if you have to.
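A quick back-of-the-envelope check of that claim. Only the "twice as much as normal power" framing comes from the episode; the GPU price, lifetime, power draw, and electricity rates below are illustrative assumptions.

```python
# Illustrative assumptions throughout; only the "2x power price" framing
# comes from the discussion.
gpu_capex = 40_000.0        # assumed cost of one accelerator, USD
gpu_lifetime_years = 4      # assumed depreciation horizon
gpu_kw = 1.2                # assumed all-in power draw per accelerator, kW
cheap_rate = 0.05           # assumed cheap grid price, USD/kWh
solar_battery_rate = 0.10   # twice the assumed grid price

kwh_per_year = gpu_kw * 24 * 365          # ~10,500 kWh per GPU-year
power_cost = kwh_per_year * solar_battery_rate
extra_cost = kwh_per_year * (solar_battery_rate - cheap_rate)
capex_per_year = gpu_capex / gpu_lifetime_years

print(f"Power per GPU-year at 2x price: ${power_cost:,.0f}")     # ~$1,050
print(f"Premium over cheap power:       ${extra_cost:,.0f}")     # ~$525
print(f"GPU capex per year:             ${capex_per_year:,.0f}") # ~$10,000
# Even doubled power is around 10% of annual GPU cost, which is the guests' point.
```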
0:55:08 And you see people doing these sorts of emergency things that cost them a bit more to get their data centers running.
0:55:13 A common thing we see is people starting their data centers before they're connected to the grid.
0:55:15 I think Abilene was an example.
0:55:21 xAI's Colossus 1 is a prominent example of just finding ways around this that are expensive.
0:55:25 And you complain about it because it would be nice if you could do it the cheaper way.
0:55:28 And no one’s used to having to do it this expensive way.
0:55:40 At the end of the day, though, there seem to be enough solutions, especially if you're as willing to pay as people in AI are, that I don't really expect it to be a significant bottleneck.
0:55:43 Maybe we’ll close with this.
0:55:52 If these systems get as powerful as we're discussing, I'm curious how the political system is going to respond.
0:55:59 I'm curious if you're sympathetic to the Leopold Aschenbrenner view that there's some potential nationalization that occurs.
0:56:03 But how do you expect governments to respond?
0:56:10 It's kind of remarkable how absent it is from the political discourse, given how powerful it already is.
0:56:12 I’m curious how you think about that.
0:56:20 Calling back to what I mentioned earlier: the concept of a potential 5% unemployment increase in something like six months.
0:56:24 I think that the public’s reaction to this will determine a lot.
0:56:27 There will be very, very strong feelings about AI once this happens.
0:56:36 I think there will be a very strong consensus on what to do, on things that we don't normally think of as options people are considering.
0:56:44 I know when this happened with COVID, there was a several-trillion-dollar stimulus package passed in a matter of weeks, even days.
0:56:46 It was breakneck speed.
0:56:50 I don't know what that will look like for AI.
0:56:54 But I think it’s like everything else in AI.
0:57:03 It's exponential, which means it will pass from the point where people sort of care about it to where people really care about it quite fast, if things keep going.
0:57:17 I just don't know where we're going to end up, but I expect that wherever we end up, it will look like everyone suddenly agreeing to do some certain thing that we would have considered unimaginable a year ago.
0:57:19 And I don't know what that will look like.
0:57:29 It might look like nationalization, it might look like pausing, it might look like going faster or guaranteeing better unemployment benefits, who knows.
0:57:37 I just think there's going to be some sort of strong response, and it's going to happen very fast.
0:57:49 Yeah, you make the point that governments are maybe less interested than you'd expect right now, but the current impacts, I think, aren't really that large.
0:57:55 I feel like the attention is getting larger, but it’s not that AI, as of right now, is that powerful.
0:57:58 And yet governments are already talking about it a lot, right?
0:58:08 And you have people from various hardware manufacturers and AI companies meeting with heads of state, and countries talking about their AI strategies, things like that.
0:58:14 So clearly, national governments are going to be quite involved.
0:58:16 It’s just a question of how.
0:58:18 And yeah, I also am a bit unclear on that.
0:58:25 I think that, right now, we've seen this trend in revenue and finances, where it's been doubling or tripling every year.
0:58:35 And my default assumption is that attention that AI gets from policymakers and governments is going to follow a similar trend, where it will double and triple every year.
0:58:42 This means that in the future, if trends continue, there will be a huge amount of attention, and it means that right now there’s a lot more attention than last year.
0:58:52 But you don't suddenly skip from very little attention to all of the attention, although we are, I think, moving quite fast.
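As a worked illustration of that compounding: the 2x-to-3x annual rates are from the discussion, while the five-year horizon and the unit scale are assumptions.

```python
# Doubling-to-tripling per year, per the discussion; horizon is assumed.
for year in range(1, 6):
    low, high = 2 ** year, 3 ** year
    print(f"Year {year}: {low}x to {high}x today's attention")
# No single discontinuous jump, but five years of compounding spans
# 32x to 243x: "a lot more attention than last year," every year.
```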
0:59:01 I think we made enough predictions that we'll have to have you back at the end of the year to check in, see where we're at, and make predictions for next year.
0:59:03 Yafah and David, thank you so much for coming on the podcast.
0:59:05 Thank you.
0:59:05 Thank you.
0:59:06 Thanks so much for having us.
0:59:12 Thanks for listening to this episode of the A16Z podcast.
0:59:19 If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family.
0:59:23 For more episodes, go to YouTube, Apple Podcasts, and Spotify.
0:59:29 Follow us on X at A16Z and subscribe to our substack at A16Z.substack.com.
0:59:32 Thanks again for listening, and I’ll see you in the next episode.
0:59:47 As a reminder, the content here is for informational purposes only, should not be taken as legal business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
0:59:52 Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
0:59:59 For more details, including a link to our investments, please see A16Z.com forward slash disclosures.

Epoch AI researchers reveal why Anthropic might beat everyone to the first gigawatt datacenter, why AI could solve the Riemann hypothesis in 5 years, and what 30% GDP growth actually looks like. They explain why “energy bottlenecks” are just companies complaining about paying 2x for power instead of getting it cheap, why 10% of current jobs will vanish this decade, and the most data-driven take on whether we’re racing toward superintelligence or headed for history’s biggest bubble.

 

Resources:

Follow Yafah Edelman on X: https://x.com/YafahEdelman

Follow David Owen on X: https://x.com/everysum

Follow Marco Mascorro on X: https://x.com/Mascobot

Follow Erik Torenberg on X: https://x.com/eriktorenberg

 

Stay Updated:

If you enjoyed this episode, be sure to like, subscribe, and share with your friends!

Find a16z on X: https://x.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX

Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.

 

 

