Dwarkesh and Ilya Sutskever on What Comes After Scaling

0:00:06 Now that compute is big, compute is now very big, in some sense we are back to the age of research.
0:00:14 We got to the point where we are in a world where there are more companies than ideas by quite a bit.
0:00:22 Now there is the Silicon Valley saying that says that ideas are cheap, execution is everything.
0:00:25 What is the problem of AI and AGI?
0:00:28 The whole problem is the power.
0:00:34 AI models look incredibly smart on benchmarks, yet their real-world performance often feels far behind.
0:00:38 Why is there such a gap and what does that say about the path to AGI?
0:00:48 From the Dwarkesh podcast, here’s a rare long-form conversation with Ilya Sutskever, co-founder of SSI, exploring what’s actually slowing down progress toward AGI.
0:00:55 Dwarkesh and Ilya dig into the core problems in modern AI systems, from why RL and pre-training scale so differently,
0:01:00 to why generalization, reliability, and sample efficiency still fall short of human learning.
0:01:07 They also explore continual learning, value functions, superintelligence, and what a future economy shaped by AI might look like.
0:01:10 You know what’s crazy?
0:01:11 I know.
0:01:12 That all of this is real.
0:01:13 Yeah.
0:01:14 Meaning what?
0:01:15 Don’t you think so?
0:01:16 Meaning what?
0:01:18 Like all this AI stuff and all this Bay Area.
0:01:19 Yeah, that it’s happened.
0:01:22 Like, isn’t it straight out of science fiction?
0:01:23 Yeah.
0:01:27 Another thing that’s crazy is like how normal the slow takeoff feels.
0:01:35 The idea that we’d be investing 1% of GDP in AI, like I feel like it would have felt like a bigger deal, you know?
0:01:36 Where right now it just feels like…
0:01:38 We get used to things pretty fast, turns out, yeah.
0:01:42 But also it’s kind of like it’s abstract, like, what does it mean?
0:01:44 What it means is that you see it in the news.
0:01:45 Yeah.
0:01:48 That such and such company announced such and such dollar amount.
0:01:48 Right.
0:01:50 That’s all you see.
0:01:51 Right.
0:01:54 It’s not really felt in any other way so far.
0:01:56 Now, should we actually begin here?
0:01:57 I think this is an interesting discussion.
0:01:57 Sure.
0:02:06 I think your point about, well, from the average person’s point of view, nothing is that different will continue being true even into the singularity.
0:02:07 No, I don’t think so.
0:02:08 Okay, interesting.
0:02:21 So, the thing which I was referring to, not feeling different, is, okay, so such and such company announced some difficult to comprehend dollar amount of investment.
0:02:21 Right.
0:02:24 I don’t think anyone knows what to do with that.
0:02:24 Yeah.
0:02:30 But I think that the impact of AI is going to be felt.
0:02:33 AI is going to be diffused through the economy.
0:02:35 There are very strong economic forces for this.
0:02:39 And I think the impact is going to be felt very strongly.
0:02:41 When do you expect that impact?
0:02:46 I think the models seem smarter than their economic impact would imply.
0:02:48 Yeah.
0:02:53 This is one of the very confusing things about the models right now.
0:03:01 How to reconcile the fact that they are doing so well on evals.
0:03:05 And you look at the evals and you go, those are pretty hard evals.
0:03:05 Right.
0:03:07 They are doing so well.
0:03:13 But the economic impact seems to be dramatically behind.
0:03:22 And it’s almost like, it’s very difficult to make sense of how can the model, on the one hand, do these amazing things.
0:03:32 And then on the other hand, like, repeat the same mistake twice in some situation. An example would be, let’s say you use vibe coding to do something.
0:03:35 And you go to some place and then you get a bug.
0:03:38 And then you tell the model, can you please fix the bug?
0:03:39 Yeah.
0:03:42 And the model says, oh my God, you are so right.
0:03:42 I have a bug.
0:03:43 Let me go fix that.
0:03:44 And it introduces a second bug.
0:03:45 Yeah.
0:03:48 And then you tell it, you have this new, second bug.
0:03:49 Right.
0:03:51 And it tells you, oh my God, how could I have done it, you are so right again.
0:03:53 And brings back the first bug.
0:03:53 Yeah.
0:03:54 And you can alternate between those.
0:03:55 Yeah.
0:03:56 And it’s like, how is that possible?
0:03:57 Yeah.
0:04:00 It’s like, I’m not sure.
0:04:05 But it does suggest that something strange is going on.
0:04:07 I have two possible explanations.
0:04:17 So here, this is the more kind of a whimsical explanation is that maybe RL training makes the models a little bit too single-minded and narrowly focused.
0:04:25 A little bit too, I don’t know, unaware, even though it also makes them aware in some other ways.
0:04:29 And because of this, they can’t do basic things.
0:04:41 But there is another explanation, which is back when people were doing pre-training, the question of what data to train on was answered.
0:04:44 Because that answer was everything.
0:04:48 When you do pre-training, you need all the data.
0:04:53 So you don’t have to think, is it going to be this data or that data?
0:04:53 Yeah.
0:04:57 But when people do RL training, they do need to think.
0:05:03 They say, okay, we want to have this kind of RL training for this thing and that kind of RL training for that thing.
0:05:11 And from what I hear, all the companies have teams that just produce new RL environments and just add them to the training mix.
0:05:13 And the question is, well, what are those?
0:05:14 There are so many degrees of freedom.
0:05:18 There is such a huge variety of RL environments you could produce.
0:05:30 And one thing you could do, and I think that’s something that is done inadvertently, is that people take inspiration from the evals.
0:05:35 You say, hey, I would love our model to do really well when we release it.
0:05:36 I want the evals to look great.
0:05:42 What would be RL training that could help on this task, right?
0:05:47 I think that is something that happens, and I think it could explain a lot of what’s going on.
0:05:56 If you combine this with generalization of the models actually being inadequate, that has the potential to explain a lot of what we are seeing.
0:06:09 I think there’s a disconnect between eval performance and actual real world performance, which is something that we don’t today exactly even understand what we mean by that.
0:06:16 I like this idea that the real reward hacking is the human researchers who are too focused on the evals.
0:06:24 I think there’s two ways to understand or to try to think about what you have just pointed out.
0:06:38 One is, look, if it’s the case that simply by becoming superhuman at a coding competition, a model will not automatically become more tasteful and exercise better judgment about how to improve your code base.
0:06:45 Well, then you should expand the suite of environments such that you’re not just testing it on having the best performance in a coding competition.
0:06:50 It should also be able to make the best kind of application for X thing or Y thing or Z thing.
0:07:03 And another, maybe this is what you’re hinting at, is to say, why should it be the case in the first place that becoming superhuman at coding competitions doesn’t make you a more tasteful programmer more generally?
0:07:16 Maybe the thing to do is not to keep stacking up the amount of environments and the diversity of environments to figure out an approach that lets you learn from one environment and improve your performance on something else.
0:07:21 So I have a human analogy, which might be helpful.
0:07:25 So even the case, let’s take the case of competitive programming since you mentioned that.
0:07:27 And suppose you have two students.
0:07:37 One of them decided they want to be the best competitive programmer, so they will practice 10,000 hours in that domain.
0:07:48 They will solve all the problems, memorize all the proof techniques, and be very, very, you know, be very skilled at quickly and correctly implementing all the algorithms.
0:07:52 And by doing so, they became the best, one of the best.
0:07:56 Student number two thought, oh, competitive programming is cool.
0:07:59 Maybe they practiced for 100 hours.
0:08:00 Much, much less.
0:08:01 And they also did really well.
0:08:05 Which one do you think is going to do better in their career later on?
0:08:05 The second.
0:08:06 Right?
0:08:08 And I think that’s basically what’s going on.
0:08:11 The models are much more like the first student, but even more.
0:08:14 Because then we say, okay, so the model should be good at competitive programming.
0:08:18 So let’s get every single competitive programming problem ever.
0:08:20 And then let’s do some data augmentation.
0:08:23 So we have even more competitive programming problems.
0:08:23 Yes.
0:08:25 And we train on that.
0:08:27 And so now you’ve got this great competitive programmer.
0:08:29 And with this analogy, I think it’s more intuitive.
0:08:40 I think it’s more intuitive with this analogy that, yeah, okay, so if it’s so well trained, okay, it’s like all the different algorithms and all the different proof techniques are like right at its fingertips.
0:08:47 And it’s more intuitive that with this level of preparation, it will not necessarily generalize to other things.
0:08:56 But then what is the analogy for what the second student is doing before they do the 100 hours of fine tuning?
0:09:00 I think it’s like they have it.
0:09:02 I think it’s the it factor.
0:09:03 Yeah.
0:09:03 Right?
0:09:08 And like I know, like when I was an undergrad, I remember there was a student like this that studied with me.
0:09:10 So I know it exists.
0:09:10 Yeah.
0:09:14 I think it’s interesting to distinguish it from whatever pre-training does.
0:09:24 So one way to understand what you just said about we don’t have to choose the data in pre-training is to say, actually, it’s not dissimilar to the 10,000 hours of practice.
0:09:31 It’s just that you get that 10,000 hours of practice for free because it’s already somewhere in the pre-training distribution.
0:09:35 But it’s like maybe you’re suggesting actually there’s actually not that much generalization from pre-training.
0:09:37 There’s just so much data in pre-training.
0:09:40 But it’s like it’s not necessarily generalizing better than RL.
0:09:44 The main strength of pre-training is that there is A, so much of it.
0:09:44 Yeah.
0:09:50 And B, you don’t have to think hard about what data to put into pre-training.
0:09:57 And it’s a very kind of natural data and it does include in it a lot of what people do.
0:09:58 Yeah.
0:10:08 People’s thoughts and a lot of the features of, you know, it’s like the whole world as projected by people onto text.
0:10:08 Yeah.
0:10:12 And pre-training tries to capture that using a huge amount of data.
0:10:25 Pre-training is very difficult to reason about because it’s so hard to understand the manner in which the model relies on pre-training data.
0:10:34 And whenever the model makes a mistake, could it be because something by chance is not as supported by the pre-training data?
0:10:37 You know, and "supported by the pre-training data" is maybe a loose term.
0:10:47 I don’t know if I can add anything more useful on this, but I don’t think there is a human analog to pre-training.
0:10:52 Here’s analogies that people have proposed for what the human analogy to pre-training is.
0:10:56 And I’m curious to get your thoughts on why they’re potentially wrong.
0:11:05 One is to think about the first 18 or 15 or 13 years of a person’s life when they aren’t necessarily economically productive,
0:11:12 but they are doing something that is making them understand the world better and so forth.
0:11:18 And the other is to think about evolution as doing some kind of search for 3 billion years,
0:11:22 which then results in a human lifetime instance.
0:11:26 And then I’m curious if you think either of these are actually analogous to pre-training or
0:11:31 how would you think about at least what lifetime human learning is like, if not pre-training?
0:11:39 I think there are some similarities between both of these and pre-training, and pre-training tries to play the role of both of these.
0:11:42 But I think there are some big differences as well.
0:11:48 The amount of pre-training data is very, very staggering.
0:11:49 Yes.
0:11:57 And somehow a human being, after even 15 years with a tiny fraction of that pre-training data,
0:11:58 they know much less.
0:11:58 Yeah.
0:12:01 But whatever they do know, they know much more deeply somehow.
0:12:08 And the mistakes, like already at that age, you would not make mistakes that our AIs make.
0:12:08 Yeah.
0:12:12 There is another thing you might say, could it be something like evolution?
0:12:13 And the answer is maybe.
0:12:16 But in this case, I think evolution might actually have an edge.
0:12:25 Like there is this, I remember reading about this case where some, you know, that one thing that neuroscientists do,
0:12:34 or rather one way in which neuroscientists can learn about the brain is by studying people with brain damage to different parts of the brain.
0:12:38 And so, and some people have the most strange symptoms you could imagine.
0:12:40 It’s actually really, really interesting.
0:12:43 And there was one case that comes to mind that’s relevant.
0:12:57 I read about this person who had some kind of brain damage, I think a stroke or an accident, that took out his emotional processing.
0:12:59 So he stopped feeling any emotion.
0:13:09 And as a result of that, you know, he still remained very articulate and he could solve little puzzles and on tests, he seemed to be just fine.
0:13:11 But he felt no emotion.
0:13:12 He didn’t feel sad.
0:13:14 He didn’t feel angry.
0:13:14 He didn’t feel animated.
0:13:20 And he became somehow extremely bad at making any decisions at all.
0:13:23 It would take him hours to decide on which socks to wear.
0:13:26 And he would make very bad financial decisions.
0:13:42 And that’s very striking. What does it say about the role of our built-in emotions in making us a viable agent, essentially?
0:13:45 And I guess to connect to your question about pre-training.
0:13:53 It’s like, maybe if you are good enough at like getting everything out of pre-training, you could get that as well.
0:14:04 But that’s the kind of thing which seems, well, it may or may not be possible to get that from pre-training.
0:14:07 What is that?
0:14:10 Clearly not just directly emotion.
0:14:20 It seems like some almost value function-like thing, which is telling you which decision to make, like what the end reward for any decision should be.
0:14:24 And you think that doesn’t sort of implicitly come from…
0:14:25 I think it could.
0:14:28 I’m just saying it’s not 100% obvious.
0:14:28 Yeah.
0:14:31 But what is that?
0:14:32 Like, how do you think about emotions?
0:14:34 What is the ML analogy for emotions?
0:14:37 It should be some kind of a value function thing.
0:14:38 Yeah.
0:14:45 But I don’t think there is a great ML analogy because right now value functions don’t play a very prominent role in the things people do.
0:14:49 It might be worth defining for the audience what a value function is if you want to do that.
0:15:04 So when people do reinforcement learning, the way reinforcement learning is done right now, how do people train those agents?
0:15:10 So you have a neural net and you give it a problem and then you tell the model, go solve it.
0:15:18 The model takes maybe thousands, hundreds of thousands of actions or thoughts or something, and then it produces a solution, and the solution is graded.
0:15:28 And then the score is used to provide a training signal for every single action in your trajectory.
0:15:42 So that means that if you are doing something that goes for a long time, if you’re training on a task that takes a long time to solve, you will do no learning at all until you solve it, until you come up with a proposed solution.
0:15:44 That’s how reinforcement learning is done naively.
0:15:47 That’s how O1, R1 ostensibly are done.
0:15:59 The value function says something like, okay, look, maybe I could sometimes, not always, could tell you if you are doing well or badly.
0:16:03 The notion of a value function is more useful in some domains than others.
0:16:08 So for example, when you play chess and you lose a piece, you know, I messed up.
0:16:17 You don’t need to play the whole game to know that what I just did was bad and therefore whatever preceded it was also bad.
0:16:23 So the value function lets you short-circuit the wait until the very end.
0:16:31 Let’s suppose that you are doing some kind of a math thing or a programming thing.
0:16:34 And you’re trying to explore a particular solution direction.
0:16:42 And after, let’s say after a thousand steps of thinking, you concluded that this direction is unpromising.
0:16:50 As soon as you conclude this, you could already get a reward signal a thousand time steps previously.
0:16:57 When you decided to pursue down this path, you say, oh, next time I shouldn’t pursue this path in a similar situation.
0:17:00 Long before you actually came up with the proposed solution.
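
To make the contrast concrete, here is a minimal numpy sketch of the two credit-assignment schemes being described: a single graded outcome broadcast to every step of a long trajectory, versus TD-style advantages against a value function, which let a bad direction be penalized long before the final answer arrives. The step counts and the fake value estimates are illustrative assumptions only, not anyone’s actual training setup.

```python
import numpy as np

T = 1000      # steps of "thinking" in one trajectory
gamma = 1.0   # no discounting, to keep things simple

# (a) Naive outcome-based RL: nothing is learned until the final answer is graded,
#     and then every action in the trajectory gets the same scalar signal.
final_reward = 0.0                          # say the proposed solution was wrong
returns_naive = np.full(T, final_reward)

# (b) With a value function V(s_t): an estimate of how well things are going.
#     Here we fake one: the model "realizes" at step 400 that the direction is bad.
values = np.where(np.arange(T) < 400, 0.6, 0.05)   # predicted chance of success
rewards = np.zeros(T)
rewards[-1] = final_reward

# TD-style advantage: r_t + gamma * V(s_{t+1}) - V(s_t)
next_values = np.append(values[1:], 0.0)
advantages = rewards + gamma * next_values - values

print("naive signal at step 399:", returns_naive[399])        # 0.0, same as every step
print("TD advantage at step 399:", round(advantages[399], 2)) # -0.55, strongly negative
# The decision made around step 400 is penalized immediately, long before the final
# answer exists -- the "short-circuit the wait" behavior a value function buys you.
```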
0:17:15 The argument, this was in the DeepSeek R1 paper, is that the space of trajectories is so wide that maybe it’s hard to learn a mapping from an intermediate trajectory to a value.
0:17:19 And also given that, you know, in coding, for example, you will have the wrong idea.
0:17:20 Then you’ll go back.
0:17:21 Then you’ll change something.
0:17:24 This sounds like such lack of faith in deep learning.
0:17:30 Like, I mean, sure, it might be difficult, but nothing deep learning can’t do.
0:17:30 Yeah.
0:17:38 So my expectation is that like value function should be useful.
0:17:43 And I fully expect that they will be used in the future, if not already.
0:18:03 What I was alluding to with the person whose emotional center got damaged is more that maybe what it suggests is that the value function of humans is modulated by emotions in some important way that’s hardcoded by evolution.
0:18:09 And maybe that is important for people to be effective in the world.
0:18:11 That’s the thing I was actually planning on asking you.
0:18:24 There’s something really interesting about emotions of the value function, which is that it’s impressive that they have this much utility while still being rather simple to understand.
0:18:26 So I have two responses.
0:18:38 I do agree that compared to the kind of things that we learn and the kind of things that we are talking about, emotions are relatively simple.
0:18:44 They might even be so simple that maybe you could map them out in a human understandable way.
0:18:46 I think it would be cool to do.
0:19:07 In terms of utility, though, I think there is a thing where, you know, there is this complexity, robustness trade-off, where complex things can be very useful, but simple things are very useful in a very broad range of situations.
0:19:22 And so I think one way to interpret what we are saying is that we’ve got these emotions that essentially evolved mostly from our mammal ancestors and then fine-tuned a little bit while we were hominids, just a bit.
0:19:27 We do have like a decent amount of social emotions, though, which mammals may lack.
0:19:29 But they’re not very sophisticated.
0:19:37 And because they’re not sophisticated, they serve us so well in this very different world compared to the one that we’ve been living in.
0:19:39 Actually, they also make mistakes.
0:19:43 For example, our emotions, well, I don’t know, does hunger count as an emotion?
0:19:45 It’s debatable.
0:19:58 But I think, for example, our intuitive feeling of hunger is not succeeding in guiding us correctly in this world with an abundance of food.
0:19:58 Yeah.
0:20:04 People have been talking about scaling data, scaling parameter, scaling compute.
0:20:07 Is there a more general way to think about scaling?
0:20:08 What are the other scaling axes?
0:20:15 So, the thing, so here is a perspective.
0:20:17 Here’s a perspective I think might be, might be true.
0:20:31 So, the way ML used to work is that people would just tinker with stuff and try to, and try to get interesting results.
0:20:32 That’s what’s been going on in the past.
0:20:40 And then, the scaling insight arrived, right?
0:20:41 Scaling laws.
0:20:42 GPT-3.
0:20:46 And suddenly, everyone realized we should scale.
0:20:53 And it’s just this, this is an example of how language affects thought.
0:21:00 Scaling is just one word, but it’s such a powerful word because it informs people what to do.
0:21:02 They say, okay, let’s try to scale things.
0:21:05 And so, you say, okay, so what are we scaling?
0:21:07 And pre-training was a thing to scale.
0:21:10 It was a particular scaling recipe.
0:21:16 The big breakthrough of pre-training is the realization that this recipe is good.
0:21:26 So, you say, hey, if you mix some compute with some data into a neural net of a certain size, you will get results.
0:21:30 And you will know that it will be better if you just scale the recipe up.
0:21:40 And this is also great, companies love this, because it gives you a very low-risk way of investing your resources.
0:21:41 Right?
0:21:44 It’s much harder to invest your resources in research.
0:21:46 Compare that.
0:21:51 You know, if you research, you need to have, like, go forth, researchers, and research, and come up with something.
0:21:56 Versus get more data, get more compute, you know, you’ll get something from pre-training.
0:22:10 And indeed, you know, it looks like, based on various things some people say on Twitter, it appears that Gemini has found a way to get more out of pre-training.
0:22:13 At some point, though, pre-training will run out of data.
0:22:14 The data is very clearly finite.
0:22:16 And so then, okay, what do you do next?
0:22:24 Either you do some kind of a souped-up pre-training, a different recipe from the one you’ve done before, or you’re doing RL, or maybe something else.
0:22:31 But now that compute is big, compute is now very big, in some sense, we are back to the age of research.
0:22:33 So maybe here’s another way to put it.
0:22:39 Up until 2020, from 2012 to 2020, it was the age of research.
0:22:46 Now, from 2020 to 2025, it was the age of scaling, or maybe plus-minus, let’s add error bars to those years.
0:22:51 Because people say, this is amazing, you’ve got to scale more, keep scaling, the one word, scaling.
0:23:01 But now the scale is so big, like, is the belief really that, oh, it’s so big, but if you 100x’d it, everything would be so different?
0:23:03 Like, it would be different, for sure.
0:23:10 But, like, is the belief that if you just 100x’d the scale, everything would be transformed?
0:23:12 I don’t think that’s true.
0:23:15 So it’s back to the age of research again, just with big computers.
0:23:17 That’s a very interesting way to put it.
0:23:21 But let me ask you the question you just posed then.
0:23:25 What are we scaling, and what would it mean to have a recipe?
0:23:34 Because I guess I’m not aware of a very clean relationship that almost looks like a law of physics, which existed in pre-training.
0:23:39 It was a power law between data, compute, or parameters and loss.
0:23:47 What is the kind of relationship we should be seeking, and how should we think about what this new recipe might look like?
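
For reference, the law-of-physics-like relationship in pre-training that Dwarkesh alludes to is usually written as a power-law fit of loss against model size and data, roughly of the Chinchilla form below; the constants are fitted empirically, and the symbols here are supplied for illustration rather than taken from the conversation:

$$
L(N, D) \;\approx\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
$$

where N is the number of parameters, D the number of training tokens, E an irreducible loss term, and A, B, α, β fitted constants. Part of what is being discussed is that no comparably clean formula is yet known for the RL-era recipe.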
0:23:59 So we’ve already witnessed a transition from one type of scaling to a different type of scaling, from pre-training to RL.
0:24:02 Now people are scaling RL.
0:24:09 Now, based on what people say on Twitter, they spend more compute on RL than on pre-training at this point.
0:24:12 Because RL can actually consume quite a bit of compute.
0:24:15 You know, you do very, very long rollouts.
0:24:15 Yes.
0:24:18 So it takes a lot of compute to produce those rollouts.
0:24:24 And then you get a relatively small amount of learning per rollout, so you really can spend a lot of compute.
0:24:33 And at this stage, I wouldn’t even call it scaling.
0:24:35 I would say, hey, like, what are you doing?
0:24:40 And is the thing you are doing the most productive thing you could be doing?
0:24:41 Yeah.
0:24:45 Can you find a more productive way of using your compute?
0:24:47 We’ve discussed the value function business earlier.
0:24:55 And maybe once people get good at value functions, they will be using their resources more productively.
0:25:05 And if you find a whole other way of training models, you could say, is this scaling or is it just using your resources?
0:25:12 I think it becomes a little bit ambiguous in a sense that when people were in the age of research back then, it was like people say, hey, let’s try this and this and this.
0:25:13 Let’s try that and that and that.
0:25:15 Oh, look, something interesting is happening.
0:25:18 And I think there will be a return to that.
0:25:25 So if we’re back in the era of research, stepping back, what is the part of the recipe that we need to think most about?
0:25:32 When you say value function, people are already trying the current recipe, but then having LLM as a judge and so forth.
0:25:36 You could say that’s a value function, but it sounds like you have something much more fundamental in mind.
0:25:44 Do we need to go back to, should we even rethink pre-training at all and not just add more steps to the end of that process?
0:25:45 Yeah.
0:25:50 So the discussion about value function, I think it was interesting.
0:25:57 I want to emphasize that I think the value function is something that’s going to make our RL more efficient.
0:26:01 And I think that makes a difference.
0:26:06 But I think that anything you can do with a value function, you can do without just more slowly.
0:26:15 The thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people.
0:26:18 And it’s super obvious.
0:26:21 That seems like a very fundamental thing.
0:26:22 Okay.
0:26:24 So this is the crux, generalization.
0:26:28 And there’s two sub-questions.
0:26:34 There’s one which is about sample efficiency, which is, why should it take so much more data for these models to learn than humans?
0:26:45 There’s a second about, even separate from the amount of data it takes, there’s a question of, why is it so hard to teach the thing we want to a model than to a human?
0:26:59 Which is to say, for a human, we don’t necessarily need a verifiable reward. You’re probably mentoring a bunch of researchers right now, and you’re talking with them, you’re showing them your code, and you’re showing them how you think.
0:27:03 And from that, they’re picking up your way of thinking and how they should do research.
0:27:09 You don’t have to set a verifiable reward for them that’s like, okay, this is the next part of your curriculum, and now this is the next part of your curriculum.
0:27:14 And, oh, this training was unstable, and there’s not this schleppy, bespoke process.
0:27:18 So, perhaps these two issues are actually related in some way.
0:27:27 But I’d be curious to explore this second thing, which was more like continual learning, and this first thing, which feels just like sample efficiency.
0:27:28 Yeah.
0:27:38 So, you know, you could actually wonder, one possible explanation for the human sample efficiency that needs to be considered is evolution.
0:27:46 And evolution has given us a small amount of the most useful information possible.
0:27:57 And for things like vision, hearing, and locomotion, I think there’s a pretty strong case that evolution actually has given us a lot.
0:28:08 So, for example, human dexterity far exceeds, I mean, robots can become dexterous too if you subject them to like a huge amount of training and simulation.
0:28:16 But to train a robot in the real world to quickly like pick up a new skill like a person does seems very out of reach.
0:28:28 And here you could say, oh yeah, locomotion, all our ancestors needed great locomotion, squirrels and so on, so for locomotion maybe you’ve got some unbelievable prior.
0:28:29 Yeah.
0:28:30 You could make the same case for vision.
0:28:39 You know, I believe Yann LeCun made the point, oh, like teenagers learn to drive at 16, after 10 hours of practice, which is true.
0:28:49 But our vision is so good, at least for me, when I remember myself being five-year-old, I was very excited about cars back then.
0:28:55 And I’m pretty sure my car recognition was more than adequate for self-driving already as a five-year-old.
0:28:58 You don’t get to see that much data as a five-year-old.
0:28:59 You spend most of your time in your parents’ house.
0:29:02 So you have very low data diversity.
0:29:04 But you could say maybe that’s evolution too.
0:29:08 But then language and math and coding, probably not.
0:29:11 It still seems better than models.
0:29:15 I mean, obviously models are better than the average human at language and math and coding.
0:29:18 But are they better at the average human at learning?
0:29:19 Oh, yeah.
0:29:20 Oh, yeah, absolutely.
0:29:37 What I meant to say is that language, math and coding, and especially math and coding, suggests that whatever it is that makes people good at learning is probably not so much a complicated prior, but something more, some fundamental thing.
0:29:40 Wait, I’m not sure why that should be the case.
0:30:05 So consider a skill in which people exhibit some kind of great reliability. If the skill is one that was very useful to our ancestors for many millions of years, hundreds of millions of years, you could argue that maybe humans are good at it because of evolution.
0:30:15 Because we have a prior, an evolutionary prior, that’s encoded in some very non-obvious way that somehow makes us so good at it.
0:30:37 But if people exhibit great ability, reliability, robustness, ability to learn in a domain that really did not exist until recently, then this is more an indication that people might have just better machine learning, period.
0:31:07 Mm-hmm.
0:31:10 interaction with the machine and with the environment.
0:31:16 And yeah, it takes far fewer samples, it seems more unsupervised, it seems more robust.
0:31:17 Much more robust.
0:31:21 The robustness of people is really staggering.
0:31:22 Yeah.
0:31:27 So is it like, okay, and do you have a unified way of thinking about why are all these things happening at once?
0:31:33 What is the ML analogy that would, that could be, it could realize something like this?
0:31:46 So this is where, you know, one of the things that you’ve been asking about is how can, you know, the teenage driver kind of self-correct and learn from their experience without an external teacher?
0:31:55 And the answer is, well, they have a general sense, which is also, by the way, extremely robust in people.
0:32:06 Like, whatever it is, the human value function, whatever the human value function is, with a few exceptions around addiction, it’s actually very, very robust.
0:32:19 And so for something like a teenager that’s learning to drive, they start to drive, and they immediately have a sense of how they’re driving, how badly they’re doing, how unconfident they are.
0:32:26 And then, of course, the learning speed of any teenager is so fast that after 10 hours, you’re good to go.
0:32:30 So, yeah, it seems like humans have some solution, but I’m curious about like, well, how are they doing it?
0:32:36 And like, why is it so hard to, like, how do we need to reconceptualize the way we’re training models to make something like this possible?
0:32:39 You know, that is a great question to ask.
0:32:44 And it’s a question I have a lot of opinions about.
0:32:52 But unfortunately, we live in a world where not, not all machine learning ideas are discussed freely.
0:32:54 And this is, this is one of them.
0:32:57 So there’s probably a way to do it.
0:32:59 I think it can be done.
0:33:05 The fact that people are like that, I think it’s a proof that it can be done.
0:33:15 There may be another blocker, though, which is, there is a possibility that the human neurons actually do more compute than we think.
0:33:22 And if that is true, and if that plays an important role, then things might be more difficult.
0:33:32 But regardless, I do think it points to the existence of some machine learning principle that I have opinions on.
0:33:37 But unfortunately, circumstances make it hard to discuss in detail.
0:33:39 Nobody listens to this podcast, Ilya.
0:33:41 Yeah.
0:33:44 I am curious, if you say we are back in the era of research.
0:33:47 You were there from 2012 to 2020.
0:33:55 And do you have, yeah, what is now the vibe going to be, if we go back to the era of research?
0:34:03 For example, even after AlexNet, the amount of compute that was used to run experiments kept increasing.
0:34:06 And the size of frontier systems kept increasing.
0:34:13 And do you think now that this era of research will still require tremendous amounts of compute?
0:34:19 Do you think it will require going back into the archives and reading old papers?
0:34:30 Maybe, what was the vibe like? You were at Google and OpenAI and Stanford, at these places, when there was more of a vibe of research.
0:34:33 What kind of things should we be expecting in the community?
0:34:45 So, one consequence of the age of scaling is that scaling sucked out all the air in the room.
0:34:45 Yeah.
0:34:54 And so, because scaling sucked out all the air in the room, everyone started to do the same thing.
0:35:03 We got to the point where we are in a world where there are more companies than ideas by quite a bit.
0:35:12 Actually, on that, you know, there is this Silicon Valley saying that says that ideas are cheap, execution is everything.
0:35:15 And people say that a lot.
0:35:15 Yeah.
0:35:16 And there is truth to that.
0:35:25 But then I saw, I saw someone say on Twitter, something like, if ideas are so cheap, how come no one’s having any ideas?
0:35:27 And I think it’s true, too.
0:35:37 I think, like, if you think about research progress in terms of bottlenecks, there are several bottlenecks.
0:35:44 One of them is ideas, and one of them is your ability to bring them to life.
0:35:44 Yeah.
0:35:46 Which might be compute, but also engineering.
0:35:51 So if you go back to the 90s, let’s say, you had people who had pretty good ideas.
0:35:57 And if they had much larger computers, maybe they could demonstrate that their ideas were viable, but they could not.
0:36:01 So they could only have very, very small demonstration and did not convince anyone.
0:36:02 Yeah.
0:36:04 So the bottleneck was compute.
0:36:08 Then in the age of scaling, computers increased a lot.
0:36:15 And, of course, there is a question of how much compute is needed, but compute is large.
0:36:27 So compute is large enough such that it’s, like, not obvious that you need that much more compute to prove some idea.
0:36:29 Like, I’ll give you an analogy.
0:36:32 AlexNet was built on two GPUs.
0:36:35 That was the total amount of compute use for it.
0:36:41 The transformer was built on eight to 64 GPUs.
0:36:49 No single transformer paper experiment used more than 64 GPUs of 2017, which would be, like, what, two GPUs of today?
0:37:03 So the ResNet, right? And you could even argue that the o1 reasoning work was not the most compute-heavy thing in the world.
0:37:17 So there are definitely, for research, you need, like, definitely some amount of compute, but it’s far from obvious that you need the absolutely largest amount of compute ever for research.
0:37:28 You might argue, and I think it is true, that if you want to build the absolutely best system, then it helps to have much more compute.
0:37:36 And especially if everyone is within the same paradigm, then compute becomes one of the big differentiators.
0:37:44 Yeah, I guess, while it was possible to develop these ideas, I’m asking you for the history because you were actually there.
0:37:50 I’m not sure what actually happened, but it sounds like it was possible to develop these ideas using minimal amounts of compute.
0:37:52 But it wasn’t, the transformer didn’t immediately become famous.
0:38:01 It became the thing everybody started doing and then started experimenting on top of and building on top of because it was validated at higher and higher levels of compute.
0:38:02 Correct.
0:38:17 And if you at SSI have 50 different ideas, how will you know which one is the next transformer and which one is, you know, brittle without having the kinds of compute that other frontier labs have?
0:38:36 So I can comment on that, which is, the short comment is that, you know, you mentioned SSI, specifically for us, the amount of compute that SSI has for research is really not that small.
0:38:45 And I want to explain why, like a simple math can explain why the amount of compute that we have is actually a lot more comparable for research than one might think.
0:38:47 Now explain.
0:39:03 So SSI has raised $3 billion, which is not small, it’s a lot in any absolute sense, but you could say, look at the other companies raising much more.
0:39:07 But a lot of their compute goes for inference.
0:39:13 Like these big numbers, these big loans, it’s earmarked for inference.
0:39:14 That’s number one.
0:39:23 Number two, you need, if you want to have a product on which you do inference, you need to have a big staff of engineers, of salespeople.
0:39:30 A lot of the research needs to be dedicated for producing all kinds of product related features.
0:39:35 So then when you look at what’s actually left for research, the difference becomes a lot smaller.
0:39:46 Now, the other thing is, is that if you’re doing something different, do you really need the absolute maximal scale to prove it?
0:39:47 I don’t think it’s true at all.
0:39:57 I think that in our case, we have sufficient compute to prove, to convince ourselves and anyone else that what we’re doing is correct.
0:40:07 There have been public estimates that, you know, companies like OpenAI spend on the order of five, six billion dollars a year just on experiments so far.
0:40:11 This is separate from the amount of money they’re spending on inference and so forth.
0:40:18 So it seems like they’re spending more a year running research experiments than you guys have in total funding.
0:40:20 I think it’s a question of what you do with it.
0:40:22 It’s a question of what you do with it.
0:40:30 I think in their case, in the case of others, there is a lot more demand on the training compute.
0:40:32 There’s a lot more different work streams.
0:40:34 There is, there are different modalities.
0:40:37 There is just more stuff.
0:40:38 And so it becomes fragmented.
0:40:41 How will SSI make money?
0:40:46 You know, my answer to this question is something like,
0:40:53 We just, right now, we just focus on the research and then the answer to this question will reveal itself.
0:40:56 I think there will be lots of possible answers.
0:40:59 Is SSI’s plan still to straight shot super intelligence?
0:41:00 Maybe.
0:41:04 I think that there is merit to it.
0:41:11 I think there’s a lot of merit because I think that it’s very nice to not be affected by the day-to-day market competition.
0:41:19 I think there are two reasons that may cause us to change the plan.
0:41:21 One is pragmatic.
0:41:25 If timelines turned out to be long, which they might.
0:41:38 And second, I think there is a lot of value in the best and most powerful AI being out there impacting the world.
0:41:41 I think this is a meaningfully valuable thing.
0:41:44 But then, so why is your default plan to straight shot super intelligence?
0:41:55 Because it sounds like, you know, OpenAI, Anthropic, all these other companies, their explicit thinking is, look, we have weaker and weaker intelligences that the public can get used to and prepare for.
0:42:01 And why is it potentially better to build a super intelligence directly?
0:42:03 So I’ll make the case for and against.
0:42:14 The case for is that one of the challenges people face when they’re in the market is that they have to participate in the rat race.
0:42:21 And the rat race is quite difficult in that it exposes you to difficult trade-offs which you need to make.
0:42:32 And it is nice to say, we’ll insulate ourselves from all this and just focus on the research, and come out only when we are ready and not before.
0:42:35 But the counterpoint is valid too.
0:42:39 And those are opposing forces.
0:42:45 The counterpoint is, hey, it is useful for the world to see powerful AI.
0:42:50 It is useful for the world to see powerful AI because that’s the only way you can communicate it.
0:42:53 Well, I guess not even just that you can communicate the idea.
0:42:56 Communicate the AI, not the idea.
0:42:57 Communicate the AI.
0:42:59 What do you mean communicate the AI?
0:43:02 Okay, so let’s suppose you read an essay about AI.
0:43:07 And the essay says AI is going to be this, and AI is going to be that, and it’s going to be this.
0:43:10 And you read it and you say, okay, this is an interesting essay.
0:43:14 Now suppose you see an AI doing this, an AI doing that.
0:43:16 It is incomparable.
0:43:24 Like basically, I think that there is a big benefit from AI being in the public.
0:43:30 So that would be a reason for us to not be quite straight shot.
0:43:31 Yeah.
0:43:35 Well, I guess it’s not even that, but I do think that is an important part of it.
0:43:41 The other big thing is, I can’t think of another discipline in human engineering and research where
0:43:49 the end artifact was made safer mostly through just thinking about how to make it safe,
0:43:55 as opposed to, why are airplane crashes per mile so much lower today than they were decades ago?
0:44:00 Why is it so much harder to find a bug in Linux than it would have been decades ago?
0:44:03 And I think it’s mostly because these systems were deployed to the world.
0:44:05 You noticed failures.
0:44:08 Those failures were corrected, and the systems became more robust.
0:44:13 Now I’m not sure why AGI and superhuman intelligence would be any different,
0:44:17 especially given, and I hope we’re going to get to this,
0:44:25 it seems like the harms of superintelligence are not just about having some malevolent paper clipper out there,
0:44:27 but it’s just like, this is a really powerful thing,
0:44:30 and we don’t even know how to conceptualize how people will interact with it,
0:44:31 what people will do with it.
0:44:39 And having gradual access to it seems like a better way to maybe spread out the impact of it,
0:44:40 and to help people prepare for it.
0:44:45 Well, I think on this point, even in the straight shot scenario,
0:44:51 you would still do a gradual release of it, is how I would imagine it.
0:44:58 Gradualism would be an inherent component of any plan.
0:45:01 It’s just a question of what is the first thing that you get out of the door.
0:45:02 That’s number one.
0:45:04 Number two, I also think, you know,
0:45:09 I believe you have advocated for continual learning more than other people.
0:45:14 And I actually think that this is an important and correct thing.
0:45:15 And here is why.
0:45:23 So I’ll give you another example of how language affects thinking.
0:45:27 And in this case, it is going to be two words,
0:45:31 two words that have shaped everyone’s thinking, I maintain.
0:45:34 First word, AGI.
0:45:37 Second word, pre-training.
0:45:38 Let me explain.
0:45:44 So the word, the term AGI, why does this term exist?
0:45:46 It’s a very particular term.
0:45:47 Why does it exist?
0:45:48 There’s a reason.
0:45:56 The reason that the term AGI exists is, in my opinion, not so much because it’s like a very
0:46:07 important essential descriptor of some end state of intelligence, but because it is a reaction
0:46:10 to a different term that existed.
0:46:11 The term is narrow AI.
0:46:19 If you go back to ancient history of game playing AI, of checkers AI, chess AI, computer games AI,
0:46:23 everyone would say, look at this narrow intelligence.
0:46:26 Sure, the chess AI can beat Kasparov, but it can’t do anything else.
0:46:29 It is so narrow, artificial narrow intelligence.
0:46:36 So in response, as a reaction to this, some people said, well, this is not good.
0:46:37 It is so narrow.
0:46:40 What we need is general AI.
0:46:42 General AI.
0:46:44 An AI that can just do all the things.
0:46:51 And that term just got a lot of traction.
0:46:51 Yeah.
0:46:55 The second thing that got a lot of traction is pre-training.
0:46:58 Specifically, the recipe of pre-training.
0:47:07 I think the way people do RL now is maybe undoing the conceptual imprint
0:47:09 of pre-training, but pre-training had this property:
0:47:15 You do more pre-training and the model gets better at everything, more or less uniformly.
0:47:16 Yeah.
0:47:17 General AI.
0:47:20 Pre-training gives AGI.
0:47:30 But the thing that happened with AGI and pre-training is that in some sense, they overshot the target.
0:47:37 Because if you think about the term AGI, especially in
0:47:42 the context of pre-training, you will realize that a human being is not an AGI.
0:47:52 Because a human being, yes, there is definitely a foundation of skills, but a human
0:47:54 being lacks a huge amount of knowledge.
0:47:57 Instead, we rely on continual learning.
0:48:00 We rely on continual learning.
0:48:04 And so then when you think about, okay, so let’s suppose that we achieve success and we
0:48:07 produce a safe super, some kind of safe super intelligence.
0:48:10 The question is, but how do you define it?
0:48:13 Where on the curve of continual learning is it going to be?
0:48:18 Suppose I produce like a super intelligent 15-year-old that’s very eager to go.
0:48:20 They don’t know very much at all.
0:48:22 The great student, very eager.
0:48:24 You go and be a programmer.
0:48:25 You go and be a doctor.
0:48:28 Go and learn.
0:48:33 So you could imagine that the deployment itself will involve some kind of a learning trial
0:48:33 and error period.
0:48:38 It’s a process as opposed to you drop the finished thing.
0:48:39 Okay.
0:48:40 I see.
0:48:46 So you’re suggesting that the thing you’re pointing at with superintelligence is
0:48:54 not some finished mind which knows how to do every single job in the economy.
0:48:59 Because the way, say, the original OpenAI charter or whatever defines AGI is
0:49:03 that it can do every single job, every single thing a human can do.
0:49:09 You’re proposing instead a mind which can learn to do every single job.
0:49:09 Yes.
0:49:10 And that is super intelligence.
0:49:17 And then, but once you have the learning algorithm, it gets deployed into the world the same way
0:49:19 a human laborer might join an organization.
0:49:23 And it seems like one of these two things might happen.
0:49:24 Maybe neither of these happens.
0:49:33 One, this super efficient learning algorithm becomes superhuman, becomes as good as you
0:49:38 and potentially even better at the task of ML research.
0:49:43 And as a result, the algorithm itself becomes more and more superhuman.
0:49:48 The other is, even if that doesn’t happen, if you have a single model, I mean, this is explicitly
0:49:49 your vision.
0:49:54 If you have a single model or instances of a model which are deployed through the economy,
0:49:59 doing different jobs, learning how to do those jobs, continually learning on the job, picking
0:50:02 up all the skills that any human could pick up, but actually picking them all up at the same
0:50:04 time and then amalgamating the learnings.
0:50:10 You basically have a model which functionally becomes super intelligent, even without any
0:50:14 sort of recursive self-improvement in software, right?
0:50:18 Because you now have one model that can do every single job in the economy and humans can’t
0:50:19 merge our minds in the same way.
0:50:23 And so do you expect some sort of like intelligence explosion from broad deployment?
0:50:30 I think that it is likely that we will have rapid economic growth.
0:50:41 I think the broad deployment, like there are two arguments you could make, which are conflicting.
0:50:49 One is that, look, once you get to a point where you have an AI that
0:50:58 can learn to do things quickly and you have many of them, then there will
0:51:05 be a strong force to deploy them in the economy unless there is some kind of a regulation
0:51:07 that stops it, which by the way, there might be.
0:51:15 But I think the idea of very rapid economic growth for some time, I think it’s very possible
0:51:16 from broad deployment.
0:51:19 The other question is how rapid it’s going to be.
0:51:25 So I think this is hard to know because on the one hand, you have this very efficient worker.
0:51:32 On the other hand, there is, the world is just really big and there’s a lot of stuff and that
0:51:34 stuff moves at a different speed.
0:51:38 But then on the other hand, now the AI could, you know, so I think very rapid economic growth
0:51:44 is possible and we will see like all kinds of things like different countries with different
0:51:48 rules and the ones which have the friendlier rules, the economic growth will be faster.
0:51:49 Hard to predict.
0:51:56 It seems to me that this is a very precarious situation to be in where, look in the limit,
0:52:00 we know that this should be possible because if you have something that is as good as a human
0:52:06 at learning, but which can merge its brains, merge their different instances in a way that
0:52:07 humans can’t merge.
0:52:10 Already, this seems like a thing that should physically be possible.
0:52:11 Humans are possible.
0:52:12 Digital computers are possible.
0:52:15 You just need both of those combined to produce this thing.
0:52:24 And it also seems like this kind of thing is extremely powerful and economic growth is one
0:52:25 way to put it.
0:52:29 I mean, a Dyson sphere is a lot of economic growth, but another way to put it is just like, you
0:52:34 will have potentially a very short period of time, because a human on the job, you know,
0:52:37 you’re hiring people to SSI and in six months they’re probably net productive, right?
0:52:39 A human like learns really fast.
0:52:41 And so this thing is becoming smarter and smarter very fast.
0:52:45 What is, how do you think about making that go well?
0:52:47 And why is SSI positioned to do that well?
0:52:49 What is SSI’s plan there basically is what I’m trying to ask.
0:52:50 Yeah.
0:53:03 So one of the ways in which my thinking has been changing is that I now place more importance
0:53:11 on AI being deployed incrementally and in advance.
0:53:21 One very difficult thing about AI is that we are talking about systems that don’t yet exist.
0:53:24 And it’s hard to imagine them.
0:53:32 I think that one of the things that’s happening is that in practice, it’s very hard to feel the AGI.
0:53:35 It’s very hard to feel the AGI.
0:53:43 We can talk about it, but it’s like talking about the far future. Imagine
0:53:49 having a conversation about what it is like to be old, when you’re old and frail.
0:53:54 You can have a conversation, you can try to imagine it, but it’s just hard, and you come
0:53:56 back to reality where that’s not the case.
0:54:08 And I think that a lot of the issues around AGI and its future power stem from the fact
0:54:11 that it’s very difficult to imagine.
0:54:15 Future AI is going to be different.
0:54:17 It’s going to be powerful.
0:54:19 Indeed, the whole problem.
0:54:21 What is the problem of AI and AGI?
0:54:23 The whole problem is the power.
0:54:26 The whole problem is the power.
0:54:31 When the power is really big, what’s going to happen?
0:54:36 And one of the ways in which I’ve changed my mind over the past year,
0:54:43 and that change of mind may, I’ll hedge a little bit, back-propagate
0:54:48 into the plans of our company, is this.
0:54:54 So if it’s hard to imagine, what do you do?
0:54:56 You got to be showing the thing.
0:54:57 You got to be showing the thing.
0:54:59 And I maintain that.
0:55:06 I think most people who work on AI also can’t imagine it, because it’s too different
0:55:08 from what people see on a day-to-day basis.
0:55:14 I do maintain, here’s something which I predict will happen.
0:55:15 That’s a prediction.
0:55:25 I maintain that as AI becomes more powerful, then people will change their behaviors.
0:55:32 And we will see all kinds of unprecedented things, which are not happening right now.
0:55:34 And I’ll give some examples.
0:55:41 I think, for better or worse, the frontier companies will play a very
0:55:44 important role in what happens, as will the government.
0:55:52 And the kind of things that I think we’ll see, which you see the beginnings of: companies that
0:55:57 are fierce competitors starting to collaborate on AI safety.
0:56:04 You may have seen OpenAI and Anthropic doing a first small step, but that did not exist before.
0:56:09 That’s actually something which I predicted in one of my talks about three years ago, that
0:56:10 such a thing will happen.
0:56:18 I also maintain that as AI continues to become more powerful, more visibly powerful, there will
0:56:22 also be a desire from governments and the public to do something.
0:56:29 And I think that this is a very important force of showing the AI.
0:56:30 That’s number one.
0:56:31 Number two.
0:56:32 Okay.
0:56:33 So then the AI is being built.
0:56:35 What needs to be done?
0:56:42 So one thing that I maintain will happen is this: right now, for people who are working on AI,
0:56:47 the AI doesn’t feel powerful, because of its mistakes.
0:56:52 I do think that at some point the AI will start to feel powerful, actually.
0:57:00 And I think when that happens, we will see a big change in the way all AI companies approach
0:57:00 safety.
0:57:02 They’ll become much more paranoid.
0:57:08 I say this as a prediction that we will see happen.
0:57:09 We’ll see if I’m right.
0:57:13 But I think this is something that will happen because they will see the AI becoming more powerful.
0:57:20 Everything that’s happening right now, I maintain is because people look at today’s AI and it’s hard
0:57:22 to imagine the future AI.
0:57:26 And there is a third thing which needs to happen.
0:57:32 And I think this is, and I’m talking about it in broader terms, not just from the
0:57:37 perspective of SSI, because you asked me about our company. But the question is, okay, so then
0:57:39 what should the companies aspire to build?
0:57:41 What should they aspire to build?
0:57:47 And there has been one big idea that everyone has been locked
0:57:50 into, which is the self-improving AI.
0:57:52 And why did this happen?
0:57:59 Because there are fewer ideas than companies, but I maintain that there is something that’s
0:57:59 better to build.
0:58:02 And I think that everyone will actually want that.
0:58:09 It’s like the AI that’s robustly aligned to care about sentient life specifically.
0:58:17 In particular, I think there’s a case to be made that it will be easier to build
0:58:23 an AI that cares about sentient life than an AI that cares about human life alone, because
0:58:24 the AI itself will be sentient.
0:58:30 And if you think about things like mirror neurons and human empathy for animals, which
0:58:34 is, you know, you might argue it’s not big enough, but it exists.
0:58:41 I think it’s an emergent property from the fact that we model others with the same circuit
0:58:44 that we use to model ourselves, because that’s the most efficient thing to do.
0:58:51 So even if you got an AI to care about sentient beings, and it’s not actually clear to me that
0:58:53 that’s what you should try to do if you solve the alignment.
0:58:58 It would still be the case that most sentient beings will be AIs.
0:59:01 There will be trillions, eventually quadrillions of AIs.
0:59:04 Humans will be a very small fraction of sentient beings.
0:59:13 So it’s not clear to me if the goal is some kind of human control over this future civilization,
0:59:16 that this is the best criterion.
0:59:17 It’s true.
0:59:21 I think that it's possible
0:59:22 it's not the best criterion.
0:59:23 I’ll say two things.
0:59:34 Thing number one: I think that care for sentient
0:59:35 life, I think there is merit to it.
0:59:37 I think it should be considered.
0:59:46 I think it would be helpful if there were some kind of short list of ideas that then
0:59:50 the companies, when they are in this situation, could use.
0:59:51 That’s number two.
0:59:59 Number three, I think it would be really materially helpful if the power of the most powerful super
1:00:04 intelligence was somehow capped because it would address a lot of these concerns.
1:00:10 The question of how to do it, I’m not sure, but I think that would be materially helpful
1:00:13 when you’re talking about really, really powerful systems.
1:00:14 Yeah.
1:00:18 Before we continue the alignment discussion, I want to double click on that.
1:00:20 How much room is there at the top?
1:00:21 How do you think about super intelligence?
1:00:27 Do you think, I mean, using this learning efficiency idea, maybe it’s just extremely
1:00:30 fast at learning new skills or new knowledge.
1:00:33 And does it just have a bigger pool of strategies?
1:00:39 Is there a single cohesive it in the center that’s more powerful or bigger?
1:00:45 And if so, do you imagine that this will be sort of godlike in comparison to the rest of
1:00:45 human civilization?
1:00:49 Or does it just feel like another agent or another cluster of agents?
1:00:53 So this is an area where different people have different intuitions.
1:00:56 I think it will be very powerful for sure.
1:01:05 I think that what I think is most likely to happen is that there will be multiple such AIs
1:01:08 being created roughly at the same time.
1:01:17 I think that if the cluster is big enough, like if the cluster is literally continent-sized,
1:01:21 that thing could be really powerful indeed, right?
1:01:26 If you literally have a continent-sized cluster, those AIs can be very powerful.
1:01:33 And all I can tell you is that if you're talking about extremely powerful AIs,
1:01:39 like truly dramatically powerful, then yeah, it would be nice if they could be restrained in
1:01:44 some ways or if there was some kind of an agreement or something.
1:01:52 Because really, what is the concern
1:01:53 of superintelligence?
1:01:54 What is one way to explain the concern?
1:02:01 If you imagine a system that is sufficiently powerful, like really sufficiently powerful,
1:02:06 and you could say, okay, you need to do something sensible, like care for sentient life, let’s say,
1:02:10 in a very single-minded way, we might not like the results.
1:02:11 That’s really what it is.
1:02:15 And so maybe, by the way, the answer is that you do not build a single
1:02:18 RL agent in the usual sense.
1:02:20 And actually, I'll point several things out.
1:02:24 I think human beings are semi-RL agents.
1:02:30 You know, we pursue a reward, and then the emotions or whatever make us tire of that reward,
1:02:31 and we pursue a different reward.
1:02:38 The market is a very short-sighted kind of agent.
1:02:39 Evolution is the same.
1:02:42 Evolution is very intelligent in some ways, but very dumb in other ways.
1:02:47 The government has been designed to be a never-ending fight between three parts,
1:02:49 which has an effect.
1:02:51 So I think things like this.
1:02:55 Another thing that makes this discussion difficult
1:02:58 is that we are talking about systems that don’t exist,
1:03:00 that we don’t know how to build.
1:03:02 That’s the other thing.
1:03:03 And that’s actually my belief.
1:03:08 I think what people are doing right now will go some distance and then peter out.
1:03:11 It will continue to improve, but it will also not be it.
1:03:14 So the it, we don’t know how to build.
1:03:21 And I think that a lot hinges on understanding reliable generalization.
1:03:32 And I’ll say another thing, which is like, you know, one of the things that you could say is what would that cause alignment to be difficult is that human value,
1:03:37 that it’s, it’s, um, your ability to learn human values is fragile.
1:03:39 Then your ability to optimize them is fragile.
1:03:41 You will, you actually learn to optimize them.
1:03:46 And then can’t you say, are these not all instances of unreliable generalization?
1:03:51 Why is it that human beings appear to generalize so much better?
1:03:53 What if generalization was much better?
1:03:54 What would happen in this case?
1:03:55 What would be the effect?
1:04:00 But those questions are right now still unanswerable.
1:04:06 Um, how does one think about what AI going well looks like?
1:04:09 Because I think you’ve scoped out how AI might evolve.
1:04:11 We’ll have these sort of continual learning agents.
1:04:12 AI will be very powerful.
1:04:15 Maybe there will be many different AIs.
1:04:20 How do you think about lots of intelligences with continent-sized compute going around?
1:04:23 How dangerous is that?
1:04:26 How do we make that less dangerous?
1:04:34 And how do we do that in a way that protects an equilibrium where there might be misaligned
1:04:37 AIs out there and bad actors out there?
1:04:43 So one reason why I liked the AI that cares for sentient life, and we can debate
1:04:53 whether it's good or bad, but if the first N of these dramatic systems actually do care
1:04:59 for sentient life, or, you know, love humanity or something, obviously
1:05:01 this also needs to be achieved.
1:05:03 This needs to be achieved.
1:05:13 So if this is achieved by the first N of those systems, then I can see it go well, at least
1:05:14 for quite some time.
1:05:17 And then there is the question of what happens in the long run.
1:05:18 What happens in the long run?
1:05:20 How do you achieve a long run equilibrium?
1:05:26 And I think that there is an answer as well.
1:05:28 And I don’t like this answer.
1:05:31 But it needs to be considered.
1:05:39 In the long run, you might say, okay, so you have a world where powerful AIs exist. In
1:05:44 the short run, you could say, okay, you have universal high
1:05:44 income.
1:05:46 And we’re all doing well.
1:05:49 But we know that, what do the Buddhists say?
1:05:51 Change is the only constant.
1:05:52 And so things change.
1:05:56 And there is some kind of government, political structure thing.
1:05:57 And it changes.
1:05:59 Because these things have a shelf life.
1:06:03 You know, some new government thing comes up and it functions.
1:06:05 And then after some time, it stops functioning.
1:06:08 That’s something that you see happening all the time.
1:06:16 And so I think that for the long run equilibrium, one approach, you could say, okay, so maybe
1:06:19 every person will have an AI that will do their bidding.
1:06:21 And that’s good.
1:06:24 And if that could be maintained indefinitely, that’s true.
1:06:32 But the downside with that is, okay, so then the AI goes and earns
1:06:36 money for the person and advocates for their needs in the political sphere.
1:06:40 And maybe then writes a little report saying, okay, here’s what I’ve done.
1:06:41 Here’s the situation.
1:06:43 And the person says, great, keep it up.
1:06:46 But the person is no longer a participant.
1:06:49 And then you can say that’s a precarious place to be in.
1:06:59 So I'm going to preface this by saying I don't like this solution, but it is a solution.
1:07:06 And the solution is if people become part AI, with some kind of Neuralink plus plus, because
1:07:10 what will happen as a result is that now the AI understands something and we understand it
1:07:15 too, because now the understanding is transmitted wholesale.
1:07:21 So now if the AI is in some situation, now it’s like you are involved in that situation
1:07:22 yourself fully.
1:07:26 And I think this is the answer to the equilibrium.
1:07:35 I wonder if the fact that emotions, which were developed millions or in many cases, billions
1:07:42 of years ago in a totally different environment are still guiding our actions so strongly is
1:07:44 an example of alignment success.
1:07:54 I don’t know if it’s more accurate to call it a value function or reward function, but
1:07:59 the brainstem has a directive where it’s saying mate with somebody who’s more successful.
1:08:03 The cortex is the part that understands what does success mean in the modern context.
1:08:09 But the brainstem is able to align the cortex and say: whatever you recognize success to be,
1:08:11 and I'm not smart enough to understand what that is,
1:08:13 you're still going to pursue this directive.
1:08:15 I think there is.
1:08:18 So I think there’s a more general point.
1:08:25 I think it’s actually really mysterious how the brain encodes high level desires.
1:08:27 Sorry, how evolution encodes high level desires.
1:08:35 Like it’s pretty easy to understand how evolution would endow us with the desire for food that smells
1:08:40 good because smell is a chemical and so just pursue that chemical.
1:08:43 It’s very easy to imagine such a evolution doing such a thing.
1:08:49 But evolution also has endowed us with all these social desires.
1:08:54 Like we really care about being seen positively by society.
1:08:56 We care about being in a good standing.
1:09:03 Like all these social intuitions that we have, I feel strongly that they’re baked in.
1:09:10 And I don’t know how evolution did it because it’s a high level concept that’s represented in the brain.
1:09:18 Like what people think. Let's say you care about some social thing.
1:09:22 It's not a low-level signal like smell.
1:09:25 It’s not something that for which there is a sensor.
1:09:32 Like the brain needs to do a lot of processing to piece together lots of bits of information to understand what’s going on socially.
1:09:35 And somehow evolution said, that’s what you should care about.
1:09:36 Yes.
1:09:37 How did it do it?
1:09:38 And it did it quickly too.
1:09:39 Yeah.
1:09:45 Because I think all these sophisticated social things that we care about, I think they evolved pretty recently.
1:09:50 So evolution had only a short time for hardcoding this high-level desire.
1:09:57 And I maintain, or at least I’ll say, I’m unaware of good hypotheses for how it’s done.
1:10:05 I had some ideas that I was kicking around, but none of them are satisfying.
1:10:06 Yeah.
1:10:10 And what’s especially impressive is if it was a desire that you learned in your lifetime,
1:10:13 it kind of makes sense because your brain is intelligent.
1:10:16 It makes sense why we’d be able to learn intelligent desires.
1:10:21 But your point is, maybe this is not your point, but one way to understand it is:
1:10:26 the desire is built into the genome, and the genome is not intelligent, right?
1:10:29 But it's somehow able to describe this feature,
1:10:34 and it's not even clear how you define that feature,
1:10:35 and yet you can build it into the genes.
1:10:36 Yeah, essentially.
1:10:37 Or maybe I’ll put it differently.
1:10:45 If you think about the tools that are available to the genome, it says, okay, here’s a recipe for building a brain.
1:10:50 And you could say, here is a recipe for connecting the dopamine neurons to like the smell sensor.
1:10:50 Yeah.
1:10:54 And if the smell is a certain kind of, you know, good smell, you want to eat that.
1:10:56 I could imagine the genome doing that.
1:11:00 I’m claiming that it is harder to imagine.
1:11:08 It’s harder to imagine the genome saying you should care about some complicated computation that your entire brain,
1:11:10 that like a big chunk of your brain does.
1:11:11 That’s all I’m claiming.
1:11:13 I can tell you like a speculation.
1:11:15 I was wondering how it could be done.
1:11:18 And let me offer a speculation and I’ll explain why the speculation is probably false.
1:11:22 So the speculation is, okay.
1:11:29 So the brain has those regions, you know, the brain regions.
1:11:31 We have our cortex, right?
1:11:31 Yeah.
1:11:32 And it has all those brain regions.
1:11:38 And the cortex is uniform, but the brain regions and the neurons in the cortex,
1:11:40 they kind of speak to their neighbors mostly.
1:11:42 And that explains why you get brain regions.
1:11:45 Because if you want to do some kind of speech processing,
1:11:47 all the neurons that do speech need to talk to each other.
1:11:50 And because neurons can only speak to their nearby neighbors,
1:11:52 for the most part, it has to be a region.
1:11:56 All the regions are mostly located in the same place from person to person.
1:12:00 So maybe evolution hard-coded literally a location on the brain.
1:12:05 So it says, oh, when, you know,
1:12:09 the GPS coordinates of the brain, such and such,
1:12:10 when that fires, that’s what you should care about.
1:12:12 Like maybe that’s what evolution did.
1:12:14 Because that would be within the toolkit of evolution.
1:12:15 Yeah.
1:12:18 Although there are examples where, for example,
1:12:21 people who are born blind have that area of their cortex
1:12:25 adopted by another sense.
1:12:33 And I have no idea, but I’d be surprised if the desires or the reward functions,
1:12:37 which require visual signal, no longer worked.
1:12:40 You know, people who have their different areas of their cortex co-opted.
1:12:44 For example, if you no longer have vision,
1:12:49 can you still feel the sense that I want people around me to like me and so forth,
1:12:51 which usually there’s also visual cues for.
1:12:53 So I actually fully agree with that.
1:12:55 I think there's an even stronger counterargument to this theory,
1:12:58 which is like, if you think about people,
1:13:04 so there are people who get half of their brains removed in childhood.
1:13:07 And they still have all their brain regions,
1:13:09 but they all somehow move to just one hemisphere,
1:13:11 which suggests that the location
1:13:13 of the brain regions is not fixed.
1:13:15 And so that theory is not true.
1:13:16 It would have been cool if it was true,
1:13:18 but it’s not.
1:13:19 And so I think that’s a mystery,
1:13:20 but it’s an interesting mystery.
1:13:25 Like the fact is, somehow evolution was able to endow us
1:13:28 with caring about social stuff very, very reliably.
1:13:32 And even people who have like all kinds of strange mental conditions
1:13:35 and deficiencies and emotional problems tend to care about this also.
1:13:38 What is SSI planning on doing differently?
1:13:41 So presumably your plan is to be one of the frontier companies
1:13:43 when this time arrives.
1:13:45 And then,
1:13:49 presumably you started SSI because you’re like,
1:13:52 I think I have a way of approaching how to do this safely
1:13:54 in a way that the other companies don’t.
1:13:56 What is that difference?
1:13:58 So the way I would describe it is:
1:14:02 there are some ideas that I think are promising
1:14:04 and I want to investigate them
1:14:07 and see if they are indeed promising or not.
1:14:08 It’s really that simple.
1:14:09 It’s an attempt.
1:14:12 I think that
1:14:14 these ideas that we discussed around
1:14:17 understanding generalization,
1:14:20 if these ideas turn out to be correct,
1:14:25 then I think we will have something worthy.
1:14:27 Will they turn out to be correct?
1:14:28 We are doing research.
1:14:31 We are squarely an age of research company.
1:14:33 We are making progress.
1:14:35 We’ve actually made quite good progress over the past year,
1:14:37 but we need to keep making more progress,
1:14:38 more research.
1:14:40 And that’s how I see it.
1:14:43 I see it as
1:14:48 an attempt to be a voice and a participant.
1:14:54 Your co-founder and previous CEO
1:14:57 left to go to Meta recently.
1:14:59 And people have asked,
1:14:59 well,
1:15:02 if there were a lot of breakthroughs being made,
1:15:04 that seems like a thing that should have been unlikely.
1:15:05 I wonder how you respond.
1:15:08 For this,
1:15:10 I will simply remind people of a few facts
1:15:13 that may have been forgotten.
1:15:15 And I think these facts,
1:15:16 which provide the context,
1:15:17 explain the situation.
1:15:21 So the context was that we were fundraising
1:15:23 at a 32 billion valuation.
1:15:28 And then Meta came in
1:15:29 and offered to acquire us.
1:15:32 And I said,
1:15:32 no,
1:15:35 but my former co-founder,
1:15:37 in some sense,
1:15:38 said yes.
1:15:40 And as a result,
1:15:42 he also was able to enjoy
1:15:43 a lot of near-term liquidity.
1:15:47 And he was the only person from SSI to join Meta.
1:15:50 It sounds like SSI’s plan is to be a company
1:15:51 that is at the frontier
1:15:52 when you get to this
1:15:55 very important period in human history
1:15:57 where you have superhuman intelligence
1:15:59 and you have these ideas
1:16:01 about how to make superhuman intelligence go well.
1:16:04 But other companies will be trying their own ideas.
1:16:08 What distinguishes SSI’s approach
1:16:10 to making superintelligence go well?
1:16:14 The main thing that distinguishes SSI
1:16:16 is its technical approach.
1:16:19 So we have a different technical approach
1:16:20 that I think is worthy.
1:16:23 And we are pursuing it.
1:16:26 I maintain that in the end,
1:16:28 there will be a convergence of strategies.
1:16:31 So I think there will be a convergence of strategies
1:16:33 where at some point,
1:16:36 as AI becomes more powerful,
1:16:39 it’s going to become more or less clearer
1:16:41 to everyone what the strategy should be.
1:16:43 And it should be something like,
1:16:43 yeah,
1:16:46 you need to find some way to talk to each other.
1:16:50 And you want your first actual,
1:16:51 like real superintelligent AI
1:16:55 to be aligned and somehow,
1:16:58 you know,
1:17:00 care for sentient life,
1:17:01 care for people,
1:17:02 be democratic,
1:17:03 one of those,
1:17:04 or some combination thereof.
1:17:08 And I think this is the condition
1:17:12 that everyone should strive for.
1:17:14 And that's what SSI is striving for.
1:17:18 And I think that by that time,
1:17:19 if not already,
1:17:21 all the other companies will realize
1:17:22 that they’re striving towards the same thing.
1:17:23 And we’ll see.
1:17:25 I think that the world will truly change
1:17:26 as AI becomes more powerful.
1:17:29 And regarding a lot of these forecasts,
1:17:29 like,
1:17:32 I think things will be really different
1:17:34 and people will be acting really differently.
1:17:36 Speaking of forecasts,
1:17:37 what are your forecasts for
1:17:39 this system you’re describing,
1:17:41 which can learn as well as a human
1:17:44 and subsequently,
1:17:45 as a result,
1:17:45 become superhuman?
1:17:48 I think like five to 20.
1:17:50 Five to 20 years?
1:17:50 Mm-hmm.
1:17:53 So I just want to unroll
1:17:56 how you might see the world coming.
1:17:56 It’s like,
1:17:58 we have a couple more years
1:17:59 where these other companies
1:18:01 are continuing the current approach
1:18:02 and it stalls out.
1:18:03 And stalling out here,
1:18:05 meaning they earn no more than
1:18:07 low hundreds of billions in revenue.
1:18:08 Or how do you think about
1:18:09 what stalling out means?
1:18:10 Yeah.
1:18:14 I think it could stall out.
1:18:18 And I think stalling out will look
1:18:20 very similar.
1:18:21 Yeah.
1:18:23 Among all the different companies,
1:18:24 something like this.
1:18:25 I’m not sure because I think,
1:18:25 I think,
1:18:26 I think even with,
1:18:27 I think even,
1:18:29 I think even with stalling out,
1:18:30 I think these companies could make
1:18:31 stupendous,
1:18:32 stupendous revenue.
1:18:34 Maybe not profits
1:18:34 because they will
1:18:37 need to work hard
1:18:38 to differentiate themselves
1:18:38 from each other.
1:18:40 But revenue definitely.
1:18:44 But something in your model implies that
1:18:48 when the correct solution does emerge,
1:18:49 there will be convergence
1:18:50 between all the companies.
1:18:51 And I’m curious why you think
1:18:52 that’s the case.
1:18:53 Well,
1:18:54 I was talking more about convergence
1:18:55 on their largest strategies.
1:18:57 I think eventual convergence
1:18:58 on the technical approach
1:18:59 is probably going to happen as well.
1:19:02 But I was alluding to convergence
1:19:03 on the largest strategies.
1:19:05 What exactly is the thing
1:19:05 that should be done?
1:19:07 I just want to better understand
1:19:09 how you see the future unrolling.
1:19:10 So currently we have
1:19:11 these different companies
1:19:12 and you expect their approach
1:19:13 to continue generating revenue.
1:19:13 Yes.
1:19:15 But not get to this human-like learner.
1:19:16 Yes.
1:19:18 So now we have these different
1:19:18 forks of companies.
1:19:19 We have you,
1:19:20 we have Thinking Machines,
1:19:21 there’s a bunch of other labs.
1:19:22 Yes.
1:19:24 And maybe one of them figures out
1:19:25 the correct approach.
1:19:27 But then the release of the product
1:19:28 makes it clear to other people
1:19:29 how to do this thing.
1:19:32 I think it won’t be clear
1:19:33 how to do it,
1:19:33 but it will be clear
1:19:35 that something different is possible.
1:19:35 Right.
1:19:36 And that is information.
1:19:38 And I think people will
1:19:41 then be trying to figure out
1:19:42 how that works.
1:19:44 I do think, though,
1:19:46 that one of the things that's,
1:19:48 you know,
1:19:49 not addressed here,
1:19:51 not discussed, is that
1:19:54 with each increase
1:19:56 in the AI’s capabilities,
1:19:57 I think there will be
1:19:59 some kind of changes
1:20:01 in how things are being done,
1:20:03 but I don't know exactly which ones.
1:20:04 And so like,
1:20:07 I think it’s going to be important,
1:20:08 yet I can’t spell out
1:20:09 what that is exactly.
1:20:10 And
1:20:13 by default,
1:20:14 you would expect the
1:20:16 model company
1:20:16 that has that model
1:20:18 to be getting all these gains
1:20:19 because they have the model
1:20:20 that is learning how to do everything,
1:20:22 that has the skills and knowledge
1:20:24 that it's building up in the world.
1:20:26 What is the reason to think
1:20:27 that the benefits of that
1:20:28 would be widely distributed
1:20:29 and not just end up at
1:20:30 whatever model company
1:20:31 gets this continuous learning
1:20:33 loop going first?
1:20:34 Like,
1:20:36 here is what I think
1:20:39 is going to happen.
1:20:40 Number one,
1:20:41 empirically,
1:20:46 let's look at
1:20:48 how things
1:20:49 have gone so far
1:20:51 with the AIs of the past.
1:20:52 So one company
1:20:53 produced an advance
1:20:55 and the other company
1:20:55 scrambled
1:20:58 and produced some competitive,
1:21:00 some similar things
1:21:02 after some amount of time,
1:21:03 and they started to compete
1:21:04 in the market
1:21:06 and
1:21:08 push the prices down.
1:21:09 And so I think
1:21:10 from the market perspective,
1:21:12 I think something similar
1:21:13 will happen there as well.
1:21:14 Even if someone...
1:21:15 okay, we are talking about
1:21:16 the good world,
1:21:17 by the way.
1:21:20 What's the good world?
1:21:25 The good world is where we have these
1:21:26 powerful,
1:21:27 human-like learners.
1:21:31 And by the way,
1:21:32 maybe there's another thing
1:21:33 we haven't discussed
1:21:35 on the spec of the
1:21:36 superintelligent AI
1:21:38 that I think is
1:21:39 worth considering,
1:21:40 which is that you can make it
1:21:43 useful
1:21:43 and narrow
1:21:44 at the same time.
1:21:45 So you can have lots of
1:21:46 narrow superintelligent AIs,
1:21:48 but suppose you have
1:21:49 many of them
1:21:52 and you have some company
1:21:53 that's producing
1:21:54 a lot of
1:21:56 profits from it
1:21:58 and then you have
1:21:58 another company
1:21:59 that comes in
1:22:00 and starts to compete
1:22:02 and the way the competition
1:22:03 is going to work
1:22:04 is through specialization.
1:22:06 I think what's going to happen
1:22:06 is that,
1:22:12 like, competition
1:22:13 loves specialization,
1:22:15 and you see it
1:22:15 in the market,
1:22:16 you see it in evolution
1:22:17 as well.
1:22:17 So you’re going to have
1:22:18 lots of different niches
1:22:19 and you’re going to have
1:22:20 lots of different companies
1:22:21 who are occupying
1:22:22 different niches
1:22:26 in this kind of world
1:22:26 where you might say,
1:22:26 yeah,
1:22:28 like one AI company
1:22:30 is really quite a bit better
1:22:31 at some area
1:22:32 of really complicated
1:22:33 economic activity
1:22:34 and a different company
1:22:35 is better at another area
1:22:37 and a third company
1:22:38 is really good at litigation
1:22:38 and that's who
1:22:38 you want to go to.
1:22:39 But is this contradicted
1:22:40 by what human-like learning
1:22:41 implies,
1:22:42 which is that it can learn?
1:22:43 It can,
1:22:45 but you have
1:22:45 accumulated learning,
1:22:47 you have a big investment,
1:22:49 you spent a lot of compute
1:22:50 to become really,
1:22:50 really,
1:22:51 really good,
1:22:52 really phenomenal
1:22:53 at this thing
1:22:54 and someone else
1:22:56 spent a huge amount
1:22:56 of compute
1:22:56 and a huge amount
1:22:57 of experience
1:22:57 to get really,
1:22:57 really good
1:22:58 at some other thing.
1:22:59 You applied a lot
1:23:00 of human-like learning
1:23:00 to get there,
1:23:02 but now you are
1:23:04 at this high point
1:23:06 where someone else
1:23:06 would say,
1:23:06 look,
1:23:07 like I don’t want
1:23:08 to start learning
1:23:08 what you’ve learned
1:23:09 to do for this.
1:23:09 I guess that would require
1:23:10 many different companies
1:23:12 to arrive at the human-like
1:23:14 continual learning agent
1:23:15 at the same time
1:23:17 so that they can start
1:23:18 their different research
1:23:20 in different branches
1:23:21 but if one company,
1:23:24 you know,
1:23:25 gets that agent first
1:23:26 or gets that learner first,
1:23:29 it does then seem like,
1:23:30 well, you know,
1:23:32 if we just think
1:23:33 about every single job
1:23:33 in the economy,
1:23:37 you could just have
1:23:38 an instance learning
1:23:38 each one,
1:23:39 which seems tractable
1:23:40 for one company.
1:23:40 Yeah,
1:23:42 that’s a valid argument.
1:23:44 My strong intuition
1:23:45 is that it’s not
1:23:45 how it’s going to go.
1:23:48 My strong intuition
1:23:48 is that,
1:23:49 yeah, the argument
1:23:50 says it will go this way,
1:23:52 but my strong intuition
1:23:53 is that it will not go this way.
1:23:55 That this is the, you know,
1:23:57 "in theory,
1:23:58 there is no difference
1:23:59 between theory and practice;
1:24:00 in practice, there is,"
1:24:01 and I think that's going
1:24:01 to be one of those.
1:24:03 A lot of people’s models
1:24:04 of recursive self-improvement
1:24:06 literally explicitly state
1:24:07 we will have
1:24:09 a million Ilyas
1:24:10 in a server
1:24:10 that are coming in
1:24:11 with different ideas
1:24:12 and this will lead
1:24:13 to a super intelligence
1:24:14 emerging very fast.
1:24:15 Do you have some intuition
1:24:16 about how parallelizable
1:24:18 the thing you are doing is?
1:24:19 How,
1:24:20 what are the gains
1:24:22 from making copies of Ilya?
1:24:24 I don’t know.
1:24:28 I think there will definitely
1:24:29 be
1:24:30 diminishing returns
1:24:31 because you want
1:24:33 people who think differently
1:24:34 rather than the same.
1:24:35 I think that if they were
1:24:36 little copies of me,
1:24:37 I’m not sure how much
1:24:39 more incremental value
1:24:39 you’d get.
1:24:43 But people who think
1:24:44 differently,
1:24:45 that's what you want.
1:24:47 Why is it that,
1:24:48 if you look at different
1:24:49 models even released
1:24:50 by totally different
1:24:52 companies trained on
1:24:54 potentially non-overlapping
1:24:55 data sets,
1:24:56 it’s actually crazy how
1:24:58 similar LLMs are
1:24:58 to each other.
1:24:59 Maybe the data sets
1:25:00 are not as non-overlapping
1:25:01 as it seems.
1:25:04 But there's some sense
1:25:05 that,
1:25:06 even if an individual human
1:25:07 might be less productive
1:25:08 than the future AI,
1:25:08 maybe there’s something
1:25:09 to the fact that human teams
1:25:10 have more diversity
1:25:12 than teams of AIs might have,
1:25:13 but how do we elicit
1:25:14 meaningful diversity
1:25:16 among AIs?
1:25:17 So I think just raising
1:25:17 the temperature
1:25:19 results in gibberish.
1:25:20 I think you want something
1:25:20 more like
1:25:21 different scientists
1:25:23 have different prejudices
1:25:24 or different ideas.
1:25:25 How do you get that kind of
1:25:27 diversity among AI agents?
1:25:28 So the reason there has
1:25:29 been no diversity,
1:25:31 I believe,
1:25:32 is because of pre-training.
1:25:34 All the pre-trained models
1:25:35 are the same,
1:25:36 pretty much,
1:25:38 because they are pre-trained
1:25:39 on the same data.
1:25:41 Now, RL and post-training
1:25:42 is where some differentiation
1:25:43 starts to emerge
1:25:44 because different people
1:25:46 come up with different
1:25:47 RL training.
1:25:48 Yeah.
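
(To make the temperature point concrete, here is a minimal Python sketch. It is an editorial illustration, not something from the conversation: the toy vocabulary, the logit values, and the persona prompts are invented for the example. It shows why raising the sampling temperature mostly flattens the next-token distribution toward noise, while conditioning the same model on different prompts, that is, different priors, is a more structured source of diversity.)

    import math, random

    def sample_with_temperature(logits, temperature):
        # Temperature-scaled softmax: small T is nearly greedy, large T is nearly uniform.
        scaled = [x / temperature for x in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        idx = random.choices(range(len(logits)), weights=probs, k=1)[0]
        return idx, probs

    # Toy next-token scores over a tiny vocabulary (illustrative numbers only).
    vocab = ["the", "neuron", "banana", "entropy", "zzz"]
    logits = [3.0, 2.5, 0.1, 0.0, -2.0]

    for T in (0.7, 1.0, 5.0):
        _, probs = sample_with_temperature(logits, T)
        print("T=" + str(T) + ": " + ", ".join(w + "=" + format(p, ".2f") for w, p in zip(vocab, probs)))
    # At T=5.0 the distribution is much flatter, which is where the gibberish comes from.

    # A different lever: the same model conditioned on different priors (hypothetical prompts).
    personas = [
        "You believe progress comes from better value functions.",
        "You believe progress comes from sample-efficient continual learning.",
        "You believe progress comes from self-play and adversarial setups.",
    ]

    def propose_idea(generate, persona, problem):
        # 'generate' is a stand-in for any text-generation callable, not a real API.
        return generate(persona + "\n" + problem)

(The contrast is the point: temperature injects unstructured noise, whereas different priors, or different RL and post-training in the conversation's terms, change which hypotheses an agent favors.)
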
1:25:50 And then I’ve heard you
1:25:50 hint in the past
1:25:52 about self-play
1:25:53 as a way to
1:25:54 either get data
1:25:55 or match agents
1:25:57 to other agents
1:25:58 with equivalent intelligence
1:26:00 to kick off learning.
1:26:01 How should we think about
1:26:03 why
1:26:04 there’s no
1:26:05 public
1:26:07 proposals
1:26:08 of this kind of thinking
1:26:09 working with LLMs?
1:26:10 I would say there are
1:26:11 two things to say.
1:26:13 I would say that
1:26:14 the reason why I thought
1:26:15 self-play was interesting
1:26:17 is because
1:26:18 it offered a way
1:26:20 to create models
1:26:21 using compute
1:26:22 only without data.
1:26:23 Right?
1:26:24 And if you think
1:26:24 that data is the
1:26:25 ultimate bottleneck,
1:26:27 then using compute
1:26:28 only is very interesting.
1:26:29 so that’s
1:26:30 what makes it
1:26:30 interesting.
1:26:31 Now,
1:26:32 the
1:26:34 thing is
1:26:37 that self-play,
1:26:38 at least the way
1:26:40 it was done
1:26:40 in the past,
1:26:41 when you have agents
1:26:42 which somehow
1:26:43 compete with each other,
1:26:44 it’s only good
1:26:45 for developing
1:26:46 a certain set of skills.
1:26:47 It is too narrow.
1:26:49 It’s only good
1:26:49 for like
1:26:50 negotiation,
1:26:52 conflict,
1:26:54 certain social skills,
1:26:56 strategizing,
1:26:57 that kind of stuff.
1:26:58 And so if you care
1:26:59 about those skills,
1:27:00 then self-play
1:27:00 will be useful.
1:27:01 Now,
1:27:02 actually,
1:27:03 I think that self-play
1:27:05 did
1:27:07 find a home,
1:27:08 but just in a
1:27:09 different form.
1:27:11 So, things like
1:27:12 debate,
1:27:14 prover-verifier,
1:27:16 you have some kind
1:27:17 of an LLM
1:27:17 as a judge
1:27:18 which is also
1:27:19 incentivized to find
1:27:20 mistakes in your work.
1:27:21 You could say
1:27:22 this is not exactly
1:27:22 self-play,
1:27:23 but this is,
1:27:24 you know,
1:27:25 a related adversarial
1:27:25 setup that people
1:27:26 are doing,
1:27:26 I believe.
1:27:27 And really,
1:27:28 self-play
1:27:30 is a special case
1:27:31 of more general
1:27:33 competition
1:27:34 between agents.
1:27:35 Right?
1:27:37 The natural response
1:27:37 to competition
1:27:38 is to try to be different.
1:27:39 And so if you were
1:27:41 to put multiple agents
1:27:42 and you tell them,
1:27:42 you know,
1:27:43 you all need to
1:27:44 work on some problem
1:27:45 and you’re an agent
1:27:47 and you’re inspecting
1:27:48 what everyone else
1:27:48 is working on,
1:27:49 you’re going to say,
1:27:50 well,
1:27:51 if they’re already
1:27:52 taking this approach,
1:27:53 it’s not clear
1:27:54 I should pursue it.
1:27:54 I should pursue
1:27:55 something differentiated.
1:27:57 And so I think
1:27:58 that something like this
1:27:58 could also create
1:27:59 an incentive
1:28:01 for a diversity
1:28:01 of approaches.
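
(For readers who want the prover-verifier, LLM-as-a-judge idea in concrete form, here is a minimal sketch. It is an editorial illustration under stated assumptions, not SSI's method or any lab's published recipe: 'generate' stands in for any text-generation function, and the prompts and the NO_MISTAKES convention are invented for the example. One agent drafts a solution, an adversarial judge is asked to find mistakes, and the drafter revises until the judge stops finding problems.)

    from typing import Callable

    Generate = Callable[[str], str]  # stand-in for any LLM text-generation function

    def prover_verifier_loop(generate: Generate, problem: str, max_rounds: int = 3) -> str:
        # Adversarial refinement: a prover drafts, a verifier is incentivized to find mistakes.
        solution = generate("Solve the following problem step by step:\n" + problem)
        for _ in range(max_rounds):
            critique = generate(
                "You are an adversarial verifier. Find concrete mistakes in this solution.\n"
                "Problem:\n" + problem + "\nSolution:\n" + solution + "\n"
                "Reply NO_MISTAKES only if you truly find none."
            )
            if "NO_MISTAKES" in critique:
                break  # the judge could not find a flaw; stop refining
            solution = generate(
                "Revise the solution to address these criticisms.\n"
                "Problem:\n" + problem + "\nSolution:\n" + solution + "\nCriticisms:\n" + critique
            )
        return solution

    # Usage with any backend, e.g.:
    # best = prover_verifier_loop(my_model, "Prove that the square root of 2 is irrational.")

(The adversarial incentive is the self-play-like ingredient: the verifier is rewarded for finding mistakes, the prover for surviving them, and compute partly substitutes for labeled data.)
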
1:28:04 Final question.
1:28:07 What is research taste?
1:28:07 You’re obviously,
1:28:10 the person
1:28:11 in the world
1:28:13 who is considered
1:28:14 to have the best
1:28:16 taste in doing
1:28:17 research in AI.
1:28:18 You were
1:28:21 the co-author
1:28:22 on many of the biggest
1:28:24 things
1:28:24 that have happened
1:28:25 in the history
1:28:25 of deep learning
1:28:26 from AlexNet
1:28:26 to GPT-3
1:28:27 and so on.
1:28:29 How do you characterize
1:28:32 how you come up
1:28:33 with these ideas?
1:28:35 I can comment
1:28:37 on this for myself.
1:28:38 I think different people
1:28:38 do it differently.
1:28:42 But one thing
1:28:44 that guides me
1:28:44 personally
1:28:46 is
1:28:48 an aesthetic
1:28:50 of how AI
1:28:50 should be
1:28:52 by thinking
1:28:53 about how people
1:28:53 are,
1:28:54 but thinking about it
1:28:55 correctly.
1:28:57 it’s very easy
1:28:57 to think about
1:28:58 how people
1:28:59 are incorrectly.
1:29:00 But what does it
1:29:00 mean to think
1:29:01 about people
1:29:01 correctly?
1:29:02 So I’ll give you
1:29:03 some examples.
1:29:05 The idea
1:29:07 of the artificial
1:29:07 neuron
1:29:08 is directly
1:29:09 inspired by the
1:29:09 brain.
1:29:11 And it’s a great
1:29:11 idea.
1:29:11 Why?
1:29:12 Because you say,
1:29:12 sure,
1:29:13 the brain has all
1:29:14 these different
1:29:14 organs,
1:29:15 it has the folds,
1:29:16 but the folds
1:29:17 probably don’t matter.
1:29:18 Why do we think
1:29:18 that the neurons
1:29:19 matter?
1:29:19 Because there are
1:29:20 many of them.
1:29:21 It kind of feels
1:29:22 right,
1:29:23 so you want
1:29:23 the neuron.
1:29:24 You want some
1:29:25 kind of local
1:29:25 learning rule
1:29:26 that will change
1:29:26 the connections
1:29:30 between the
1:29:30 neurons.
1:29:32 Right?
1:29:33 It feels
1:29:34 plausible that
1:29:34 the brain
1:29:34 does it.
1:29:35 The idea
1:29:35 of the
1:29:35 distributed
1:29:36 representation.
1:29:38 The idea
1:29:39 that the
1:29:39 brain,
1:29:41 you know,
1:29:41 the brain
1:29:42 responds to
1:29:42 experience,
1:29:42 our neural net
1:29:43 should learn
1:29:44 from experience,
1:29:44 not just respond.
1:29:44 The brain
1:29:45 learns from
1:29:45 experience,
1:29:47 the neural
1:29:48 net should
1:29:48 learn from
1:29:48 experience.
1:29:50 And you
1:29:50 kind of ask
1:29:50 yourself,
1:29:51 is something
1:29:52 fundamental or
1:29:52 not fundamental?
1:29:53 How should things
1:29:54 be?
1:29:55 And I think
1:29:56 that’s been
1:29:57 guiding me a
1:29:57 fair bit,
1:29:58 kind of thinking
1:30:00 from multiple
1:30:00 angles and
1:30:01 looking for
1:30:02 almost beauty,
1:30:03 simplicity,
1:30:04 ugliness.
1:30:05 There’s no room
1:30:05 for ugliness.
1:30:06 It’s just
1:30:06 beauty,
1:30:07 simplicity,
1:30:09 elegance,
1:30:10 correct inspiration
1:30:10 from the brain.
1:30:11 And all of
1:30:12 those things need
1:30:12 to be present
1:30:13 at the same
1:30:13 time.
1:30:14 And the more
1:30:15 they are present,
1:30:16 the more confident
1:30:17 you can be in a
1:30:17 top-down belief.
1:30:19 And then the
1:30:20 top-down belief
1:30:20 is the thing
1:30:21 that sustains
1:30:21 you,
1:30:22 when the
1:30:23 experiments
1:30:24 contradict you.
1:30:25 Because if you
1:30:25 just trust the
1:30:26 data all the
1:30:26 time,
1:30:27 well,
1:30:28 sometimes you
1:30:28 can be doing
1:30:28 a correct
1:30:29 thing,
1:30:29 but there’s
1:30:29 a bug.
1:30:30 But you don’t
1:30:31 know that there
1:30:31 is a bug.
1:30:31 How can you
1:30:32 tell that there
1:30:32 is a bug?
1:30:34 How do you
1:30:34 know if you
1:30:34 should keep
1:30:35 debugging or you
1:30:36 conclude it’s the
1:30:37 wrong direction?
1:30:37 Well,
1:30:38 it’s the top-down.
1:30:38 Well,
1:30:39 you can say:
1:30:40 the things have
1:30:41 to be this
1:30:41 way,
1:30:42 something like
1:30:42 this has to
1:30:43 work,
1:30:44 therefore,
1:30:44 we got to keep
1:30:45 going.
1:30:46 That’s the top-down.
1:30:47 And it’s based
1:30:47 on this
1:30:49 multifaceted
1:30:50 beauty and
1:30:51 inspiration by
1:30:51 the brain.
1:30:52 All right.
1:30:54 We’ll leave it
1:30:54 there.
1:30:54 Thank you so
1:30:55 much.
1:30:55 Thank you so
1:30:55 much.
1:30:58 All right.
1:30:58 Appreciate it.
1:30:59 That was great.
1:30:59 Yeah.
1:31:00 I enjoyed it.
1:31:00 Yes,
1:31:01 me too.
1:31:05 Thanks for
1:31:05 listening to this
1:31:06 episode of the
1:31:07 A16Z Podcast.
1:31:08 If you liked this
1:31:09 episode,
1:31:09 be sure to like,
1:31:10 comment,
1:31:11 subscribe,
1:31:12 leave us a rating
1:31:12 or a review,
1:31:13 and share it with
1:31:14 your friends and
1:31:14 family.
1:31:16 For more episodes,
1:31:16 go to YouTube,
1:31:18 Apple Podcasts,
1:31:18 and Spotify.
1:31:19 Follow us on
1:31:21 X at A16Z,
1:31:22 and subscribe
1:31:22 to our
1:31:23 Substack at
1:31:25 a16z.substack.com.
1:31:26 Thanks again for
1:31:26 listening,
1:31:27 and I’ll see you
1:31:28 in the next episode.
1:31:30 This information is
1:31:31 for educational purposes
1:31:32 only and is not a
1:31:33 recommendation to buy,
1:31:35 hold, or sell any
1:31:36 investment or financial
1:31:36 product.
1:31:37 This podcast has been
1:31:38 produced by a third
1:31:39 party and may include
1:31:40 paid promotional
1:31:40 advertisements,
1:31:41 other company
1:31:42 references,
1:31:42 and individuals
1:31:43 unaffiliated with
1:31:44 A16Z.
1:31:45 Such advertisements,
1:31:46 companies, and
1:31:47 individuals are not
1:31:48 endorsed by AH
1:31:48 Capital Management,
1:31:50 LLC, A16Z,
1:31:51 or any of its
1:31:51 affiliates.
1:31:53 Information is from
1:31:53 sources deemed
1:31:54 reliable on the
1:31:55 date of publication,
1:31:56 but A16Z does not
1:31:57 guarantee its
1:31:57 accuracy.

AI models feel smarter than their real-world impact. They ace benchmarks, yet still struggle with reliability, strange bugs, and shallow generalization. Why is there such a gap between what they can do on paper and in practice?

In this episode from The Dwarkesh Podcast, Dwarkesh talks with Ilya Sutskever, cofounder of SSI and former OpenAI chief scientist, about what is actually blocking progress toward AGI. They explore why RL and pretraining scale so differently, why models outperform on evals but underperform in real use, and why human style generalization remains far ahead.

Ilya also discusses value functions, emotions as a built-in reward system, the limits of pretraining, continual learning, superintelligence, and what an AI driven economy could look like.

 

Resources:

Transcript: https://www.dwarkesh.com/p/ilya-sutsk…

Apple Podcasts: https://podcasts.apple.com/us/podcast…

Spotify: https://open.spotify.com/episode/7naO…

 

Stay Updated:

If you enjoyed this episode, be sure to like, subscribe, and share with your friends!

Find a16z on X: https://x.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX

Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

 


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
