AI transcript
0:00:05 The following is a conversation with Demis Hassabis, his second time on the podcast.
0:00:11 He is the leader of Google DeepMind and is now a Nobel Prize winner.
0:00:16 Demis is one of the most brilliant and fascinating minds in the world today,
0:00:24 working on understanding and building intelligence, and exploring the big mysteries of our universe.
0:00:28 This was truly an honor and a pleasure for me.
0:00:31 And now, a quick few-second mention of each sponsor.
0:00:35 Check them out in the description or at lexfridman.com slash sponsors.
0:00:38 It’s the best way to support this podcast.
0:00:42 We’ve got Hampton for connecting with founders and CEOs,
0:00:47 Fin for AI customer service, Shopify for building e-commerce businesses,
0:00:52 LMNT for daily electrolytes, and AG1 for your health.
0:00:53 Choose wisely, my friends.
0:00:55 And now on to the full ad reads.
0:00:58 I do try to make them interesting, but if you must skip, friends,
0:01:00 please still check out our sponsors.
0:01:01 I enjoy their stuff.
0:01:02 Maybe you will too.
0:01:07 And also, to get in touch with me, for whatever reason, go to lexfridman.com slash contact.
0:01:09 All right, let’s go.
0:01:17 This episode is brought to you by Hampton, a private community for high growth founders and CEOs.
0:01:26 That's the interesting thing about starting and running a company, especially one that's growing really quickly, that has to hire a lot and scale a lot.
0:01:28 It’s perhaps a little bit counterintuitive.
0:01:31 But for the founder, it can be deeply lonely.
0:01:34 I suppose that’s one of the reasons they recommend to have a co-founder.
0:01:43 But even outside of that, there’s just a deep loneliness with putting it all on the line, risking everything, knowing that the chances of success are low.
0:01:46 But if you do succeed, the gains are huge.
0:01:48 And you have your heart in it.
0:01:50 You have your dreams in it.
0:01:51 You believe in it.
0:02:01 But also, there’s a constant rollercoaster of fear and doubt and hope and moments of triumph and moments of failure.
0:02:04 All those go back and forth and just it’s a constant psychological turmoil.
0:02:10 Anyway, through all that, it’s just nice to connect with other people that are going through the same thing.
0:02:12 And that’s what Hampton is about.
0:02:18 Every month, it brings eight founders together face-to-face for real conversations about their daily struggles.
0:02:23 Groups are forming in a bunch of places: New York City, Austin, San Francisco, LA, Miami, Denver, and so on.
0:02:33 If you are a founder who’s tired of carrying it all alone, visit joinhampton.com slash lex to see if it’s a fit for you.
0:02:37 That’s joinhampton.com slash lex.
0:02:40 This episode is also brought to you by Fin.
0:02:43 It’s an AI agent for customer service.
0:02:50 So they are focused, laser focused on the customer service application and they want to do that better than anybody else in the world.
0:02:58 In fact, you can measure by the metric of resolutions: when the agent resolves the customer service issue, that's a resolution.
0:03:07 They have a 59% average resolution rate, which makes it the highest performing customer service agent on the market.
0:03:14 It’s trusted by over 5,000 customer service leaders and even top AI companies, including Anthropic.
0:03:23 The way they design the system is it can continuously improve from the interaction so you can continuously analyze, train, test, and deploy.
0:03:30 Also, probably important to say, they give you a 90-day money-back guarantee.
0:03:36 Go to fin.ai slash lex to learn more about transforming your customer service and scaling your support team.
0:03:39 That’s fin.ai slash lex.
0:03:47 This episode is also brought to you by Shopify, a platform designed for anyone to sell anywhere with a great-looking online store.
0:03:52 Even I figured out how to create an online store at lexfridman.com slash store.
0:03:54 I put up a few shirts.
0:03:57 I haven’t done anything with it since because I’m not a serious person.
0:04:02 There’s a lot of serious people that build real businesses on top of Shopify.
0:04:13 It’s a platform that connects you with millions of people that want to buy stuff and gives you all the tools you need and all the integrations you need to do just that at scale.
0:04:22 I talked with DHH about the incredible beauty and power of Ruby on Rails, which Shopify is powered by.
0:04:28 I have not yet built a serious sort of medium-scale project on Rails.
0:04:30 I need to.
0:04:38 It’s just I need to actually find things that I need to do web dev type of stuff with to inspire myself to build something useful.
0:04:45 I don't want to build some weird variant of a to-do list, especially now with the help of LLMs, which can generate so much of the code.
0:04:53 So I need to figure out how to learn a new framework and new programming languages when LLMs can generate so much of it.
0:05:02 And I don’t want to do it exclusively by vibe coding because I feel like that’s not a way to learn fully a thing.
0:05:05 But vibe coding does remove some of the friction of learning.
0:05:07 So balancing that out is a tricky thing to do.
0:05:11 Anyway, that’s about the programming language and the framework that powers Shopify.
0:05:18 But Shopify itself connects buyers and sellers at an incredible scale that's awe-inspiring.
0:05:22 Sign up for a $1 per month trial period at shopify.com slash lex.
0:05:23 That’s all lowercase.
0:05:28 Go to shopify.com slash lex to take your business to the next level today.
0:05:36 This episode is also brought to you by LMNT, my daily zero-sugar and delicious electrolyte mix.
0:05:41 I’ve been traveling recently and I have a lot of Element packets with me.
0:05:47 And I bring that and I bring bands, whatever you call them.
0:05:48 I don’t know what they’re called.
0:05:51 They’re like rubber bands for like basic shoulder exercises.
0:05:59 So if I have to do a lot of either heavy lifting or heavy jiu-jitsu training, I like to warm up the shoulders really well.
0:06:05 Probably because I have issues with my shoulders from many years of playing tennis and many years of doing bench press stupidly.
0:06:09 Anyway, I think of LMNT as a critical component of my workout routine.
0:06:11 Hydrate before, rehydrate after.
0:06:15 Fully embrace the deliciousness of watermelon salt flavor.
0:06:17 The flavor of champions, the one I recommend.
0:06:19 It’s been quite a while since I tried the others.
0:06:20 They’re all good.
0:06:23 But for me, I’m a man of focus and dedication.
0:06:26 And I’m dedicated to watermelon salt.
0:06:30 I think they have actually, I saw a lemonade flavor.
0:06:32 I think a lot of people love lemonade.
0:06:33 So maybe that’s your thing.
0:06:35 For me, I’m sticking to watermelon salt.
0:06:39 Get a free 8-count sample pack with any purchase.
0:06:42 Try it at drinkLMNT.com slash lex.
0:06:50 This episode was also brought to you by AG1, an all-in-one daily drink to support better health and peak performance.
0:06:51 I travel with it.
0:06:55 It makes me feel like I take a little piece of home with me.
0:06:58 I drink it at least once a day, very often twice a day.
0:07:00 And they keep innovating.
0:07:01 They keep improving it.
0:07:08 They recently introduced AG1 Next Gen, improving every aspect, more vitamins and minerals and upgraded probiotics.
0:07:14 It’s funny how a morning routine can be the source of peace and happiness.
0:07:26 Because I find that if I check my phone at all in the first couple hours of the day, I get this weird anxiety that ultimately morphs into unhappiness.
0:07:30 And if I don’t, I’m much more likely to sort of maintain that deep focus.
0:07:35 And a part of that early in the morning is some coffee or caffeinated drink.
0:07:37 And then a few hours on is AG1.
0:07:41 And it’s just many hours of deep focus in between.
0:07:48 It makes me feel happy, makes me feel at one with the universe, and it helps me get shit done.
0:07:55 Anyway, they’ll give you one month’s supply of fish oil when you sign up at drinkag1.com slash lex.
0:07:58 This is the Lex Fridman Podcast.
0:08:03 To support it, please check out our sponsors in the description or at lexfridman.com slash sponsors.
0:08:09 And consider subscribing, commenting, and sharing the podcast with folks who might find it interesting.
0:08:19 I promise to work extremely hard to always bring you nuanced and long-form conversations with a wide variety of interesting people from all walks of life.
0:08:23 And now, dear friends, here’s Demis Hassabis.
0:08:46 In your Nobel Prize lecture, you propose what I think is a super interesting conjecture that, quote,
0:08:54 any pattern that can be generated or found in nature can be efficiently discovered and modeled by a classical learning algorithm.
0:08:59 What kind of patterns or systems might be included in that?
0:09:04 Biology, chemistry, physics, maybe cosmology, neuroscience?
0:09:06 What are we talking about?
0:09:06 Sure.
0:09:12 Well, look, I felt that it’s sort of a tradition, I think, of Nobel Prize lectures that you’re supposed to be a little bit provocative.
0:09:14 And I wanted to follow that tradition.
0:09:18 What I was talking about there is if you take a step back and you look at all the work that we’ve done,
0:09:20 especially with the Alpha X projects.
0:09:23 So I'm thinking AlphaGo, of course, AlphaFold.
0:09:30 What they really are is models of very combinatorially high-dimensional spaces where, you know,
0:09:36 if you try to brute force a solution, find the best move in Go, or find the exact shape of a protein,
0:09:41 and you enumerated all the possibilities, there wouldn't be enough time in the lifetime of the universe.
0:09:44 So you have to do something much smarter.
0:09:48 And what we did in both cases was build models of those environments,
0:09:53 and that guided the search in a smart way, and that makes it tractable.
0:09:59 So if you think about protein folding, which is obviously a natural system, you know, why should that be possible?
0:10:00 How does physics do that?
0:10:02 You know, proteins fold in milliseconds in our bodies.
0:10:08 So somehow physics solves this problem that we’ve now also solved computationally.
0:10:13 And I think the reason that’s possible is that in nature, natural systems have structure
0:10:18 because they were subject to evolutionary processes that shape them.
0:10:23 And if that’s true, then you can maybe learn what that structure is.
0:10:25 This perspective, I think, is a really interesting one.
0:10:35 You’ve hinted at it, which is almost like crudely stated, anything that can be evolved can be efficiently modeled.
0:10:36 Think there’s some truth to that?
0:10:42 Yeah, I sometimes call it survival of the stablest or something like that because, you know, of course,
0:10:45 there’s evolution for life, living things.
0:10:50 But there’s also, you know, if you think about geological time, so the shape of mountains,
0:10:54 that’s been shaped by weathering processes, right, over thousands of years.
0:10:56 But then you can even take it cosmological.
0:11:02 The orbits of planets, the shapes of asteroids, these have all survived processes
0:11:04 that have acted on them many, many times.
0:11:10 So if that’s true, then there should be some sort of pattern that you can kind of reverse
0:11:16 learn and a kind of manifold really that helps you search to the right solution, to the right
0:11:21 shape, and actually allow you to predict things about it in an efficient way because it’s not
0:11:23 a random pattern, right?
0:11:28 So it may not be possible for man-made things or abstract things like factorizing large numbers
0:11:32 because unless there’s patterns in the number space, which there might be, but if there’s
0:11:35 not and it’s uniform, then there’s no pattern to learn.
0:11:37 There’s no model to learn that will help you search.
0:11:38 You have to do brute force.
0:11:42 So in that case, you know, you maybe need a quantum computer, something like this.
0:11:46 But most things in nature that we're interested in are not like that.
0:11:51 They have structure that evolved for a reason and survived over time.
0:11:54 And if that’s true, I think that’s potentially learnable by a neural network.
0:12:01 It's like nature is doing a search process, and it's so fascinating that in that search
0:12:04 process it's creating systems that can be efficiently modeled.
0:12:06 Yes, right.
0:12:06 Yeah.
0:12:07 So interesting.
0:12:12 So they can be efficiently rediscovered or recovered because nature is not random, right?
0:12:16 These, everything that we see around us, including like the elements that are more
0:12:21 stable, all of those things, they’re subject to some kind of selection process, pressure.
0:12:25 Do you think, because you’re also a fan of theoretical computer science and complexity,
0:12:30 do you think we can come up with a kind of complexity class, like a complexity zoo type
0:12:38 of class where maybe it’s the set of learnable systems, the set of learnable natural systems,
0:12:38 LNS?
0:12:39 Yeah.
0:12:46 This is it, this new class of systems that could actually be learnable by classical systems
0:12:50 in this kind of way, natural systems that can be modeled efficiently.
0:12:57 I mean, I've always been fascinated by the P equals NP question and what is modelable by
0:13:02 classical systems, i.e. non-quantum systems, you know, Turing machines in effect.
0:13:07 And that's exactly what I'm working on actually, in kind of my few moments of spare time with a few
0:13:12 colleagues: should there be, you know, maybe a new class of problem that is solvable by
0:13:18 this type of neural network process, kind of mapped onto these natural systems. So, you know,
0:13:25 the things that exist in physics and have structure. So I think that could be a very interesting new way
0:13:29 of thinking about it. And it sort of fits with the way I think about physics in general, which is that,
0:13:33 you know, I think information is primary. Information is the most sort of fundamental unit
0:13:37 of the universe, more fundamental than energy and matter. I think they can all be converted into
0:13:41 each other. But I think of the universe as a kind of informational system.
0:13:46 So when you think of the universe as an informational system, then the P equals NP question is a,
0:13:48 is a physics question.
0:13:48 That’s right.
0:13:54 And it’s a question that can help us actually solve the entirety of this whole thing going on.
0:13:58 Yeah. I think it’s one of the most fundamental questions, actually, if you think of physics as
0:14:03 informational. And the answer to that, I think it’s going to be, you know, very enlightening.
0:14:10 More specific to the P vs. NP question: again, some of the stuff we're saying sounds kind of crazy
0:14:15 right now, just like the controversial thing Christian Anfinsen said in his Nobel Prize speech
0:14:20 sounded crazy. And then you went and got a Nobel Prize for this with John Jumper, solved the problem.
0:14:26 So let me just stick to P equals NP. Do you think there's something in this thing we're
0:14:35 talking about that could be shown if you can do something like a polynomial time or constant time
0:14:42 compute ahead of time and construct this gigantic model, then you can solve some of these extremely
0:14:46 difficult problems in a theoretical computer science kind of way?
0:14:51 Yeah. I think that there are actually a huge class of problems that could be couched in this way,
0:14:56 the way we did AlphaGo and the way we did AlphaFold, where, you know, you model what the
0:15:01 dynamics of the system is, the properties of that system, the environment that
0:15:07 you’re trying to understand. And then that makes the search for the solution or the prediction of the
0:15:15 next step efficient, basically polynomial time. So tractable by a classical system, which a neural
0:15:20 network is, it runs on normal computers, right? Classical computers, uh, Turing machines in effect.
0:15:26 And, um, I think it’s one of the most interesting questions there is, is how far can that paradigm
0:15:31 go? You know, I think we’ve proven, uh, and the AI community in general, that classical systems,
0:15:36 Turing machines can go a lot further than we previously thought, you know, they can do things
0:15:42 like model the structures of proteins and play go to better than world champion level. And, uh, you know,
0:15:47 a lot of people would have thought maybe 10, 20 years ago, that was decades away, or maybe you
0:15:52 would need some sort of quantum machines, quantum systems, to be able to do things like
0:15:59 protein folding. And so I think we haven’t really, uh, even sort of scratched the surface yet of what,
0:16:06 classical systems, so-called, could do. And of course, AGI being built on a neural
0:16:10 network system on top of a classical computer would be the ultimate
0:16:15 expression of that. And I think the limits, you know, the bounds of what that
0:16:20 kind of system can do, it's a very interesting question, and it directly speaks
0:16:21 to the P equals NP question.
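To make that concrete, here is a minimal, hypothetical sketch of the idea that a learned model makes search through an exponential space tractable. The score function below is a stand-in for a trained network (in AlphaGo's case, the policy and value networks); the numbers and the toy objective are invented for illustration, and none of this is DeepMind's actual code.

```python
import heapq

BRANCHING = 10  # choices per step (in Go it would be ~361)
DEPTH = 8       # brute force would enumerate BRANCHING**DEPTH = 10^8 paths
BEAM = 3        # a learned model lets us keep only a few candidates per step

def score(path):
    """Stand-in for a learned value model: higher means more promising."""
    return -sum((c - 7) ** 2 for c in path)  # pretend 7 is the good choice

def beam_search():
    beam = [((), 0.0)]  # (partial path, model score)
    for _ in range(DEPTH):
        candidates = [
            (path + (c,), score(path + (c,)))
            for path, _ in beam
            for c in range(BRANCHING)
        ]
        # keep only the BEAM most promising; cost per step is BEAM * BRANCHING,
        # so the whole search is polynomial in DEPTH rather than exponential
        beam = heapq.nlargest(BEAM, candidates, key=lambda x: x[1])
    return beam[0][0]

print(beam_search())  # finds (7, 7, ..., 7) in a few hundred evaluations
```

The point is the one made above: the model does not remove the combinatorics, it steers the search so only a tiny, promising sliver of the space is ever visited.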
0:16:28 What do you think, again hypothetically, might be outside of this? Maybe emergent phenomena.
0:16:33 Like if you look at cellular automata, some of the, you have extremely simple systems and then
0:16:39 some complexity emerges, maybe that would be outside or even, would you guess even that might
0:16:43 be amenable to efficient modeling by a classical machine?
0:16:49 Yeah. I think those systems would be right on the boundary, right? So, um, I think most emergent
0:16:53 systems, cellular automata, things like that could be modelable by a classical system. You just sort of
0:16:58 do a forward simulation of it and it’d probably be efficient enough. Um, of course, there’s the question
0:17:04 of things like chaotic systems where the initial conditions really matter. And then you get to some,
0:17:10 you know, uncorrelated end state. Now those could be difficult to model. So I think these are kind of
0:17:15 the open questions, but I think when you step back and look at what we’ve done with the systems and the,
0:17:21 and the problems that we’ve solved, and then you look at things like VO3 on like video generation,
0:17:27 sort of rendering physics and lighting and things like that, you know, really in core fundamental
0:17:31 things in physics. Um, it’s pretty interesting. I think it’s telling us something quite fundamental
0:17:36 about how the universe is structured in my opinion. Um, so, you know, in, in a way that’s what I want
0:17:43 to build AGI for: to help us as scientists answer these questions, like P equals NP.
0:17:48 Yeah. I think we might be continuously surprised about what is modelable by classical computers. I
0:17:55 mean, AlphaFold 3 on the interaction side, it's surprising that you can make any kind of progress
0:18:03 in that direction. AlphaGenome, it's surprising that you can map the genetic code to function, kind of
0:18:07 playing with the emergent kind of phenomena. You'd think there are so many combinatorial options,
0:18:10 and then here you go, you can find the kernel that is efficiently modelable.
0:18:15 Yes. Because there’s some structure, there’s some landscape, you know, in the energy landscape
0:18:19 or whatever it is, that you can follow, some gradient you can follow. And of course, what neural networks
0:18:24 are very good at is following gradients. And so if there's one to follow, an objective, and you can specify
0:18:30 the objective function correctly, you know, you don't have to deal with all that complexity, which I think
0:18:36 is how we maybe have naively thought about it for decades. Those problems, if you just enumerate all
0:18:40 the possibilities, it looks totally intractable and there’s many, many problems like that. And then you
0:18:46 think, well, it's like 10 to the 300 possible protein structures, 10 to the
0:18:51 170 possible Go positions. All of these are way more than atoms in the universe. So how could
0:18:57 one possibly find the right solution or predict the next step? But it turns out that it is
0:19:03 possible. And of course, reality in nature does do it, right? Proteins do fold. So that gives you
0:19:09 confidence that if we understood how physics was doing that, in a sense,
0:19:15 and we could mimic that process, model that process, it should be possible on our classical
0:19:20 systems. That's basically what the conjecture is about. And of course there's nonlinear dynamical
0:19:26 systems, highly nonlinear dynamical systems, everything involving fluid. Yes. Right. You know,
0:19:31 I recently had a conversation with Terence Tao who mathematically, uh, contends with a very difficult
0:19:38 aspect of systems that have some singularities in them that break the mathematics. And it’s just
0:19:42 hard for us humans to make any kind of clean predictions about highly nonlinear dynamical
0:19:48 systems. But again, to your point, we might be very surprised what classical learning systems might
0:19:53 be able to do about even fluid. Yes, exactly. I mean, the fluid dynamics, Navier-Stokes equations,
0:19:58 these are traditionally thought of as very, very difficult, intractable kind of problems to do on
0:20:02 classical systems. They take enormous amounts of compute, you know, weather prediction systems,
0:20:08 you know, these kinds of things all involve fluid dynamics calculations. And, um, but again,
0:20:14 if you look at something like Veo, our video generation model, it can model liquids quite well,
0:20:20 surprisingly well and materials, specular lighting. I love the ones where, you know, there’s, there’s
0:20:24 people who generate videos where there’s like clear liquids going through hydraulic presses and then
0:20:30 being squeezed out. I used to write physics engines and graphics engines in my early days in
0:20:36 gaming. And I know it’s just so painstakingly hard to build programs that can do that. And yet somehow
0:20:43 these systems are, you know, reverse engineering from just watching YouTube videos. So presumably what’s
0:20:49 happening is it’s extracting some underlying structure around how these materials behave.
0:20:55 So perhaps there is some kind of lower dimensional manifold that can be learned if we actually fully
0:21:00 understood what’s going on under the hood. That’s maybe, you know, maybe true of most of reality.
0:21:06 Yeah. I've been continuously surprised precisely by this aspect of Veo 3. I think a lot of people highlight
0:21:11 different aspects, including the comedic and the memes and all that kind of stuff. And then the ultra-
0:21:18 realistic ability to capture humans in a really nice way that’s compelling and feels close to
0:21:23 reality. And then combine that with native audio. All of those are marvelous things about Veo 3, but
0:21:29 then exactly the thing you're mentioning, which is the physics. Yeah. It's not perfect, but it's pretty
0:21:36 damn good. And then the really interesting scientific question is what is it understanding about our world
0:21:43 in order to be able to do that? Because the cynical take with diffusion models is there's no way it understands
0:21:49 anything. But it seems, I mean, I don't think you can generate that kind of video without understanding.
0:21:54 And then our own philosophical notion of what it means to understand then is like brought to the surface.
0:21:58 To what degree do you think Veo 3 understands our world?
0:22:04 I think to the extent that it can predict the next frames, you know, in a coherent way,
0:22:08 that’s some, that is a form, you know, of understanding, right? Not in the anthropomorphic
0:22:13 version of, you know, it’s not some kind of deep philosophical understanding of what’s going on.
0:22:18 I don’t think these systems have that, but they, they certainly have modeled enough of the dynamics,
0:22:24 you know, put it that way that they can pretty accurately generate whatever it is, eight seconds
0:22:29 of consistent video that by eye, at least, you know, at a glance is quite hard to distinguish
0:22:34 what the issues are. And imagine that in two or three more years' time, that's the thing I'm thinking
0:22:38 about, how incredible they will look, given where we've come from, you know,
0:22:44 the early versions of that one or two years ago. And so the rate of progress is
0:22:50 incredible. And I think, like a lot of people, I love all of
0:22:55 the standup comedians, and it actually captures a lot of human dynamics very well,
0:22:59 and body language. But actually the thing I’m most impressed with and fascinated by is the physics
0:23:07 behavior, the lighting and materials and liquids. And it’s pretty amazing that it can do that. And I
0:23:14 think that shows it, that it has some notion of at least intuitive physics, right? Um, how things
0:23:19 are supposed to work, uh, intuitively may be the way that a human child would understand physics,
0:23:25 right? As opposed to, uh, you know, a PhD student really, uh, being able to unpack all the equations.
0:23:31 It’s more of an intuitive physics understanding. Well, that intuitive physics understanding, that’s
0:23:36 the base layer. That's the thing people sometimes call common sense. Again, it really understands
0:23:41 something. I think that really surprised a lot of people. It blows my mind; I just didn't think it
0:23:48 would be possible to generate that level of realism without understanding. There's this notion that you
0:23:54 can only understand the physical world by having an embodied AI system, a robot that interacts with
0:23:59 that world. That's the only way to construct an understanding of that world. Yeah. But Veo 3 is
0:24:01 directly challenging that. Right.
0:24:05 It feels like, yes. And it's very interesting. You know, if you were to ask me five,
0:24:09 ten years ago, even though I was immersed in all of this, I would have said, well,
0:24:13 yeah, you probably need to understand intuitive physics. You know, like if I push this off the
0:24:19 table, this glass, it will maybe shatter, you know, um, and the, and the liquid will spill out.
0:24:23 Right. So we know all of these things, but I thought that, you know, and there’s a lot of theories
0:24:27 in neuroscience is called action in perception where, you know, you, you need to act in the
0:24:32 world to really truly perceive it in a deep way. And there was a lot of theories about you’d need
0:24:38 embodied intelligence or robotics or something, or maybe at least simulated action, uh, so that you
0:24:43 would understand things like intuitive physics. But it seems like, um, you can understand it through
0:24:48 passive observation, which is pretty surprising to me. And, and again, I think hints at something
0:24:54 underlying about the nature of, uh, reality in, in, in my opinion, beyond, um, just the, you know,
0:24:59 the cool videos that it generates. Um, and, and of course there’s next stages is maybe even making
0:25:05 those videos interactive. So, uh, one can actually step into them and move around them, um, which would
0:25:11 be really mind blowing, especially given my games background. So you can imagine, uh, and then,
0:25:14 and then I think, you know, you’re, we’re starting to get towards what I would call a world model,
0:25:19 a model of how the world works, the mechanics of the world, the physics of the world and the things
0:25:23 in that world. And of course, that’s what you would need for a true AGI system.
0:25:29 I have to talk to you about video games. So you, you were being a bit trolly. Uh, I think you’re,
0:25:34 you're having more and more fun on Twitter, on X, which is great to see. So a guy named Jimmy Apples
0:25:41 tweeted: let me play a video game of my Veo 3 videos already. Google cooks so good. Playable world
0:25:47 models wen, spelled W-E-N, question mark. And then you quote-tweeted that with: now,
0:25:53 wouldn't that be something? So how hard is it to build game worlds with AI? Maybe can you look
0:26:00 out into the future, uh, of video games five, 10 years out? What do you think that looks like?
0:26:06 Well, games were my first love really. And doing AI for games was the first thing I did professionally
0:26:13 in my teenage years, and games were the first major AI systems that I built. And I always wanted to,
0:26:18 I want to scratch that itch one day and come back to that. So, you know, and I will do, I think.
0:26:24 And, um, I think I’d sort of dream about, you know, what would I have done back in the nineties if I’d had
0:26:28 access to the kind of AI systems we have today. And I think you could build absolutely mind blowing
0:26:33 games. Um, and I think the next stage is I always used to love making all the games I’ve made
0:26:38 are open world games. So they’re games where there’s a simulation and then there’s AI characters
0:26:44 and then the player, uh, interacts with that simulation and the simulation adapts to the way
0:26:48 the player plays. And I always thought they were the coolest games because, so, games like Theme
0:26:53 Park that I worked on, where everybody's game experience would be unique to them, right? Because
0:26:58 you’re kind of co-creating the game, right? Uh, we set up the parameters, we set up initial conditions
0:27:03 and then you as the player immersed in it. And then you are co-creating it with the, with the
0:27:08 simulation. But of course it’s very hard to program open world games. You know, you’ve got to be able
0:27:13 to create a content, whichever direction the player goes in and you want it to be compelling no matter
0:27:19 what the player chooses. And so it was always quite difficult; we built things like cellular
0:27:23 automata, actually, those kinds of classical systems, which created some emergent behavior.
0:27:28 Um, but they’re always a little bit fragile, a little bit limited. Now we’re maybe on the cusp in
0:27:33 the next few years, five, 10 years of having AI systems that can truly create around your imagination,
0:27:40 um, can sort of dynamically change the story and storytell the narrative around, uh, and make it
0:27:45 dramatic no matter what you end up choosing. So it’s like the ultimate choose your own adventure
0:27:50 sort of game. And, you know, I think maybe we're within reach if you think of a kind of
0:27:56 interactive version of Veo, and then wind that forward five to ten years and, you know,
0:27:57 imagine how good it’s going to be.
0:28:03 Yeah. So you said a lot of super interesting stuff there. So one, the open world has
0:28:08 deep personalization built into it, the way you've described it. So it's not just that it's open
0:28:13 world, but you can open any door and there’ll be something there. It’s that the choice of which
0:28:21 door you open, in an unconstrained way, defines the worlds you see. So some games try to do that.
0:28:27 They give you choice, but it's really just an illusion of choice, because, like
0:28:33 The Stanley Parable, there's really just a couple of doors, and it really just
0:28:37 takes you down the narrative. The Stanley Parable is a great video game; I recommend people play
0:28:44 it. It kind of, in a meta way, mocks the illusion of choice, and there's philosophical
0:28:50 notions of free will and so on. But one of my favorite games of the Elder Scrolls series
0:28:58 is Daggerfall. I believe they really played with random generation of the dungeons
0:29:04 you can step into, and they give you this feeling of an open world. And there you mentioned
0:29:09 interactivity. You don’t need to interact. That’s a first step because you don’t need to interact that
0:29:15 much. You just, when you open the door, whatever you see is randomly generated for you. And that’s
0:29:19 already an incredible experience because you might be the only person to ever see that.
0:29:25 Yeah, exactly. And so what you'd like is a little bit better than just sort of random
0:29:31 generation, right? And also better than a simple A/B hard-coded choice,
0:29:36 right? That’s not really, uh, open world, right? As you say, it’s just giving you the illusion of
0:29:43 choice. What you want to be able to do is potentially anything in that game environment. Um, and I think
0:29:48 the only way you can do that is to have, uh, generated systems, systems that, uh, will generate
0:29:52 that on the fly. Of course, you can’t create infinite amounts of game assets, right? It’s expensive
0:29:58 enough already how AAA games are made today. And that was obvious to, to us back in the nineties,
0:30:03 when I was working on all these games. I think maybe Black & White, the game that I worked on
0:30:08 the early stages of, still probably had the best learning AI in it. It was an early
0:30:13 reinforcement learning system where, you know, you were looking after this mythical creature
0:30:18 and growing it and nurturing it. And depending how you treated it, it would treat the villagers in that
0:30:22 world in the same way. So if you were mean to it, it would be mean. If you were good, it would be
0:30:28 protective. And so it was really a reflection of the way you played it. So actually,
0:30:33 I've been working on simulations and AI through the medium of games since the beginning of
0:30:38 my career. And, and really the whole of what I do today is still a follow on from, uh, those early,
0:30:43 more hard coded ways of doing the AI to now, you know, fully general learning systems that,
0:30:48 that are trying to achieve the same thing. Yeah. It’s been, uh, interesting, hilarious,
0:30:54 and, uh, fun to watch you and Elon, obviously itching to create games because you’re both gamers.
0:31:00 And one of the sad aspects of your, uh, incredible success in so many domains of science,
0:31:07 like serious adult stuff that you might not have time to really create a game. You might end up
0:31:14 creating the tooling with which others create the game, and you have to watch others create the thing
0:31:19 you’ve always dreamed of. Do you think it’s possible you can somehow in your extremely busy
0:31:25 schedule, actually find time to create something like Black & White, an actual video
0:31:32 game where you could make the childhood dream real? Yeah. Well, you know, there's two things
0:31:37 I think about there. One is, maybe with vibe coding as it gets better, there's a possibility that,
0:31:41 you know, one could do that actually in your spare time. So I'm quite excited about that,
0:31:47 as that would be my project. If I got the time to do some vibe coding, um, I’m actually itching to do
0:31:52 that. And then the other thing is, you know, maybe it’s a sabbatical after AGI has been safely
0:31:56 stewarded into the world and delivered into the world, you know, that, and then working on my physics
0:32:02 theory, as we talked about at the beginning. Those would be my two post-AGI projects,
0:32:08 let's call it that. I would love to see which post-AGI project you choose: solving
0:32:14 the problem that some of the smartest people in human history contended with, P equals NP,
0:32:19 or creating a cool video game. Yeah. Well, in my world, they'd be related, because
0:32:26 it would be an open world simulated game, as realistic as possible. So, you know, what
0:32:30 is the universe? That's speaking to the same question, right?
0:32:33 P equals NP. I think all these things are related, at least in my mind.
0:32:38 I mean, in a really serious way, video games are sometimes looked down upon.
0:32:45 That's just this fun side activity. But especially as AI does more and more of the difficult,
0:32:54 boring tasks, something that we in the modern world call work, video games are the thing in
0:32:59 which we may find meaning, in which we may find what to do with our time. You could create
0:33:06 incredibly rich, meaningful experiences. Like that’s what human life is. And then in video games,
0:33:16 you can create more sophisticated, more diverse ways of living. Yeah. I think so. I mean, those of us who
0:33:24 love games, and I still do, you know, you can almost let your imagination run wild, right?
0:33:30 Like I used to love games, and working on games, so much because it's the fusion, especially in the
0:33:36 nineties and early two thousands, the sort of golden era, maybe the eighties too, of the games
0:33:40 industry. And it was all being discovered. New genres were being discovered. We weren’t just making games.
0:33:45 We felt we were, we were creating a new entertainment medium that never existed before, especially with
0:33:49 these open world games and simulation games, where you as the player were co-creating
0:33:55 the story. There's no other entertainment medium where you do that, where you as the audience
0:34:01 actually co-create the story. And of course now with multiplayer games as well, it can be a very social
0:34:07 activity and can explore all kinds of interesting worlds in that. But on the other hand, you know,
0:34:13 it’s very important to, um, also enjoy and experience, uh, the physical world. But the
0:34:17 question is then, you know, I think we’re going to have to kind of confront the question again of what
0:34:21 is the fundamental nature of reality? Uh, what is there going to be the difference between these
0:34:28 increasingly realistic simulations and, uh, multiplayer ones and emergent, um, and what we do in the real
0:34:35 world? Yeah, there’s clearly a huge amount of value to experiencing the real world nature. There’s
0:34:40 also a huge amount of value in experiencing other humans directly in person, the way we’re sitting
0:34:47 here today. But we need to really scientifically rigorously answer the question, why? Yeah. And
0:34:53 which aspect of that can be mapped into the virtual world? Exactly. And it’s not, it’s not enough to say,
0:34:59 yeah, you should go touch grass and hang out in nature. It’s like, why exactly is that valuable?
0:35:04 Yes. And I guess that’s maybe the thing that’s been, uh, haunting me, obsessing me from the beginning of
0:35:07 my career. If you think about all the different things I’ve done, that’s, they’re all related in
0:35:13 that way. The simulation, nature of reality, and what is the bounds of, you know, what can be modeled?
0:35:18 Sorry for the ridiculous question, but so far, what is the greatest video game of all time? What’s up
0:35:23 there? Well, my favorite one of all time is Civilization. I have to say, those were
0:35:29 Civilization I and Civilization II, my favorite games of all time. I can only assume
0:35:35 you've avoided the most recent one, because that would probably be your sabbatical;
0:35:40 you would disappear. Yes, exactly. They take a lot of time, these Civilization games. So,
0:35:47 uh, I’ve got to be careful with them. Fun question. You and Elon seem to be somehow solid gamers. Uh,
0:35:53 is there a connection between being great at gaming and, and, uh, being great leaders of AI companies?
0:35:59 I don’t know. I, it’s an interesting one. I mean, uh, we both love games and, uh, it’s interesting. He
0:36:04 wrote games as well to start off with. It’s probably, especially in the era I grew up in where home
0:36:09 computers were just became a thing, you know, in the late eighties and nineties, especially in the UK,
0:36:13 I had a spectrum and then a Commodore Omega 500, which is my favorite computer ever.
0:36:19 And that’s why I learned all my programming. And of course, it’s a very fun thing, uh, to program is
0:36:26 to program games. So I think it’s a great way to learn programming probably still is. And, um, and then
0:36:31 of course I immediately took it in directions of AI and simulations, which, so I may, it was able to
0:36:38 express my interest in, in games and my sort of wider scientific interests altogether. And then the
0:36:45 final thing I think that’s great about games is it fuses, um, artistic design, you know, art with the,
0:36:51 the, the most cutting edge programming. Um, so again, in the nineties, all of the most interesting,
0:36:57 uh, technical advances were happening in gaming, whether that was AI graphics, physics engines,
0:37:02 uh, hardware, even GPUs, of course, were designed for gaming originally. Um, so everything that was
0:37:09 pushing computing forward in the, in the nineties was due to gaming. So interestingly, that was where
0:37:15 the forefront of research was going on. And it was this incredible fusion with, with art, um, you know,
0:37:21 graphics, but also music, and just this whole new medium of storytelling. And I love that; for me, that
0:37:26 sort of multidisciplinary kind of effort is, again, something I've enjoyed my whole life.
0:37:32 I have to ask you, I almost forgot about one of the many, and I would say one of the most
0:37:37 incredible, things recently that somehow didn't yet get enough attention: AlphaEvolve.
0:37:43 We talked about evolution a little bit; it's the Google DeepMind system that evolves algorithms.
0:37:48 Yeah. Are these kinds of evolution-like techniques promising as a component of a future super-
0:37:52 intelligence system? So for people who don't know, it's kind of, I don't know if it's fair
0:38:00 to say, LLM-guided evolutionary search. Yeah. So evolutionary algorithms are doing the search,
0:38:06 and LLMs are telling you where. Yes, exactly. So LLMs are kind of proposing some possible solutions.
0:38:12 And then you use evolutionary computing on top to find some novel part of the
0:38:18 search space. So actually, I think it's an example of a very promising direction where you
0:38:24 combine LLMs or foundation models with other computational techniques. Evolutionary methods is
0:38:30 one, but you could also imagine Monte Carlo tree search, basically many types of search algorithms
0:38:37 or reasoning algorithms sort of on top of, or using the foundation models as a basis. So I actually
0:38:41 think there’s quite a lot of interesting, uh, things to be discovered probably with these sort
0:38:47 of hybrid systems, let’s call them. But not to romanticize evolution. Yeah. I’m only human,
0:38:52 but you think there’s some value in whatever that mechanism is? Cause we already talked about natural
0:38:58 systems. Do you think there's a lot of low-hanging fruit in us understanding, being
0:39:05 able to model, being able to simulate evolution, and then using whatever we
0:39:10 understand about that nature-inspired mechanism to then do search better and better and better?
0:39:16 Yes. So if you think about, again, breaking down the systems we've built to their
0:39:22 really fundamental core, you've got the model of the underlying dynamics of the system.
0:39:27 Uh, and then if you want to discover something new, something novel that hasn’t been seen before,
0:39:33 um, then you need some kind of search process on top to take you to a novel region of the,
0:39:39 of the search space. And you can do that in a number of ways. Evolutionary computing is
0:39:45 one; with AlphaGo, we just used Monte Carlo tree search, right? And that's what found Move 37,
0:39:52 the new, kind of never-seen-before strategy in Go. And so that's how you can go beyond potentially what is
0:39:56 already known. So the model can model everything that you currently know about, right? All the data
0:40:01 that you currently have, but then how do you go beyond that? So that starts to speak about the
0:40:05 ideas of creativity. How can these systems create something new, discover something new?
0:40:10 Obviously this is super relevant for scientific discovery or pushing med science and medicine
0:40:16 forward, which we want to do with these systems. And you can actually bolt on some, uh, fairly
0:40:22 simple search systems on top of these models and get you into a new region of space. Of course,
0:40:27 you also have to, um, make sure that, uh, you’re not searching that space totally randomly. It was
0:40:31 to be too big. So you have to have some objective function that you’re trying to optimize and hill climb
0:40:36 towards and that guides that search. But there’s some mechanism of evolution that are interesting,
0:40:41 maybe in the space of programs, but then the space of programs that extremely important space. Cause you
0:40:48 can probably generalize the, uh, to everything, but you know, for example, mutation, this is not just
0:40:55 Monte Carlo tree search where it’s like a search. You could every once in a while combine things,
0:41:01 yeah. Combine things out there like sub like a components of a thing. Yes. So then, you know,
0:41:08 what evolution is really good at is not just the natural selection. It’s combining things and
0:41:14 building increasingly complex hierarchical systems. Yes. So that component is super interesting. Yeah.
0:41:18 Especially with AlphaEvolve, in the space of programs. Yeah, exactly. So
0:41:23 you can get a bit of an extra property out of evolutionary systems, which is that some new emergent
0:41:29 capability may come about, right? Of course, like happened with life. Interestingly, with naive,
0:41:34 sort of traditional evolutionary computing methods, without LLMs and modern AI, the problem
0:41:40 with them (they were very well studied in the nineties and early two thousands, with some promising results)
0:41:45 was that they could never work out how to evolve new properties, new emergent properties.
0:41:51 You always had a sort of subset of the properties that you put into the system, but maybe if we combine them
0:41:56 with these foundation models, perhaps we can overcome that limitation. Obviously, uh, natural
0:42:01 evolution clearly did, because it did evolve new capabilities, right? From bacteria to where we
0:42:09 are now. So clearly it must be possible with evolutionary systems to generate new patterns,
0:42:15 you know, going back to the first thing we talked about and, uh, new capabilities and emergent properties.
0:42:17 And maybe we’re on the cusp of discovering how to do that.
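As a rough illustration of the hybrid loop described above, here is a toy sketch of LLM-guided evolutionary search. The llm_propose function is a placeholder for a real language-model call, and the candidate "programs" are just coefficient lists scored against a target; it is an invented, minimal sketch, not AlphaEvolve's actual implementation.

```python
import random

TARGET = [3.0, -2.0, 0.5]  # the solution we pretend we're evolving toward

def fitness(program):
    """Higher is better: negative squared distance to the target."""
    return -sum((a - b) ** 2 for a, b in zip(program, TARGET))

def llm_propose(parent):
    """Placeholder for an LLM proposing a plausible edit to a candidate."""
    child = list(parent)
    i = random.randrange(len(child))
    child[i] += random.gauss(0.0, 0.3)  # "mutation" via a proposed edit
    return child

def crossover(a, b):
    """Recombination: the part classic evolutionary methods are good at."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]  # selection pressure
    proposed = [llm_propose(random.choice(survivors)) for _ in range(10)]
    recombined = [crossover(random.choice(survivors), random.choice(survivors))
                  for _ in range(5)]
    population = survivors + proposed + recombined

print(max(population, key=fitness))  # should land close to TARGET
```

The division of labor mirrors what is described above: the proposer supplies plausible novelty, while the evolutionary layer supplies selection pressure and recombination.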
0:42:23 Yeah. Listen, AlphaEvolve is one of the coolest things I've ever seen. On
0:42:27 my desk at home (you know, most of my time is spent behind that computer, just programming),
0:42:37 next to the three screens, is a skull of a Tiktaalik, which is one of the early organisms that
0:42:45 crawled out of the water onto land. And I just kind of watch that little guy. It's like,
0:42:52 whatever the computational mechanism of evolution is, it's quite incredible. It's truly, truly incredible.
0:42:57 Yeah. Now, whether that's exactly the thing we need to do for our search, but never
0:43:03 dismiss the power of nature and what it did here. Yeah. And it's amazing: it's a relatively
0:43:09 simple algorithm, effectively, and all of this immense complexity emerges from it,
0:43:15 obviously running over, you know, 4 billion years of time. But, you know,
0:43:20 you can think about that as, again, a search process that ran over the physics substrate
0:43:25 of the universe for a long amount of computational time, and it generated all this incredible,
0:43:31 rich diversity. So, so many questions I want to ask you. One: you do have a dream.
0:43:38 One of the natural systems you want to try to model is a cell. That's a beautiful dream.
0:43:45 I could ask you about that. But also, for that purpose, on the AI scientist front, just broadly:
0:43:52 there's an essay from Daniel Kokotajlo, Scott Alexander, and others that outlines steps
0:43:59 along the way to get to ASI and has a lot of interesting ideas in it, one of which
0:44:07 includes a superhuman coder and a superhuman AI researcher. And in that, there's the term research
0:44:12 taste. That’s really interesting. So in everything you’ve seen, do you think it’s possible for AI
0:44:21 systems to have research taste, to help you in the way that the AI co-scientist does, to help steer
0:44:29 brilliant human scientists, and then potentially by itself to figure out the directions
0:44:34 where you want to generate truly novel ideas? Because that seems to be like a
0:44:37 really important component of how to do great science.
0:44:42 Yeah. I think that’s going to be one of the hardest things to, to, uh, mimic or model is,
0:44:47 is this, this idea of taste or, or judgment. I think that’s what separates the, you know,
0:44:52 the great scientists from the good scientists. All professional scientists are good
0:44:56 technically, right? Otherwise they wouldn't have made it that far in academia and things
0:45:01 like that. But then do you have the taste to sort of sniff out what the right direction is,
0:45:06 what the right experiment is, what the right question is? The crux is, picking the right question is
0:45:12 the hardest part of science, and making the right hypothesis. And that's what, you know,
0:45:17 today's systems definitely can't do. So, you know, I often say it's harder to come
0:45:22 up with a conjecture, a really good conjecture, than it is to solve it. So we may have systems soon
0:45:28 that can solve pretty hard conjectures. You know, on math Olympiad problems,
0:45:32 our system AlphaProof got a silver medal last year on
0:45:37 really hard problems. Maybe eventually we'll be able to solve a Millennium Prize kind of problem,
0:45:43 but could a system have come up with a conjecture worthy of study that someone like Terence Tao would
0:45:47 have gone, you know what, that’s a really deep question about the nature of maths or the nature of
0:45:54 numbers or the nature of physics. And that is a far harder type of creativity. And we don't really know;
0:45:59 today’s systems clearly can’t do that. And we’re not quite sure what that mechanism would be. This
0:46:04 kind of leap of imagination, like, like Einstein had when he came up with, you know, special relativity
0:46:10 and then general relativity with the knowledge he had at the time. And for a conjecture,
0:46:16 you want to come up with a thing that's interesting and amenable to proof. Yes. So, like,
0:46:20 it's easy to come up with a thing that's extremely difficult. Yeah. It's easy to come up with a thing
0:46:25 that's extremely easy. But at that very edge, that sweet spot, right, of basically advancing
0:46:30 the science and splitting the hypothesis space in two, ideally, right? Whether it's true or not true,
0:46:37 you've learned something really useful. And that's hard. And you're making
0:46:43 something that's also, you know, falsifiable, and within the technologies that
0:46:49 you currently have available. So it's a very creative process, actually a highly creative process, that
0:46:54 I think just a kind of naive search on top of a model won't be enough for.
0:47:00 Okay, the idea of splitting the hypothesis space in two is super interesting. So I've heard you say
0:47:06 that there's basically no failure, or rather, failure is extremely valuable, if it's done right: if you construct
0:47:11 the questions right, if you construct the experiments right, if you design them right, then failure or
0:47:17 success are both useful, perhaps because it splits the hypothesis space in two. It's like a binary search.
0:47:23 That's right. So when you do, like, you know, real blue-sky research, there's no such thing as failure,
0:47:28 really, as long as you're picking experiments and hypotheses that meaningfully
0:47:32 split the hypothesis space. So, you know, you can learn something kind of
0:47:37 equally valuable from, uh, an experiment that doesn’t work. That should tell you if you’ve designed
0:47:42 an experiment well, and your hypotheses are interesting, it should tell you a lot about where to go next.
0:47:49 And then you're effectively doing a search process, and using that information
0:47:50 in, you know, very helpful ways.
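The binary-search analogy can be sketched in code. In this toy setup, assumed entirely for illustration (the "hypotheses" are just integers and each "experiment" reads one bit of them), the best experiment is the one whose outcome splits the surviving hypotheses most evenly, so either result discards about half:

```python
hypotheses = set(range(16))  # 16 rival explanations still in play

def predicts_positive(hypothesis, experiment):
    """Toy prediction: does this hypothesis expect a positive result?"""
    return (hypothesis >> experiment) & 1 == 1

def best_experiment(live, experiments):
    # pick the experiment closest to a 50/50 split of the live hypotheses
    def imbalance(e):
        positives = sum(predicts_positive(h, e) for h in live)
        return abs(2 * positives - len(live))
    return min(experiments, key=imbalance)

true_hypothesis = 11  # the answer nature knows and we don't
live = set(hypotheses)
while len(live) > 1:
    e = best_experiment(live, range(4))
    outcome = predicts_positive(true_hypothesis, e)  # run the "experiment"
    live = {h for h in live if predicts_positive(h, e) == outcome}
    print(f"experiment {e}: {len(live)} hypotheses remain")
# 16 -> 8 -> 4 -> 2 -> 1: about log2(16) experiments, and no run is wasted
```

Whichever way each experiment comes out, roughly half the space is eliminated, which is exactly the sense in which a well-designed failed experiment is as informative as a successful one.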
0:47:59 So, to go to your dream of modeling a cell: what are the big challenges that lie ahead for us to
0:48:04 make that happen? We should maybe highlight AlphaFold; I mean, there's just so many leaps.
0:48:05 Yeah.
0:48:10 So AlphaFold solved, if it's fair to say, protein folding, and there's so many incredible things we
0:48:16 could talk about there, including the open sourcing of everything you've released. AlphaFold 3
0:48:23 is doing protein, RNA, DNA interactions, which is super complicated and fascinating, and
0:48:29 amenable to modeling. AlphaGenome predicts how small genetic changes, if we think about single
0:48:36 mutations, link to actual function. So it seems like it's creeping along
0:48:42 from the sophisticated to much more complicated things like a cell. But a cell has a lot of
0:48:44 really complicated components.
0:48:50 Yeah. So what I've tried to do throughout my career is have these really grand dreams, and then, as
0:48:53 you've noticed, I try to break them down. You know,
0:48:59 it's easy to have a kind of crazy ambitious dream, but the trick is how you break it
0:49:05 down into manageable, achievable interim steps that are meaningful and useful in their own
0:49:11 right. And so virtual cell, which is what I call the project of modeling a cell: I've had this
0:49:16 idea, you know, of wanting to do that for maybe more like 25 years. And I used to talk about it with
0:49:20 Paul Nurse, who is a bit of a mentor of mine in biology; he founded the
0:49:27 Crick Institute and won the Nobel Prize in 2001. We've been talking about it since,
0:49:33 you know, the nineties, and I used to come back to it every five
0:49:37 years: what would you need to model of the full internals of a cell so that you could do
0:49:43 experiments on the virtual cell, in silico, and those predictions
0:49:47 would be useful to save you a lot of time in the wet lab, right? That would be the dream.
0:49:52 Maybe you could 100x speed up experiments by doing most of it in silico, the search in silico,
0:49:57 and then you do the validation step in the wet lab. That's the dream.
0:50:02 And so maybe now, finally. So I was trying to build these components, AlphaFold being
0:50:09 one, that would allow you eventually to model the full interaction, a full simulation
0:50:14 of a cell. And I'd probably start with the yeast cell. And partly that's what Paul Nurse studied,
0:50:18 because the yeast cell is like a full organism. That’s a single cell, right? So it’s the kind
0:50:24 of simplest single cell organism. And so it’s not just a cell, it’s a full organism. And, um,
0:50:31 and yeast is very well understood. And so that would be a good candidate for, uh, a kind of full simulated
0:50:37 model. Now alpha fold is the, is the solution to the kind of static picture of what does a, what does
0:50:42 a protein look 3d structure protein look like a static picture of it. But we know that biology,
0:50:46 all the interesting things happen with the dynamics, the interactions, and that’s what alpha fold three
0:50:51 is, is the first step towards is modeling those interactions. So first of all, pair wise, you know,
0:50:56 proteins with proteins, proteins with RNA and DNA, but then, um, the next step after that would be
0:51:01 modeling maybe a whole pathway, maybe like the TOR pathway that’s involved in cancer or something like
0:51:05 this. And then eventually you might be able to model, you know, a whole cell.
0:51:09 Also, there's another complexity here: stuff in a cell happens at different timescales.
0:51:16 Is that tricky? Like, you know, protein, uh, folding is, you know, super fast.
0:51:21 Yes. Um, I don't know all the biological mechanisms, but some of them take a long time.
0:51:26 Yeah. And so each level of interaction has a different temporal
0:51:27 scale that you have to be able to model.
0:51:32 So that would be hard. So you'd probably need several simulated systems that can interact at
0:51:37 these different temporal dynamics, or at least, uh, maybe it's like a hierarchical system, so, um,
0:51:40 you can jump up and down the different temporal stages.
0:51:49 So can you avoid, I mean, one of the challenges here is to avoid simulating, for example,
0:51:53 the quantum mechanical aspects of any of this, right? You want to not over-model.
0:52:00 You can skip ahead to just model the really high-level things that get you a really good
0:52:01 estimate of what's going to happen.
0:52:04 Yes. So you've got to make a decision when you're modeling any natural system: what is the
0:52:09 cutoff level of granularity that you're going to model it to that then captures the dynamics
0:52:14 that you're interested in. So probably for a cell, I would hope that would be the protein level,
0:52:20 uh, and that one wouldn't have to go down to the atomic level. Um, and of course,
0:52:26 that's where AlphaFold kicks in. So that would be kind of the basis. And then you'd build these,
0:52:33 um, higher-level simulations that, um, take those as building blocks, and then you get the
0:52:34 emergent behavior.
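A minimal sketch, in code, of that hierarchical idea: a fast layer stepped at a fine timestep, and a slow layer that only advances after many fine steps and sees just an aggregate of the fast one. Everything here, the names, dynamics, and constants, is a hypothetical stand-in for illustration, not anything from an actual virtual-cell system.

```python
import numpy as np

# Hypothetical two-level simulator: a fast "molecular" layer stepped at a
# fine timestep, and a slow "pathway" layer that advances once per many
# fine steps, consuming a summary of the fast layer. This mirrors the idea
# of coupling simulators that run at different temporal scales.

FINE_DT = 1e-3      # fast dynamics, e.g. binding events (arbitrary units)
COARSE_EVERY = 100  # coarse layer advances once per 100 fine steps

def fine_step(state: np.ndarray) -> np.ndarray:
    """One fine-grained update, standing in for fast molecular dynamics."""
    drift = -0.1 * state                        # relaxation toward baseline
    noise = 0.01 * np.random.randn(*state.shape)
    return state + FINE_DT * drift + noise

def coarse_step(pathway: np.ndarray, summary: float) -> np.ndarray:
    """One coarse update, standing in for slow pathway-level dynamics,
    driven by a time-averaged summary of the fast layer."""
    return pathway + 0.05 * (summary - pathway)

molecular = np.random.randn(64)   # fast state (e.g. protein activities)
pathway = np.zeros(4)             # slow state (e.g. pathway activations)

for t in range(1, 1001):
    molecular = fine_step(molecular)
    if t % COARSE_EVERY == 0:
        # jump "up" the hierarchy: the slow layer sees only an aggregate
        pathway = coarse_step(pathway, molecular.mean())

print("final pathway state:", pathway)
```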
0:52:39 Yeah. Apologize for the pothead questions ahead of time, but, uh, do you think, uh,
0:52:48 we'll be able to simulate, or model, the origin of life? So being able to simulate,
0:52:53 from non-living matter, the birth of a living organism.
0:52:58 I think that's, of course, one of the deepest and most fascinating questions. Um,
0:53:03 I love that area of biology, you know. Uh, there's a great book by Nick Lane,
0:53:09 one of the top experts in this area, called The Ten Great Inventions of Evolution. I think it's
0:53:13 fantastic. And it also speaks to what the great filters might be: are they prior, or are they
0:53:18 ahead of us? I think they're most likely in the past, if you read that book, of how unlikely it is
0:53:23 to have any life at all. And then single-cell to multi-cell seems an unbelievably
0:53:28 big jump that took, I think, like a billion years on Earth to do. Right. So it shows you how hard it
0:53:29 was. Right.
0:53:33 Bacteria were super happy for a very, very long time before they captured mitochondria somehow.
0:53:39 Right. I don't see why AI couldn't help with that. Some kind of simulation, again,
0:53:44 it's a bit of a search process through a combinatorial space. Here's, like, the
0:53:49 chemical soup that you start with, the primordial soup, that, you know,
0:53:55 maybe was on Earth near these hot vents. Here are some initial conditions. Can you generate
0:53:58 something that looks like a cell? So perhaps that would be a next stage after the virtual
0:54:04 cell project: how could you actually, um, have something like that emerge from the chemical
0:54:09 soup? Well, I would love it if there was a move 37 for the origin of life. I think that's
0:54:13 one of the great mysteries. I think ultimately what we will figure out is that there's a
0:54:17 continuum, that there's no such thing as a line between non-living and living. But if we can make that
0:54:24 rigorous, that the very thing from the Big Bang to today has been the same process, if we can break
0:54:30 down that wall that we've constructed in our minds of the actual origin, from non-living to living,
0:54:35 and it's not a line, it's a continuum that connects physics and chemistry and biology.
0:54:40 Yeah. There's no line. I mean, this is my whole reason why I've worked on AI and AGI my whole life,
0:54:45 because I think it can be the ultimate tool to help us answer these kinds of questions. And
0:54:51 I don't really understand why, um, you know, the average person doesn't worry about this
0:54:56 stuff more. Like, how can we not have a good definition of life, of living and
0:55:02 non-living, and the nature of time, let alone consciousness and gravity and all these things,
0:55:07 and quantum mechanics weirdness? To me, it's always had this,
0:55:12 this sort of screaming at me in my face, and it's getting louder. You know,
0:55:16 it's like, what is going on here? And I mean that in the deeper sense,
0:55:21 like in, you know, the nature of reality, which has to be the ultimate question, uh, that would
0:55:25 answer all of these things. It's sort of crazy, if you think about it: we can stare at each other
0:55:30 and all these living things all the time, we can inspect them with microscopes and take them apart,
0:55:35 uh, almost down to the atomic level, and yet we still can't answer, clearly, in a simple way,
0:55:39 that question of how do you define living. It's kind of amazing.
0:55:44 Yeah. Living, you can kind of talk your way out of thinking about. But consciousness,
0:55:48 like, we have this very obviously subjective conscious experience, like we're at the center
0:55:54 of our own world, and it feels like something. And then, how are you not screaming?
0:56:00 Yeah. At the mystery of it all. I mean, really, humans have been contending
0:56:05 with the mystery of the world around them, uh, for a long, long time. There's a lot of mysteries,
0:56:12 like what's up with the sun and the rain. Yeah. Like, what's that about? And then, like, last year
0:56:16 we had a lot of rain and this year we don't have rain. Like, what did we do wrong?
0:56:19 Humans have been asking that question for a long time.
0:56:23 Exactly. So I guess we've developed a lot of mechanisms to cope with, uh,
0:56:27 these deep mysteries that we can see but can't fully understand. And
0:56:32 we have to just get on with daily life, and we keep ourselves busy,
0:56:34 right? In a way, do we keep ourselves distracted?
0:56:39 I mean, weather is one of the most important questions of human history. It's still
0:56:44 the go-to small-talk direction, the weather, especially in England.
0:56:51 Which is, you know, famously an extremely difficult system to model. And, uh,
0:56:56 even that system, uh, Google DeepMind has made progress on.
0:57:01 Yes. We've created the best weather prediction systems in the world,
0:57:06 and they're better than the traditional fluid-dynamics sort of systems that are usually calculated on massive
0:57:12 supercomputers and take days to compute. Uh, we've managed to model a lot of the weather
0:57:18 dynamics with neural network systems, with our WeatherNext systems. And again, it's interesting that
0:57:23 those kinds of dynamics can be modeled, even though they're very complicated, almost bordering on chaotic
0:57:28 systems. In some cases, a lot of the interesting aspects of that, um, can be modeled by these
0:57:33 neural network systems, including, very recently, cyclone prediction: where, you know,
0:57:37 the paths of hurricanes might go. Of course, super useful, super important for the world.
0:57:42 And it's super important to do that in a very timely way, very quickly, as well as accurately.
0:57:47 And, uh, I think it's a very promising direction again, you know, simulating, uh, so that you can
0:57:51 run forward predictions and simulations of very complicated real-world systems.
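For the shape of this in code: a toy sketch of the autoregressive rollout idea behind learned forecasting, where a one-step model is applied repeatedly to push the state forward in time. The linear "model" below is a placeholder; real systems (WeatherNext, GraphCast-style networks) are large neural nets trained on decades of reanalysis data, and nothing here reflects their actual architecture.

```python
import numpy as np

# Sketch of autoregressive forecasting: a learned one-step model maps the
# current atmospheric state to the state ~6 hours ahead, and you roll it
# forward to get multi-day forecasts. The "model" here is a stand-in
# linear map, not a trained network.

GRID = 32                          # toy lat/lon grid
rng = np.random.default_rng(0)
W = np.eye(GRID * GRID) + 0.01 * rng.standard_normal((GRID * GRID, GRID * GRID))

def one_step(state: np.ndarray) -> np.ndarray:
    """Predict the state one model step (~6h) ahead."""
    return (W @ state.ravel()).reshape(GRID, GRID)

state = rng.standard_normal((GRID, GRID))   # initial conditions from observations
trajectory = [state]
for _ in range(40):                         # 40 steps ~ a 10-day forecast
    state = one_step(state)
    trajectory.append(state)

print("forecast steps:", len(trajectory) - 1)
```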
0:57:57 I should mention that, uh, I got a chance in, uh, Texas to meet a community of folks called the
0:57:58 storm chasers.
0:57:59 Yes.
0:58:03 And what's really incredible about them, I need to talk to them more, is they're extremely tech-savvy,
0:58:07 because what they have to do is use models to predict where the storm is.
0:58:14 So it's this beautiful mix of being crazy enough to go into the eye of the
0:58:19 storm, and, in order to protect their lives and predict where the extreme events are going to be,
0:58:23 they have to have increasingly sophisticated models of, uh, weather.
0:58:24 Yeah.
0:58:24 Yeah.
0:58:31 It's a beautiful balance of being in it as living organisms, and the cutting
0:58:35 edge of science. So they actually might be using a DeepMind system.
0:58:38 Yeah, hopefully they are. And I'd love to join them on one of those.
0:58:41 They look amazing. Right. To actually experience it one time.
0:58:46 Exactly. And then also to experience the correct prediction of where something will come
0:58:48 and how it's going to evolve. It's incredible.
0:58:56 Yeah. You've estimated that we'll have AGI by 2030. Um, so there are interesting questions around that.
0:59:04 How will we actually know that we got there? Uh, and, uh, what may be the, quote,
0:59:12 "move 37" of AGI? My estimate is sort of a 50% chance in the next five years. So, you know, by 2030,
0:59:17 let's say. And, uh, so I think there's a good chance that that could happen. Part of it is
0:59:20 what your definition of AGI is. Of course, people are arguing about that now. And,
0:59:26 uh, mine's quite a high bar, and always has been: like, can we match the cognitive
0:59:31 functions that the brain has? Right. So we know our brains are pretty much general Turing machines,
0:59:38 approximately. And of course we created incredible modern civilization with our minds. So that also
0:59:45 speaks to how general the brain is. And, um, for us to know we have a true AGI, we would have to
0:59:49 make sure that it has all those capabilities, that it isn't a kind of jagged intelligence, where some
0:59:55 things it's really good at, like today's systems, but other things it's really, uh, flawed at. And
0:59:58 that's what we currently have with today's systems: they're not consistent. So you'd want that
1:00:04 consistency of intelligence across the board. And then we have some missing, I think, capabilities,
1:00:09 like, sort of, uh, the true invention capabilities and creativity that we were talking about earlier.
1:00:14 So you'd want to see those. How do you test that? Um, I think one way to do it would
1:00:20 be a kind of brute-force test of tens of thousands of cognitive tasks that, um, you know, we know
1:00:27 humans can do, uh, and maybe also make the system available to, uh, a few hundred of the world's top
1:00:33 experts, uh, the Terence Taos of each subject area, and, you know,
1:00:39 give them a month or two and see if they can find an obvious flaw in the system. And if they can't,
1:00:43 then I think you can be pretty confident we have a
1:00:44 fully general system.
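A rough sketch of what that consistency test could look like as an evaluation harness. The domains, the 0.8 floor, and the scoring function are all invented for illustration; the point is that one weak domain vetoes the claim of generality, which is the "jagged intelligence" failure Demis describes.

```python
from collections import defaultdict
from statistics import mean

def run_task(system, task) -> float:
    """Score the system on one task in [0, 1]; placeholder implementation."""
    return system(task)

def evaluate(system, tasks_by_domain: dict[str, list[str]],
             floor: float = 0.8) -> tuple[bool, dict[str, float]]:
    scores = defaultdict(list)
    for domain, tasks in tasks_by_domain.items():
        for task in tasks:
            scores[domain].append(run_task(system, task))
    per_domain = {d: mean(s) for d, s in scores.items()}
    # "Jagged" intelligence fails here: one weak domain vetoes the claim.
    consistent = all(score >= floor for score in per_domain.values())
    return consistent, per_domain

# Toy usage: a fake system that is strong everywhere except physics.
fake = lambda task: 0.95 if "physics" not in task else 0.55
tasks = {"maths": ["maths-1", "maths-2"],
         "coding": ["coding-1"],
         "physics": ["physics-1", "physics-2"]}
ok, detail = evaluate(fake, tasks)
print(ok, detail)   # False: physics drags it down
```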
1:00:48 Maybe to push back a little bit: it seems like humans are really incredible,
1:00:55 as the intelligence improves across all domains, at taking it for granted. Uh, like you mentioned,
1:01:03 Dr. Terence Tao, uh, these brilliant experts, they might quickly, in a span of weeks, take for granted
1:01:08 all the incredible things it can do and then focus in: aha, right there. You know, I consider
1:01:19 myself, uh, first of all, human. Yeah. Uh, I identify as human. Um, you know, some people listen to me
1:01:25 talk and they're like, that guy is not good at talking, the stuttering, the, you know. So even
1:01:31 humans have obvious limits across domains, uh, even just outside of calculus, mathematics, physics,
1:01:38 and so on. I wonder if it will take something like a move 37 on the positive
1:01:46 side, versus, like, a barrage of 10,000 cognitive tasks, where it would be one or two where it's like,
1:01:47 yes, holy shit.
1:01:53 Exactly. So I think there's the sort of blanket testing, to just make sure you've got the consistency,
1:02:00 but I think there are the sort of lighthouse moments, like the move 37, that I would be looking for. So
1:02:07 one would be inventing a new conjecture or a new hypothesis about physics, like Einstein did. So
1:02:12 maybe you could even run the back-test of that very rigorously: have a knowledge
1:02:18 cutoff of 1900, give the system everything that was written up to 1900, and
1:02:22 then see if it could come up with special relativity and general relativity, right, like
1:02:28 Einstein did. That would be an interesting test. Another one would be: can it invent a game
1:02:33 like Go? Not just come up with move 37, a new strategy, but can it invent a game that's as deep,
1:02:39 as aesthetically beautiful, as elegant as Go? Those are the sorts of things I would be looking out for,
1:02:44 uh, and probably a system being able to do, uh, several of those things, right, for it to be
1:02:50 very general, um, not just one domain. So I think those would be the signs, at least the ones I would
1:02:56 be looking for, that we've got a system that's AGI-level. And then maybe, to fill that out, you would
1:03:00 also check the consistency, you know, make sure there are no holes in that system either.
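And a sketch of how such a knowledge-cutoff back-test might be framed. The trainer and probe below are toy stand-ins, since actually running this would mean pretraining a model from scratch on a date-restricted corpus.

```python
from dataclasses import dataclass

# Schematic back-test: restrict the system's knowledge to documents written
# before a cutoff year, then ask whether it can reproduce a later discovery.

@dataclass
class Document:
    year: int
    text: str

def dated_corpus(corpus: list[Document], cutoff: int) -> list[Document]:
    """Keep only documents available before the cutoff year."""
    return [d for d in corpus if d.year < cutoff]

def back_test(corpus, cutoff, target_discovery, train, probe) -> bool:
    model = train(dated_corpus(corpus, cutoff))   # hypothetical trainer
    conjecture = probe(model, "reconcile Maxwell's equations with mechanics")
    return target_discovery in conjecture          # crude success check

# Toy stand-ins so the sketch runs end to end:
corpus = [Document(1865, "Maxwell's equations..."),
          Document(1905, "On the electrodynamics of moving bodies...")]
train = lambda docs: " ".join(d.text for d in docs)
probe = lambda model, q: "conjecture derived from: " + model
print(back_test(corpus, 1900, "special relativity", train, probe))  # False
```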
1:03:05 Yeah. Something like a new conjecture or scientific discovery, that would be a cool feeling.
1:03:10 Yeah. That would be amazing. So not just helping us do that, but actually coming up with
1:03:16 something brand new. And you would be in the room for that. And it would be, like, probably two or
1:03:22 three months before announcing it, and you would just be sitting there trying not to tweet.
1:03:28 Something like that. Exactly. It's like, what is this amazing new, you know, physics, uh, idea? And
1:03:34 then we would probably check it with world experts in that domain, right, and validate it and kind of
1:03:41 go through its workings. And I guess it would be explaining its workings too. Um, yeah, it'd be an amazing
1:03:45 moment. Do you worry that we as humans, even expert humans like you, might miss it?
1:03:51 Well, it may be pretty complicated. The analogy I give there is, I don't think it will be
1:03:57 totally mysterious to the best human scientists, but it may be a bit like,
1:04:04 for example, in chess: if I was to talk to Garry Kasparov or Magnus Carlsen and play a game with them,
1:04:08 and they make a brilliant move, I might not be able to come up with that move, but they could explain
1:04:13 afterwards why that move made sense. And we will be able to understand it to some degree,
1:04:17 not to the level they do, but, you know, if they were good at explaining, which is actually part of
1:04:22 intelligence too, being able to explain in a simple way what you're thinking. Um,
1:04:26 I think that will be very possible for the best human scientists.
1:04:31 But I wonder, maybe you can educate me on the side of Go: I wonder if there are moves from
1:04:36 Magnus or Garry where they at first will dismiss it as a bad move.
1:04:41 Yeah, sure, it could be. But then afterwards they'll figure out with their intuition
1:04:45 why this works. And then, empirically, the nice thing about games,
1:04:49 one of the great things about games, is that it's a sort of scientific test:
1:04:54 do you win the game or not? And then, um, that tells you, okay, that move in the end was
1:04:59 good, that strategy was good. And then you can go back and analyze it and explain,
1:05:05 even to yourself, a little bit more why, explore around it. And that's how chess analysis and things
1:05:09 like that work. So perhaps that's why my brain works like that, because I've been doing that since
1:05:13 I was four. And you're trained, you know, it's sort of hardcore training in that way.
1:05:17 But even now, like, when I generate code,
1:05:24 there is this kind of nuanced, fascinating contention that's happening, where I might
1:05:30 at first decide a set of generated code is incorrect in some interesting, nuanced ways,
1:05:35 but then I always have to ask the question: is there a deeper insight here, and
1:05:41 I'm the one who's incorrect? And as the systems get more and more intelligent,
1:05:46 you're going to have to contend with that. It's like: is this a bug, or
1:05:50 a feature it just came up with? Yeah. And those are going to be pretty complicated to judge.
1:05:55 But of course, you can imagine also AI systems that are producing that code, or whatever it
1:06:00 is, and then human programmers looking at it, but not unaided, with the help of AI
1:06:05 tools as well. So it's going to be kind of interesting, you know, maybe different AI tools,
1:06:09 more, you know, kind of monitoring tools, to the ones that generated it.
1:06:17 So if we look at an AGI system, sorry to bring it back up, but AlphaEvolve, super cool. So AlphaEvolve
1:06:24 enables, on the programming side, something like recursive self-improvement, potentially.
1:06:30 If we can imagine what that AGI system, maybe not the first version, but
1:06:34 a few versions beyond that, what does that actually look like? Do you think it will be
1:06:39 simple? Do you think it will be something like a self-improving program, and a simple one?
1:06:44 I mean, potentially that's possible. I would say, um, I'm not sure it's even desirable, because that's
1:06:50 a kind of hard-takeoff scenario. But these current systems, like AlphaEvolve,
1:06:55 they have, you know, a human in the loop deciding on various things. They're separate hybrid systems
1:07:01 that interact. Uh, one could imagine eventually doing that end to end. I don't see why that wouldn't
1:07:06 be possible, but right now, um, you know, I think the systems are not good enough to do that in terms of
1:07:11 coming up with the architecture of the code. Um, and again, it's a little bit connected to
1:07:16 this idea of coming up with a new conjecture or hypothesis: they're good if you give them
1:07:21 very specific instructions about what you're trying to do, um, but if you give them a very vague,
1:07:26 high-level instruction, that wouldn't work currently. And I think that's related to this idea of,
1:07:31 like, "invent a game as good as Go," right? Imagine that was the prompt: that's pretty underspecified,
1:07:36 and so the current systems wouldn't know, I think, what to do with that, how to narrow it down to
1:07:40 something tractable. And I think it's similar with "look, just make a better version of yourself."
1:07:45 That's too unconstrained. But we've done it, you know, as you know, with
1:07:51 AlphaEvolve, with things like faster matrix multiplication. So when you hone it down to a very specific
1:07:56 thing you want, um, it's very good at incrementally improving that. But at the moment, these are more
1:08:01 like incremental improvements, sort of small iterations. Whereas if, you know, you wanted
1:08:07 a big leap in, uh, understanding, you'd need a much larger, uh, advance.
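A minimal sketch of the propose-evaluate-select loop this family of systems uses, under loudly stated assumptions: the random mutator below stands in for an LLM proposing code edits, and the scoring function for an automatic evaluator. Neither is taken from AlphaEvolve itself; the sketch only shows why such a loop hill-climbs rather than leaps.

```python
import random

def propose(candidate: list[float]) -> list[float]:
    """Stand-in for an LLM proposing an edit: perturb one parameter."""
    child = candidate[:]
    i = random.randrange(len(child))
    child[i] += random.uniform(-0.1, 0.1)
    return child

def score(candidate: list[float]) -> float:
    """Automatic evaluator, e.g. negative runtime or op count. Here:
    how close the candidate gets to a known optimum."""
    return -sum((x - 1.0) ** 2 for x in candidate)

best = [0.0] * 8
for _ in range(2000):
    child = propose(best)
    if score(child) > score(best):   # greedy selection = hill climbing
        best = child

print("best score:", round(score(best), 4))
```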
1:08:13 Yeah. But, to push back against the hard-takeoff scenario, it could also be just
1:08:21 a sequence of, um, incremental improvements, like matrix multiplication: it has to sit there for days
1:08:26 thinking how to incrementally improve a thing, and it does so recursively. And as you do more
1:08:32 and more improvement, it'll slow down. So the path to AGI would
1:08:39 be a gradual improvement over time. Yes. If it was just incremental improvements,
1:08:43 that's how it would look. So the question is, could it come up with a new leap, like the transformer
1:08:49 architecture, right? Could it have done that back in 2017, when, you know, we did it and Brain did it?
1:08:54 And it's not clear that these systems, something like AlphaEvolve, would be able to
1:08:59 make such a big leap. For sure, we have systems, I think, that can do
1:09:03 incremental hill climbing. And there's a kind of bigger question: is that all that's needed
1:09:08 from here, or do we actually need one or two more, um, big breakthroughs?
1:09:13 And can the same kind of systems provide the breakthroughs also? So make it a bunch of
1:09:18 S-curves: incremental improvement, but also, every once in a while, leaps.
1:09:24 Yeah. I don't think anyone has systems that have shown, unequivocally, those big leaps yet, right?
1:09:28 We have a lot of systems that do the hill climbing of the S-curve that you're
1:09:31 currently on. Yeah. And that would be the move 37.
1:09:37 Yeah, I think it would be a leap, something like that. Uh, do you think the scaling laws are
1:09:43 holding strong on pre-training, post-training, and test-time compute? And, on the flip side
1:09:46 of that, do you anticipate AI progress hitting a wall?
1:09:52 We certainly feel there's a lot more room just in the scaling. So, um, actually all steps:
1:09:58 pre-training, post-training, and inference time. So there are sort of three scalings
1:10:06 happening concurrently. Um, and again, there, it's about how innovative you can be. And, you know,
1:10:11 we pride ourselves on having the broadest and, um, deepest research bench. Uh, we have amazing,
1:10:16 incredible, uh, researchers, people like Noam Shazeer, who, you know,
1:10:21 came up with Transformers, and David Silver, you know, who led the AlphaGo project, and so on.
1:10:28 And that research bench means that if some new breakthrough is required,
1:10:33 like an AlphaGo or Transformers, uh, I would back us to be the place that does that.
1:10:38 So I actually quite like it when the terrain gets harder, right? Because then it veers more from just
1:10:44 engineering to true research, you know, research plus engineering, and that's our sweet spot.
1:10:50 And I think that's harder. It's harder to invent things than to, um, you know,
1:10:56 fast-follow. And, um, so, you know, we don't know. I would say it's kind of 50-50
1:11:02 whether new things are needed or whether scaling the existing stuff is going to be enough. And so,
1:11:07 in true empirical fashion, we're pushing both of those as hard as possible: the new blue-sky
1:11:13 ideas, you know, maybe about half our resources on that, and then, uh, scaling to the max
1:11:18 the current capabilities. And, um, we're still seeing some, you know,
1:11:22 fantastic progress on, uh, each different version of Gemini.
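One concrete way to picture the third of those axes, inference-time scaling, is best-of-n sampling against a verifier: spend more compute by drawing more candidate answers and keeping the one a judge scores highest. The sampler and verifier below are toy stand-ins, an illustration of the general idea, not how Gemini's thinking mode is implemented.

```python
import random

def sample_answer(question: str) -> tuple[str, float]:
    """Stand-in for one stochastic model sample: (answer, true quality)."""
    quality = random.random()
    return f"answer(q={quality:.2f})", quality

def verifier(answer: str, quality: float) -> float:
    """Noisy judge of answer quality."""
    return quality + random.gauss(0, 0.1)

def best_of_n(question: str, n: int) -> str:
    """More samples = more test-time compute = better expected answer."""
    candidates = [sample_answer(question) for _ in range(n)]
    return max(candidates, key=lambda c: verifier(*c))[0]

for n in (1, 8, 64):
    print(n, best_of_n("hard question", n))
```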
1:11:25 That's interesting, the way you put it in terms of the deep bench:
1:11:35 that if, uh, progress towards AGI is more than just scaling compute, the engineering side of the
1:11:41 problem, and is more on the scientific side, where breakthroughs are needed, then you feel
1:11:47 Google DeepMind is well positioned to kick ass in that domain.
1:11:51 Well, I mean, if you look at the history of the last decade or 15 years, um,
1:11:55 maybe, I don't know, 80, 90% of the breakthroughs, or more, that underpin the
1:12:00 modern AI field today were from, you know, originally Google Brain, Google Research, and DeepMind. So,
1:12:03 yeah, I would back that to continue. Hopefully.
1:12:09 Uh, so on the data side, are you concerned about running out of high-quality data, especially high-
1:12:14 quality human data? I'm not very worried about that, partly because I think there's enough
1:12:19 data, uh, and it's been proven, to get the systems to be pretty good. And this goes back
1:12:24 to simulations again. If you have enough data to make simulations, so that you can create
1:12:30 more synthetic data from the right distribution, obviously that's the key. So you
1:12:35 need enough real-world data in order to be able to, uh, create those kinds of data
1:12:39 generators. And, um, I think we're at that step at the moment.
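As a toy illustration of that bootstrapping step: fit a generator to limited real data, then sample as much synthetic data as you like from the matched distribution. The Gaussian fit below is deliberately simplistic; a real pipeline would use a learned simulator.

```python
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(loc=3.0, scale=0.5, size=(10_000, 4))  # limited real measurements

# Fit a (very simple) generative model of the real distribution...
mu, sigma = real.mean(axis=0), real.std(axis=0)

# ...then draw far more synthetic samples from it for training.
synthetic = rng.normal(mu, sigma, size=(1_000_000, 4))

print("real mean:", mu.round(2),
      "synthetic mean:", synthetic.mean(axis=0).round(2))
```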
1:12:44 Yeah. You've done a lot of incredible stuff on the side of science and biology, doing a lot with
1:12:48 not so much data. Yeah. I mean, it's still a lot of data, but I guess enough
1:12:56 to get that going. Exactly. Exactly. Uh, how crucial is the scaling of compute to building AGI? It's
1:13:03 an engineering question; it's almost a geopolitical question, because also
1:13:09 integrated into that are the supply chains and energy, a thing that you care a lot about,
1:13:13 which is, um, potentially fusion, so innovating on the side of energy also. Do you think we're
1:13:15 going to keep scaling compute?
1:13:19 I think so, for several reasons. There's the amount of compute you have
1:13:25 for training; uh, often it needs to be co-located, so actually even, you know, uh, bandwidth
1:13:30 constraints between data centers can affect that. So there are additional constraints
1:13:34 even there. And that's important for training, obviously, the largest models you can.
1:13:41 But also, because AI systems are now in products and being used by billions of people around
1:13:46 the world, you need a ton of inference compute now. Um, and then on top of that, there are the thinking
1:13:52 systems, the new paradigm, uh, of the last year, where they get smarter the longer the
1:13:58 inference time you give them at test time. So all of those things need a lot of compute, and I don't
1:14:04 really see that slowing down. Um, and as AI systems become better, they'll become more useful, and
1:14:08 there'll be more demand for them. So, from the training side, the training side actually is
1:14:13 only just one part of that, and may even become the smaller part of what's needed, um, uh,
1:14:19 in the overall compute that's required. Yeah. There's one sort of almost meme-y kind
1:14:25 of thing, which is, like, with the success and the incredible aspects of Veo 3, uh, people
1:14:29 kind of make fun of it: the more successful it becomes, you know, the servers are sweating.
1:14:32 Yes. Yeah, yeah, exactly. We did a little
1:14:38 video of the servers frying eggs and things. And, um, that's right. And we're
1:14:42 going to have to figure out how to do that. Um, there are a lot of interesting hardware innovations
1:14:46 that we do; as you know, we have our own TPU line, and we're looking at inference-only things,
1:14:51 inference-only chips, and how we can make those more efficient. We're also very interested in building AI
1:14:57 systems, and we have done so, to help with energy usage: so helping, um, data center energy, like
1:15:03 the cooling systems, be efficient; um, grid optimization. Um, and then eventually things
1:15:08 like helping with, uh, plasma containment in fusion reactors; we've done lots of work on that with
1:15:13 Commonwealth Fusion. And also, uh, one could imagine reactor design. Um, and then material design,
1:15:18 I think, is one of the most exciting: new types of solar material, solar panel material; room-temperature
1:15:24 superconductors, which have always been on my list of dream breakthroughs; and, um, optimal batteries.
1:15:29 And I think a solution to any one of those things would be absolutely revolutionary
1:15:34 for, you know, climate and energy usage. And we're probably close, you know, again,
1:15:38 in the next five years, to having AI systems that can materially help with those problems.
1:15:43 If you were to bet, sorry for the ridiculous question: what is the main source of energy
1:15:49 in, like, 20, 30, 40 years? Do you think it's going to be nuclear fusion?
1:15:55 I think fusion and solar are the two that I would bet on. Um, solar, I mean, you know,
1:16:00 it's the fusion reactor in the sky, of course, and I think really the problem there is
1:16:04 batteries and transmission, as well as more and more efficient
1:16:09 solar material, perhaps eventually, you know, in space, these kinds of Dyson-sphere-type
1:16:17 ideas. And fusion, I think, is definitely doable, uh, if we have the right design of reactor
1:16:23 and we can control the plasma, uh, fast enough and so on. And I think both of those things will
1:16:27 actually get solved. So those are probably the two primary
1:16:31 sources of renewable, clean, almost free, or perhaps free, energy.
1:16:38 What a time to be alive. If I, uh, traveled into the future with you, a hundred years from now,
1:16:45 how surprised would you be if we've passed a Type I Kardashev-scale civilization?
1:16:50 I would not be that surprised, on a hundred-year timescale from here. I mean,
1:16:54 I think it's pretty clear: if we crack the energy problem in one of the ways we've just discussed,
1:17:01 fusion or very efficient solar, um, then, if energy is kind of free and renewable and clean,
1:17:08 um, that solves a whole bunch of other problems. So, for example, the water access problem
1:17:13 goes away, because you can just use desalination. We have the technology; it's just too expensive.
1:17:18 So only, you know, uh, fairly wealthy countries like Singapore and Israel and so on actually
1:17:23 use it. But if it was cheap, then, you know, all countries that have a coast could.
1:17:28 But also, you'd have unlimited rocket fuel. You could just separate seawater into hydrogen and oxygen
1:17:36 using energy, and that's rocket fuel. So, uh, combined with, you know, Elon's amazing self-landing rockets,
1:17:41 it could be, like, sort of a bus service to space. So that opens up, you know,
1:17:47 incredible new resources and domains. Uh, asteroid mining, I think, will become a thing, and maximum
1:17:51 human flourishing to the stars. That's what I, uh, dream about, as well as Carl Sagan's sort
1:17:57 of idea of bringing consciousness to the universe, waking up the universe. And I think human civilization
1:18:03 will do that in the fullness of time, if we get AI right and, uh, crack some of
1:18:07 these problems with it. Yeah. I wonder what it would look like if you're just a tourist flying through
1:18:14 space: you would probably notice Earth, because if you solve the energy problem, you would see a lot of
1:18:21 space rockets, probably. So it would be like traffic here in London, but in space, just a lot of
1:18:27 rockets. Yes. And then you would probably see, floating in space, some kind of source of energy,
1:18:34 like solar. Yeah, potentially. So Earth would just look, on the surface, more, um, technological.
1:18:40 And then you would use the power of that energy to preserve the natural. Yes. Like
1:18:43 the rainforest and all that kind of stuff. Exactly. Because, for the first time in human history,
1:18:51 we wouldn't be, uh, resource-constrained. And I think that could be an amazing new era for humanity,
1:18:57 where it's not zero-sum, right? I have this land, you don't have it. Or if the
1:19:01 tigers have their forest, then the local villagers can't, what are they going to use?
1:19:07 I think this will help a lot. No, it won't solve all problems, because there are still other human,
1:19:12 uh, foibles that will still exist, but it will at least remove, I think, one of the big
1:19:19 vectors, which is scarcity of resources, you know, including land and raw materials and energy.
1:19:22 And, um, you know, I sometimes call it, and others talk about it too, this kind
1:19:27 of radical abundance era, where, um, there are plenty of resources to go around. Of course,
1:19:32 the next big question is making sure that that's, you know, shared fairly, uh,
1:19:37 and everyone in society benefits from that. So there is something about human nature where
1:19:45 I go, you know, it's like Borat: my neighbor, like, you start trouble. We do start
1:19:52 conflicts, and that's why games, as I'm learning actually more and more, even in ancient
1:19:59 history, served the purpose of pushing people away from war, an actual hot war. So maybe we can figure
1:20:06 out increasingly sophisticated video games that pull us in, that, uh, scratch
1:20:13 the itch of conflict, whatever that is about us, about human nature, and then avoid the actual
1:20:21 hot wars that would come with increasingly sophisticated technologies. Because we've now
1:20:26 long passed the stage where the weapons we're able to create can actually just destroy all of human
1:20:34 civilization. So that's no longer, um, a great way to, uh, start shit with your
1:20:39 neighbor. It's better to play a game of chess or football. Yeah. Yeah. And I think,
1:20:45 I mean, I think that's what modern sport is. And I love football, watching it, and,
1:20:50 uh, I used to play it a lot as well. It's very visceral and
1:20:55 it's tribal, and I think it does channel a lot of those energies, which I think is a kind
1:21:02 of human need, to belong to some group, um, but into a fun way,
1:21:08 a healthy way, and a not destructive way, a kind of constructive, uh, thing. And I think,
1:21:12 going back to games again, I think that's originally why they're so great as well for kids
1:21:16 to play, things like chess: they're great little microcosm simulations of the world.
1:21:20 They're simplified versions of some real-world situation, whether it's
1:21:26 poker or Go or chess or diplomacy, different aspects of the real
1:21:32 world, and it allows you to practice at them too. Because, you know, how many times do you get to
1:21:37 practice a massive decision moment in your life? You know, what job to take, what university to go to.
1:21:41 You know, you get maybe, I don't know, a dozen or so key decisions one has to make, and you've got to
1:21:46 make those as best as you can. Um, and games are a kind of safe, repeatable environment
1:21:52 where you can get better at your decision-making process. Um, and it maybe has this additional
1:21:58 benefit of channeling some energies into, uh, more creative and constructive pursuits.
1:22:02 Well, I think it's also really important to practice, um, losing and winning, right?
1:22:07 Like, losing is a really... You know, that's why I love games. That's why I love even, um, things like, uh,
1:22:13 Brazilian jiu-jitsu, where you can get your ass kicked in a safe environment over and over.
1:22:18 It reminds you about physics, about the way the world works, about the fact that
1:22:23 sometimes you lose, sometimes you win. You can still be friends with everybody. But that feeling
1:22:30 of losing, I mean, it's a weird one for us humans to really make sense of.
1:22:33 That's just part of life. Losing is a fundamental part of life.
1:22:37 Yeah. And I think the martial arts, as I understand it, but also things like
1:22:41 chess, at least the way I took it, it's a lot to do with self-improvement,
1:22:47 self-knowledge, you know. Okay, so I did this thing. It's not really about beating the other
1:22:52 person; it's about maximizing your own potential. If you do it in a healthy way, you learn to use victories
1:22:57 and losses in a way where you don't get carried away with victory and think you're just the best in
1:23:02 the world, and the losses keep you humble, always knowing there's always something more
1:23:07 to learn. There's always a bigger expert that can mentor you. You know, I think you learn that, I'm
1:23:13 pretty sure, in martial arts. And I think that's also the way that at least I was trained in
1:23:17 chess. And it can be very hardcore and very important. And of course you want
1:23:22 to win, but you also need to learn how to deal with setbacks, uh, in a healthy way,
1:23:27 and wire that feeling that you have when you lose something into a constructive
1:23:31 thing of: next time I'm going to improve this, right, or get better at this.
1:23:36 There is something that's a source of happiness, a source of meaning, in improvement, that it's not
1:23:37 about the winning or losing.
1:23:39 Yes. The mastery. Yeah.
1:23:43 There's nothing more satisfying, in a way: like, oh, wow, this thing I couldn't do before,
1:23:48 now I can. And, again, games and physical sports and mental sports,
1:23:52 they're ways of measuring. They're beautiful because you can measure that progress.
1:23:56 Yeah. I mean, there's something about, that is why I love role-playing games, like the, uh,
1:24:02 number-go-up on the skill tree. Like, literally, that is a source of meaning for us
1:24:07 humans, whatever. Yeah, we're quite addicted to these numbers going
1:24:12 up, and, uh, maybe that's why we made games like that, because obviously
1:24:15 we're hill-climbing systems ourselves, right?
1:24:19 Yeah. It would be quite sad if we didn't have any mechanism.
1:24:23 Different color belts. We do this everywhere, right? Where we just have this thing.
1:24:27 I don't want to dismiss that. That is a source of deep meaning for us humans.
1:24:33 Um, so one of the incredible stories on the business, on the leadership side, is, um, what
1:24:40 Google has done over the past year. So I think it's fair to say that Google was losing on
1:24:47 the LLM product side, uh, a year ago with Gemini 1.5, and now it's winning with Gemini 2.5, and you took
1:24:52 the helm and you led this effort. What did it take to go from, let's say, quote-unquote, losing to,
1:24:55 quote-unquote, winning in the span of a year?
1:25:00 Yeah. Well, firstly, it's the absolutely incredible team that we have, you know, led by Koray and Jeff
1:25:07 Dean and Oriol, and the amazing team we have on Gemini, absolutely world-class. So you can't do it
1:25:13 without the best talent. Um, and of course, you know, we have a lot of great compute as well.
1:25:19 But then it's the research culture we've created, right, and basically coming together, the different
1:25:24 groups in Google: you know, there was Google Brain, a world-class team, and then the old
1:25:31 DeepMind, and pulling together all the best people and the best ideas, and gathering around to make the
1:25:38 absolute greatest system we could. And it has been hard. Um, but we're all very competitive, uh, and we,
1:25:44 you know, love research. This is so fun to do. Um, and it's great to see our trajectory.
1:25:49 It wasn't a given, but we're very pleased with, um, where we are. And the rate of progress
1:25:54 is the most important thing. So if you look at where we've come from two years ago to one year
1:25:59 ago to now, you know, I think what we call relentless progress, along with relentless shipping
1:26:05 of that progress, is being very successful. And, you know, it's unbelievably competitive,
1:26:11 uh, the whole space, the whole AI space, with some of the greatest entrepreneurs and leaders,
1:26:16 uh, and companies in the world all competing now, because everyone's realized how important
1:26:20 AI is. Um, and it's been very pleasing for us to see that progress.
1:26:25 You know, Google is a gigantic company. Uh, can you speak to the natural things that happen
1:26:30 in that case, the bureaucracy that emerges? Like, you want to be careful; like, you know,
1:26:35 there naturally are, uh, meetings, and there are managers.
1:26:39 What are some of the challenges, from a leadership perspective, of breaking through that
1:26:46 in order to, like you said, ship? The number of Gemini-related products that have been
1:26:51 shipped over the past year is just insane. Right. It is. Yeah, exactly. That's what
1:26:57 relentlessness looks like. Um, I think it's a question of, like any big company, you end up
1:27:02 having, uh, a lot of layers of management and things like that. It's sort of the nature of how it works.
1:27:09 Um, but I still operate, and was always operating, the old DeepMind as a startup, a large
1:27:14 one, but still a startup. And that's how we still act today with Google DeepMind:
1:27:21 acting with decisiveness and the energy that you get from the best smaller organizations. And we try to
1:27:26 get the best of both worlds, where we have these incredible surfaces with billions of users, uh, incredible
1:27:32 products that we can power up with our AI and our research. Um, and that's amazing.
1:27:36 You know, there are very few places in the world where you can do that: incredible world-class
1:27:41 research on the one hand, and then plug it in and improve billions of people's lives the next day.
1:27:48 Uh, that's a pretty amazing combination, and we're continually fighting and cutting away bureaucracy
1:27:53 to allow the research culture and the relentless shipping culture to flourish. And I think we've
1:27:58 got a pretty good balance, whilst being responsible with it, you know, as you have to be as a large
1:28:03 company, and also, uh, with the number of, you know, uh, huge product surfaces that we have.
1:28:09 Uh, so, funny thing you mentioned about, like, the surfaces with billions of users: I had a conversation
1:28:15 with a brilliant guy, uh, here at the British Museum, called Irving Finkel. He's a world
1:28:24 expert on cuneiform, the ancient writing on tablets. And he doesn't know about ChatGPT or
1:28:31 Gemini; he doesn't know anything about AI. But his first encounter with AI is AI Mode on
1:28:36 Google. Yes. He's like, is that what you're talking about, this AI Mode? And, you know,
1:28:40 it's just a reminder that there's a large part of the world that doesn't know about
1:28:46 this AI thing. Yeah, I know. It's funny, because if you live on, uh, X and Twitter, I mean,
1:28:50 at least my feed, it's all AI. And there are certain places, you know, in the
1:28:55 Valley and certain pockets, where all everyone's thinking about is AI. But a lot of the
1:29:01 normal world hasn't come across it yet. But that's a great responsibility, their first
1:29:07 interaction. Yeah. Um, at the grand scale of rural India or anywhere across the world.
1:29:11 Right. And we want it to be as good as possible. And in a lot of cases, it's just under the hood,
1:29:18 powering, making something like Maps or Search work better. And, um, ideally, for a lot of those
1:29:22 people, it should just be seamless, just new technology that makes their lives more, you know,
1:29:27 productive and helps them. A bunch of folks on the Gemini product and engineering teams have
1:29:32 spoken extremely highly of you on another dimension that I almost didn't even expect, because I kind of
1:29:39 think of you as the deep scientist, caring about these big research, scientific questions.
1:29:44 But they also said you're a great product guy: how to create a thing that a lot of people would use
1:29:51 and enjoy using. So can you maybe speak to what it takes to create an AI-based product that a lot of
1:29:56 people enjoy using? Yeah. Well, I mean, again, that comes back from my game design days, where I used
1:30:00 to design games for millions of gamers. People forget about that. I've had experience with
1:30:05 cutting-edge technology in product; that is how games were in the nineties.
1:30:12 And so I actually love the combination of cutting-edge research being applied in a product
1:30:18 to power a new experience. And so, um, I think it's the same skill, really, of, you know,
1:30:24 imagining what it would be like to use it, viscerally, um, and having good taste, coming back to earlier:
1:30:29 the same thing that's useful in science, um, I think, can also be useful in
1:30:35 product design. And, um, I've just always been a sort of multidisciplinary
1:30:42 person, so I don't see, uh, the boundaries, really, between, you know, arts and sciences, or product and
1:30:46 research. It's a continuum for me. I mean, I only like working on products that are
1:30:50 cutting edge, that have cutting-edge technology under the hood. I wouldn't
1:30:55 be excited about them if they were just run-of-the-mill products. Um, so it requires this
1:30:57 invention, creativity capability.
1:31:03 What are some specific things you've learned when, um, even on the LLM side, you're
1:31:10 interacting with Gemini, and you're like, this doesn't feel right: the layout, the interface, maybe the
1:31:18 trade-offs, the latency, like how to present things to the user, how long to wait and how that waiting
1:31:22 is shown, or the reasoning capabilities? There are some interesting things, because, like you said, it's the
1:31:27 very cutting edge; we don't know how to present it correctly. So are there some
1:31:28 specific things you've learned?
1:31:34 I mean, it's such a fast-evolving space; we're evaluating this all the time. But where we are
1:31:39 today is that you want to continually simplify things. Um, whether that's the interface
1:31:44 or what you build on top of the model, you kind of want to get out of the way of the model.
1:31:49 The model train is coming down the track and it's improving unbelievably fast, this relentless progress
1:31:54 we talked about earlier. You know, you look at 2.5 versus 1.5 and it's just a gigantic
1:31:59 improvement, and we expect that again for the future versions. So the models are becoming
1:32:04 more capable. So the interesting thing about the design space, in today's world of
1:32:09 these AI-first products, is you've got to design not for what the technology
1:32:15 can do today, but for what it can do in a year's time. So you actually have to be a very technical product person,
1:32:21 because, uh, you've got to kind of have a good intuition for, and feel for: okay, that thing that
1:32:26 I'm dreaming about now can't be done today, but is the research track on schedule to basically
1:32:31 intercept that in six months or a year's time? So you've kind of got to intercept where this
1:32:37 fast-changing technology is going, as well as the, um, uh, new capabilities that are coming online all the time
1:32:42 that you didn't realize before, that can allow something like Deep Research to work. Or now we've got video
1:32:48 generation: what do we do with that? Um, this multimodal stuff. You know, one question I have is,
1:32:54 is it really going to be the current UI that we have today? These text-box chats seem very unlikely
1:33:00 once you think about these super multimodal, uh, systems. Shouldn't it be something more like
1:33:05 Minority Report, where you're sort of vibing with it in a kind of collaborative
1:33:10 way, right? It seems very restrictive compared to that. I think we'll look back on today's interfaces and
1:33:15 products and systems as quite archaic, maybe in just a couple of years. So I think there's a lot of
1:33:21 space, actually, for innovation to happen on the product side as well as the research side.
1:33:28 And then, we were talking offline about the keyboard: the open question is how, when, and how much will
1:33:34 we move to audio as the primary way of interacting with the machines around us, versus typing stuff?
1:33:39 Yeah. I mean, typing is a very low-bandwidth way of doing it, even if you're a very fast, you know,
1:33:44 typer. And I think we're going to have to start utilizing other devices, whether that's smart glasses,
1:33:52 you know, audio earbuds, um, and eventually maybe some sorts of neural devices, where we can increase
1:33:57 the input and the output bandwidth to something, uh, you know, maybe a hundred x of what it is today.
1:34:04 I think that, you know, an underappreciated art form is interface design. And I think you cannot
1:34:09 unlock the power of the intelligence of a system if you don't have the right interface; the interface
1:34:15 is really the way you unlock its power. Yeah. It's such an interesting question of how to do that.
1:34:20 Yeah. So how you, like, get out of the way is a real art form.
1:34:24 Yes. You know, it's the sort of thing that I guess Steve Jobs always talked about, right? It's
1:34:29 simplicity, beauty, and elegance that we want. Right. And we're not there. Nobody's there yet,
1:34:34 in my opinion. And that's what I would like us to get to. Again, it sort of speaks to Go again,
1:34:38 right, as a game, the most elegant, beautiful game. Can you, you know, uh, can you make an
1:34:43 interface as beautiful as that? And actually, I think we're going to enter an era of AI-generated
1:34:48 interfaces that are probably personalized to you, so it fits your aesthetic,
1:34:54 your feel, the way that your brain works. And, um, the AI kind of generates that
1:34:58 depending on the task. You know, that feels like that's probably the direction we'll end up in.
1:35:03 Yeah. Because some people are power users and they want every single parameter on screen,
1:35:08 everything, perhaps like me with a keyboard, keyboard-based navigation; I like to have shortcuts
1:35:12 for everything. And some people like minimalism. Just hide all of that complexity. Yeah,
1:35:18 exactly. Yeah. Uh, well, I'm glad you have a Steve Jobs mode in you as well. This is great.
1:35:23 Einstein mode, Steve Jobs mode. Um, all right, let me try to trick you into answering a question:
1:35:29 when will Gemini 3 come out? Is this before or after GTA 6? The world waits for both.
1:35:37 And what does it take to go from 2.5 to 3.0? Because it seems like there have been a lot of
1:35:42 releases of 2.5, which are already leaps in performance. So what does it even mean to
1:35:48 go to a new version? Is it about performance? Is it about a completely different flavor of
1:35:55 experience? Yeah. Well, the way it works with our different, uh, version numbers is, you know,
1:36:01 it takes, you know, roughly six months or something to do a new
1:36:08 kind of full run and the full productization of a new version. And during that time, lots of new,
1:36:13 interesting research iterations and ideas come up, and we sort of collect them all together:
1:36:18 you know, you could imagine the last six months' worth of interesting ideas on the architecture
1:36:23 front, uh, maybe on the data front, like many different possible things. And we
1:36:28 package that all up, test which ones are likely to be useful for the next iteration,
1:36:34 and then bundle that all together. And then we start the new, you know, giant hero training run.
1:36:39 Right. And then, uh, of course, that gets monitored. Uh, and then at the end
1:36:42 of the pre-training, there's all the post-training. There are many different ways
1:36:46 of doing that, different ways of patching it. So there's a whole experimental phase there,
1:36:50 which you can also get a lot of gains out of. And that's where you see the version numbers usually
1:36:56 referring to the base model, the pre-trained model. And then the interim versions of 2.5,
1:37:01 you know, the different sizes and the different little additions, they're often, uh, patches or
1:37:07 post-training ideas that can be done afterwards, uh, off the same basic architecture. And then, of
1:37:12 course, on top of that, we also have different sizes, Pro and Flash and Flash-Lite, that are often
1:37:17 distilled from the biggest ones, you know, the Flash model from the Pro model. And that means we have a
1:37:23 range of different choices, if you are the developer: do you want to prioritize performance,
1:37:29 or speed and cost, right? And we like to think of this Pareto frontier where, you know, on the one
1:37:35 hand, uh, the y-axis is, you know, performance, and then the x-axis is, you know, cost or
1:37:43 latency and speed, uh, basically. And we have models that completely define the frontier. So whatever
1:37:48 trade-off you want as an individual user, or as a developer, you should find one of
1:37:54 our models satisfies that constraint. So behind the version changes, there is a big hero run.
1:38:05 Yes. And then there's, uh, just an insane complexity of productization. Then there's the distillation of
1:38:11 the different sizes along that Pareto frontier. And then at each step you take, you realize there might be a
1:38:14 cool product. There are side quests. Yes, exactly.
1:38:18 But then you also don't want to take too many side quests, because then you have a million versions
1:38:22 of a million products. Yes, precisely. It's very unclear. Yeah. But you also get super excited, because
1:38:28 it's super cool. Yeah. Like, even, you know, Veo, it's very cool; how does it fit into the bigger
1:38:34 thing? Exactly. Exactly. And then you're constantly, this process of converging upstream, we call it, you
1:38:40 know: ideas from the product surfaces, or from the post-training, and even further
1:38:45 downstream than that, you kind of upstream into the core model training for the next
1:38:51 run. Right. So then the main model, the main Gemini track, becomes more and more general, and eventually,
1:38:52 you know, AGI.
1:38:55 One hero run at a time.
1:38:56 Yes, exactly. A few hero runs later.
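Coming back to the Pareto frontier described a moment ago, here is a small sketch of the selection rule: keep the models that no alternative beats on both cost and performance at once. The names and numbers are made up for illustration.

```python
# Hypothetical model variants: (relative cost, benchmark score).
models = {
    "pro":        (10.0, 0.92),
    "flash":      (2.0, 0.85),
    "flash-lite": (0.5, 0.78),
    "old-mid":    (3.0, 0.80),   # dominated: "flash" is cheaper AND better
}

def pareto_frontier(models: dict[str, tuple[float, float]]) -> list[str]:
    """Keep models not dominated by a cheaper-and-better alternative."""
    frontier = []
    for name, (cost, perf) in models.items():
        dominated = any(c <= cost and p >= perf and (c, p) != (cost, perf)
                        for c, p in models.values())
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(models))   # ['pro', 'flash', 'flash-lite']
```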
1:39:05 Yeah. So, sometimes when you release these new versions, or every version, really: are benchmarks, um,
1:39:09 productive or counterproductive for showing the performance of a model?
1:39:14 You need them, but it's important that you don't overfit to them, right? They
1:39:18 shouldn't be the be-all and end-all. So there's LMArena, which used to be called
1:39:23 LMSYS; that's one of them that turned out, sort of organically, to be one of the main ways people
1:39:28 like to test these systems, at least the chatbots. Um, obviously, there are loads of academic benchmarks
1:39:33 that test mathematics and coding ability, general language ability,
1:39:38 science ability, and so on. And then we have our own internal benchmarks that we care about.
1:39:43 It's a kind of multi-objective, you know, optimization problem, right? You don't want to be
1:39:47 good at just one thing. We're trying to build general systems that are good across the board,
1:39:54 and you try to make no-regret, uh, improvements, where you improve in, like, you know, coding,
1:39:59 uh, but it doesn't reduce your performance in other areas, right. So that's the hard part. Because,
1:40:04 of course, you could put more coding data in, or you could put more, um, I don't know,
1:40:10 gaming data in, but then does it make your language, uh, system worse, or, uh, your
1:40:14 translation systems, and other things that you care about? So you've got to kind of continually
1:40:21 monitor this increasingly larger and larger suite of benchmarks. And also, uh, when you stick
1:40:27 these models into products, you also care about the direct usage and the direct stats and the
1:40:32 signals that you're getting from the end users, whether they're coders or the average
1:40:35 person using, uh, the chat interfaces. Yeah. Because ultimately you want to measure
1:40:41 the usefulness, but it’s so hard to convert that into a number. Right. It’s, it’s really vibe based
1:40:47 benchmarks across a large number of users. And it’s hard to know. And I, it would be just terrifying to
1:40:54 me to, you know, you have a much smarter model, but it’s just something vibe based. It’s not,
1:40:59 not, not, not quite working. That’s just scary because, and everything you just said, it has to be smart
1:41:06 and useful across so many domains. So you, you get super excited because it’s all of a sudden solving
1:41:11 programming problems. It’d never been able to solve before, but now it’s crappy poetry or something.
1:41:18 And it’s just, I don’t know. That’s a stressful, that’s so difficult, um, to balance and because
1:41:23 you can’t really trust the benchmarks. You really have to trust the end users. Yeah. And then other
1:41:28 things that are even more is a terror come into play. Like, um, you know, the style of the persona
1:41:35 of the, the, the system, you know, how it, you know, is it verbose? Is it succinct? Is it humorous?
1:41:39 You know, and, and different people like different things. So, um, you know, it’s very interesting.
1:41:44 It’s almost like cutting edge part of psychology research or personal personality research. You
1:41:49 know, I used to do that in my PhD, like five factor personality. What do we actually want our
1:41:54 systems to be like? And different people will like different things as well. So these are all just sort
1:41:59 of new problems in product space that I don’t think have ever really been tackled before, but, um,
1:42:03 we’re going to sort of rapidly have to deal with now. I think it’s a super fascinating space
1:42:08 developing the character of the thing. Yeah. And in so doing, it puts a mirror to ourselves.
1:42:14 What are the kinds of things, um, that we like because prompt engineering allows you to control
1:42:22 a lot of those elements, but can the product, uh, make it easier for you to, uh, control the
1:42:26 different flavors of those experiences, the different characters that you interact with? Yeah, exactly.
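The no-regret improvement Demis describes a few minutes earlier can be sketched the same way: before accepting a candidate model, compare it against the current one across the whole benchmark suite and flag any regression. The benchmark names and scores here are invented for illustration.

    def no_regret(current, candidate, tolerance=0.0):
        """Return the benchmarks where the candidate scores worse than the
        current model by more than the allowed tolerance."""
        return {
            bench: (current[bench], candidate[bench])
            for bench in current
            if candidate[bench] < current[bench] - tolerance
        }

    current   = {"coding": 71.0, "math": 83.0, "translation": 64.0}
    candidate = {"coding": 78.0, "math": 84.0, "translation": 61.0}

    regressions = no_regret(current, candidate)
    if regressions:
        print("not a no-regret improvement:", regressions)  # translation dropped
    else:
        print("accept: improves without regressions")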
1:42:31 So, so what’s the probability of Google DeepMind winning? Well, I don’t see it as sort of winning.
1:42:36 I mean, I think we need to think winning is the wrong way to look at it given how important
1:42:41 and consequential what it is we’re building. So funnily enough, I don’t, I try not to view
1:42:46 it like a game or competition, even though that’s a lot of my mindset. It’s, it’s about, in my
1:42:51 view, all of us have those of us at the leading edge, uh, have a responsibility to, um, steward
1:42:56 this unbelievable technology that could be used for incredible good, but also has risks.
1:43:02 Um, steward it safely into the world for the benefit of humanity. That’s always, um, what
1:43:07 I’ve, um, uh, I dreamed about and what we’ve always tried to do. And I hope that’s what
1:43:11 eventually the community, maybe the international community will rally around when it becomes
1:43:16 obvious that as we get closer and closer to, to AGI that, um, that’s what’s needed.
1:43:21 I agree with you. I think that's beautifully put. You've said that you talk to, and are on good terms with, the leads of some of these labs. As the competition heats up, how hard is it to maintain those relationships?
1:43:38 It's been okay. I pride myself on being collaborative; I'm a collaborative person. Research is a collaborative endeavor. Science is a collaborative endeavor, right? It's all good for humanity in the end. If you cure terrible diseases and come up with an incredible cure, that's a net win for humanity. And the same with energy, and all of the things that I'm interested in helping solve with AI. So I just want that technology to exist in the world and be used for the right things, and the benefits of that, the productivity benefits, shared for the benefit of everyone. So I try to maintain good relations with all the leading lab people. They're very interesting characters, many of them, as you might expect. But yeah, I'm on good terms, I hope, with pretty much all of them. And I think that's going to be important when things get even more serious than they are now: that there are those communication channels, which is what will facilitate cooperation or collaboration, if that's what is required, especially on things like safety.
1:44:40 Yeah. I hope there’s some collaboration on stuff that’s, uh, sort of less high stakes and in so doing
1:44:45 serves as a mechanism for maintaining friendships and relationships. So for example, I think the internet
1:44:50 would love it if you and Elon somehow collaborate on creating a video game, that kind of thing that I
1:44:56 think that enables camaraderie and good terms and also you two are legit gamers. So it’s just fun to,
1:45:00 yeah, fun to, yeah, that would be awesome. And we’ve talked about that in the past and it may be a cool
1:45:05 thing that, that, you know, we can do. And I agree with you. It’d be nice to have, um, kind of side
1:45:12 projects in a way where, where one can just lean into the collaboration aspect of it. And it’s a sort of,
1:45:18 uh, win-win for both sides. And it’s, um, and it kind of builds up that, that, that, uh, collaborative
1:45:24 muscle. I see the scientific endeavor as that kind of side project for humanity. Yeah. And I think deep
1:45:30 Google deep mind has been really pushing that. Uh, I would love it if to see other labs do more
1:45:34 scientific stuff and then collaborate. Cause it just seems like easier to collaborate on the big
1:45:38 scientific questions. I agree. And I would love to see a lot of people, all of the other labs talk
1:45:42 about science, but I think we’re really the only ones using it for science and doing that.
1:45:47 And that’s why projects like AlphaFold are so important to me. And I think to our mission is to
1:45:54 show, uh, how AI can this, you know, be clearly used in a very concrete way for the benefit of
1:45:59 humanity. And, and also we spun out companies like Isomorphic off the back of AlphaFold to do drug
1:46:03 discovery. And it’s going really well and build sort of, you know, you can think of build additional
1:46:09 AlphaFold type type systems to go into chemistry space to help accelerate drug design. And the
1:46:14 examples I think we need to show, uh, and society needs to understand how well AI can bring these
1:46:20 huge benefits. Well, from the bottom of my heart, thank you for pushing the scientific efforts forward
1:46:25 with, with rigor, with fun, with humility, all of it. I just love to see it. And still talking about
1:46:32 P equals MP, I mean, it’s just incredible. So I love it. Um, there are, there’s been, uh, seemingly a war
1:46:38 for talent. Some of it is meme. I don’t know. Um, what do you think about meta buying up talent with
1:46:45 huge salaries and, and the heating up of this battle for talent? And I should say that I think a lot of
1:46:50 people see DeepMind as a really great place to do, uh, cutting edge work for the reasons that you’ve
1:46:57 outlined is like, there’s this vibrant scientific culture. Yeah. Well, look, I, of course, um, you
1:47:02 Yeah. Well, look, of course, there's a strategy that Meta is taking right now. From my perspective, at least, I think the people who are real believers in the mission of AGI, who understand what it can do and the real consequences, both good and bad, that flow from it, and what that responsibility entails, are mostly doing it, like myself, to be on the frontier of that research, so they can help influence the way it goes and steward that technology safely into the world. And Meta right now are not at the frontier; maybe they'll manage to get back on there. It's probably rational, what they're doing, from their perspective, because they're behind and they need to do something. But I think there are more important things than just money. Of course, one has to pay people their market rates and all of these things, and that continues to go up. And I was expecting this, because more and more people, leaders of companies, are finally realizing what I've known for 30-plus years now, which is that AGI is probably the most important technology that's ever going to be invented. So in some senses it's rational to be doing that. But I also think there's a much bigger question. I mean, people in AI these days are very well paid. I remember when we were starting out back in 2010, I didn't even pay myself for a couple of years because there wasn't enough money; we couldn't raise any money. And these days, interns are being paid the amount that we raised as our entire first seed round. So it's pretty funny. I remember the days when I used to have to work for free, and almost pay my own way, to do an internship. Now it's all the other way around, but that's just how it is. It's the new world. But we've been discussing what happens post-AGI, when energy systems are solved and so on. What is money even going to mean in that economy? We're going to have much bigger issues to work through: how does the economy function in that world, and how do companies? So I think salaries and things like that today are a little bit of a side issue.
1:48:58 Yeah, when you're facing such gigantic consequences and gigantic, fascinating scientific questions, which may be only a few years away. So in a practical, pragmatic sense, if we zoom in on jobs, we can look at programmers, because it seems like AI systems are currently doing incredibly well at programming, and increasingly so. A lot of people who program for a living, who love programming, are worried they will lose their jobs. How worried should they be, do you think? And what's the right way to adjust to the new reality and ensure that you survive and thrive as a human in the programming world?
1:49:36 Well, it's interesting that programming, and again it's counterintuitive to what we thought years ago, maybe, but some of the skills that we think of as the harder skills have turned out to be the easier ones, for various reasons. Coding and math, because you can create a lot of synthetic data and verify whether that data is correct; because of that, it's easier to make things like synthetic data to train from. It's also an area, of course, that we're all interested in as programmers, right, to help us get faster and more productive. So for the next era, the next five, ten years, I think what we're going to find is that people who embrace these technologies, who become almost at one with them, whether that's in the creative industries or the technical industries, will become sort of superhumanly productive. So the great programmers will be even better; they'll be even 10x what they are today, because they'll be able to use their skills to utilize the tools to the maximum, exploit them to the maximum. So I think that's what we're going to see in the next era, and that's going to cause quite a lot of change, right? And so that's coming, and a lot of people will benefit from it. One example of that is, if coding becomes easier, it becomes available to many more creatives to do more. But I think the top programmers will still have huge advantages in terms of, going back to what we discussed, specifying what the architecture should be, how to guide these coding assistants in a way that's useful, and checking whether the code they produce is good. So I think there's plenty of headroom there for the foreseeable future, you know, the next few years.
1:51:14 So I think there are several interesting things there. One is there's a lot of imperative to just get consistently better at using these tools, so that you're riding the wave of the improving models rather than competing against them. Sadly, but that's the nature of life on earth. There could be a huge amount of value to certain kinds of programming at the cutting edge, and less value to other kinds. For example, front-end web design might be more amenable, as you mentioned, to generation by AI systems, while, say, game engine design, or backend design, or guiding systems in high-performance situations, the high-performance programming type of design decisions, might remain extremely valuable. But it will shift where the humans are needed most, and that's scary for people to adjust to.
1:52:15 Yeah, I think that's right. Any time there's a lot of disruption and change... and this is not just this time; we've had this many times in human history, with the internet, with mobile, and before that with the industrial revolution. It's going to be one of those eras where there will be a lot of change. I think there will be new jobs we can't even imagine today, just like the internet created, and then those people with the right skill sets to ride that wave will become incredibly valuable, but maybe people will have to relearn or adapt their current skills a bit. The thing that's going to be harder to deal with this time around is that I think what we're going to see is something like 10 times the impact the industrial revolution had, but 10 times faster as well. So instead of a hundred years, it takes 10 years; it's like 100x the impact and the speed combined. That's what I think is going to make it more difficult for society to deal with. There's a lot to think through, and I think we need to be discussing it right now. I encourage the top economists in the world, and philosophers, to start thinking about how society is going to be affected by this and what we should do, including things like universal basic provision or something like that, where a lot of the increased productivity gets shared out and distributed to society, maybe in the form of services and other things; and if you want more than that, you can still go and get some incredibly rare skills and make yourself unique, but there's a basic provision that is provided.
1:53:55 And if you think of government as technology, there are also interesting questions, not just in economics but in politics. How do you design a system that responds to rapidly changing times, such that you can represent the pain that people in different groups feel, and reallocate resources in a way that addresses that pain and represents the hopes and the fears of different people, in a way that doesn't lead to division? Because politicians are often really good at fueling division and using it to get elected: defining the other and then saying that's bad. I think that's often counterproductive to leveraging a rapidly changing technology to help the world flourish. So we almost need to improve our political systems as well, rapidly, if you think of them as a technology.
1:54:59 Definitely. And I think we'll need new governance structures, probably new institutions, to help with this transition. So I think political philosophy and political science are going to be key to that. But the number one thing, first of all, is to create more abundance of resources, right? That's the number one thing: increase productivity, get more resources, maybe eventually get out of the zero-sum situation. Then the second question is how to use those resources and distribute them. But you can't do that without having the abundance first.
1:55:34 You mentioned to me the book The MANIAC by Benjamín Labatut. First of all, there's a bio about you in it. It's strange. Yeah, it's unclear. Yes, sure. It's unclear how much is fiction, how much is reality. But the central figure is John von Neumann, and I would say it's a haunting and beautiful exploration of madness and genius, and, let's say, the double-edged sword of discovery. For people who don't know, John von Neumann was a kind of legendary mind. He contributed to quantum mechanics. He was on the Manhattan Project. He is widely considered to be the father of, or to have pioneered, the modern computer, and AI, and so on. As many people say, he is one of the smartest humans ever. So it's just fascinating. What's also fascinating is that, as a person who saw nuclear science and physics become the atomic bomb, he got to see ideas become a thing that has a huge amount of impact on the world. He also foresaw the same thing for computing. Yeah. And that's, again, the beautiful and haunting aspect of the book: then taking a leap forward and looking at the Lee Sedol and AlphaGo moment, and AlphaZero after it, where maybe John von Neumann's thinking was brought to reality. So I guess the question is, if you got to hang out with John von Neumann now, what would he say about what's going on?
1:57:14 Well, that would be an amazing experience. He's a fantastic mind. And I also love the way he spent a lot of his time at Princeton, at the Institute for Advanced Study, a very special place for thinking. It's amazing how much of a polymath he was, in the spread of things he helped invent, including of course the von Neumann architecture that all modern computers are based on. He had amazing foresight. I think he would have loved where we are today. And I think he would have really enjoyed AlphaGo, it being a game; he also did game theory. I think he foresaw a lot of what would happen with learning machines, systems that are kind of grown, I think he called it, rather than programmed. Maybe he wouldn't even be that surprised; this is the fruition of what I think he already foresaw in the 1950s.
1:58:03 I wonder what advice he would give. He got to see the building of the atomic bomb with the Manhattan Project. I'm sure there's interesting stuff there that maybe is not talked about enough: maybe some bureaucratic aspect, maybe the influence of politicians, maybe not enough of picking up the phone and talking to the people called enemies by said politicians. There might be some deep wisdom that we may have lost from that time, actually.
1:58:27 Yeah, I'm sure there is. I mean, I've read a lot of books about that time as well, chronicling it, and there were some brilliant people involved. I agree with you; I think maybe there needs to be more dialogue and understanding, and I hope we can learn from those times. I think the difference here is that AI is a multi-use technology. Obviously we're trying to do things like solve all diseases, help with energy and scarcity, these incredible things; this is why I started on this journey 30-plus years ago. But of course there are risks too. And my guess is von Neumann foresaw both. I think he said, to his wife, that computers would be even more impactful on the world. And as we just discussed, I think that's right; I think it's going to be at least 10 times the industrial revolution. So I think he would have been, I imagine, fascinated by where we are now.
1:59:33 And I think one of the takeaways from the book, maybe you can correct me, is that reason, what the book calls the mad dreams of reason, is not enough for guiding humanity as we build these super-powerful technologies; that there's something else. There's also a religious component: whatever God, whatever religion gives us, it pulls at something in the human spirit that raw, cold reason doesn't give us.
2:00:00 And I agree with that. I think we need to approach it with, whatever you want to call it, a spiritual dimension or a humanist dimension; it doesn't have to be to do with religion, right? But this idea of a soul, of what makes us human, this spark that we have, perhaps it's to do with consciousness, when we finally understand that. I think that has to be at the heart of the endeavor. I've always seen technology as the enabler, the tools that enable us to flourish and to understand more about the world. And I'm sort of with Feynman on this; he used to always talk about science and art being companions, right? You can understand a flower from both sides: how beautiful it is, and also why the colors of the flower evolved like that. That just adds to the intrinsic beauty of the flower. I've always seen it like that. And maybe in Renaissance times, the great discoverers like Da Vinci, I don't think they saw any difference between science and art, and perhaps religion; everything was just part of being human and being inspired by the world around us. That's the philosophy I've tried to take. One of my favorite philosophers is Spinoza, and I think he combined all of that very well: this idea of trying to understand the universe and our place in it, which was his way of understanding religion. I think that's quite beautiful. For me, all of these things are interrelated: the technology and what it means to be human. And I think it's very important that we remember that when we're immersed in the technology and the research. A lot of researchers I see in our field are a little bit too narrow and only understand the technology. That's also why it's important for this to be debated by society at large, and I'm very supportive of things like the AI summits and of governments understanding it. I think that's one good thing about the chatbot era, the product era of AI: the everyday person can actually interact with cutting-edge AI and feel it for themselves.
2:02:07 Yeah, because they force the technologists to have the human conversation. That's the hopeful aspect of it. Like you said, it's a dual-use technology, and we're forcefully integrating all of humanity into the discussion about AI, because ultimately AI, AGI, will be used for the things that states use technologies for, which is conflict and so on. And the more we integrate humans into the picture by having them chat with these systems, the more we will be able to guide it.
2:02:39 Yeah, society will be able to adapt to these technologies, like we've always done in the past with the incredible technologies we've invented.
2:02:53 Do you think there will be something like a Manhattan Project, where there will be an escalation of the power of this technology, and states, in their old way of thinking, will try to use it as weapons technology? And there will be this kind of escalation?
2:03:09 I hope not. I think that would be very dangerous to do, and also not the right use of the technology. I hope we'll end up with something more collaborative, if needed, more like a CERN project, which is research-focused, where the best minds in the world come together to carefully complete the final steps and make sure it's responsibly done before deploying it to the world. We'll see. It's difficult with the current geopolitical climate, I think, to see cooperation, but things can change. And at least on the scientific level, I think it's important for the researchers to keep in touch and keep close to each other, at least on those kinds of topics.
2:03:55 Yeah. And I personally believe, on the education side and the immigration side, it would be great if it went in both directions: people from the West immigrating to China, and people from China coming to the West. I mean, there's a family, human aspect of people just intermixing, and thereby those ties grow strong, so you can't divide people against each other in that old-school way of thinking. So multicultural, multidisciplinary research teams working on scientific questions: that's the hope. Don't let the leaders who are warmongers divide us. I think science is ultimately a really beautiful connector.
2:04:31 Yeah. Science has always been, I think, quite a collaborative endeavor, and scientists know that it's a collective endeavor as well; we can all learn from each other. So perhaps it could be a vector to get a bit of cooperation.
2:04:45 Ridiculous question: what's your p(doom), the probability that human civilization destroys itself?
2:04:52 Well, look, I don't have a p(doom) number. The reason I don't is that I think it would imply a level of precision that is not there. I don't know how people are getting their p(doom) numbers; I think it's a little bit of a ridiculous notion. What I would say is that it's definitely non-zero, and it's probably non-negligible, and that in itself is pretty sobering. My view is that it's just hugely uncertain, right? What these technologies are going to be able to do, how fast they're going to take off, how controllable they're going to be. Some things may turn out to be way easier than we thought, hopefully. But there may be some really hard problems that are harder than we guess today, and we don't know that for sure. So under those conditions of a lot of uncertainty, but huge stakes both ways: on the one hand, we could solve all diseases and energy problems and the scarcity problem, and then travel to the stars, bring consciousness to the stars, and reach maximum human flourishing; on the other hand are these p(doom) scenarios. Given the uncertainty around it and the importance of it, it's clear to me the only rational, sensible approach is to proceed with cautious optimism. We want the benefits, of course, and all of the amazing things that AI can bring. And actually, I would be really worried for humanity, given the other challenges that we have, climate, disease, aging, resources, all of that, if I didn't know something like AI was coming down the line. How would we solve all those other problems? I think it's hard. So it could be amazingly transformative for good. But on the other hand, there are these risks that we know are there but can't quite quantify. So the best thing to do is to use the scientific method to do more research, to try to define those risks more precisely, and of course to address them. I think that's what we're doing, and there probably needs to be 10 times more effort on that than there is now, as we're getting closer and closer to the AGI line.
2:07:03 What would be the bigger source of worry for you: would it be human-caused or AGI-caused?
2:07:13 Humans abusing the technology, versus AGI itself, through mechanisms that you've spoken about, which are fascinating: deception, this kind of stuff, getting better and better, secretly.
2:07:22 I think they operate over different timescales, and they're equally important to address. So there's the common-or-garden variety of bad actors using new technology, in this case a general-purpose technology, and repurposing it for harmful ends. That's a huge risk, and it has a lot of complications, because generally I'm hugely in favor of open science and open source; in fact, we did that with all our science projects, like AlphaFold, for the benefit of the scientific community. But how does one restrict bad actors' access to these powerful systems, whether they're individuals or even rogue states, while at the same time enabling access for good actors to maximally build on top of them? That's a pretty tricky problem, and I've not heard a clear solution to it. So there's the bad-actor use-case problem. And then, obviously, as the systems become more agentic, more autonomous, and closer to AGI, how do we ensure the guardrails hold, that the systems stick to what we want them to do and stay under our control?
2:08:33 Yeah. I tend to, maybe because my mind is limited, worry more about the humans, the bad actors. And there it's partly, how do you not put destructive technology in the hands of bad actors, but partly also, from a geopolitical perspective, how do you reduce the number of bad actors in the world? That's also an interesting human problem.
2:08:50 Yeah, it's a hard problem. I mean, look, we can maybe also use the technology itself to help with early warning on some of the bad-actor use cases, whether that's bio or nuclear or whatever it is. AI could potentially be helpful there, as long as the AI that you're using is itself reliable, right? So it's a sort of interlocking problem, and that's what makes it very tricky. And again, it may require some agreement internationally, at least between China and the US, on some basic standards.
2:09:26 I have to ask you about the book The MANIAC again. There's the hand-of-God moment, Lee Sedol's move 78, perhaps the last time a human made a move of pure human genius and beat AlphaGo, or broke its brain, sorry to anthropomorphize. It's an interesting moment, because I think in so many domains it will keep happening.
2:09:50 Yeah, it's a special moment. And it was great for Lee Sedol. In a way, they were sort of inspiring each other: we as a team were inspired by Lee Sedol's brilliance and nobleness, and then maybe he got inspired by what AlphaGo was doing, to then conjure this incredible, inspirational moment. It's all captured very well in the documentary about it. And I think that'll continue in many domains, at least for the foreseeable future: humans bringing in their ingenuity, asking the right question, let's say, and then utilizing these tools in a way that cracks a problem.
2:10:37 Yeah. As the AI becomes smarter and smarter, one of the interesting questions we can ask ourselves is, what makes humans special? It does feel perhaps biased to insist that we humans are deeply special. I don't know if it's our intelligence; it could be something else, that other thing that's outside the mad dreams of reason.
2:10:58 I think that's what I've always imagined. When I was a kid and starting on this journey, I was fascinated by things like consciousness, and I did a neuroscience PhD to look at how the brain works, especially imagination and memory; I focused on the hippocampus. Of course, one can philosophize about it and run thought experiments, and maybe even do actual experiments like you do in neuroscience on real brains. But in the end, I always imagined that building AI, a kind of intelligent artifact, and then comparing that to the human mind and seeing what the differences were, would be the best way to uncover what's special about the human mind, if indeed there is anything special. And I suspect there probably is, but it's going to be hard to know. I think this journey we're on will help us understand that and define that. There may be a difference between the carbon-based substrates that we are and silicon ones when they process information. One of the best definitions of consciousness I like is that it's the way information feels when we process it. It's not a very helpful scientific explanation, but I think it's an interesting, intuitive one. And so this scientific journey we're on will, I think, help uncover that mystery.
2:12:15 Yeah. "What I cannot create, I do not understand." That's somebody you deeply admire, Richard Feynman, like you mentioned. You also reach for Wigner's dreams of universality, which he saw in constrained domains but also broadly, in mathematics and so on. So many aspects on which you're pushing forward. Not to start trouble at the end, but: Roger Penrose. Yes. Okay. So do you think consciousness, this hard problem of consciousness, how information feels; do you think consciousness, first of all, is a computation? And if it is, if it's information processing, like you said everything is, is it something that could be modeled by a classical computer, or is it quantum mechanical in nature?
2:13:04 Well, look, Penrose is an amazing thinker, one of the greatest of the modern era, and we've had a lot of discussions about this. Of course, we cordially disagree. He collaborated with a lot of good neuroscientists to see if he could find mechanisms for quantum-mechanical behavior in the brain, and to my knowledge they haven't found anything convincing yet. So my bet is that it's mostly just classical computing that's going on in the brain, which suggests that all the phenomena are modelable, or mimicable, by a classical computer. But we'll see. There may be this final mysterious thing, the feeling of consciousness, the qualia, these kinds of things that philosophers debate, where it's unique to the substrate. We may even come toward understanding that if we do things like Neuralink and have neural interfaces to the AI systems, which I think we probably will eventually, maybe to keep up with the AI systems; we might actually be able to feel for ourselves what it's like to compute on silicon, and maybe that will tell us. So I think it's going to be interesting. I had a debate once with the late Daniel Dennett about why we think each other are conscious. It's for two reasons. One is that you're exhibiting the same behavior that I am; so behaviorally, you seem like a conscious being, if I am one. But the second thing, which is often overlooked, is that we're running on the same substrate. So if you're behaving in the same way and we're running on the same substrate, it's most parsimonious to assume you're feeling the same experience that I'm feeling. But with an AI that's on silicon, we won't be able to rely on that second part, even if it exhibits the first part, behavior that looks like the behavior of a conscious being. It might even claim it's conscious. But we wouldn't know how it actually felt, and it probably couldn't know what we felt, at least in the first stages. Maybe when we get to superintelligence, and the technologies it builds, perhaps we'll be able to bridge that.
2:15:05 No, I mean, that's a huge test for radical empathy: to empathize with a different substrate, right? Exactly. We never had to confront that before. Yeah. So maybe, through brain-computer interfaces, we'll be able to truly empathize with what it feels like to be a computer.
2:15:20 Well, with what it feels like for information to be computed not on a carbon system.
2:15:24 I mean, that's deeply exciting. Some people kind of think about that with plants, with other life forms, which is different: a similar substrate, but sufficiently far away on the evolutionary tree that it requires a radical empathy. But to do that with a computer...
2:15:40 Well, there are animal studies on this. Of course, higher animals, like killer whales and dolphins and dogs and monkeys and elephants, have some aspects of consciousness, certainly, even though they might not be that smart in an IQ sense. So we can already empathize with that. And maybe even with some of our systems one day. We built this thing called DolphinGemma, a version of our system trained on dolphin and whale sounds, and maybe we'll be able to build an interpreter or translator at some point, which would be pretty cool.
2:16:11 What gives you hope for the future of human civilization?
2:16:19 Well, what gives me hope is our almost limitless ingenuity. First of all, I think the best of us, the best human minds, are incredible. I love meeting and watching any human at the top of their game, whether that's sport or science or art; there's nothing more wonderful than seeing them in their element, in flow. And I think it's almost limitless: our brains are general, intelligent systems, so it's almost limitless what we can potentially do with them. And then the other thing is our extreme adaptability. I think it's going to be okay, in the sense that there's going to be a lot of change, but look where we are now with, effectively, our hunter-gatherer brains. How is it that we can cope with the modern world: flying on planes, doing podcasts, playing computer games and virtual simulations? It's already mind-blowing, given that our minds were developed for hunting buffalo on the tundra. So I think this is just the next step. And it's actually kind of interesting to see how society has already adapted to the mind-blowing AI technology we have today. It's sort of, "Oh, I talk to chatbots, totally fine."
2:17:33 And it's very possible that this very podcast activity, which I'm here for, will be completely replaced by AI. I'm very replaceable, and I'm waiting for it.
2:17:38 Not at the level that you can do it, Lex, I don't think.
2:17:41 All right, thank you. That's what we humans do for each other: we compliment. And I'm deeply grateful that we humans have this infinite capacity for curiosity, adaptability, like you said, and also compassion and the ability to love.
2:17:54 Exactly. All of those, all the things that are deeply human.
2:18:00 Well, this is a huge honor, Demis. You're one of the truly special humans in the world. Thank you so much for doing what you do and for talking today.
2:18:04 Well, thank you very much, Lex.
2:18:10 Thanks for listening to this conversation with Demis Hassabis. To support this podcast,
2:18:15 please check out our sponsors in the description and consider subscribing to this channel.
2:18:21 And now let me answer some questions and try to articulate some things I’ve been thinking about.
2:18:28 If you'd like to submit questions, including in audio and video form, go to lexfridman.com/ama.
2:18:34 I got a lot of amazing questions, thoughts, and requests from folks. I'll keep trying to pick some randomly and comment on them at the end of every episode.
2:18:47 I got a note on May 21st this year that said, "Hi Lex, 20 years ago today, David Foster Wallace delivered his famous 'This Is Water' speech at Kenyon College. What do you think of this speech?"
2:19:03 Well, first, I think this is probably one of the greatest and most unique commencement speeches
2:19:08 ever given. But of course I have many favorites, including the one by Steve Jobs.
2:19:14 And David Foster Wallace is one of my favorite writers and one of my favorite humans.
2:19:23 There’s a tragic honesty to his work and it always felt as if he was engaging in a constant battle with
2:19:31 his own mind. And the writing, his writing, were kind of his notes from the front lines of that battle.
2:19:39 Now onto the speech, let me quote some parts. There’s of course the parable of the fish and the water
2:19:46 that goes, “There are these two young fish swimming along and they happen to meet an older fish swimming
2:19:55 the other way, who nods at them and says, ‘Morning boys, how’s the water?’ And the two young fish swim on for
2:20:01 a bit, and then eventually one of them looks over at the other and goes, ‘What the hell is water?’
2:20:09 In the speech, David Foster Wallace goes on to say, “The point of the fish story is merely that the
2:20:16 most obvious, important realities are often the ones that are hardest to see and talk about.” Stated as an
2:20:22 English sentence, of course, this is just a banal platitude. But the fact is that in the day-to-day
2:20:28 trenches of adult existence, banal platitudes can have a life or death importance, or so I wish to
2:20:35 suggest to you on this dry and lovely morning. I have several takeaways from this parable and the
2:20:41 speech that follows. First, I think we must question everything, and in particular, the most basic
2:20:49 assumptions about our reality, our life, and the very nature of existence. And that this project is a
2:20:56 deeply personal one. In some fundamental sense, nobody can really help you in this process of discovery.
2:21:05 The call to action here, I think, from David Foster Wallace, as he puts it, is "to be just a little less arrogant, to have just a little more critical awareness about myself and my certainties, because a huge percentage of the stuff that I tend to be automatically certain of is, it turns out, totally wrong and deluded." All right, back to me, Lex speaking. The second takeaway is that the central spiritual battles of our life are not fought on a mountaintop somewhere at a meditation retreat, but in the mundane moments of daily life. The third takeaway is that we too easily give
2:21:53 away our time and attention to the multitude of distractions that the world feeds us, the insatiable
2:22:02 black holes of attention. David Foster Wallace's call to action in this case is to be deeply aware of the beauty in each moment and to find meaning in the mundane. I often quote David Foster Wallace's
2:22:18 advice that the key to life is to be unboreable. And I think this is exactly right. Every moment,
2:22:26 every object, every experience, when looked at closely enough, contains within it infinite richness to
2:22:34 explore. And since Demis Hassabis, of this very podcast episode, and I are such fans of Richard Feynman,
2:22:42 allow me to also quote Mr. Feynman on this topic as well. Quote, “I have a friend who’s an artist
2:22:49 and has sometimes taken a view which I don’t agree with very well. He’ll hold up a flower and say,
2:22:58 look how beautiful it is. And I’ll agree.” Then he says, “I as an artist can see how beautiful this is,
2:23:06 but you as a scientist take this all apart and it becomes a dull thing.” And I think that’s kind of nutty.
2:23:15 First of all, the beauty that he sees is available to other people and to me too, I believe. Although I
2:23:22 may not be quite as refined aesthetically as he is, I can appreciate the beauty of a flower. At the same
2:23:29 time, I see much more about the flower than he sees. I can imagine the cells in there, the complicated
2:23:36 actions inside which also have beauty. I mean, it’s not just beauty at this dimension, at one centimeter.
2:23:42 There’s also beauty at the smaller dimensions, the inner structure, also the processes. The fact that
2:23:49 the colors in the flower evolved in order to attract insects to pollinate it is interesting. It means that
2:23:56 the insects can see the color. It adds a question: does this aesthetic sense also exist in lower forms?
2:24:02 Why is it aesthetic? All kinds of interesting questions which the science knowledge only adds
2:24:08 to the excitement, the mystery, and the awe of a flower. It only adds.
2:24:18 All right, back to David Foster Wallace’s speech. He has a great story in there that I particularly enjoy.
2:24:25 It goes, “There are these two guys sitting together in a bar in the remote Alaskan wilderness.
2:24:31 One of the guys is religious. The other is an atheist. And the two are arguing about
2:24:36 the existence of God with that special intensity that comes after about the fourth beer. And the
2:24:42 atheist says, “Look, it’s not like I don’t have actual reasons for not believing in God. It’s not
2:24:50 like I haven’t ever experimented with the whole God and prayer thing. Just last month, I got caught away
2:24:57 from the camp in that terrible blizzard. And I was totally lost. And I couldn’t see a thing. And it was
2:25:04 50 below. And so I tried it. I fell to my knees in the snow and cried out, “Oh God, if there is a God,
2:25:10 I’m lost in this blizzard. And I’m going to die if you don’t help me.” And now back in the bar, the religious
2:25:17 guy looks at the atheist all puzzled. “Well, then you must believe now,” he says. “After all,
2:25:24 there you are, alive.” The atheist just rolls his eyes. “No, man. All that happened was a couple of
2:25:29 Eskimos happened to be wandering by and showed me the way back to the camp."
2:25:37 All this, I think, teaches us that everything is a matter of perspective. And that wisdom may arrive
2:25:43 if we have the humility to keep shifting and expanding our perspective on the world.
2:25:49 Thank you for allowing me to talk a bit about David Foster Wallace. He’s one of my favorite
2:25:56 writers and he’s a beautiful soul. If I may, one more thing I wanted to briefly comment on.
2:26:04 I find myself to be in this strange position of getting attacked online often from all sides,
2:26:10 including being lied about sometimes through selective misrepresentation, but often through
2:26:17 downright lies. I don’t know how else to put it. This all breaks my heart, frankly. But I’ve come to
2:26:23 understand that it’s the way of the internet and the cost of the path I’ve chosen. There’s been days when it’s
2:26:31 been rough on me mentally. It's not fun being lied about, especially when it's about things that have, for a long time, been a source of happiness and joy for me. But again, that's life.
2:26:45 I’ll continue exploring the world of people and ideas with empathy and rigor, wearing my heart on my sleeve,
2:26:54 as much as I can. For me, that’s the only way to live. Anyway, a common attack on me is about my time
2:27:02 at MIT and Drexel, two great universities I love and have tremendous respect for. Since a bunch of lies
2:27:09 have accumulated online about me on these topics, to a sad and at times hilarious degree, I thought I
2:27:14 would once more state the obvious facts about my bio for the small number of you who may care.
2:27:23 TL;DR, two things. First, as I say often, including in a recent podcast episode that somehow was listened
2:27:30 to by many millions of people, I proudly went to Drexel University for my bachelor’s, master’s, and
2:27:39 doctorate degrees. Second, I am a research scientist at MIT and have been there in a paid research position
2:27:46 for the last 10 years. Allow me to elaborate a bit more on these two things now, but please skip
2:27:53 if this is not at all interesting. So like I said, a common attack on me is that I have no real affiliation
2:28:02 with MIT. The accusation, I guess, is that I’m falsely claiming an MIT affiliation because I taught a lecture
2:28:12 there once. Nope, that accusation against me is a complete lie. I have been at MIT for over 10 years
2:28:23 in a paid research position from 2015 to today. To be extra clear, I’m a research scientist at MIT working
2:28:31 in LIDS, the Laboratory for Information and Decision Systems in the College of Computing. For now, since
2:28:41 I’m still at MIT, you can see me in the directory and on the various lab pages. I have indeed given many
2:28:48 lectures at MIT over the years, a small fraction of which I posted online. Teaching for me always has
2:28:55 been just for fun and not part of my research work. I personally think I suck at it, but I have always
2:29:02 learned and grown from the experience. It’s like Feynman spoke about, if you want to understand something
2:29:10 deeply, it’s good to try to teach it. But like I said, my main focus has always been on research. I published
2:29:18 many peer-reviewed papers that you can see in my Google Scholar profile. For my first four years at MIT,
2:29:26 I worked extremely intensively. Most weeks were 80 to 100 hour work weeks. After that, in 2019,
2:29:32 I still kept my research scientist position, but I split my time taking a leap to pursue projects in AI
2:29:39 and robotics outside MIT, and to dedicate a lot of focus to the podcast. As I’ve said, I’ve been continuously
2:29:45 surprised just how many hours preparing for an episode takes. There are many episodes of the podcast for
2:29:52 which I have to read, write, and think for 100, 200 or more hours across multiple weeks and months.
2:30:01 Since 2020, I have not actively published research papers. Just like the podcast, I think it’s something
2:30:08 that's a serious full-time effort. But not publishing and not doing full-time research has been eating at me.
2:30:15 Because I love research. And I love programming and building systems that test out interesting technical
2:30:23 ideas. Especially in the context of human AI or human robot interaction. I hope to change this in the coming
2:30:31 months and years. What I’ve come to realize about myself is, if I don’t publish or if I don’t launch
2:30:37 systems that people use, I definitely feel like a piece of me is missing. It legitimately is a source
2:30:46 of happiness for me. Anyway, I’m proud of my time at MIT. I was and am constantly surrounded by people much
2:30:54 smarter than me, many of whom have become lifelong colleagues and friends. MIT is a place I go to
2:30:59 escape the world, to focus on exploring fascinating questions at the cutting edge of science and
2:31:08 engineering. This, again, makes me truly happy. And it does hit pretty hard on a psychological level when
2:31:16 I’m getting attacked over this. Perhaps I’m doing something wrong. If I am, I will try to do better.
2:31:24 In all this discussion of academic work, I hope you know that I don’t ever mean to say that I’m an
2:31:32 expert at anything. In the podcast and in my private life, I don’t claim to be smart. In fact, I often call
2:31:39 myself an idiot and mean it. I try to make fun of myself as much as possible and, in general, to
2:31:49 celebrate others instead. Now, to talk about Drexel University, which I also love, am proud of, and am
2:31:55 deeply grateful for my time there. As I said, I went to Drexel for my bachelor’s, master’s, and doctorate
2:32:01 degrees in computer science and electrical engineering. I’ve talked about Drexel many
2:32:08 times, including, as I mentioned, at the end of a recent podcast, the Donald Trump episode, funny
2:32:14 enough, that was listened to by many millions of people, where I answered a question about graduate
2:32:21 school and explained my own journey at Drexel and how grateful I am for it. If it’s at all interesting to
2:32:28 you, please go listen to the end of that episode or watch the related clip. At Drexel, I met and worked
2:32:34 with many brilliant researchers and mentors from whom I’ve learned a lot about engineering, science,
2:32:40 and life. There are many valuable things I gained from my time at Drexel. First, I took a large number
2:32:46 of very difficult math and theoretical computer science courses. They taught me how to think deeply and
2:32:52 rigorously, and also how to work hard and not give up, even if it feels like I’m too dumb to find a
2:33:00 solution to a technical problem. Second, I programmed a lot during that time, mostly C, C++. I programmed
2:33:06 robots, optimization algorithms, computer vision systems, wireless network protocols, multimodal
2:33:15 machine learning systems, and all kinds of simulations of physical systems. This is where I really developed a love
2:33:21 for programming, including, yes, Emacs and the Kinesis keyboard.
2:33:29 I also, during that time, read a lot. I played a lot of guitar, wrote a lot of crappy poetry,
2:33:39 and trained a lot in judo and jiu-jitsu, which I cannot praise enough. Jiu-jitsu humbled me
2:33:45 on a daily basis throughout my 20s, and it still does, to this very day, whenever I get a chance to train.
2:33:52 Anyway, I hope that the folks who occasionally get swept up in the chanting online crowds that want to tear
2:34:00 down others don’t lose themselves in it too much. In the end, I still think there’s more good than bad
2:34:11 in people. But we’re all, each of us, a mixed bag. I know I am very much flawed. I speak awkwardly. I
2:34:17 sometimes say stupid shit. I can get irrationally emotional. I can be too much of a dick when I
2:34:23 should be kind. I can lose myself in a biased rabbit hole before I wake up to the bigger,
2:34:33 more accurate picture of reality. I’m human. And so are you. For better or for worse. And I do still
2:34:39 believe we’re in this whole beautiful mess together. I love you all.
2:34:55 *music*

Demis Hassabis is the CEO of Google DeepMind and a Nobel Prize winner for his groundbreaking work in protein structure prediction using AI.
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep475-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

Transcript:
https://lexfridman.com/demis-hassabis-2-transcript

CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Demis’s X: https://x.com/demishassabis
DeepMind’s X: https://x.com/GoogleDeepMind
DeepMind’s Instagram: https://instagram.com/GoogleDeepMind
DeepMind’s Website: https://deepmind.google/
Gemini’s Website: https://gemini.google.com/
Isomorphic Labs: https://isomorphiclabs.com/
The MANIAC (book): https://amzn.to/4lOXJ81
Life Ascending (book): https://amzn.to/3AhUP7z

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Hampton: Community for high-growth founders and CEOs.
Go to https://joinhampton.com/lex
Fin: AI agent for customer service.
Go to https://fin.ai/lex
Shopify: Sell stuff online.
Go to https://shopify.com/lex
LMNT: Zero-sugar electrolyte drink mix.
Go to https://drinkLMNT.com/lex
AG1: All-in-one daily nutrition drink.
Go to https://drinkag1.com/lex

OUTLINE:
(00:00) – Introduction
(00:29) – Sponsors, Comments, and Reflections
(08:40) – Learnable patterns in nature
(12:22) – Computation and P vs NP
(21:00) – Veo 3 and understanding reality
(25:24) – Video games
(37:26) – AlphaEvolve
(43:27) – AI research
(47:51) – Simulating a biological organism
(52:34) – Origin of life
(58:49) – Path to AGI
(1:09:35) – Scaling laws
(1:12:51) – Compute
(1:15:38) – Future of energy
(1:19:34) – Human nature
(1:24:28) – Google and the race to AGI
(1:42:27) – Competition and AI talent
(1:49:01) – Future of programming
(1:55:27) – John von Neumann
(2:04:41) – p(doom)
(2:09:24) – Humanity
(2:12:30) – Consciousness and quantum computation
(2:18:40) – David Foster Wallace
(2:25:54) – Education and research

PODCAST LINKS:
– Podcast Website: https://lexfridman.com/podcast
– Apple Podcasts: https://apple.co/2lwqZIr
– Spotify: https://spoti.fi/2nEwCF8
– RSS: https://lexfridman.com/feed/podcast/
– Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
– Clips Channel: https://www.youtube.com/lexclips
