AI transcript
0:00:12 And now a quick few second mention of the sponsor. Check them out in the description or at
0:00:19 lexfridman.com slash sponsors. It’s the best way to support this podcast. We got Tax Network USA
0:00:27 for taxes, BetterHelp for mental health, Element for electrolytes, Shopify for selling stuff online,
0:00:33 and AG1 for your daily multivitamin drink. Choose wisely, my friends. And now on to the
0:00:37 full ad reads. You can skip them if you like, but if you do, please still check out our sponsors. I
0:00:41 enjoy their stuff. Maybe you will too. If you want to get in touch with me for whatever reason,
0:00:47 go to lexfridman.com slash contact. All right, let’s go. This episode is brought to you by Tax
0:00:55 Network USA, a full service tax firm focused on solving tax problems for individuals and for
0:01:00 small businesses. I remember when I was preparing for the Roman Empire episode, I came across
0:01:08 a lot of places where there was a rigorous discussion about the intricate tax collection
0:01:17 algorithms used by the Roman Empire. The reason I use the word algorithms is basically there’s a
0:01:23 systematic process for determining how much you owe based on your location, based on your status,
0:01:30 based on your job, based on all these kinds of factors. It’s sad, but those rules in the early
0:01:36 days initially give power to the individual because they protect the individual. But when they become
0:01:45 too complicated, then the bureaucracy, the centralized power starts to abuse its power by using the rules.
0:01:52 And then the individual loses power because they can’t figure out the complexity of the rules. And
0:01:57 that’s essentially why you need the CPAs and the firms to figure out the complexity. Anyway, these guys are
0:02:05 good. Talk with one of their strategists for free today. Call 1-800-958-1000 or go to
0:02:16 tnusa.com slash lex. This episode is brought to you by BetterHelp, spelled H-E-L-P, help. I got to recently meet a lot of
0:02:22 interesting people when I visited San Francisco. I was there in part to celebrate Joscha Bach and the newly launched
0:02:27 California Institute for Machine Consciousness. I, by the way, encourage you to check it out. I think it’s
0:02:33 C-I-M-C dot A-I. And there I talked to a lot of brilliant people and one of them was a grad student
0:02:40 studying the so-called dark triad. These are the three personality traits of narcissism,
0:02:47 Machiavellianism, and psychopathy. For a brief moment, it made me wish I took that path
0:02:54 of studying the human mind. And perhaps that is the indirect way. Through all the AI, through all the
0:03:01 programming through all the building of systems, and now with a podcast, maybe I somehow sneaked up
0:03:07 to that dream in the end. Anyway, I say all that because these topics are studying the extremes of
0:03:13 the human mind. But of course, the extremes are just the edges of an incredibly complicated system
0:03:21 that’s just so fascinating to study, to reflect on, to put a mirror to all those processes that you do
0:03:26 through talk therapy. They’re just fascinating. Anyway, you can check them out at betterhelp.com
0:03:31 slash lex and save on your first month. That’s betterhelp.com slash lex. This episode is also
0:03:37 brought to you by Element, my daily zero sugar and delicious electrolyte mix. I’m not going to go down
0:03:43 the rabbit hole, but there’s a lot of interesting studies that measure the decreased performance of
0:03:48 the human brain. So cognitive processing speed, for example. By what amount does it decrease? Reaction
0:03:56 time. By what amount does it decrease? When you decrease the brain’s sodium levels, for example. Sodium
0:04:00 and potassium really are important on a chemical level for the functioning of the human brain.
0:04:05 Now, obviously, all throughout human history, people understood the value of water. But
0:04:12 as a medical concept, the concept of dehydration only came about in the 19th century. If we just look at
0:04:17 the history of medicine, it’s kind of hilarious how little we knew before. And it makes me think we
0:04:24 know very little now relative to what we will know in a hundred and a thousand years. The human body,
0:04:31 the biological system of the human body is incredibly complicated. So for us to have the certainty that we
0:04:39 sometimes exude about the human body, about what we understand about disease, about health, it’s kind of
0:04:46 funny. Anyway, get a sample pack for free with any purchase. Try it at drinkLMNT.com slash lex.
0:04:53 This episode is also brought to you by Shopify, a platform designed for anyone to sell anywhere with
0:04:59 a great looking online store. Once again, I do this often where I don’t just or at all talk about
0:05:07 Shopify, but instead talk about the CEO of Shopify, Tobi. He once again, like I mentioned with Joscha Bach
0:05:15 and the newly launched CIMC, California Institute of Machine Consciousness. He’s a big supporter of
0:05:20 that too. And a bunch of people have asked me why I have not done a podcast with him yet. I don’t know
0:05:24 either. I’m sure it’s going to happen soon. And I haven’t seen him in quite a while.
0:05:32 A lot of people from a lot of walks of life deeply respect him for his intellect, for the way he does
0:05:37 business and just for the human being he is. So anyway, not sure why I mentioned that here, but
0:05:44 back to what this is supposed to be. You can sell shirts online like I did at lexfridman.com slash
0:05:51 shop. It’s super easy to set up a store. I did in a few minutes. What else can I say? You should do it
0:05:58 too. Sign up for $1 per month trial period at shopify.com slash lex. That’s all lowercase. Go to
0:06:04 shopify.com slash lex to take your business to the next level today. This episode is also brought to
0:06:10 you by AG1, an all-in-one daily drink to support better health and peak performance. I was training
0:06:16 jiu-jitsu the other day in that wonderful Texas heat. And I was reminded, first of all, how long
0:06:24 my journey with jiu-jitsu has been and how fulfilling it has been. How interesting the exploration of the
0:06:32 puzzle of two humans trying to break each other’s arms and legs, plus the wrestling and the grappling
0:06:40 component. Really interesting. Leverage, power, speed, all that could be neutralized. How to control
0:06:48 a human body with leverage, with technique, as opposed to raw generally misapplied strength, I should say.
0:06:55 Anyway, because there are times where there’s long stretches of weeks where I don’t train. You feel it in
0:07:02 the cardio. You do a bunch of rounds and you just, the breaths are shallow. You feel like the mind is
0:07:08 hazy from exhaustion. That you’re a little bit more risk-averse because you don’t want to end up in a
0:07:14 bad position. Have to battle out of that bad position after many rounds of exhausting battles. And after
0:07:22 that training session when I got home, I enjoyed a nice cold AG1. They’ll give you a one-month supply of
0:07:28 fish oil when you sign up at drinkag1.com slash lex. This is the Lex Fridman Podcast. To support
0:07:35 it, please check out our sponsors in the description or at lexfridman.com slash sponsors. And now,
0:07:38 dear friends, here’s Sundar Pichai.
0:08:02 Your life story is inspiring to a lot of people. It’s inspiring to me. You grew up in India, whole family
0:08:10 living in a humble two-room apartment, very little, almost no access to technology. And from those humble
0:08:21 beginnings, you rose to lead a $2 trillion technology company. So if you could travel back in time and tell that,
0:08:26 let’s say, 12-year-old Sundar, you’re now leading one of the largest companies in human history, what do you
0:08:28 think that young kid would say?
0:08:37 I would have probably laughed it off. You know, probably too far-fetched to imagine or believe at that time.
0:08:39 You would have to explain the internet first.
0:08:49 For sure. I mean, computers to me at that time, you know, I was 12 in 1984. So probably, you know,
0:08:53 by then I had started reading about them. I hadn’t seen one.
0:08:56 What was that place like? Take me to your childhood.
0:09:02 You know, I grew up in Chennai. It’s in south of India. It’s a beautiful, bustling city. Lots of people,
0:09:10 lots of energy. You know, simple life, definitely like fond memories of playing cricket outside the
0:09:15 home. We just used to play on the streets. All the neighborhood kids would come out and we would
0:09:22 play until it got dark and we couldn’t play anymore barefoot. Traffic would come. It would just stop the
0:09:27 game. Everything would drive through and you would just continue playing, right? Just to kind of get the
0:09:33 visual in your head. You know, pre-computer, there’s a lot of free time. Now that I think about it,
0:09:40 now you have to go and seek that quiet solitude or something. Newspapers, books is how I gained access
0:09:47 to the world’s information at the time, if you will. My grandfather was a big influence. He worked in the
0:09:55 post office. He was so good with language. His English, you know, his handwriting till today is the
0:10:00 most beautiful handwriting I’ve ever seen. He would write so clearly. He was so articulate.
0:10:09 And so he kind of got me introduced into books. He loved politics. So we could talk about anything.
0:10:16 And, you know, that was there in my family throughout. So lots of books, trashy books, good books,
0:10:23 everything from Ayn Rand to books on philosophy to stupid crime novels. So books was a big part of my
0:10:29 life. But that kind of, this whole, it’s not surprising I ended up at Google because Google’s
0:10:35 mission kind of always resonated deeply with me. This access to knowledge, I was hungry for it,
0:10:41 but definitely have fond memories of my childhood. Access to knowledge was there. So that’s the wealth
0:10:48 we had. You know, every aspect of technology I had to wait for a while. I’ve obviously spoken before
0:10:52 about how long it took for us to get a phone, about five years, but it’s not the only thing.
0:10:53 A telephone.
0:11:01 There was a five-year waiting list. And we got a rotary telephone. But it dramatically changed our
0:11:07 lives. You know, people would come to our house to make calls to their loved ones. You know, I would
0:11:11 have to go all the way to the hospital to get blood test records. And it would take two hours to go.
0:11:17 And they would say, sorry, it’s not ready. Come back the next day. Two hours to come back. And that became a
0:11:22 five-minute thing. So as a kid, like, I mean, this light bulb went in my head, you know, this power of
0:11:29 technology to kind of change people’s lives. We had no running water. You know, it was a massive drought.
0:11:36 So they would get water in these trucks, maybe eight buckets per household. So me and my brother,
0:11:41 sometimes my mom, we would wait in line, get that and bring it back home.
0:11:49 Many years later, like, we had running water, and we had a water heater. And you would get hot water to
0:11:55 take a shower. I mean, like, so, you know, for me, everything was discrete like that.
0:12:02 And so I’ve always had this thing, you know, first-hand feeling of, like, how technology can
0:12:10 dramatically change, like, your life and, like, the opportunity it brings. So, you know, that was kind
0:12:16 of a subliminal takeaway for me throughout growing up. And, you know, I kind of actually observed it and
0:12:24 felt it, you know. So we had to convince my dad for a long time to get a VCR. Do you know what a VCR is?
0:12:32 Yeah. I’m trying to date you now. But, you know, because before that, you only had, like, kind of
0:12:40 one TV channel, right? That’s it. And so, you know, you can watch movies or something like that. But
0:12:48 this was by the time I was in 12th grade, we got a VCR, you know. It was a, like, a Panasonic, which we
0:12:53 had to go to some, like, shop, which had kind of smuggled it in, I guess. And that’s where we bought a VCR.
0:13:00 But then being able to record, like, a World Cup football game and then, or, like, get bootleg
0:13:06 videotapes and watch movies, like, all that. So, like, you know, I had these discrete memories growing
0:13:13 up. And so, you know, always left me with the feeling of, like, how getting access to technology
0:13:15 drives that step change in your life.
0:13:19 I don’t think you’ll ever be able to equal the first time you get hot water.
0:13:24 To have that convenience of going and opening a tap and have hot water come out? Yeah.
0:13:32 It’s interesting. We take for granted the progress we’ve made. If you look at human history, just those
0:13:38 plots that look at GDP across 2,000 years, and you see that exponential growth to where most of the
0:13:44 progress happened since the Industrial Revolution. And we just take for granted. We forget how far we’ve
0:13:53 gone. So our ability to understand how great we have it and also how quickly technology can improve
0:13:58 is quite poor. Oh, I mean, it’s extraordinary. You know, I go back to India now, the power of
0:14:03 mobile. You know, it’s mind-blowing to see the progress through the arc of time. It’s phenomenal.
0:14:11 What advice would you give to young folks listening to this all over the world who look up to you and
0:14:17 find your story inspiring? Who want to be maybe the next Sundar Pichai? Who want to start, create
0:14:21 companies, build something that has a lot of impact on the world?
0:14:26 Look, you have a lot of luck along the way, but you obviously have to make smart choices. You’re
0:14:31 thinking about what you want to do. Your brain is telling you something. But when you do things,
0:14:36 I think it’s important to kind of get that, listen to your heart and see whether you actually enjoy
0:14:45 doing it, right? That feeling of, if you love what you do, it’s so much easier and you’re going to
0:14:50 see the best version of yourself. It’s easier said than done. I think it’s tough to find things
0:14:57 you love doing. But I think kind of listening to your heart a bit more than your mind in terms of
0:15:02 figuring out what you want to do, I think is one of the best things I would tell people.
0:15:10 The second thing is, I mean, trying to work with people who you feel are better than you. At various points in my life, I’ve
0:15:16 worked with people who I felt were better than me. I kind of like, you know, you almost are sitting in a
0:15:21 room talking to someone and they’re like, wow, like, you know, you know, and you want that feeling a few
0:15:27 times, trying to get yourself in a position where you’re working with people who you feel
0:15:34 are kind of like stretching your abilities is what helps you grow, I think. So putting yourself in
0:15:41 uncomfortable situations. And I think often you’ll surprise yourself. So I think being open-minded enough
0:15:46 to kind of put yourself in those positions is maybe another thing I would say.
0:15:51 What lessons can we learn, maybe from an outsider perspective? For me, looking at your story and
0:15:57 having gotten to know you a bit, you’re humble, you’re kind. Usually when I think of somebody who has had a
0:16:03 journey like yours and climbs to the very top of leadership, they’re usually in a cutthroat world,
0:16:10 they’re usually going to be a bit of an asshole. So what wisdom are we supposed to draw from the fact
0:16:17 that your general approach is of balance, of humility, of kindness, listening to everybody?
0:16:18 What’s your secret?
0:16:25 I do get angry. I do get frustrated. I have the same emotions all of us do, right, in the context of work
0:16:35 and everything. But a few things, right? I think, you know, I, over time, I figured out the best way to
0:16:42 get the most out of people. You know, you kind of find mission-oriented people who are on the shared
0:16:49 journey, who have this inner drive to excellence, to do the best. And, you know, you kind of motivate
0:16:55 people and, and, and you can, you can achieve a lot that way. Right. And so it, it often tends to
0:17:01 work out that way. But have there been times like, you know, I lose it? Yeah. But, you know, not maybe
0:17:10 less often than others. And maybe over the years, less and less so, because, you know, I find it’s not
0:17:12 needed to achieve what you need to do.
0:17:14 So losing your shit has not been productive.
0:17:20 Yeah. Less often than not. I think people respond to that. Yeah. They may do stuff to react to that.
0:17:26 But what you actually want is for them to do the right thing. And so, you know, maybe there’s a
0:17:34 bit of, like, sports, you know. I’m a sports fan. In football, coaches, uh, in soccer, uh, that is, football, uh,
0:17:40 you know, people, people often talk about like man management, right? Great coaches do. Right. I think there is
0:17:44 an element of that in our lives. How do you get the best out of the people you work with?
0:17:50 You know, at times you’re working with people who, who are so committed to achieving. If they’ve done
0:17:57 something wrong, they feel it more than you, uh, you do. Right. So you treat them differently than,
0:18:01 you know, occasionally there are people who you need to clearly let them know, like that wasn’t okay or
0:18:05 whatever it is. But I’ve often found that not to be the case.
0:18:12 And sometimes the right words at the right time spoken firmly can reverberate through time.
0:18:18 Also, sometimes the unspoken words, you know, people can sometimes see that, like, you know,
0:18:24 you’re unhappy without you saying it. And so sometimes the silence can, uh, deliver that message even
0:18:30 more. Sometimes less is more. Um, who’s the greatest, uh, soccer player of all time, Messi or Ronaldo
0:18:34 or Pele or Maradona? I’m going to make, uh, you know, in this question,
0:18:36 Is this going to be a political answer?
0:18:44 I will tell the truthful answer because, uh, it is, you know, it’s been interesting because my son
0:18:51 is a big Cristiano Ronaldo fan. And, uh, so we’ve had to watch El Clásicos together,
0:19:00 you know, with that dynamic in there. I so admire Cristiano. I mean, I’ve never seen an athlete more
0:19:06 committed to that kind of excellence. And so he’s one of the all time greats, but, you know,
0:19:08 for me, Messi is it.
0:19:15 Yeah. When I see Lionel Messi, you just are in awe that humans are able to achieve that level of
0:19:20 greatness and genius and artistry. When we talk, we’ll talk about AI, maybe robotics and this kind
0:19:26 of stuff. That level of genius, I’m not sure can possibly be matched by AI for a long time. It’s just
0:19:31 an example of greatness. And you have that kind of greatness in other disciplines, but in sport,
0:19:38 you get to visually see it, unlike anything else. And just the, the timing, the movement,
0:19:41 uh, there’s just genius.
0:19:46 I had the chance to see him a couple of weeks ago. He played in, uh, San Jose. So, um, against the
0:19:53 Quakes. So I went to see the game, was a fan, had good seats, knew where he would play in
0:19:58 the second half, hopefully. And, uh, even at his age, just watching him when he gets the ball,
0:20:04 that movement, uh, you know, you’re right. That special quality is tough to describe, but you feel it
0:20:10 when you see it. Yeah. He’s still got it. Uh, if we rank all the technological innovations
0:20:17 throughout human history, let’s go back, uh, maybe to the history of human civilization, 12,000 years ago,
0:20:23 and you rank them by how much of a productivity multiplier they’ve been.
0:20:30 So, uh, we can go to electricity or the labor mechanization of the industrial revolution,
0:20:36 or we can go back to the first agricultural revolution 12,000 years ago in that long list
0:20:42 of inventions. Do you think AI, when history is written a thousand years from now, do you think
0:20:45 it has a chance to be the number one productivity multiplier?
0:20:50 That’s a great question. Look, many years ago, I think it might’ve been 2017 or 2018. Um, you know,
0:20:55 I said at the time, like, you know, AI is the most profound technology humanity will ever work on.
0:21:01 It’ll be more profound than fire or electricity. So I have to back myself. I, you know, I still think,
0:21:07 uh, that’s the case. You know, when you asked this question, I was thinking, well, do we have a
0:21:12 recency bias, right? You know, like in sports, it’s very tempting to call the current person you’re
0:21:22 seeing the greatest player, right? And, and so is there a recency bias? And, you know, I do think, uh,
0:21:28 from first principles, I would argue AI will be bigger than all of those. I didn’t live through those
0:21:33 moments. You know, two years ago, I had to go through a surgery and then I processed that there was a point
0:21:38 in time people didn’t have anesthesia when they went through these procedures. At that moment, I was like,
0:21:44 that has got to be the greatest invention humanity has ever, ever done. Right. So look, we, we don’t
0:21:50 know what it is to have, uh, uh, lived through those times. But, you know, many of the things you’re
0:21:56 talking about were kind of these general things, which pretty much affected everything, you know,
0:22:03 electricity or internet, et cetera. But I don’t think we’ve ever dealt with the technology, both,
0:22:10 which is progressing so fast, becoming so capable. It’s not clear what the ceiling is.
0:22:17 And the main unique thing is, it’s recursively self-improving, right? It’s capable of that.
0:22:24 And so the fact is, it’s the first technology that will kind of dramatically accelerate
0:22:31 creation itself, like creating things, building new things, can, can improve and achieve things
0:22:37 on its own. Right. I think like puts it in a different league, right. And so, uh, different
0:22:44 league. And so I think the impact it will end up having, uh, will far surpass everything we’ve
0:22:49 seen before. Uh, obviously with that comes a lot of, uh, important things to think and
0:22:52 wrestle with, but I definitely think that’ll end up being the case.
0:22:56 Especially if it gets to the point of where we can achieve superhuman performance on the
0:23:03 AI research itself. So it’s a technology that may, it’s an open question, but it may be able
0:23:10 to achieve a level to where the technology itself can create itself better than it could yesterday.
0:23:15 It’s like the Move 37 of AI research, or whatever it is. Right. Like, you know, and when,
0:23:23 when, yeah, you’re right. When, when it can do novel self-directed research, obviously for a long
0:23:28 time, we’ll, we’ll have hopefully always humans in the loop and all that stuff. And these are complex
0:23:34 questions to talk about, but yes, I think the underlying technology, you know, I’ve said this,
0:23:42 like if you watched AlphaGo start from scratch, be clueless, and, like, become better
0:23:48 through the course of a day, you know, kind of like, you know,
0:23:54 it really hits you when you see that happen. Even our, like, the Veo 3 models, if you sample the models
0:23:58 when they were like 30% done and 60% done and looked at what they were generating.
0:24:06 And you kind of see how it all comes together. It’s kind of like, I would say it’s kind of
0:24:12 inspiring, a little bit unsettling, right? As a, as a human. So all of that is true, I think.
0:24:17 Well, the interesting thing of the industrial revolution, electricity, like you mentioned,
0:24:24 you can go back to the, again, the agriculture, the first agricultural revolution, there’s, um,
0:24:29 what’s called the Neolithic package of the first agricultural revolution, that it wasn’t just
0:24:37 that the nomads settled down and started planting food, but all this other kinds of technology
0:24:42 was born from that. And it’s included in this package. It wasn’t one piece of technology.
0:24:47 It’s, there’s these ripple effects, second and third order effects that happen. Everything from
0:24:54 something silly, like, silly but profound, like pottery, which can store liquids and food, uh, to
0:25:00 something we’d kind of take for granted, but social hierarchies, uh, and political hierarchy.
0:25:07 So like early government was formed because it turns out if humans stopped moving and have some surplus
0:25:13 food, they start coming up with, uh, they get bored and they start coming up with interesting systems
0:25:19 and then trade emerges, which turns out to be a really profound thing. And like I said, government,
0:25:24 I mean, there’s just, uh, second and third order effects from that, including that package is
0:25:30 incredible and probably extremely difficult. If you’ll ask one of the people in the nomadic tribes
0:25:36 to predict that it would be impossible. It’s difficult to predict, but all that said, what do you think
0:25:42 are some of the early things we might see in the quote unquote AI package?
0:25:48 I mean, most of it probably we don’t know today, but like, you know, the one thing which we can
0:25:55 tangibly start seeing now is, you know, obviously with the coding progress, you got a sense of it.
0:26:01 It’s going to be so easy to imagine like thoughts in your head, translating that into things that
0:26:07 exist. That’ll be part of the package, right? Like it’s going to empower almost all of humanity
0:26:13 to kind of express themselves. Maybe in the past, you could have expressed with words,
0:26:20 but like you could kind of build things into existence, right? You know, maybe not fully today.
0:26:26 We are at the early stages of vibe coding. You know, I’ve been amazed at what people have put out
0:26:30 online with Veo 3, but it takes a bit of work, right? You have to stitch together a set of prompts,
0:26:37 but all this is going to get better. The thing I always think about, this is the worst it’ll ever be,
0:26:39 right? Like at any given moment in time.
0:26:44 Yeah. It’s interesting. You went there as kind of a first thought. So the exponential increase
0:26:55 of access to creativity, software creation, whether you’re creating a program, a piece of content to be
0:27:02 shared with others, games down the line, all of that, like just becomes infinitely more possible.
0:27:09 Well, I think the big thing is that it makes it accessible. It unlocks the cognitive capabilities
0:27:10 of the entire 8 billion.
0:27:13 No, I agree. Look, think about 40 years ago.
0:27:20 Maybe in the US, there were five people who could do what you were doing, like go do an interview,
0:27:26 you know, but today think about with YouTube and other products, et cetera, like how many more
0:27:32 people are doing it? So I think this is what technology does, right? Like when the internet
0:27:42 created blogs, you know, you heard from so many more people. So I think, but with AI, I think that number
0:27:48 won’t be in the few hundreds of thousands, it’ll be tens of millions of people, maybe even a billion people
0:27:54 like putting out things into the world in a deeper way.
0:27:59 And I think it’ll change the landscape of creativity. And it makes a lot of people nervous. Like for
0:28:06 example, uh, whatever Fox, MSNBC, CNN are really nervous about this part. Like you mean, this dude in a
0:28:06 suit could just do this on YouTube? And thousands of others, tens of thousands,
0:28:18 millions of other creators can do the same kind of thing. That makes them nervous. And now you get a
0:28:23 podcast from NotebookLM that’s about five to 10 times better than any podcast I’ve ever done.
0:28:28 Um, I’m joking, at this time, but maybe not. And that changes. You have to evolve
0:28:36 because I, on the podcasting front, I’m a fan of podcasts much more than I am a fan of being a host
0:28:41 or whatever. If there’s great podcasts that are both AIs, I’ll just stop doing this podcast. I’ll
0:28:45 listen to that podcast, but you have to evolve and you have to change. And that makes people really
0:28:50 nervous, I think, but it’s also really exciting future. The only thing I may say is I do think
0:28:59 like in a world in which there are two AIs, I think people value and, uh, choose, just like in chess.
0:29:04 You and I would never watch Stockfish 10 or whatever, or AlphaGo, play against each other.
0:29:10 Like, it would be boring for us to watch. But Magnus Carlsen and Gukesh, that game would be much
0:29:16 more fascinating to watch. So it’s tough to say, like one way to say is you’ll have a lot
0:29:21 more content. And so you will be listening to AI generated content because sometimes it’s efficient,
0:29:28 et cetera. But the premium experiences you value might be a version of like the human essence,
0:29:33 wherever it comes through. Going back to what we talked earlier about watching Messi dribble the ball.
0:29:38 I don’t know one day, I’m sure a machine will dribble much better than Messi, but I don’t know
0:29:42 that it would evoke that same emotion in us. So I think that’ll be fascinating to see.
0:29:50 I think the element of podcasting or audio books that is about information gathering,
0:29:57 that part might be removed or that might be more efficiently and in a compelling way done by AI,
0:30:04 but then it’ll be just nice to hear humans struggle with information, contend with the information,
0:30:09 try to internalize it, combine it with the complexity of our own emotions and consciousness
0:30:15 and all that kind of stuff. But if you actually want to find out about a piece of history, you go
0:30:22 to Gemini. If you want to see Lex, or other humans, struggle with that history, then
0:30:29 you look at that. But the point is, it’s going to change the nature, continue to change the nature of
0:30:33 how we discover information, how we consume the information, how we create that information.
0:30:40 The same way that YouTube changed everything completely, changed news. And that’s something our society is struggling with.
0:30:40 Yeah.
0:30:46 YouTube, look, YouTube enabled, I mean, you know this better than anyone else. It’s enabled so many creators.
0:30:55 There is no doubt in me that we will enable more filmmakers than there have ever been, right? You’re going to empower a lot more people.
0:31:02 So I think there is an expansionary aspect of this, which is underestimated, I think.
0:31:09 I think it’ll unleash human creativity in a way that hasn’t been seen before. It’s tough to internalize.
0:31:09 The only way to see it is, if you brought someone from the 50s or 40s and just put them in front of YouTube,
0:31:20 you know, I think it would blow their mind away. Similarly, I think we would get blown away by what’s
0:31:22 possible in a 10 to 20 year time frame.
0:31:27 Do you think there’s a future, how many years out is it that, let’s say, let’s put a marker on it,
0:31:36 50% of content, good content, 50% of good content is generated by Veo 4, 5, 6?
0:31:43 You know, I think it depends on what it is for. Like, you know, maybe if you look at movies today
0:31:49 with CGI, there are great filmmakers. Like, you still look at, like, who the directors are and who
0:31:55 use it. There are filmmakers who don’t use it at all. You value that. There are people who use it
0:32:00 incredibly. You know, think about somebody like a James Cameron, like, what he would do with these
0:32:05 tools in his hands. But I think there’ll be a lot more content created. Like, just like writers today
0:32:11 use Google Docs and not think about the fact that they’re using a tool like that. But people will be
0:32:16 using the future versions of these things. Like, it won’t be a big deal at all to them.
0:32:24 I’ve gotten a chance to get to know Darren Aronofsky. Well, he’s been really leaning in and
0:32:31 trying to figure it out. It’s fun to watch a genius who came up before any of this was even remotely
0:32:36 possible. He created Pi, one of my favorite movies. And from there, just continued to create
0:32:42 a really interesting variety of movies. And now he’s trying to see how can AI be used to create
0:32:49 compelling films. You have people like that. You have people, I’ve gotten to just know, edgier
0:32:56 folks that are AI-first, like the Dor Brothers. Both Aronofsky and the Dor Brothers create at the edge of
0:33:04 the Overton window of society. You know, they push whether it’s sexuality or violence. It’s edgy,
0:33:10 like artists are, but it’s still classy. It doesn’t cross that line. Whatever that line is,
0:33:18 you know, Hunter S. Thompson has this line that the only way to find out where the edge
0:33:23 is, is by crossing it. And I think for artists, that’s true. That’s kind of their purpose.
0:33:28 Sometimes comedians and artists just cross that line. I wonder if you can comment on the weird
0:33:35 place that puts Google. Because Google’s line is probably different than some of these artists.
0:33:44 What’s your, how do you think about, specifically with Veo and Flow, how to allow artists to do
0:33:52 crazy shit? But also, like, the responsibility of, like, um, for it not to be too crazy.
0:33:57 I mean, it’s a great question. Look, part of, you mentioned Darren, uh, you know, he’s a clear
0:34:04 visionary, right? Part of the reason we started working with him early on Veo is he’s one of those
0:34:11 people who’s able to kind of see that future, get inspired by it, and kind of showing the way for how
0:34:18 creative people can express themselves with it. Look, I think when it comes to allowing artistic
0:34:24 free expression, that’s one of the most important values in a society, right? I think, you know,
0:34:30 artists have always been the ones to push, push boundaries, expand the frontiers of thought.
0:34:40 And so, look, I think, I think that’s going to be an important value we have. So I think we will provide
0:34:45 tools and put it in the hands of artists for them to use and put out their work.
0:34:52 Those APIs, I mean, I almost think of that as infrastructure, just like when you provide
0:34:56 electricity to people or something, you want them to use it and like, you’re not thinking about the
0:35:02 use cases on top of it. So it’s a paintbrush. Yeah. And, and so I think that’s how, obviously
0:35:07 there have to be some things and, you know, society needs to decide at a fundamental level
0:35:14 what’s okay, what’s not. Uh, we’ll be responsible with it. Um, but I do think, you know, when it comes
0:35:20 to artistic free expression, I think that’s one of those values we should work hard to defend.
0:35:28 Uh, I wonder if you can comment on, um, maybe earlier versions of Gemini. We’re a little bit
0:35:34 careful on the kind of things you would be willing to answer. I just want to comment on, I was really
0:36:41 surprised and, uh, pleasantly surprised and enjoyed the fact that Gemini 2.5 Pro is a lot less
0:35:46 careful in a good sense. Don’t ask me why, but I’ve been doing a lot of research on Genghis Khan
0:36:52 and the, uh, the Aztecs. Uh, so there’s a lot of violence there in that history. It’s a very
0:36:56 violent history. I’ve also been doing a lot of research on World War I and World War II, and
0:36:03 earlier versions of Gemini, um, basically had this kind of sense of, are you sure you want to learn
0:36:10 about this? And now it’s actually very factual, objective, uh, talks about very difficult parts
0:36:15 of human history and does so with nuance and depth. It’s, it’s been really nice, but there’s a line
0:36:20 there that I guess Google has to kind of walk. I wonder if it’s, and it’s also an engineering
0:36:26 challenge, how to, how to do that at scale across all the weird queries that people ask. What, um,
0:36:33 can you just speak to that challenge? How do you allow Gemini to say again, forgive, pardon my French,
0:36:39 crazy shit, but not too, not, not too crazy. I think one of the good insights here has been
0:36:47 as the models are getting more capable, the models are really good at this stuff. Right. And so I think
0:36:52 in some ways, maybe a year ago, the models weren’t fully there. So they would also do stupid things
0:37:00 more often. And so, you know, you’re trying to handle those edge cases, but then you make a mistake in
0:37:05 how you handle those edge cases and it compounds. But I think with 2.5, what we particularly found is
0:37:12 once the models cross a certain level of intelligence and sophistication, you know, they are, they are
0:37:16 able to reason through these nuanced issues pretty well. And I think users really want that, right? Like,
0:37:23 you know, you want as much access to the raw model as possible. Right. But I think it’s a great area
0:37:30 to think about, like, you know, over time, you know, we should allow closer and closer access to it.
0:37:36 Maybe obviously let people use custom prompts if they wanted to, and like, you know, and, you know,
0:37:42 experiment with it, et cetera. Uh, I, I think that’s an important direction. But look, the first-principles
0:37:48 way we want to think about it is, you know, from a scientific standpoint, like making sure the
0:37:52 models, and I’m saying scientific in the sense of like how you would approach math or physics or
0:37:59 something like that, from first principles, having the models reason about the world, be nuanced,
0:38:05 et cetera, uh, you know, from the ground up is the right way to build these things, right? Not
0:38:13 like some subset of humans kind of hard coding things on top of it. Uh, so I think it’s the
0:38:16 direction we’ve been taking. And I think you’ll see us continue to push in that direction.
0:38:22 Yeah. I actually asked, uh, I gave these notes, I took extensive notes and I gave them to Gemini
0:38:30 and said, can you ask a novel question that’s not in these notes? And it wrote, Gemini continues to
0:38:37 really surprise me. Really surprised me. It’s been really beautiful. It’s an incredible model. Uh,
0:38:44 the, the question it, it generated was: you, meaning Sundar, told the world Gemini is churning
0:38:51 out 480 trillion tokens a month. Uh, what’s the most life-changing five word sentence hiding in that
0:38:55 haystack? That’s a Gemini question, but it made me, it gave me a sense. I don’t think you can answer
0:39:02 that, but it gave me, it made, it woke me up to like all of these tokens are providing little aha
0:39:09 moments for people across the globe. So that’s, like, learning that behind those tokens, people are curious.
0:39:14 They ask a question and they find something out and it truly could be life-changing.
0:39:20 Oh, it is. I look, you know, I had the same feeling about search many, many years ago. You, you, you know,
0:39:26 you, you definitely, you know, the tokens per month have, like, grown 50 times in the last 12 months.
0:39:27 Is that accurate by the way?
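For what it’s worth, the multiple being checked here follows directly from the two figures quoted in the answer below (9.7 trillion monthly tokens a year earlier, 480 trillion today). A quick back-of-the-envelope check, with variable names of my own choosing:

```python
# Sanity check of the token-growth figures quoted in this exchange.
tokens_then = 9.7e12   # monthly tokens ~12 months earlier, as quoted
tokens_now = 480e12    # monthly tokens today, as quoted

growth = tokens_now / tokens_then
print(f"growth factor: {growth:.1f}x")  # prints "growth factor: 49.5x", i.e. roughly 50x
```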
0:39:33 Yeah, it is. It is, it is accurate. I’m glad it got it right. Um, but you know, that number was
0:39:40 9.7 trillion tokens per month, 12 months ago, right? It’s gone, gone up to 480, you know,
0:39:47 it’s a 50 X increase. So there’s no limit to human curiosity. Uh, and I think it’s, it’s one of those
0:39:55 moments. Uh, maybe I don’t think it is there today, but maybe one day there’s a five word phrase which
0:40:00 says what the actual universe is or something like that and something very meaningful, but I don’t think
0:40:07 we’re quite there yet. Do you think the scaling laws are holding strong on, um, there’s a lot of
0:40:12 ways to describe the scaling laws for AI, but on the pre-training, on the post-training fronts.
0:40:18 So the flip side of that, do you anticipate AI progress will hit a wall? Is there a wall?
0:40:24 You know, it’s a cherished micro kitchen conversation. Once in a while I have it, uh, you know,
0:42:31 like when Demis is visiting or, you know, if Demis, Koray, Jeff, Noam, Sergey, a bunch of our people,
0:40:37 like, you know, we sit and, uh, you know, you know, talk about this, right. And, um, look, I,
0:40:44 we see a lot of headroom ahead, right. I think, uh, we’ve been able to optimize and improve on all
0:40:53 fronts, right. Uh, pre-training, post-training, test time, compute, tool use, right. Over time,
0:41:00 making these more agentic. So getting these models to be more general world models in that direction,
0:41:06 like Veo 3, uh, you know, the physics understanding is dramatically better than what Veo 1 or something
0:41:13 like that was. So you kind of see on all those dimensions, I, I feel, you know, progress is very
0:41:22 obvious to see. And I feel like there is significant headroom. More importantly, you know, I’m fortunate
0:41:28 to work with some of the best researchers on the planet, right. They think, uh, there is more
0:41:35 headroom to be had here. Uh, and so I think we have an exciting trajectory ahead. It’s tougher to say,
0:41:40 you know, each year I sit and say, okay, we are going to throw 10 X more compute over the course of
0:41:47 next year at it. And like, will we see progress? Sitting here today, I feel like the year ahead will
0:41:53 have a lot of progress. And do you feel any limitations, like, uh, what are the bottlenecks?
0:41:59 Compute limited, uh, data limited, idea limited? Do you feel any of those limitations or is it full
0:42:03 steam ahead on all fronts? I think it’s compute limited in this sense, right? Like, you know,
0:42:09 we can all, part of the reason you’ve seen us do nano, flash and pro models,
0:42:15 but not an ultra model. It’s like for each generation, we feel like we’ve been able to get
0:42:23 the pro model at like, I don’t know, 80, 90% of ultra capability, but ultra would be a, a lot more,
0:42:33 uh, like slow and a lot more expensive to serve. But what we’ve been able to do is to go to the next
0:42:37 generation and make the next generation’s pro as good as the previous generation’s ultra,
0:42:43 be able to serve it in a way that it’s fast and you can use it and so on. So I do think scaling laws
0:42:51 are working, but it’s tough in that, at any given time, the models we all use the most
0:43:00 are maybe like a few months behind the maximum capability we can deliver, right? Because that
0:43:06 won’t be the fastest, easiest to use, et cetera. Also, that’s in terms of intelligence. It becomes
0:43:12 harder and harder to measure, uh, performance in quotes, because, you know, you could argue Gemini
0:43:20 Flash is much more impactful than Pro just because of the latency, and it’s super intelligent already.
0:43:25 I mean, sometimes like latency is, uh, maybe more important than intelligence,
0:43:31 especially when the intelligence is just a little bit less, and Flash is still an incredibly smart
0:43:39 model. And so you, you have to now start measuring impact, and then it feels like benchmarks are less
0:43:43 and less capable of capturing the intelligence of models, the effectiveness of models, the usefulness,
0:43:48 the real world usefulness of models. Uh, another kitchen question. So lots of folks are talking
0:43:56 about timelines for AGI or ASI, artificial superintelligence. So AGI, loosely defined, is basically
0:44:06 human expert level at a lot of the main fields of pursuit for humans. And ASI is what AGI becomes
0:44:12 presumably quickly by being able to self-improve. So becoming far superior in intelligence across all
0:44:17 these disciplines than humans. When do you think we’ll have AGI? Is 2030 a possibility?
0:44:22 Uh, there’s one other term we should throw in there. I don’t know who, who used it first. Maybe
0:44:29 Karpathy did. AJI. Have you, have you heard AJI? The artificial jagged intelligence. It sometimes feels
0:44:34 that way, right? Both there is progress and you see what they can do, and then, like, you can trivially
0:44:40 find they make numerical errors, or, like, you know, fail at counting the R’s in strawberry or something,
0:44:46 which seems to trip up most models or whatever it is, right? So, uh, so maybe we should throw that
0:44:53 term in there. I feel like we are in the AJI phase, where, like, dramatic progress, some things don’t work
0:44:58 well, but overall, you know, you’re seeing, uh, lots of progress. But if your question is, will,
0:45:07 will it happen by 2030? Look, we constantly move the line of what it means to be AGI. There are moments
0:45:11 today, you know, like sitting in a Waymo in a San Francisco street with all the crowds and the
0:45:18 people, and it kind of works its way through. I see glimpses of it there. The car is sometimes kind of
0:45:25 impatient, trying to work its way. Uh, or using Astra, like in Gemini Live, and, uh, you know, asking
0:45:30 questions about the world. What’s this skinny building doing in my neighborhood? It’s a streetlight,
0:45:37 not a building. You, you see glimpses. That’s why I use the word AJI, because then you see stuff,
0:45:43 which obviously, you know, we are far from AGI too. So you have both experiences simultaneously
0:45:48 happening to you. I’ll answer your question, but I’ll also throw out this. I almost feel the term
0:45:52 doesn’t matter. What I know is by 2030, there’ll be such dramatic progress.
0:46:01 We’ll be dealing with the consequences of that progress, uh, both the positive
0:46:07 externalities and the negative externalities that come with it in a big way by 2030. So that I strongly
0:46:13 feel right. Whatever we may be arguing about the term, or maybe Gemini can answer what that moment is in
0:46:20 time in 2030, but I think the progress will be dramatic, right? So that I believe in. Will the AI
0:46:27 think it has reached AGI by 2030? I would say we will just fall short of that timeline, right? So I think it’ll
0:46:32 take a bit longer. It’s amazing, in the early days of DeepMind in 2010, they talked about a 20-year
0:46:42 timeframe to achieve, uh, AGI. Which is, which is kind of fascinating to see. But you know, I, for me, the whole
0:46:50 thing started with seeing what Google Brain did in 2012, and when we acquired DeepMind in 2014. Uh, right close to
0:46:56 where we are sitting, in 2012, you know, Jeff Dean showed the image of when the neural networks could
0:47:01 recognize a picture of a cat, right, and identify it. You know, this was the early versions of Brain,
0:47:08 right. And so, you know, we all talked about a couple of decades. I don’t think we’ll quite get there by 2030.
0:47:15 So my sense is it’s slightly after that, but I, I would stress, it doesn’t matter like what that
0:47:23 definition is, because you will have mind blowing progress on many dimensions. Maybe AI can create
0:47:29 videos. We have to figure out as a society, how do we, we need some system by which we all agree that
0:47:34 this is AI generated and we have to disclose it in a certain way, because how do you distinguish reality
0:47:38 otherwise? Yeah. There’s so many interesting things you said. So first of all, just looking back at this
0:47:44 recent, now it feels like distant history, uh, with Google Brain. I mean, that was before TensorFlow,
0:47:49 before TensorFlow was made public and open sourced. So the tooling matters too, combined with GitHub’s
0:47:56 ability to share code. Then you have the ideas of attention, transformers, and now diffusion,
0:48:02 and then there might be a new idea that seems simple in retrospect, but it will change everything.
0:48:08 And that could be the post-training, the inference time innovations. And I think Shad Sien tweeted that
0:48:17 Google is just one great UI from completely winning the AI race, meaning like UI is a huge part of it.
0:48:23 like how that intelligence is presented. Uh, I think Logan Kilpatrick likes to talk about this: right now
0:48:29 it’s an LLM, but when is it going to become a system, where you’re talking about shipping
0:48:34 systems versus shipping a particular model? Yeah. That matters too. How the system, um,
0:48:39 manifests itself and how it presents itself to the world. That really, really matters.
0:48:46 Oh, hugely. So there are simple UI innovations, which have changed the world. Right. And, uh,
0:48:52 I absolutely think so. Um, we will see a lot more progress in the next couple of years. I think
0:49:02 AI itself is, uh, on a self-improving track for UI itself. Like, you know, today we are like constraining
0:49:09 the models. The models can’t quite express themselves in terms of the UI to, to people. Um,
0:49:14 but that is, uh, like, you know, if you think about it, we’ve kind of boxed them in that way,
0:49:21 but given these models can code, uh, you know, they should be able to write the best interfaces to
0:49:28 express their ideas over time. Right. That is an incredible idea. So their API is already open.
0:49:35 So you can, you create a really nice agentic system that continuously improves the way you can be talking
0:49:42 to an AI. Yeah. But it, a lot of that is the interface. And then of course, uh, incredible
0:49:45 multimodal aspect of the interface that Google has been pushing.
0:49:47 These models are natively multimodal.
0:49:51 They can easily take content from any format, put it in any format.
0:49:57 They can write a good user interface. They probably understand your preferences better and better over time.
0:50:06 Like, you know, and so, so all of this is like the evolution ahead. Right. And so, um, that goes back
0:50:10 to where we started the conversation. I, like, I think there’ll be dramatic evolutions in the years ahead.
0:50:19 Maybe one more kitchen question. Uh, this even, even further ridiculous concept of P doom. So the
0:50:26 philosophically minded folks in the AI community, you think about the probability that AGI and then ASI
0:50:34 might destroy all of human civilization. I would say my P doom is about 10%. Do you ever think about
0:50:40 this kind of long-term threat of ASI and what would your P doom be?
0:50:47 Look, I mean, for sure. Look, I’ve, uh, both been, uh, very excited about AI, uh, but I’ve always felt,
0:50:55 uh, this is a technology, you know, you have to actively think about the risks and work very,
0:51:02 very hard to harness it in a way that it, it all works out well. Um, on the P doom question, look,
0:51:05 it’s, uh, you know, wouldn’t surprise you to say that’s probably another micro kitchen conversation
0:51:11 that pops up once in a while. Right. And given how powerful the technology is, maybe stepping back,
0:51:16 you know, when you’re running a large organization, if you can kind of align the incentives of the
0:51:20 organization, you can achieve pretty much anything, right? Like, you know, if you can get kind of people
0:51:26 all marching in towards like a goal, uh, in a very focused way, in a mission driven way, you can
0:51:32 pretty much achieve anything, but it’s very tough to organize all of humanity that way. But
0:51:39 I think if P doom is actually high, at some point all of humanity is like aligned in making sure
0:51:44 that’s not the case. Right. And so we’ll actually make more progress against it, I think. So the
0:51:52 irony is, so there is a self-modulating aspect there. Like, I think if humanity collectively puts
0:51:58 their mind to solving a problem, whatever it is, I think we can get there. So because of that,
0:52:06 you know, I, I, I, I think I’m optimistic on the P doom scenarios, but that doesn’t mean,
0:52:13 I think the underlying risk is actually pretty high, but I’m, uh, you know, I have a lot of faith in
0:52:16 humanity kind of rising up to the, to meet that moment.
0:52:21 That’s really, that’s really, really well put. I mean, as the threat becomes more concrete and real,
0:52:26 humans do really come together and get their shit together. Well, the other thing I think people don’t
0:52:33 often talk about is probability of doom without AI. So there’s all these other ways that humans can
0:52:40 destroy themselves. And it’s very possible, at least I believe so, that AI will help us become smarter,
0:52:48 kinder to each other, uh, more efficient, uh, it’ll help more parts of the world flourish where
0:52:54 it would be less resource constrained, which is often the source of military conflict and tensions
0:53:02 and so on. So we also have to weigh that: what’s the P doom with AI versus the P doom
0:53:07 without AI? Because it’s very possible that AI will be the thing that saves us,
0:53:09 saves human civilization from all the other threats.
0:53:13 I agree with you. I think, I think it’s insightful. Uh, look, I felt
0:53:19 like, to make progress on some of the toughest problems, it would be good to have AI, like a pair,
0:53:25 helping you. Right. And, and like, you know, so that resonates with me for sure. Yeah.
0:53:26 Quick pause, bathroom break.
0:53:36 If NotebookLM was the same, like what I saw today with Beam, if it was compelling in the same kind of way.
0:53:41 It blew my mind. It was incredible. I didn’t think it was possible.
0:53:48 My hope was like, can you imagine the U.S. president and the Chinese president being able to do
0:53:56 something like Beam with the live Meet translation working well? So they’re both sitting and talking, and make progress a bit more.
0:54:02 Yeah. Yeah. Just, uh, for people listening, we took a quick bathroom break and now we’re talking about the demo I did.
0:54:08 And we’ll probably post it somewhere, somehow, maybe here. I got a chance to experience Beam.
0:54:17 And it was, it’s hard to, it’s hard to describe it in words, how real it felt with just, what is it?
0:54:20 Six cameras. It’s incredible. It’s incredible.
0:54:27 It’s, it’s one of the toughest products in that you can’t quite describe it to people, even when we show it in slides, etc.
0:54:34 Like, you don’t know what it is. You have to kind of experience it. On the world leaders front, on politics, geopolitics,
0:54:44 there’s something really special. Again, we’re studying World War II and, uh, how much could have been saved if Chamberlain met Stalin in person.
0:54:52 And I sometimes also struggle explaining to people, articulating why I believe meeting in person for world leaders is powerful.
0:54:56 It just seems naive to say that, but there is something there in person.
0:55:20 And with Beam, I felt that same thing, and I’m unable to explain it. All I kept doing is what a child does: “You look real,” you know. And I mean, I don’t know if that makes meetings more productive or so on, but it certainly makes them more... the same reason you want to show up to work versus remote.
0:55:40 Sometimes that human connection. I don’t know what that is. It’s hard to, it’s hard to put into words. Um, there’s some, there’s something beautiful about great teams collaborating on a thing that’s, that’s not captured by the productivity of that team or by whatever on paper.
0:55:50 Some of the most beautiful moments you experience in life is at work, pursuing a difficult thing together for many months. There’s nothing like it.
0:55:54 You’re in the trenches and yeah, you do form bonds that way for sure.
0:56:09 And to be able to do that, like, somewhat remotely, with that same personal touch. I don’t know, that’s a deeply fulfilling thing. Like a lot of people, I personally hate meetings, because a significant percent of meetings, when done poorly, don’t serve a clear purpose.
0:56:19 So, but that’s a meeting problem. That’s not a communication problem. If you can improve the communication for the meetings that are useful, it’s just incredible.
0:56:28 So yeah, I was blown away by the great engineering behind it. And then we get to see what impact that has. That’s really interesting, but just incredible engineering. Really impressive.
0:56:45 It is. And obviously we’ll work hard over the years to make it more and more accessible. But yeah, even on a personal front, outside of work meetings, you know, a grandmother who’s far away from her grandchild being able to, you know, have that kind of an interaction, right.
0:57:01 And all of that, I think, will end up being very meaningful. I mean, nothing substitutes being in person, but it’s not always possible. You know, you could be a soldier deployed, trying to talk to your loved ones. So I think, uh, you know, so that’s what inspires us.
0:57:27 When you and I hung out last year and took a walk, I remember, I don’t think we talked about this, but, but I remember, uh, you know, outside of that, seeing dozens of articles written by analysts and experts and so on that, um, Sundar Pichai should step down, because the perception was that Google was definitively losing the AI race, had lost its magic touch
0:57:58 in the, uh, in the, uh, rapidly evolving, uh, technological, uh, landscape. And now a year later, it’s crazy. You showed this plot of all the things that were shipped over the past year. It’s incredible. And Gemini Pro is winning across many benchmarks and products, uh, as we sit here today. So take me through that experience, when there were all these articles saying you’re the wrong guy to lead Google through this, Google is lost, is done, it’s over,
0:58:03 To today where Google is winning again. What were some low points during that time?
0:58:28 Look, I, um, I mean, lots to unpack. Um, you know, obviously, like, I mean, the main bet I made as a CEO was to really, uh, you know, make sure the company was approaching everything in an AI-first way, really, you know, setting ourselves up to develop AGI responsibly. Right.
0:58:48 And, and make sure we’re putting out products, uh, which, which embody that, things that are very, very useful for people. So look, I, I knew, even through moments like that last year, uh, you know, I had a good sense of what we were building internally. Right.
0:59:00 Right. So I had already made, you know, many important decisions, you know, bringing together teams of the caliber of Brain and DeepMind and setting up Google DeepMind.
0:59:20 There were things like, we made the decision to invest in TPUs 10 years ago, so we knew we were scaling up and building big models. Anytime you’re in a situation like that, a few aspects: uh, I’m good at tuning out noise, right, separating signal from noise.
0:59:39 Do you scuba dive? Like, have you? No? You know, it’s amazing. Like, I’m not good at it, but I’ve done it a few times. But, but sometimes you jump in the ocean, it’s so choppy, but you go down a few feet under and it’s the calmest thing in the entire, uh, universe. Right.
1:00:02 So there’s a version of that, right. Like, you know, uh, running Google, you know, you may as well be coaching Barcelona or Real Madrid, right? Like, you know, you have a bad season. So there are aspects to that, but you know, like, look, I, I’m good at tuning out the noise. I do watch out for signals. You know, it’s important to separate the signal from the noise.
1:00:30 So there are good people sometimes making good points outside, so you want to listen to it. You want to take that feedback in. But, you know, internally, like, you know, you’re making a set of consequential decisions, right. As leaders, you’re making a lot of decisions. Many of them feel like inconsequential, but over time you learn that most of the decisions you’re making on a day-to-day basis don’t matter.
1:01:01 Like you have to make them and you’re making them just to keep things moving, but you have to make a few consequential decisions. Right. And, and, uh, we had set up the right teams, right leaders. We had world-class researchers. We were training Gemini. Internally, there are factors which were, for example, outside people may not have appreciated. I mean, TPUs are amazing, but we had to ramp up TPUs too.
1:01:13 That took time, right. And, uh, to scale, actually having enough TPUs to get the compute needed. But I could see internally the trajectory we were on.
1:01:25 And, and, and B, you know, I was so excited internally about the possibility. To me, this moment felt like one of the biggest opportunities ahead for us as a company.
1:01:41 The opportunity space ahead, for the next decade, next 20 years, is bigger than what has happened in the past. Um, and I thought we were set up, like, better than most companies in the world to go, uh, realize that vision.
1:01:57 I mean, you had to make some consequential, bold decisions. Like you mentioned the merger of deep mind and brain. Uh, maybe it’s my perspective, just knowing humans. I’m sure there’s a lot of egos involved.
1:02:13 It’s very difficult to merge teams. And I’m sure there were some hard decisions to be made. Can you take me through your process of how you think through that? How do you go and pull the trigger and make that decision? Maybe what were some painful points? How do you navigate those turbulent waters?
1:02:41 Look, we were fortunate to have two world-class teams, uh, but you’re right. Like, it’s like somebody coming and telling you, take Stanford and MIT, right, and then put them together and create a great department. Right. And, and it’s easier said than done. Uh, but we were fortunate, you know, phenomenal teams. Both had their strengths, you know, but they were run very differently. Right. Like, uh, Brain was kind of a lot of diverse projects, bottom-up.
1:02:52 And out of it came a lot of important research breakthroughs. DeepMind at the time had a strong vision of how you want to build AGI. And so they were pursuing their direction.
1:03:09 But I think through those moments, luckily, tapping into, um, you know, Jeff had expressed a desire to go back to more of his scientific, individual contributor roots. You know, he felt like management was taking up too much of his time.
1:03:22 Uh, and, and Demis, naturally, I think, uh, you know, was running DeepMind and was a natural choice there. But I think, you’re right, you know, it took us a while to bring the teams together.
1:03:43 A few sleepless nights here and there, as we put that thing together, uh, we were patient in how we did it so that it works well for the long term.
1:03:55 Right. And, and, and some of that in that moment, I think, yes, with things moving fast, uh, I think you definitely, uh, felt the pressure, but I think we pulled off that, uh, transition well.
1:04:03 And, you know, I think, I think, uh, you know, they’re obviously, uh, doing incredible work, and there are a lot more incredible things ahead coming from them.
1:04:18 Like we talked about, you have a very calm, even-tempered, respectful demeanor. During that time, whether it was the merger or just dealing with the noise, uh, were there times where frustration boiled over?
1:04:25 Like, did you, uh, have to go a bit more intense on everybody than you usually would?
1:04:27 Probably, you know, probably, you’re right.
1:04:38 I think, I think in the sense that, you know, there was a moment where we were all driving hard, but when you’re in the trenches working with passion, you’re going to have days, right.
1:04:46 You disagree, you argue, but, like, all that, I mean, is just par for the course of working intensely.
1:04:47 Right.
1:04:56 And, uh, you know, at the end of the day, all of us are doing what we are doing because, uh, the impact it can have, we are motivated by it.
1:05:02 It’s like, uh, you know, for many of us, this has been a long-term, uh, journey.
1:05:04 And so it’s been super exciting.
1:05:08 The positive moments far outweigh the kind of stressful moments.
1:05:14 Just early this year, I had a chance to celebrate back-to-back over two days.
1:05:21 Like, uh, you know, a Nobel Prize for Geoff Hinton, and the next day, a Nobel Prize for, uh, Demis and John Jumper.
1:05:24 You know, you worked with people like that.
1:05:25 All that is super inspiring.
1:05:38 Is there something with you where you had to put your foot down, maybe erring on the side of less versus more, where it's like: I'm the CEO and we're doing this?
1:05:45 To my earlier point about consequential decisions: there are decisions you make that people can disagree with pretty vehemently.
1:05:54 But at some point, you make a clear decision and you just ask people to commit, right?
1:06:00 You can disagree, but it's time to disagree and commit so that we can get moving.
1:06:07 And whether that's putting the foot down or not, it's a natural part of what all of us have to do.
1:06:13 And I think you can do that calmly and still be very firm in the direction of the decision you're making.
1:06:18 And I think if you’re clear, actually people over time respect that, right?
1:06:27 If you can make decisions with clarity, I find it very effective in meetings where you're making such decisions to hear everyone out.
1:06:32 I think it's important, when you can, to hear everyone out.
1:06:37 Sometimes what you're hearing actually influences how you think about it as you're wrestling with it and making a decision.
1:06:45 Sometimes you have a clear conviction and you state it: look, this is how I feel.
1:06:50 This is my conviction. You place a bet and you move on.
1:06:52 Are there big decisions like that?
1:06:56 I kind of intuitively assume the merger was the big one.
1:07:02 I think that was a very important decision for the company, to meet the moment.
1:07:06 I think we had to make sure we were doing that and doing it well.
1:07:08 I think that was a consequential decision.
1:07:10 There were many other things.
1:07:23 We set up an AI infrastructure team to really go meet the moment, to scale up the compute we needed, and brought together teams from disparate parts of the company to move forward.
1:07:40 And getting people to work together physically, both in London with DeepMind and in what we call Gradient Canopy, which is where the Mountain View Google DeepMind teams are.
1:07:51 One of my favorite moments is that I routinely walk, multiple times per week, to the Gradient Canopy building where our top researchers are working on the models.
1:07:54 Sergey is often there amongst them, right?
1:08:01 Just getting an update on the model, seeing the loss curve.
1:08:09 So all that, I think that cultural part of getting the teams together back with that energy, I think ended up playing a big role too.
1:08:13 What about the decision to recently add AI mode?
1:08:25 So Google Search is, as they say, the front page of the internet. It's like a legendary minimalist thing with 10 blue links.
1:08:31 When people think internet, they think of that page, and now you're starting to mess with that.
1:08:35 So there's AI Mode, which is a separate tab, and then there's integrating AI into the results.
1:08:39 I’m sure there were some battles in meetings on that one.
1:08:47 Look, in some ways, when mobile came, people wanted answers to more questions.
1:08:56 So we've been constantly evolving it. But you're right, in this moment that evolution is bigger, because the underlying technology is becoming much more capable.
1:09:04 You can have AI give a lot of context. But one of our important design goals is that when you come to Google Search,
1:09:10 you’re going to get a lot of context, but you’re going to go and find a lot of things out on the web.
1:09:15 So that will be true in AI mode, in AI overviews and so on.
1:09:33 But to our earlier conversation, we are still giving you access to links. Think of the AI as a layer that gives you context and summary; maybe in AI Mode, you can have a dialogue with it, back and forth, on your journey, right?
1:09:37 But through it all, you're learning what's out there in the world.
1:09:39 So those core principles don’t change.
1:09:45 But I think AI Mode allows us to push the bleeding edge; we have our best models there, right?
1:09:48 Models which are using search as a deep tool.
1:10:00 For every query you're asking, it's fanning out, doing multiple searches, assembling that knowledge in a way so you can go and consume what you want to, right?
1:10:02 And that’s how we think about it.
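The fan-out pattern described here, one question expanded into several searches run in parallel and then merged, can be sketched roughly like this. The query-expansion and search functions are illustrative stand-ins, not Google's actual pipeline:

```python
from concurrent.futures import ThreadPoolExecutor

def expand_query(question):
    # Illustrative stand-in for the model step that rewrites one
    # question into several narrower search queries.
    return [f"{question} overview", f"{question} reviews", f"{question} pricing"]

def search(query):
    # Stand-in for a single web-search call; returns fake result titles.
    return [f"result for '{query}' #{i}" for i in range(2)]

def fan_out(question):
    queries = expand_query(question)
    # Issue the searches concurrently, then flatten and dedupe results
    # so a model can synthesize one answer with links back to the web.
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(search, queries)
    merged = []
    for results in result_lists:
        for r in results:
            if r not in merged:
                merged.append(r)
    return merged

print(len(fan_out("best hiking boots")))  # 3 queries x 2 results each = 6
```

The interesting design point is that the expansion step and the merge step are where the model adds value; the individual searches in between are ordinary retrieval.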
1:10:07 I got to listen to a bunch of Elizabeth, Liz, Reid describing this.
1:10:09 Two things stood out to me that she mentioned.
1:10:22 One is what you were talking about, the query fan-out, which I didn't even think about before: the powerful aspect of integrating a bunch of stuff from the web for you in one place.
1:10:29 So yes, it provides that context so that you can decide which page to then go on to.
1:10:38 The other really, really big thing she mentioned, which speaks to the productivity multiplier we were talking about earlier, was language.
1:10:57 One of the things you don't quite appreciate is that through AI Mode, for non-English speakers, you make, let's say, English-language websites accessible in the reasoning process, as you try to figure out what you're looking for.
1:11:00 Of course, once you show up to a page, you can use a basic translate.
1:11:14 But in that process of figuring it out, if you empathize with the large part of the world that doesn't speak English, their web is much smaller in their original language.
1:11:18 And so it unlocks, again, that huge cognitive capacity there.
1:11:24 You take it for granted here, with all the bloggers and the journalists writing about AI Mode.
1:11:30 You forget what this now unlocks, because Gemini is really good at translation.
1:11:31 No, it is.
1:11:39 I mean, the multimodality, the translation, its ability to reason, and we are dramatically improving tool use.
1:11:47 All of that, putting that power in the flow of search... look, I'm super excited.
1:11:53 With the AI overviews, we’ve seen the product has gotten much better.
1:11:55 We measure it using all kinds of user metrics.
1:11:59 It’s obviously driven strong growth of the product.
1:12:07 And, you know, we’ve been testing AI mode, you know, it’s now in the hands of millions of people.
1:12:10 And the early metrics are very encouraging.
1:12:13 So, look, I’m excited about this next chapter of search.
1:12:16 For people who are not aware of this or haven't thought it through:
1:12:21 So there’s the 10 blue links with the AI overview on top that provides a nice summarization.
1:12:22 You can expand it.
1:12:26 And you have sources and links now embedded.
1:12:33 I believe, at least Liz said so, I actually didn't notice it, but there are ads in the AI Overviews also.
1:12:36 I don't think there are ads in AI Mode.
1:12:40 When will there be ads in AI Mode?
1:12:42 So when do you think... I mean, okay.
1:12:53 We should say that in the '90s, I remember the animated banner GIFs that would take you to some shady website that had nothing to do with anything.
1:12:55 AdSense revolutionized advertising.
1:13:05 It's one of the greatest inventions in recent history, because it allows us to have access to all these kinds of services for free.
1:13:08 So ads fuel a lot of really powerful services.
1:13:17 And at its best, it’s showing you relevant ads, but also very importantly, in a way that’s not super annoying, right?
1:13:24 So when do you think it’s possible to add ads into AI mode?
1:13:29 And what does that look like from a classy, not annoying perspective?
1:13:30 Two things.
1:13:36 In the early part of AI Mode, we'll obviously focus more on the organic experience, to make sure we are getting it right.
1:13:44 I think the fundamental value of ads is that they enable us to deploy these services to billions of people.
1:13:53 Second, the reason we've always taken ads seriously is that we view ads as commercial information, but it's still information.
1:13:56 And so we bring the same quality metrics to it.
1:14:06 With AI Mode, to our earlier conversation, I think AI itself will help us figure out the best way to do it over time.
1:14:16 Given that we are giving context around everything, I think it'll give us more opportunities to also explain: okay, here's some commercial information.
1:14:22 Like today as a podcaster, you do it at certain spots and you probably figure out what’s best in your podcast.
1:14:35 So there are aspects of that. But the underlying needs don't change: people value commercial information, and businesses are trying to connect with users.
1:14:41 All of that doesn't change in an AI moment. But look, we will rethink it.
1:14:46 You’ve seen us in YouTube now do a mixture of subscription and ads.
1:14:53 Like obviously, you know, we are now introducing subscription offerings across everything.
1:15:00 And so as part of that, the optimization point will end up being in a different place as well.
1:15:09 Do you see a trajectory in the possible future where AI mode completely replaces the 10 blue links plus AI overview?
1:15:15 Our current plan is that AI Mode is going to be there as a separate tab for people who really want to experience that.
1:15:25 It's not yet at the level of our main search page, but as features work, we'll keep migrating them to the main page.
1:15:28 And so you can view it as a continuum.
1:15:31 AI mode will offer you the bleeding edge experience.
1:15:39 But things that work will keep flowing over to AI Overviews in the main experience.
1:15:43 And the idea is that AI Mode will still take you to the web, to the human-created web.
1:15:43 Yes.
1:15:46 That’s going to be a core design principle for us.
1:15:49 So really, the users decide, right? They drive this.
1:15:49 Yeah.
1:15:54 It’s just exciting, a little bit scary that it might change the internet.
1:16:03 Because Google has been dominant with a very specific look and idea of what it means to use the internet.
1:16:10 And as you move to AI Mode, I mean, it's just a different experience.
1:16:18 I think Liz was talking about, and you've mentioned, that you ask more questions, you ask longer questions.
1:16:20 Dramatically different types of questions.
1:16:21 Yeah.
1:16:23 Like it actually fuels curiosity.
1:16:32 For me, I've been asking a much larger number of questions of this black-box machine, let's say, whatever it is.
1:16:43 And with the AI Overview, it's interesting, because I still value the human; I still ultimately want to end up on the human-created web.
1:16:46 But I, like you said, the context really helps.
1:16:50 It helps us deliver higher-quality referrals, right?
1:16:55 People have a much higher likelihood of finding what they're looking for.
1:17:00 They’re exploring, they’re curious, their intent is getting satisfied more.
1:17:02 So that’s what all our metrics show.
1:17:04 It makes the humans that create the web nervous.
1:17:06 The journalists are getting nervous.
1:17:07 They’ve already been nervous.
1:17:11 Like I mentioned, CNN is nervous because of podcasts.
1:17:13 It makes people nervous.
1:17:21 Look, I think news and journalism will play an important role, you know, in the future.
1:17:24 We’re pretty committed to it, right?
1:17:34 And so it's about making sure that ecosystem is healthy. In fact, I think we'll be able to differentiate ourselves as a company over time because of our commitment there.
1:17:41 So it's something I definitely value a lot, and as we are designing, we'll continue prioritizing those approaches.
1:17:50 I'm sure, for the people who want it, there can be a fine-tuned AI model producing clickbait hit pieces that will replace current journalism.
1:17:52 That’s a shot at journalism.
1:17:53 Forgive me.
1:18:00 But I find that if you’re looking for really strong criticism of things, that Gemini is very good at providing that.
1:18:01 Oh, absolutely.
1:18:03 It’s better than anything for now.
1:18:18 I mean, people are concerned that bias will be introduced; as the AI systems become more and more powerful, there's incentive for sponsors to roll in and try to control the output of the AI models.
1:18:22 But for now, the objective criticism that’s provided is way better than journalism.
1:18:25 Of course, the argument is the journalists are still valuable.
1:18:32 But then, I don’t know, the crowdsourced journalism that we get on the open internet is also very, very powerful.
1:18:36 I feel like they’re all super important things.
1:18:40 I think it’s good that you get a lot of crowdsourced information coming in.
1:18:47 But I feel like there is real value for high-quality journalism, right?
1:19:00 And I think these are all complementary. I find myself constantly seeking out objective reporting on things, too.
1:19:06 And sometimes you get more context from the crowdsourced sources you read online.
1:19:08 But I think both end up playing a super important role.
1:19:20 So, you've spoken a little bit about this, and Demis talked about this: the slice of the web that will increasingly become about providing information for agents.
1:19:24 So we can think about it as, like, two layers of the web.
1:19:26 One is for humans, one is for agents.
1:19:29 Do you see the part of the web
1:19:33 that's for AI agents growing over time?
1:19:43 Do you see there still being value long-term, five, ten years out, for the web that's human-created for the purpose of human consumption?
1:19:45 Or will it all be agents in the end?
1:19:59 Today, not everyone does this, but you go to a big retail store and you love walking the aisles, you love shopping; or a grocery store, picking out food, etc.
1:20:02 But you’re also online shopping and they’re delivering, right?
1:20:07 So both are complementary, and, like, that’s true for restaurants, etc.
1:20:13 So I do feel like, over time, websites will also get better for humans.
1:20:14 They will be better designed.
1:20:18 AI might actually design them better for humans.
1:20:25 So I expect the web to get a lot richer and more interesting and better to use.
1:20:33 At the same time, I think there’ll be an agentic web, which is also making a lot of progress.
1:20:40 And you have to solve the business value and the incentives to make that work well, right?
1:20:41 Like, for people to participate in it.
1:20:44 But I think both will coexist.
1:20:49 And obviously, the agents may not need the same…
1:20:50 I mean, not may not.
1:20:56 They won’t need the same design and UI paradigms which humans need to interact with.
1:20:59 But I think both will be there.
1:21:02 I have to ask you about Chrome.
1:21:06 I have to say, for me personally, Google Chrome was probably…
1:21:08 I don’t know.
1:21:10 I'd have to see where I would rank it.
1:21:13 But my temptation…
1:21:16 And this is not a recency bias, although it might be a little bit.
1:21:21 But I think it’s up there, top three, maybe the number one piece of software for me of all time.
1:21:22 So it’s incredible.
1:21:23 It’s really incredible.
1:21:26 The browser is a window to the web.
1:21:34 And Chrome, even initially and continuing for many years, really pushed the innovation on that front when it was stale.
1:21:36 And it continues to challenge.
1:21:39 It continues to get more performant, more efficient.
1:21:41 You just innovate constantly.
1:21:44 And the Chromium aspect of it.
1:21:51 Anyway, you were one of the pioneers of Chrome, pushing for it when it was an insane idea.
1:21:57 Probably one of the ideas that was criticized and doubted and so on.
1:22:03 So can you tell me the story of what it took to push for Chrome?
1:22:04 What was your vision?
1:22:17 Look, it was such a dynamic time, you know, around 2004, 2005, with Ajax, the web suddenly becoming dynamic.
1:22:27 In a matter of a few months, Flickr, Gmail, Google Maps, all kind of came into existence, right?
1:22:37 You suddenly had an interactive, dynamic web; the web was evolving from simple text pages, simple HTML, to rich, dynamic applications.
1:22:45 But at the same time, you could see the browser was never meant for that world, right?
1:22:57 JavaScript execution was super slow; the browser was far from being an operating system for that rich, modern web which was coming into place.
1:23:00 So that’s the opportunity we saw.
1:23:03 It was an amazing early team.
1:23:11 I still remember the day we got a shell of WebKit running, and how fast it was.
1:23:16 We had a clear vision for building the browser.
1:23:21 We wanted to bring core OS principles into the browser, right?
1:23:25 So we built a secure browser sandbox.
1:23:27 Each tab was its own process.
1:23:31 These things are common now, but at the time it was pretty unique.
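The "each tab was its own process" design can be illustrated with a toy sketch: a crash in one renderer process leaves the others untouched. This uses Python's multiprocessing purely as a stand-in; it is not Chrome's actual sandbox code:

```python
import multiprocessing as mp

def render_tab(url):
    # Stand-in for a renderer process. A crash here is isolated to
    # this one process, not the whole "browser" (return value unused).
    if url == "crashy.example":
        raise RuntimeError("renderer crashed")
    return f"rendered {url}"

def browser(urls):
    statuses = {}
    for url in urls:
        # One OS process per tab, mirroring Chrome's multi-process design.
        proc = mp.Process(target=render_tab, args=(url,))
        proc.start()
        proc.join()
        # A non-zero exit code means that tab's renderer died;
        # the browser process itself keeps running.
        statuses[url] = "ok" if proc.exitcode == 0 else "crashed"
    return statuses

if __name__ == "__main__":
    print(browser(["a.example", "crashy.example", "b.example"]))
```

The design choice being illustrated: isolation comes from the OS process boundary, so a bug in one tab cannot corrupt another tab's memory or take the whole browser down.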
1:23:46 We found an amazing team in Aarhus, Denmark, with a leader who built V8, the JavaScript VM, which at the time was 25 times faster than any other JavaScript VM out there.
1:23:48 And by the way, you’re right.
1:23:51 We open-sourced it all and put it in Chromium, too.
1:24:00 But we really thought the web could work much better, much faster, and that you could be much safer browsing the web.
1:24:09 And the name Chrome came about because we literally felt the chrome of the browser was getting clunkier.
1:24:11 We wanted to minimize it.
1:24:13 And so that was the origins of the project.
1:24:20 I'm definitely, obviously, a highly biased person here talking about Chrome.
1:24:24 But it's the most fun I've had building a product from the ground up.
1:24:27 And it was an extraordinary team.
1:24:31 My co-founders in the project were terrific.
1:24:33 So, definitely fond memories.
1:24:38 So, for people who don’t know, Sundar, it’s probably fair to say you’re the reason we have Chrome.
1:24:44 Yes, I know there were a lot of incredible engineers, but you were pushing for it inside a company that probably was opposing it.
1:24:46 Because it’s a crazy idea.
1:24:50 Because, as everybody probably knows, it’s incredibly difficult to build a browser.
1:24:56 Yeah, look, Eric, who was the CEO at that time, I think it was less that he was opposed to it.
1:24:59 He firsthand knew what a crazy thing it is to go build a browser.
1:25:07 And so he definitely was like, you know, there was a crazy aspect to actually wanting to go build a browser.
1:25:11 But, he was very supportive.
1:25:13 You know, everyone, the founders were.
1:25:19 I think once we started building something and could use it and see how much better it was.
1:25:23 From then on, like, you know, you’re really tinkering with the product and making it better.
1:25:25 It came to life pretty fast.
1:25:33 What wisdom do you draw from that, from pushing through on a crazy idea in the early days that ends up being revolutionary?
1:25:37 What, for future crazy ideas like it?
1:25:42 I mean, this is something Larry and Sergey have articulated clearly.
1:25:51 I really internalized this early on: their whole philosophy around working on moonshots.
1:25:57 When you work on something very ambitious, first of all, it attracts the best people, right?
1:25:58 So, that’s an advantage you get.
1:26:03 Number two, because it's so ambitious, you don't have many others working on something that crazy.
1:26:06 So, you pretty much have the path to yourselves, right?
1:26:07 It’s like Waymo and self-driving.
1:26:13 Number three, even if you don't quite accomplish what you set out to do,
1:26:17 and you end up doing 60, 80% of it, it'll end up being a terrific success.
1:26:22 So that's the advice I would give people, right?
1:26:27 Just aiming for big ideas has all these advantages.
1:26:34 It's risky, but it also has all these advantages, which I don't think people fully internalize.
1:26:38 I mean, you mentioned one of the craziest, biggest moonshots, which is Waymo.
1:26:47 When I first saw a Waymo vehicle over a decade ago, a Google self-driving car,
1:26:52 it was, for me, an aha moment for robotics.
1:26:56 It made me fall in love with robotics even more than before.
1:26:58 It gave me a glimpse into the future, so it’s incredible.
1:27:02 I’m truly grateful for that project, for what it symbolizes.
1:27:04 But it’s also a crazy moonshot.
1:27:10 For a long time, Waymo has been, like you mentioned with scuba diving,
1:27:16 just not listening to anybody, calmly making the system better and better, more testing,
1:27:20 just expanding the operational domain more and more.
1:27:24 First of all, congrats on 10 million paid robo-taxi rides.
1:27:33 What lessons do you take from Waymo about, like, the, the, the perseverance, the persistence on that project?
1:27:37 Look, I'm really proud of the progress we've had with Waymo.
1:27:46 One of the things we were very committed to is what the final 20% looks like. I mean, we always say, right, the first 80% is easy.
1:27:48 The final 20% takes 80% of the time.
1:27:55 I think we were working through that phase with Waymo, but we were aware of that.
1:27:57 We knew we were at that stage.
1:28:07 While there were many other self-driving companies, we knew the technology gap between us and them was there.
1:28:17 In fact, right at the moment when others were doubting Waymo is when I made the decision to invest more in Waymo, right?
1:28:21 So in some ways it's counterintuitive.
1:28:33 But look, we've always been a deep technology company, and Waymo is a version of building an AI robot that works well.
1:28:40 And so we get attracted to problems like that; the caliber of the teams there is phenomenal.
1:28:43 And so I know you follow the space super closely.
1:28:50 You know, I’m talking to someone who knows the space well, but it was very obvious it’s going to get there.
1:29:00 And there's still more work to do, but it's a good example of where we always prioritized being ambitious and safe at the same time.
1:29:13 Right. We were equally committed to both and pushed hard, and I couldn't be more thrilled with how it's working, how much people love the experience.
1:29:18 And this year we've definitely scaled up a lot, and we'll continue scaling up in '26.
1:29:22 That said, the competition is heating up.
1:29:29 You've been friendly with Elon, even though he's technically a competitor, and you've been friendly with a lot of tech CEOs.
1:29:32 In that way, just showing respect towards them and so on.
1:29:35 What do you think about the robotaxi efforts that Tesla is doing?
1:29:36 Do you see this competition?
1:29:37 What do you think?
1:29:38 Do you like the competition?
1:29:46 We are one of the earliest and biggest backers of SpaceX as Google, right?
1:29:55 So, you know, thrilled with what SpaceX is doing and fortunate to be investors as a company there.
1:29:58 Right. And look, we don't compete with Tesla directly.
1:30:00 We are not making cars, et cetera, right?
1:30:03 We are building L4/L5 autonomy.
1:30:08 We're building the Waymo Driver, which is general-purpose and can be used in many settings.
1:30:12 They’re obviously working on making Tesla self-driving too.
1:30:18 I just assume it's a given that Elon will succeed in whatever he does.
1:30:22 That is not something I question.
1:30:29 But these spaces are such vast spaces.
1:30:39 Think about transportation, the opportunity space; the Waymo Driver is a general-purpose technology we can apply in many situations.
1:30:50 So you have a vast green space. In all future scenarios, I see Tesla doing well and Waymo doing well.
1:31:04 Like we mentioned with the Neolithic package, I think it's very possible that in the quote-unquote AI package, when the history is written, autonomous vehicles, self-driving cars, are the big thing that changes everything.
1:31:14 Imagine, over a period of a decade or two, the complete transition from manually driven to autonomous, in ways we might not predict.
1:31:20 It might change the way we move about the world completely. There's the possibility of that.
1:31:34 And then the second- and third-order effects. As you're seeing now with Tesla, very possibly you'd see some internally with Alphabet: maybe Waymo, maybe some of the Gemini Robotics stuff.
1:31:41 It might lead you into the other domains of robotics, because we should remember that Waymo is a robot.
1:31:44 It just happens to be on four wheels.
1:31:50 So, you said the next big thing, and we can also throw that into the AI package.
1:31:54 The big aha moment might be in the space of robotics.
1:31:57 What do you think that would look like?
1:32:01 Demis and the Google DeepMind team are very focused on Gemini Robotics, right?
1:32:05 So we are definitely building the underlying models well.
1:32:08 So we have a lot of investments there.
1:32:11 And I think we are also pretty cutting edge in our research there.
1:32:14 So we are definitely driving that direction.
1:32:18 We obviously are thinking about applications in robotics.
1:32:20 We'll work at it seriously.
1:32:25 We are partnering with a few companies today, but it's an area where I would say: stay tuned.
1:32:33 We are yet to fully articulate our plans externally, but it's an area we are definitely committed to driving a lot of progress in.
1:32:37 But I think AI ends up driving that massive progress in robotics.
1:32:41 The field has been held back for a while.
1:32:45 I mean, the hardware has made extraordinary progress.
1:33:00 The software had been the challenge. But with AI now and the generalized models we are building, we're getting them to work in the real world in a safe way, in a generalized way.
1:33:02 It’s the frontier we’re pushing pretty hard on.
1:33:09 Well, it's really nice to see the models and the different teams integrated, to where all of them are pushing towards one world model that's being built.
1:33:16 So from all these different angles, multimodal and otherwise, you're ultimately trying to get to Gemini.
1:33:29 The same thing that would make AI mode really effective in answering your questions, which requires a kind of world model, is the same kind of thing that would help a robot be useful in the physical world.
1:33:31 So everything’s aligned.
1:33:41 That is what makes this moment so unique, because running a company, for the first time, you can make one investment in a very deep, horizontal way.
1:33:46 On top of it, you can, like, drive multiple businesses forward, right?
1:33:51 And, you know, that’s effectively what we are doing in Google and Alphabet, right?
1:33:55 Yeah, it’s all coming together like it was planned ahead of time, but it’s not, of course.
1:33:56 It’s all distributed.
1:34:03 I mean, Gmail and Sheets and all these other incredible services; I could sing Gmail's praises for years.
1:34:05 I mean, it just revolutionized email.
1:34:11 But the moment you start to integrate AI, Gemini, into Gmail, I mean, that’s the other thing.
1:34:15 Speaking of productivity multiplier, people complain about email, but that changed everything.
1:34:18 Email, like the invention of email changed everything.
1:34:19 And it’s been ripe.
1:34:24 There’s been a few folks trying to revolutionize email, some of them on top of Gmail.
1:34:26 But that’s, like, ripe for innovation.
1:34:35 Not just spam filtering, but you demoed a really nice demo of personalized responses.
1:34:40 And at first, I felt really bad about that.
1:34:45 But then I realized there's nothing to feel bad about.
1:34:53 Because the example you gave is when a friend asks: you went to whatever hiking location, do you have any advice?
1:34:57 And it just searches through all your information to give them good advice.
1:34:59 And then you put the cherry on top.
1:35:01 Maybe some love or whatever, camaraderie.
1:35:05 But the informational aspect, the knowledge transfer, it does for you.
1:35:07 I think there’ll be important moments.
1:35:14 It should be like today: if you write a card in your own handwriting and send it to someone, that's a special thing.
1:35:16 Similarly, there'll be a time.
1:35:20 I mean, with your friends, maybe a friend wrote and said he's not doing well or something.
1:35:26 Those are the moments you want to save your time for: writing something, reaching out.
1:35:36 But saying, give me all the details of the trip you took, that to me makes a lot of sense for an AI assistant to help you with, right?
1:35:39 And so I think both are important.
1:35:42 But I think I’m excited about that direction.
1:35:46 Yeah, I think ultimately it gives more time for us humans to do the things we humans find meaningful.
1:35:53 And I think it scares a lot of people because we’re going to have to ask ourselves the hard question of, like, what do we find meaningful?
1:35:55 And I’m sure there’s answers.
1:36:00 I mean, it’s the old question of the meaning of existence is you have to try to figure that out.
1:36:06 That might be ultimately parenting or being creative in some domains of art or writing.
1:36:16 And it challenges you. It's a good question to ask yourself: in my life, what is the thing that brings me the most joy and fulfillment?
1:36:21 And if I’m able to actually focus more time on that, that’s really powerful.
1:36:25 I think that’s the, you know, that’s the holy grail.
1:36:29 If you get this right, I think it allows more people to find that.
1:36:34 I have to ask you, on the programming front, AI is getting really good at programming.
1:36:37 Gemini, both the agentic stuff and just the LLM, has been incredible.
1:36:43 So a lot of programmers are really worried that their jobs, they will lose their jobs.
1:36:46 How worried should they be?
1:36:53 And how should they adjust so they can be thriving in this new world where more and more code is written by AI?
1:37:09 I think a few things. Looking at Google, we've given various stats around, like, 30% of code now using AI-generated suggestions, or whatever it is.
1:37:21 But the most important metric is: how much has our engineering velocity increased as a company due to AI, right?
1:37:24 And it's tough to measure, and we try to measure it rigorously.
1:37:28 And our estimate is that that number is now at 10%, right?
1:37:38 Across the company, we've accomplished a 10% engineering velocity increase using AI.
1:37:44 But we plan to hire more engineers next year, right?
1:37:51 Because the opportunity space of what we can do is expanding too, right?
1:38:10 And so hopefully, at least in the near to mid-term, for many engineers it frees up more and more of their time. Even in engineering and coding, there are aspects which are so much fun.
1:38:29 You're designing, you're architecting, you're solving a problem. There's also a lot of grunt work, which all goes hand in hand, but AI hopefully takes a lot of that away, makes it even more fun to code, and frees up more time to create, problem-solve, and brainstorm with your fellow colleagues, right?
1:38:32 So that’s the opportunity there.
1:38:45 And second, I think, like, you know, it’ll put creative power in more people’s hands, which means people create more, and that means there’ll be more engineers doing more things.
1:38:58 So it’s tough to fully predict, but, you know, I think in general, in this moment, it feels like people will adopt these tools and be better programmers.
1:39:02 Like there are more people playing chess now than ever before, right?
1:39:13 So, you know, it feels positive that way to me, at least speaking from within a Google context, is how I would, you know, talk to them about it.
1:39:19 I just know anecdotally that a lot of great programmers are generating a lot of code.
1:39:29 So their productivity is way up. They’re not always using all the code, you know, there’s still a lot of editing. But even for me, and programming is a side thing for me,
1:39:32 I think I’m like 5x more productive.
1:39:46 And even for a large code base that’s touching a lot of users, like Google’s does, I’m imagining that very soon that productivity should be going up even more.
1:39:52 The big unlock will be as we make the agentic capabilities much more robust, right?
1:39:55 I think that’s what unlocks that next big wave.
1:39:58 I think the 10% is like a massive number.
1:40:09 Like, you know, if tomorrow I showed up and said you can improve a large organization’s productivity by 10% when you have tens of thousands of engineers, that’s a phenomenal number.
1:40:18 And, you know, that’s different than when others cite a statistic saying, like, you know, this percentage of code is now written by AI.
1:40:19 I’m talking more about like overall.
1:40:20 Actual productivity.
1:40:22 Actual productivity, right?
1:40:24 Engineering productivity, which is two different things.
1:40:28 And that is the more important metric.
1:40:32 But I think it’ll get better, right?
1:40:40 And, like, you know, I think there’s no engineer who, if you magically became 2x more productive tomorrow, wouldn’t just go create more things.
1:40:42 You’re going to create more value added things.
1:40:45 And so I think you’ll, you’ll find more satisfaction in your job, right?
1:40:47 So, and there’s a lot of aspects.
1:40:56 I mean, the actual Google code base might just improve because it’ll become more standardized, easier for people to move about, because AI will help with that.
1:41:02 And therefore that will also allow the AI to understand the entire code base better, which makes the engineering aspect better, too.
1:41:13 And so, I’ve been using Cursor a lot as a way to program with Gemini and other models. One of its powerful features is that it’s aware of the entire code base.
1:41:15 And that allows you to ask questions of it.
1:41:20 It allows the agents to move about that code base in a really powerful way.
1:41:21 I mean, that’s a huge unlock.
1:41:27 Think about like, you know, migrations, refactoring old code bases.
1:41:27 Refactoring, yeah.
1:41:28 Yeah.
1:41:33 I mean, think about like, you know, once we can do all this in a much better, more robust way than where we are today.
1:41:38 I think in the end, everything will be written in JavaScript and run in Chrome.
1:41:40 I think it’s all going to that direction.
1:41:50 I mean, just for fun, Google has legendary coding interviews, like rigorous interviews for the engineers.
1:41:54 Can you comment on how that has changed in the era of AI?
1:42:01 It’s just such a weird thing. In the whiteboard interview, I assume you’re not allowed to use prompts.
1:42:03 Such a good question.
1:42:14 Look, I do think, you know, we’re making sure, you know, we’ll introduce at least one round of in-person interviews for people.
1:42:15 Yeah.
1:42:18 Just to make sure the fundamentals are there, I think they’ll end up being important.
1:42:20 But it’s an equally important skill.
1:42:26 Look, if you can use these tools to generate better code, like, you know, I think that’s an asset.
1:42:32 And so, you know, overall, I think it’s a massive positive.
1:42:43 On vibe coding: do you recommend people, students interested in programming, still get an education in computer science, a college education?
1:42:44 What do you think?
1:42:44 I do.
1:42:46 If you have a passion for computer science, I would.
1:42:49 You know, computer science is obviously a lot more than programming alone.
1:42:50 So I would.
1:42:56 I still don’t think I would change what you pursue.
1:43:03 I think AI will horizontally impact every field.
1:43:06 It’s pretty tough to predict in what ways.
1:43:14 So any education in which you’re learning good first-principles thinking, I think, is a good education.
1:43:16 You’ve revolutionized web browsing.
1:43:18 You’ve revolutionized a lot of things over the years.
1:43:22 Android changed the game.
1:43:24 It’s an incredible operating system.
1:43:26 We could talk for hours about Android.
1:43:28 What does the future of Android look like?
1:43:33 Is it possible it becomes more and more AI-centric?
1:43:46 Especially now that you throw into the mix Android XR, being able to do augmented reality, mixed reality, and virtual reality in the physical world.
1:43:53 You know, the best innovations in computing have come through a paradigm change in IO, right?
1:44:02 Like, you know, with the graphical user interface, and then with multi-touch in the context of mobile, and voice later on.
1:44:07 Similarly, I feel like, you know, AR is that next paradigm.
1:44:15 I think it was held back by two things. One, the system integration challenge: making good AR is very, very hard.
1:44:21 The second thing is, you need AI; otherwise the IO is too complicated.
1:44:29 For you to have a natural, seamless IO to that paradigm, AI ends up being super important.
1:44:37 So, this is why Project Astra ends up being super critical for that Android XR world.
1:44:46 Well, when you use the glasses, you know, I’ve always been amazed at how useful these things are going to be.
1:44:50 So, I, look, I think it’s a real opportunity for Android.
1:44:54 I think XR is one way it’ll kind of really come to life.
1:44:58 But I think there’s an opportunity to rethink the mobile OS too, right?
1:45:03 I think we’ve been kind of living in this paradigm of, like, apps and shortcuts.
1:45:05 All that won’t go away.
1:45:17 But again, like, if you’re trying to get stuff done at an operating system level, you know, it needs to be more agentic so that you can kind of describe what you want to do.
1:45:24 Or, like, it proactively understands what you’re trying to do, learns from how you’re doing things over and over again, and kind of is adapting to you.
1:45:27 All that is kind of, like, the unlock we need to go and do.
1:45:35 With a basic, efficient, minimalist UI. I’ve gotten a chance to try the glasses, and they’re incredible.
1:45:36 It’s the little stuff.
1:45:38 It’s hard to put into words, but no latency.
1:45:40 It just works.
1:45:46 Even that little map demo where you look down, and you look up, and there’s a very smooth transition between the two.
1:45:53 And it’s useful. A very small amount of useful information is shown to you.
1:45:59 Enough not to distract from the world outside, but enough to provide a bit of context when you need it.
1:46:07 And some of that, in order to bring that into reality, you have to solve a lot of the OS problems to make sure it works.
1:46:10 When you’re integrating the AI into the whole thing.
1:46:15 So, everything you do launches an agent that answers some basic question.
1:46:17 Good moonshot.
1:46:17 You know, I love it.
1:46:18 Yeah, it’s crazy.
1:46:26 But, you know, I think it’s much closer to reality than other moonshots.
1:46:34 You know, we expect to have glasses in the hands of developers later this year, and, you know, in consumers’ hands next year.
1:46:35 So, it’s an exciting time.
1:46:38 Yeah, extremely well executed.
1:46:41 Beam, all this stuff, you know, because sometimes you don’t know.
1:46:47 Like, somebody commented on a top comment on one of the demos of Beam.
1:46:55 They said this will either be killed off in five weeks or revolutionize all meetings in five years.
1:47:04 And Google very much tries so many things, and sometimes, sadly, kills off very promising projects because there are so many other things to focus on.
1:47:06 I use so many Google products.
1:47:08 Google Voice, I still use.
1:47:10 I’m so glad that’s not being killed off.
1:47:11 That’s still alive.
1:47:14 Thank you, whoever is defending that because it’s awesome.
1:47:15 And it’s great.
1:47:16 They keep innovating.
1:47:19 I just want to list off just as a big thank you.
1:47:21 So, search, obviously, Google revolutionized.
1:47:22 Chrome.
1:47:24 And all of these could be multi-hour conversations.
1:47:26 Gmail.
1:47:29 I’ve been singing Gmail praises forever.
1:47:30 Maps.
1:47:33 Incredible technological innovation and revolutionizing mapping.
1:47:35 Android, like we talked about.
1:47:36 YouTube, like we talked about.
1:47:37 AdSense.
1:47:39 Google Translate.
1:47:44 For the academic mind, Google Scholar is incredible.
1:47:46 And also the scanning of the books.
1:47:54 So, making all the world’s knowledge accessible, even when that knowledge is a kind of niche thing, which Google Scholar is.
1:47:59 And then, obviously, with DeepMind, with AlphaZero, AlphaFold, AlphaEvolve.
1:48:02 I could talk forever about AlphaEvolve.
1:48:03 That’s mind-blowing.
1:48:04 All of that released.
1:48:13 And all of that was released this year, the same year those brilliant articles were written about how Google is done.
1:48:23 And, like we talked about, pioneering self-driving cars and quantum computing, which could be another thing that is low-key
1:48:26 on its way to changing the world forever.
1:48:31 So, another pothead slash micro-kitchen question.
1:48:36 If you build AGI, what kind of question would you ask it?
1:48:39 What would you want to talk about?
1:48:46 Definitively, Google has created AGI that can basically answer any question.
1:48:48 What topic are you going to go to?
1:48:51 Where are you going to start?
1:48:52 It’s a great question.
1:49:01 Maybe it’s proactive by then and should tell me a few things I should know.
1:49:10 But I think if I were to ask it, I think it’ll help us understand ourselves much better in a way that will surprise us, I think.
1:49:17 And so, maybe that’s it. You already see people do it with the products.
1:49:20 And so, but, you know, in an AGI context, I think that’ll be pretty powerful.
1:49:23 At a personal level, or about general human nature?
1:49:35 At a personal level, like you talking to AGI, I think, you know, there is some chance it’ll kind of understand you in a very deep way.
1:49:38 I think, you know, in a profound way, that’s a possibility.
1:49:52 I think there is also the obvious thing of, like, maybe it helps us understand the universe better, you know, in a way that expands the frontiers of our understanding of the world.
1:49:55 That is something super exciting.
1:50:00 But, look, I really don’t know.
1:50:04 I think, you know, I haven’t had access to something that powerful yet.
1:50:06 But I think those are all possibilities.
1:50:26 I think on the personal level, asking a sequence of questions like that about yourself, about what makes me happy, I think we’d be very surprised what we learn. Through a sequence of questions and answers, we might explore some profound truths.
1:50:31 In the way that sometimes art reveals to us, great books reveal to us, great conversations with loved ones reveal.
1:50:37 Things that are obvious in retrospect, but are nice when they’re said.
1:50:41 But for me, number one question is about how many alien civilizations are there.
1:50:42 100%.
1:50:43 That’s going to be your first question.
1:50:47 Number one, how many living and dead alien civilizations?
1:50:50 Maybe a bunch of follow-ups, like how close are they?
1:50:51 Are they dangerous?
1:50:56 If there’s no alien civilizations, why?
1:51:02 Or if there’s no advanced alien civilizations, but bacteria like life everywhere, why?
1:51:05 What is the barrier preventing you from getting to that?
1:51:13 Is it that when you get sufficiently intelligent, you end up destroying yourselves?
1:51:23 Because you need competition in order to develop an advanced civilization, and when you have competition, it’s going to lead to military conflict, and conflict eventually kills everybody.
1:51:25 I don’t know, I’m going to have that kind of discussion.
1:51:26 Get an answer to the Fermi paradox, yeah.
1:51:27 Exactly.
1:51:29 And like have a real discussion about it.
1:51:37 I’m realizing now with your answer that yours is a more productive answer, because I’m not sure what I’m going to do with that information.
1:51:42 But maybe it speaks to the general human curiosity that Liz talked about, that we’re all just really curious.
1:51:51 And making the world’s information accessible allows our curiosity to be satiated some, with AI even more.
1:51:56 We can be more and more curious and learn more about the world, about ourselves.
1:52:24 And in so doing, I always wonder, and I don’t know if you can comment on this: is it possible to measure, not the GDP productivity increase like we talked about, but the increase in the breadth and depth of human knowledge that Google has unlocked with Google Search, and now with AI Mode, with Gemini? It’s a difficult thing to measure.
1:52:40 Many years ago, there was, I think, an MIT study that estimated the impact of Google Search, and they basically said it’s equivalent, on a per-person basis, to a few thousand dollars of value per year per person, right?
1:52:44 Like, it’s the value that got created per year, right?
1:52:48 And, but it’s, yeah, it’s tough to capture these things, right?
1:52:54 You kind of take it, take it for granted as these things come, and the frontier keeps moving.
1:53:00 But, you know, how do you measure the value of something like AlphaFold over time, right?
1:53:02 And so on.
1:53:05 And also the increase in quality of life when you learn more.
1:53:13 I have to say, with some of the programming I do now done by AI, for some reason, I’m more excited to program.
1:53:13 Yeah.
1:53:28 And it’s the same with knowledge, with discovering things about the world: it makes you more excited to be alive, and the more curious you are, the more exciting it is to live and experience the world.
1:53:35 I don’t know if that makes you more productive; probably not nearly as much as it makes you happy to be alive.
1:53:38 And that’s a hard thing to measure.
1:53:41 The quality of life increases some of these things do.
1:53:49 As AI continues to get better and better at everything that humans do, what do you think is the biggest thing that makes us humans special?
1:54:05 Look, I think it’s tough to articulate. I mean, the essence of humanity, there’s something about, you know, the consciousness we have, what makes us uniquely human.
1:54:17 Maybe the lines will blur over time, and it’s tough to articulate, but hopefully, you know, we live in a world where you make resources more plentiful
1:54:32 and make the world less of a zero-sum game over time, right? Which it’s not, but, you know, in a resource-constrained environment, people perceive it to be, right?
1:54:48 And so my aspirational hope is that the values of what makes us uniquely human, empathy, kindness, all that, surface more.
1:54:56 Okay, it multiplies the compassion, but also the curiosity, just the, the banter, the debates we’ll have about the meaning of it all.
1:55:16 And I also think in the scientific domains, with all the incredible work that DeepMind is doing, we’ll still continue to play, to explore scientific questions, mathematical questions, physics questions, even as AI gets better and better at helping us solve some of the questions.
1:55:19 Sometimes the question itself is a really difficult thing.
1:55:29 Both the right new questions to ask and the answers to them, and the self-discovery process, which it will drive, I think.
1:55:35 You know, our early work with both the AI co-scientist and AlphaEvolve is just super exciting to see.
1:55:39 What gives you hope about the future of human civilization?
1:55:56 Look, I’ve always been an optimist, and, you know, if you take the journey of human civilization, we’ve relentlessly made the world better, right?
1:56:01 In many ways, at any given moment in time, there are big issues to work through.
1:56:08 It may not look that way, but, you know, I always ask myself the question: would you rather have been born now or at any other time in the past?
1:56:16 I most often, not most often, almost always would rather be born now, right?
1:56:22 You know, and so that’s the extraordinary thing the human civilization has accomplished, right?
1:56:26 And, like, you know, we’ve kind of constantly made the world a better place.
1:56:35 And so something tells me, as humanity, we always rise collectively to drive that frontier forward.
1:56:37 So I expect it to be no different in the future.
1:56:39 I agree with you totally.
1:56:41 I’m truly grateful to be alive in this moment.
1:56:44 And I’m also really excited for the future.
1:56:50 And the work you and the incredible teams here are doing is one of the big reasons I’m excited for the future.
1:56:51 So thank you.
1:56:54 Thank you for all the cool products you’ve built.
1:56:56 And please don’t kill Google Voice.
1:56:58 Thank you, Sundar.
1:56:59 We won’t, yeah.
1:57:01 Thank you for talking today.
1:57:01 This was incredible.
1:57:02 Thank you.
1:57:02 Real pleasure.
1:57:03 I appreciate it.
1:57:06 Thanks for listening to this conversation with Sundar Pichai.
1:57:14 To support this podcast, please check out our sponsors in the description or at lexfridman.com slash sponsors.
1:57:20 Shortly before this conversation, I got a chance to get a couple of demos that frankly blew my mind.
1:57:23 The engineering was really impressive.
1:57:25 The first demo was Google Beam.
1:57:30 And the second demo was the XR glasses.
1:57:33 And some of it was caught on video.
1:57:37 So I thought I would include here some of those video clips.
1:57:40 Hey Lex, my name is Andrew.
1:57:43 I lead the Google Beam team, and we’re excited to show you a demo.
1:57:45 We’re going to show you, I think, a glimpse of something new.
1:57:46 So that’s the idea.
1:57:47 A way to connect.
1:57:50 A way to feel present from anywhere with anybody you care about.
1:57:52 Here’s Google Beam.
1:57:55 This is a development platform that we’ve built.
1:57:57 So there’s a prototype here of Google Beam.
1:57:59 There’s one right down the hallway.
1:58:01 I’m going to go down and turn that on in a second.
1:58:02 We’re going to experience it together.
1:58:04 We’ll be back in the same room.
1:58:04 Wonderful.
1:58:08 Whoa, okay.
1:58:09 Here we are.
1:58:10 All right.
1:58:11 This is real already.
1:58:12 Wow.
1:58:12 This is real.
1:58:13 Wow.
1:58:14 Good to see you.
1:58:14 This is Google Beam.
1:58:18 We’re trying to make it feel like you and I could be anywhere in the world.
1:58:21 But when these magic windows open, we’re back together.
1:58:24 I see you exactly the same way you see me.
1:58:27 It’s almost like we’re sitting at the table, sharing a table together.
1:58:32 I could learn from you, talk to you, share a meal with you, get to know you.
1:58:33 So you could feel the depth of this.
1:58:34 Yeah.
1:58:34 Great to meet you.
1:58:35 Wow.
1:58:41 So for people who probably can’t even imagine what this looks like, there’s a 3D version.
1:58:41 It looks real.
1:58:43 You look real.
1:58:44 It looks real to me.
1:58:44 It looks real to you.
1:58:46 It looks like you’re coming out of the screen.
1:58:51 We quickly believe, once we’re in Beam, that we’re just together.
1:58:55 You settle into it, you’re naturally attuned to seeing the world like this,
1:58:57 and you just get used to seeing people this way.
1:59:00 But literally from anywhere in the world with these magic screens.
1:59:00 This is incredible.
1:59:02 It’s a neat technology.
1:59:02 Wow.
1:59:07 So I saw demos of this, but they don’t come close to the experience of this.
1:59:10 I think one of the top YouTube comments on one of the demos I saw was like,
1:59:11 why would I want high definition?
1:59:15 I’m trying to turn off the camera, but this actually is,
1:59:19 this feels like the camera has been turned off and we’re just in the same room together.
1:59:20 This is really compelling.
1:59:22 That’s right.
1:59:26 I know it’s kind of late in the day too, so I brought you a snack just in case you’re a little bit hungry.
1:59:30 So can you push it farther and it just becomes…
1:59:31 Let’s try to float it between rooms.
1:59:33 You know, it kind of fades it from my room into your room.
1:59:36 And then you see my hand, the depth of my hand.
1:59:36 Of course, yeah.
1:59:37 Of course, yeah.
1:59:39 It feels like you’ve tried this.
1:59:41 Try giving me a high five and there’s almost a sensation of feeling in touch.
1:59:42 Yeah.
1:59:42 You almost feel.
1:59:43 Yes.
1:59:46 Because you’re so attuned to, you know, that should be a high five,
1:59:48 feeling like you could connect with somebody that way.
1:59:50 So it’s kind of a magical experience.
1:59:51 Oh, this is really nice.
1:59:52 How much does it cost?
1:59:55 We’ve got a lot of companies testing it.
1:59:59 We just announced that we’re going to be bringing it to offices soon as a set of products.
2:00:01 We’ve got some companies helping to build these screens.
2:00:04 But eventually, I think this will be in almost every screen.
2:00:04 There’s nothing.
2:00:06 I’m not wearing anything.
2:00:08 Well, I’m wearing a suit and tie, to clarify.
2:00:10 I am wearing clothes.
2:00:11 This is not a CGI.
2:00:14 But outside of that, cool.
2:00:15 And the audio is really good.
2:00:17 And you can see me in the same three-dimensional way.
2:00:18 Yeah.
2:00:19 The audio is spatialized.
2:00:22 So if I’m talking from here, of course, it sounds like I’m talking from here.
2:00:24 You know, if I move to the other side of the room.
2:00:25 Wow.
2:00:29 So these little subtle cues, these really matter to bring people together.
2:00:32 All the nonverbals, all the emotion, the things that are lost today.
2:00:33 Here it is.
2:00:35 We put it back into the system.
2:00:35 You pulled this off.
2:00:37 Holy shit.
2:00:38 They pulled it off.
2:00:42 And integrated into this, I saw the translation also.
2:00:44 Yeah, we’ve got a bunch of things.
2:00:45 Let me show you a couple kind of cool things.
2:00:47 Let’s do a little bit of work together.
2:00:50 Maybe we could critique one of your latest.
2:00:55 So, you know, you and I work together.
2:00:56 So, of course, we’re in the same room.
2:00:59 But with this superpower, I can bring other things in here with me.
2:01:01 And it’s nice.
2:01:03 You know, it’s like we could sit together.
2:01:04 We could watch something.
2:01:05 We could work.
2:01:08 We’ve shared meals as a team together in this system.
2:01:12 But once you do the presence aspect of this, you want to bring some other superpowers to it.
2:01:15 And so you could review code together.
2:01:16 Yeah, yeah, exactly.
2:01:18 I’ve got some slides I’m working on.
2:01:20 You know, maybe you could help me with this.
2:01:21 Keep your eyes on me for a second.
2:01:23 I’ll slide back into the center.
2:01:24 I didn’t really move.
2:01:26 But the system just kind of puts us in the right spot.
2:01:27 And knows where we need to be.
2:01:28 Oh, so you just turn to your laptop.
2:01:30 The system moves you.
2:01:32 And then it does the overlay automatically.
2:01:36 It kind of morphs the room to put things in the spot that they need to be in.
2:01:37 Everything has a place in the room.
2:01:41 Everything has a sense of presence or spatial consistency.
2:01:44 And that kind of makes it feel like we’re together with us and other things.
2:01:46 I should also say you’re not just three-dimensional.
2:01:50 It feels like you’re leaning like out of the screen.
2:01:53 You’re like coming out of the screen.
2:01:56 You’re not just in that world three-dimensional.
2:01:56 Yeah, exactly.
2:01:58 Holy crap.
2:02:00 Move back to center.
2:02:00 Okay, okay, okay, okay.
2:02:02 Let me tell you how this works.
2:02:04 You probably already have the premise of it.
2:02:05 But there’s two things.
2:02:07 Two really hard things that we put together.
2:02:10 One is an AI video model.
2:02:11 So there’s a set of cameras.
2:02:13 You asked kind of about those earlier.
2:02:18 There’s six color cameras, just like webcams that we have today, taking video streams and
2:02:22 feeding them into our AI model and turning that into a 3D video of you and I.
2:02:24 It’s effectively a light field.
2:02:27 So it’s kind of an interactive 3D video that you can see from any perspective.
2:02:31 That’s transmitted over to the second thing, and that’s a light field display.
2:02:33 And it’s happening bi-directionally.
2:02:35 I see you and you see me both in our light field displays.
2:02:42 These are effectively flat televisions or flat displays, but they have the sense of dimensionality,
2:02:45 depth, size is correct.
2:02:47 You can see shadows and lighting are correct.
2:02:50 And everything’s correct from your vantage point.
2:02:54 So if you move around ever so slightly, and I hold still, you see a different perspective
2:02:54 here.
2:02:57 You see kind of things that were occluded become revealed.
2:02:59 You see shadows that move in the way they should move.
2:03:04 All of that’s computed and generated using our AI video model for you.
2:03:06 It’s based on your eye position.
2:03:10 Where does the right scene need to be placed in this light field display for you just to
2:03:11 feel present?
2:03:12 It’s real time, no latency.
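[An aside for technical readers: the pipeline Andrew describes, several fixed color cameras feeding an AI model that renders the scene for the viewer's tracked eye position, can be caricatured with a toy sketch. This is purely illustrative Python; the naive inverse-distance blending and every name in it are invented here, and bear no relation to Beam's actual learned light-field model.]

```python
import math

# Toy caricature of viewpoint-dependent rendering: blend fixed camera
# streams by how close each camera sits to the viewer's tracked eye
# position on the display plane. The real system synthesizes a light
# field with a learned AI video model; this is only a sketch.

def blend_weights(eye_x, eye_y, cameras):
    """Normalized inverse-distance weight for each camera."""
    inv = [1.0 / (math.hypot(eye_x - cx, eye_y - cy) + 1e-6)
           for cx, cy in cameras]
    total = sum(inv)
    return [w / total for w in inv]

# Six cameras around the display, as in the demo (positions invented).
cams = [(-0.4, 0.3), (0.0, 0.3), (0.4, 0.3),
        (-0.4, -0.3), (0.0, -0.3), (0.4, -0.3)]

weights = blend_weights(0.35, 0.25, cams)   # viewer near the top right
assert abs(sum(weights) - 1.0) < 1e-9       # weights form a valid blend
assert weights.index(max(weights)) == 2     # top-right camera dominates
```

The point of the sketch is only the shape of the problem: the display must answer, for every frame, "what should this scene look like from exactly where this viewer's eyes are right now?"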
2:03:13 I’m not seeing latency.
2:03:14 You weren’t freezing up at all.
2:03:16 No, no, I hope not.
2:03:18 I think it’s you and I together, real time.
2:03:19 That’s what you need for real communication.
2:03:22 And at a quality level, this is awesome.
2:03:24 Realistic.
2:03:25 Is it possible to do three people?
2:03:27 Like, is that going to move that way also?
2:03:28 Yeah.
2:03:29 Let me kind of show you.
2:03:33 So if she enters the room with us, you can see her, you can see me.
2:03:37 And if we had more people, you eventually lose a sense of presence.
2:03:38 You kind of shrink people down.
2:03:40 You lose a sense of scale.
2:03:43 So think of it as the window fits a certain number of people.
2:03:46 If you want to fit a big group of people, you want, you know, the boardroom or the big
2:03:48 room, you need like a much wider window.
2:03:53 If you want to see, you know, just grandma and the kids, you can do smaller windows.
2:03:57 So everybody has a seat at the table or everybody has a sense of where they belong.
2:03:59 And there’s kind of the sense of presence that’s obeyed.
2:04:02 If you have too many people, you kind of go back to like 2D metaphors that we’re used to.
2:04:04 People in tiles placed anywhere.
2:04:06 For the image I’m seeing, did you have to get scanned?
2:04:08 I mean, I see you without being scanned.
2:04:10 So it’s just so much easier if you don’t have to wear anything.
2:04:11 You don’t have to pre-scan.
2:04:15 You just do it the way it’s supposed to happen without anybody having to learn anything or
2:04:16 put anything on.
2:04:20 I thought you had to solve the scanning problem, but here you don’t.
2:04:21 It’s just cameras.
2:04:22 It’s just vision.
2:04:22 That’s right.
2:04:24 It’s video.
2:04:29 Yeah, we’re not trying to kind of make an approximation of you because everything you do every day matters.
2:04:31 You know, I cut myself shaving.
2:04:32 I put on a pin.
2:04:36 All the little kind of, you know, aspects of you, those just happen.
2:04:40 We don’t have the time to scan or kind of capture those or dress avatars.
2:04:42 We kind of appear as we appear.
2:04:45 And so all that’s transmitted truthfully as it’s happening.
2:04:48 Chris, how are you doing?
2:04:48 Good to meet you.
2:04:49 Nice to meet you.
2:04:52 So as Max mentioned, we’ve got the eye glasses here.
2:04:56 We start with the foundation of great glasses, something stylish, lightweight, wearable.
2:04:59 And then we say: how can we build great technology and experiences on top of that?
2:05:05 One of the core tenets of the Android XR platform is this idea of a multimodal conversational device that can see what you see and hear what you hear.
2:05:06 So you’ve got a camera.
2:05:07 So you’ve got a camera.
2:05:10 You’ve got speakers, multiple microphones for speaker isolation.
2:05:13 I’ll give you a chance to try these yourself.
2:05:14 Yeah.
2:05:15 Sorry, I woke it up there.
2:05:16 Whoa.
2:05:17 Yeah.
2:05:22 So the first thing you see is a super simple, straightforward home screen.
2:05:22 Yes.
2:05:26 So you probably see the time, the weather, calendar appointments there.
2:05:30 This is designed to be sort of your one-stop shop for quick glanceable information throughout
2:05:30 the day.
2:05:34 You want to do something that’s easy to, you know, get what you need and then go back to
2:05:34 what you’re doing.
2:05:38 So you can imagine, you know, turning on the display, getting that quick glance and then
2:05:40 continuing on your way.
2:05:41 You can be fully conversational at this point.
2:05:44 You can ask questions, for example, about the paintings.
2:05:44 You can interrupt.
2:05:46 You can ask follow-up questions.
2:05:49 And as I mentioned before, if you want to at any time pause, you just tap there on the
2:05:49 right.
2:05:52 Gemini, how much does this painting cost?
2:05:57 The painting is called The Pont Louis Philippe, 1875, by Armand Guillaumin.
2:06:02 It appears to be a print and it would be impossible to know its cost without finding the seller.
2:06:04 Okay, so this is fake.
2:06:04 I understand.
2:06:07 Why do humans pay so much money for paintings?
2:06:10 People pay a lot for paintings because of their artistic merit,
2:06:14 the artist’s renown, their rarity, and the appreciation you gain from them.
2:06:18 So, a few more things that we want to show you, just for the sake of time.
2:06:21 Go ahead and long-press on the side again to put Gemini to sleep there.
2:06:22 There you go.
2:06:25 Did you catch Google I.O. last week by any chance?
2:06:25 Yes.
2:06:29 So you might have seen on stage the Google Maps experience very briefly.
2:06:32 I wanted to give you a chance to get a sense of what that feels like today.
2:06:34 You can imagine you’re walking down the street.
2:06:38 If you look up like you’re walking straight ahead, you get quick turn-by-turn directions.
2:06:41 So you have a sense of what the next turn is like.
2:06:42 Nice.
2:06:43 Keeping your phone in your pocket.
2:06:44 Oh, that’s so intuitive.
2:06:48 Sometimes you need that quick sense of which way is the right way.
2:06:48 Sometimes.
2:06:48 Yeah.
2:06:51 So let’s say you’re coming out of a subway, getting out of a cab.
2:06:53 You can just glance down at your feet.
2:06:55 We have it set up to translate from Russian to English.
2:06:59 I think I get to wear the glasses and you speak to me if you don’t mind.
2:07:01 I can speak Russian.
2:07:04 Hey friend, how are you doing?
2:07:07 I’m doing well.
2:07:08 How are you doing?
2:07:12 I’m tempted to swear, tempted to say inappropriate things.
2:07:17 Do you hear my voice immediately or do you need to wait?
2:07:21 I see it transcribed in real time.
2:07:26 And so obviously, you know, based on the different languages and the sequence of subjects and verbs,
2:07:30 there’s a slight delay sometimes, but it’s really just like subtitles for the real world.
2:07:30 Cool.
2:07:31 Thank you for this.
2:07:31 All right.
2:07:32 Back to me.
2:07:39 Hopefully watching videos of me having my mind blown like the apes in 2001: A Space Odyssey playing
2:07:42 with the monolith was somewhat interesting.
2:07:44 Like I said, I was very impressed.
2:07:50 And now I thought if it’s okay, I could make a few additional comments about the episode and just in general.
2:07:56 In this conversation with Sundar Pichai, I discussed the concept of the Neolithic package,
2:08:02 which is the set of innovations that came along with the first agricultural revolution about 12,000 years ago,
2:08:08 which included the formation of social hierarchies, the early primitive forms of government,
2:08:14 labor specialization, domestication of plants and animals, early forms of trade,
2:08:23 large-scale cooperation of humans like that required to build, yes, the pyramids and temples like Göbekli Tepe.
2:08:29 I think this may be the right way to actually talk about the inventions that changed human history,
2:08:38 not just as a single invention, but as a kind of network of innovations and transformations that came along with it.
2:08:43 And the productivity multiplier framework that I mentioned in the episode,
2:08:50 I think is a nice way to try to concretize the impact of each of these inventions under consideration.
2:08:56 And we have to remember that each node in the network of the sort of fast follow-on inventions
2:08:59 is in itself a productivity multiplier.
2:09:02 Some are additive, some are multiplicative.
2:09:09 So in some sense, the size of the network in the package is the thing that matters
2:09:17 when you’re trying to rank the impact of inventions on human history.
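The additive-versus-multiplicative idea above can be sketched in a few lines of Python. This is purely illustrative: the numbers and the `package_productivity` helper are made up, just to show how compounding (multiplicative) nodes in an invention network dwarf fixed (additive) bumps as the package grows.

```python
# Illustrative sketch with hypothetical numbers: aggregate the productivity
# effect of a "package" of inventions on a baseline of 1.0, where each
# follow-on innovation is either an additive bump or a multiplicative factor.

def package_productivity(additive, multiplicative):
    """Combine multiplicative factors and additive bumps on a baseline of 1.0."""
    total = 1.0
    for factor in multiplicative:
        total *= factor          # compounding nodes multiply the baseline
    total += sum(additive)       # fixed bumps add on top
    return total

# Hypothetical Neolithic-package nodes: domestication (x2.0) and labor
# specialization (x1.5) compound, while, say, early trade adds a fixed 0.5.
print(package_productivity(additive=[0.5], multiplicative=[2.0, 1.5]))  # 3.5
```

The point of the toy model: adding one more multiplicative node scales everything that came before it, which is why the size of the network in the package dominates any single invention's standalone effect.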
2:09:20 The easy picks for the period of biggest transformation,
2:09:27 at least in sort of modern-day discourse, are the Industrial Revolution,
2:09:31 or even in the 20th century, the computer or the internet.
2:09:37 I think that’s because it’s easiest for modern-day humans to intuit
2:09:42 the exponential impact of those technologies.
2:09:44 But recently, I suppose this changes week to week,
2:09:48 but I have been doing a lot of reading on ancient human history.
2:09:54 So recently, my pick for the number one invention would have to be the first agricultural revolution,
2:10:00 the Neolithic package that led to the formation of human civilizations.
2:10:05 That’s what enabled the scaling of the collective intelligence machine of humanity.
2:10:11 And for us to become the early bootloader for the next 10,000 years of technological progress,
2:10:16 which, yes, includes AI, and the tech that builds on top of AI.
2:10:23 And of course, it could be argued that the word invention doesn’t properly apply to the agricultural revolution.
2:10:31 I think, actually, Yuval Noah Harari argues that it wasn’t the humans who were the inventors,
2:10:36 but a handful of plant species, namely wheat, rice, and potatoes.
2:10:43 That’s a fair perspective, but I’m having fun, like I said, with this discussion.
2:10:47 Here, I just think of the entire Earth as a system that continuously transforms.
2:10:51 And I’m using the term invention in that context,
2:11:00 asking the question of when was the biggest leap on the log-scale plot of human progress.
2:11:04 Will AI, AGI, ASI eventually take the number one spot in this ranking?
2:11:08 I think it has a very good chance to do so,
2:11:14 due, again, to the size of the network of inventions that will come along with it.
2:11:22 We discussed in this podcast the kinds of things that would be included in the so-called AI package,
2:11:25 but I think there are a lot more possibilities,
2:11:29 including ones discussed in many previous podcasts,
2:11:36 like with Dario Amodei talking on the biological innovation side, the science progress side.
2:11:41 In this podcast, I think we talk about something that I’m particularly excited about in the near term,
2:11:49 which is unlocking the cognitive capacity of the entire landscape of brains that is the human species,
2:11:55 making it more accessible through education and through machine translation,
2:12:04 making information, knowledge, and the rapid learning and innovation process accessible to more humans,
2:12:07 to the entire eight billion, if you will.
2:12:15 So I do think language or machine translation applied to all the different methods that we use on the internet
2:12:18 to discover knowledge is a big unlock.
2:12:22 But there are a lot of other stuff in the so-called AI package,
2:12:25 like, as discussed with Dario, curing all major human diseases.
2:12:30 He really focuses on that in the Machines of Loving Grace essay.
2:12:37 I think there will be huge leaps in productivity for human programmers and for semi-autonomous programming.
2:12:41 So humans in the loop, but most of the programming is done by AI agents.
2:12:52 And then moving that towards a superhuman AI researcher that does the research that itself develops and programs the AI system.
2:12:55 I think there will be huge transformative effects from autonomous vehicles.
2:13:02 These are the things that we maybe don’t immediately understand or we understand from an economics perspective,
2:13:14 but there will be a point when AI systems are able to interpret, understand, interact with the human world to a sufficient degree
2:13:20 to where many of the manually controlled human in the loop systems we rely on become fully autonomous.
2:13:27 And I think mobility is such a big part of human civilization that there will be effects
2:13:32 that are not just economic, but social, cultural, and so on.
2:13:36 And there’s a lot more things I could talk about for a long time.
2:13:43 So obviously the integration, utilization of AI in the creation of art, film, music.
2:13:50 I think the digitization and automation of basic functions of government,
2:13:56 and then integrating AI into that process, would decrease corruption and costs
2:14:03 and increase transparency and efficiency. And I think we, as individual humans,
2:14:07 will continue to transition further and further into cyborgs.
2:14:14 So there’s already an AI in the loop of the human condition,
2:14:20 and that will become increasingly so as the AI becomes more powerful.
2:14:24 The thing I’m obviously really excited about is major breakthroughs in science,
2:14:29 and not just on the medical front, but on physics, fundamental physics,
2:14:32 which would then lead to energy breakthroughs,
2:14:37 increasing the chance that we actually become a Kardashev Type 1 civilization,
2:14:42 and in so doing enabling interstellar exploration
2:14:44 and colonization of space.
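As an aside, the Kardashev scale has a continuous version due to Carl Sagan: K = (log10(P) − 6) / 10, where P is the civilization's power use in watts, so Type 1 corresponds to roughly 10^16 W. A minimal sketch (the ~2×10^13 W figure for present-day humanity is an approximate, commonly cited estimate):

```python
import math

def kardashev_index(power_watts):
    """Sagan's continuous Kardashev index: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

print(kardashev_index(1e16))            # 1.0 -- Type 1 threshold
print(round(kardashev_index(2e13), 2))  # 0.73 -- roughly humanity today
```

By this formula, reaching Type 1 means growing our total power use by about three orders of magnitude, which is why fundamental physics and energy breakthroughs are the gating factor.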
2:14:48 I think also, in the near term,
2:14:57 much like the Industrial Revolution led to rapid specialization of skills,
2:15:01 of expertise, there might be a great sort of de-specialization.
2:15:07 So as AI systems become superhuman experts of particular fields,
2:15:14 there might be greater and greater value to being the integrator of AIs,
2:15:18 for humans to be sort of generalists.
2:15:24 And so the great value of the human mind will come from the generalists, not the specialists.
2:15:28 There’s a real possibility that that changes the way we are in the world,
2:15:31 that we want to know a little bit of a lot of things,
2:15:33 and move about the world in that way.
2:15:36 That could have, when passing a certain threshold,
2:15:40 a complete shift in who we are as a collective intelligence,
2:15:43 as a human species.
2:15:47 Also, as an aside, when thinking about the invention that was the greatest in human history,
2:15:49 again, for a bit of fun,
2:15:52 we have to remember that all of them build on top of each other,
2:15:56 and so we need to look at the delta, the step change,
2:16:01 on the, I would say, impossible-to-perfectly-measure plot of exponential human progress.
2:16:05 Really, we can go back to the entire history of life on Earth,
2:16:10 and a previous podcast guest, Nick Lane, does a great job of this in his book,
2:16:17 Life Ascending, listing these 10 major inventions throughout the evolution of life on Earth,
2:16:25 like DNA, photosynthesis, complex cells, sex, movement, sight, all those kinds of things.
2:16:27 I forget the full list that’s on there,
2:16:33 but I think that’s so far from the human experience that my intuition about,
2:16:38 let’s say, productivity multipliers of those particular inventions completely breaks down,
2:16:45 and a different framework is needed to understand the impact of these inventions of evolution.
2:16:50 The origin of life on Earth, or even the Big Bang itself, of course,
2:16:55 is the OG invention that set the stage for all the rest of it.
2:17:00 And there are probably many more turtles under that,
2:17:02 which are yet to be discovered.
2:17:07 So anyway, we live in interesting times, fellow humans.
2:17:14 I do believe the set of positive trajectories for humanity outnumber the set of negative trajectories,
2:17:15 but not by much.
2:17:18 So let’s not mess this up.
2:17:24 And now, let me leave you with some words from the French philosopher Jean de La Bruyère.
2:17:28 Out of difficulties, grow miracles.
2:17:32 Thank you for listening, and hope to see you next time.
Sundar Pichai is CEO of Google and Alphabet.
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep471-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.
Transcript:
https://lexfridman.com/sundar-pichai-transcript
CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact
EPISODE LINKS:
Sundar’s X: https://x.com/sundarpichai
Sundar’s Instagram: https://instagram.com/sundarpichai
Sundar’s Blog: https://blog.google/authors/sundar-pichai/
Google Gemini: https://gemini.google.com/
Google’s YouTube Channel: https://www.youtube.com/@Google
SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Tax Network USA: Full-service tax firm.
Go to https://tnusa.com/lex
BetterHelp: Online therapy and counseling.
Go to https://betterhelp.com/lex
LMNT: Zero-sugar electrolyte drink mix.
Go to https://drinkLMNT.com/lex
Shopify: Sell stuff online.
Go to https://shopify.com/lex
AG1: All-in-one daily nutrition drink.
Go to https://drinkag1.com/lex
OUTLINE:
(00:00) – Introduction
(00:07) – Sponsors, Comments, and Reflections
(07:55) – Growing up in India
(14:04) – Advice for young people
(15:46) – Styles of leadership
(20:07) – Impact of AI in human history
(32:17) – Veo 3 and future of video
(40:01) – Scaling laws
(43:46) – AGI and ASI
(50:11) – P(doom)
(57:02) – Toughest leadership decisions
(1:08:09) – AI mode vs Google Search
(1:21:00) – Google Chrome
(1:36:30) – Programming
(1:43:14) – Android
(1:48:27) – Questions for AGI
(1:53:42) – Future of humanity
(1:57:04) – Demo: Google Beam
(2:04:46) – Demo: Google XR Glasses
(2:07:31) – Biggest invention in human history
PODCAST LINKS:
– Podcast Website: https://lexfridman.com/podcast
– Apple Podcasts: https://apple.co/2lwqZIr
– Spotify: https://spoti.fi/2nEwCF8
– RSS: https://lexfridman.com/feed/podcast/
– Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
– Clips Channel: https://www.youtube.com/lexclips