AI transcript
0:00:09 AI is the only market where the more I learn, the less I know.
0:00:11 And in every other market, the more I learn, the more I know.
0:00:13 The more I’m able to predict things, and I can’t predict anything anymore.
0:00:15 What scares you about the future?
0:00:18 That’s a big question.
0:00:22 I think in a couple of years, we’ll start thinking about it as we’re selling units of cognition.
0:00:27 AI is dramatically underhyped because most enterprises have not done anything in it.
0:00:31 And that’s where all the money is, all the changes, all the impact is, all the jobs, everything.
0:00:36 The people that I know who have been very successful or driven solely by money end up miserable.
0:00:38 Because they have money, and then what?
0:00:41 It’s just, what do you do? What fulfills you?
0:00:44 What are the most common self-inflicted wounds that kill companies?
0:00:46 I think that…
0:00:49 What do you think is the next wave?
0:00:53 I think it’s going to be an ongoing wave of…
0:00:55 And that’s coming, right? And that hasn’t even hit yet.
0:01:17 Welcome to the Knowledge Project Podcast.
0:01:20 I’m your host, Shane Parrish.
0:01:26 In a world where knowledge is power, this podcast is your toolkit for mastering the best of what other people have already figured out.
0:01:34 My guest today is Elad Gil, who has had a front row seat to some of the most important technology companies started in the past two decades.
0:01:40 He invested early in Stripe, Airbnb, Notion, Coinbase, Anduril, and so many others.
0:01:45 He’s also authored an incredible book on scaling startups called High Growth Handbook.
0:01:49 In my opinion, he’s one of the most underrated figures in Silicon Valley.
0:01:58 In this episode, we explore how he thinks about startups, talent, decision-making, AI, and most importantly, the future of all of these things.
0:02:08 We talk about the importance of clusters, why most companies die from self-inflicted wounds, and what it really means to scale a company, and importantly, what it means to scale yourself.
0:02:11 It’s time to listen and learn.
0:02:22 You’ve had a front row seat at some of the biggest, I would say, surprises in a way, like Stripe, Coinbase, Airbnb, when they were just ideas.
0:02:25 What was the moment where you recognized these were going to be outliers?
0:02:28 So all three of those are very different examples, to your point.
0:02:31 I invested in Airbnb when it was probably around eight people.
0:02:33 Stripe was probably around the same size.
0:02:37 And then Coinbase I only got involved with much later, when it was a billion-dollar-plus company.
0:02:40 And even then, I thought there was enormous upside on it, which luckily has turned out to be the case.
0:02:46 I think really, the way I think about investing in general is that there’s two dimensions that really matter.
0:02:52 The first dimension is what people call product-market fit, or is there a strong demand for whatever it is you’re building?
0:02:55 And then, secondarily, I look at the team.
0:02:58 And I think most early-stage people flip it.
0:03:00 They look at the team first, and how good is a founder?
0:03:02 And obviously, I’ve started two companies myself.
0:03:06 I think the founder side is incredibly important, and the talent side is incredibly important.
0:03:13 But I’ve seen amazing people get crushed by terrible markets, and I’ve seen reasonably mediocre teams do extremely well in what are very good markets.
0:03:16 And so, in general, I first ask, do I think there’s a real need here?
0:03:17 How is it differentiated?
0:03:18 What’s different about it?
0:03:20 And then I dig into, like, are these people exceptional?
0:03:22 How will they grow over time?
0:03:24 What are some of the characteristics of how they do things?
0:03:35 Let’s get to people second, but how do you determine product-market fit in a world where a lot of people are buying product-market fit almost through brute force or giving away product?
0:03:40 Yeah, there’s a lot of signals you can look at, and I think it kind of varies by type of business.
0:03:42 Is it a consumer business versus enterprise versus whatever?
0:03:46 For things like consumer businesses, you’re just looking at organic growth rate and retention.
0:03:47 Are people using it a lot?
0:03:48 Are they living in it every day?
0:03:49 That sort of thing.
0:03:51 That would be early Facebook, right?
0:03:52 The usage metrics were insane.
0:03:58 And then for certain B2B products, it could be rate of growth and adoption.
0:04:02 It could be metrics people call, like, NDR, net dollar retention, or other things like that.
0:04:09 Honestly, if you’re investing before the thing even exists in the market, then you have to really dig into how much do I believe there’s a need here, right?
0:04:10 Or how much is there a customer need?
0:04:16 So I invested in Rippling and other related companies before there’s anything built, right?
0:04:18 Under the premise that this is something that a lot of people want.
0:04:21 And Notion is the same thing.
0:04:24 Actually, Notion was a rare example where I did it as a person-driven investment.
0:04:26 I met Ivan, who’s a CEO over there.
0:04:33 And everything about him was so aesthetically cohesive in a very odd way.
0:04:38 The way he dressed, his hairstyle, the color scheme of his clothes, the color scheme of the app and the pitch deck.
0:04:41 The only other person I’ve seen that with is Jack Dorsey, who started Square and Twitter.
0:04:46 And there was this odd, almost pure embodiment of aesthetic.
0:04:50 And I just thought it was so intriguing and so cool.
0:04:53 And I’ve only seen two people like that before that I had to invest.
0:04:56 And it was just this immense consistency.
0:04:57 It was very weird.
0:05:01 And you see that, like, you go to his house and it’s like, it feels like him.
0:05:03 You know, everything, the company feels like him.
0:05:04 Everything feels like him.
0:05:05 It’s fascinating.
0:05:06 He’s done an amazing job with it.
0:05:09 It almost stands out to the point where you think it’s manufactured.
0:05:11 I think it’s genuine.
0:05:12 I think it’s almost the opposite.
0:05:14 You feel the purity of it.
0:05:16 You’re like, oh my gosh, there’s a unique aesthetic element here.
0:05:23 And that probably reflects some unique way of viewing the world or thinking about products or thinking about people and their usage.
0:05:25 Let’s come back to outliers.
0:05:27 So product market fit, outliers.
0:05:30 How do you identify an outlier team?
0:05:34 Yeah, you know, I think it really depends on the discipline or the area.
0:05:37 For tech, I think it’s very different than if you’re looking in other areas.
0:05:43 For an early tech team, I almost use like this Apple framework of Jobs, Wozniak, and Cook, right?
0:05:46 Steve Jobs and Steve Wozniak started Apple together.
0:05:51 Steve Jobs was known as somebody who really was great at setting the vision and direction, but also was just an amazing salesperson.
0:05:54 And selling means selling employees to join you.
0:05:55 It means raising money.
0:05:57 It means selling your first customers.
0:05:58 It’s negotiating your supply chain.
0:06:01 Those are all aspects of sales in some sense or negotiation.
0:06:06 And so you need at least one person who can do that unless you’re just doing a consumer product that you threw out there, right?
0:06:08 And it just grows and then people join you because it’s growing.
0:06:13 Then you need somebody who can build stuff and build it in a uniquely good way.
0:06:15 And that was Wozniak, right?
0:06:21 The way that he was able to hack things together, drop chips from the original design of Apple devices, etc., was just considered legendary.
0:06:25 And then as the thing starts working, you eventually need somebody like Tim Cook who can help scale the company.
0:06:30 And so you could argue that was Sheryl Sandberg in the early days of Facebook who eventually came on as a hire and helped scale it.
0:06:35 And Zuck was really the sort of mixture of the product visionary, the salesperson, etc.
0:06:42 Why did all these people concentrate in San Francisco almost or California?
0:06:47 How did that happen where you had Apple, you have Stripe, you have Coinbase, you have Facebook?
0:06:50 Walk me through that.
0:06:54 We were talking a little bit about this before we started recording about clusters of people.
0:07:07 Yeah, it’s really fascinating because if you look at almost every major movement throughout history, and that could be a literary movement, it could be an artistic movement, it could be a finance movement, economic schools of thought.
0:07:19 It’s almost always a group of young people aggregating in a specific city who all somehow find each other and all start collaborating and working together towards some common set of goals that reflect that.
0:07:26 So there was, you know, a famous literary school in the early 20th century in London.
0:07:35 That was, I think it was like Virginia Woolf and John Maynard Keynes and E.M. Forster and all these people all kind of aggregated and became friends and started supporting each other.
0:07:37 Or you look at the Italian Renaissance.
0:07:44 Similarly, in Florence, you had this aggregation of all these great talents, all roughly coincident in time with each other.
0:07:51 Or Fauvism or Italian Futurism or Impressionism, Paris in the, you know, late 1800s.
0:07:52 And so that repeatedly happens for everything.
0:07:55 And similarly, that’s happened for tech.
0:07:58 And even within tech, we’ve had these successive waves, right?
0:08:03 Really, the founding story of Silicon Valley goes back to the defense industry and then the semiconductor industry, right?
0:08:06 Defense was HP and other companies starting off in the 40s.
0:08:13 You then ended up with Shockley Semiconductor and Fairchild Semiconductor and the early semiconductor companies in the 50s and 60s.
0:08:24 And that kind of established Silicon Valley as a hub, and as things moved from microprocessors to computers to software, people just kept propagating across those waves from within the industry.
0:08:29 So one big thing is just you have a geographic cluster and you have that for every single industry.
0:08:32 You look at wineries and they’re clustered in a handful of places because of geography.
0:08:35 You look at the energy industry, it’s in a handful of cities.
0:08:38 Finance is in New York and Hong Kong and London.
0:08:48 So every single industry has clusters, Hollywood and Bollywood and, you know, Lagos in Nigeria, the main hubs for, you know, movie making in different regions.
0:08:51 So in Silicon Valley, obviously, we created this tech cluster.
0:09:00 But then even within the tech cluster, there are these small pockets of people that I mentioned earlier that somehow find each other and self-aggregate.
0:09:03 It’s funny, I was talking to Patrick Collison, the founder of Stripe, about this.
0:09:12 And he mentioned that when he was 18 and he showed up in Silicon Valley as a nobody, right, completely unknown, 18-year-old, nobody’s heard of him.
0:09:19 And during that six-month period that he was first here, he said he met all these people who are now giants of Silicon Valley.
0:09:25 And it was this weird self-aggregation of people kind of finding and meeting each other and talking about what each other’s working on.
0:09:28 Somehow this keeps happening.
0:09:29 And this happens through time.
0:09:33 And then right now in Silicon Valley, it’s happening in very specific areas.
0:09:33 It’s happening.
0:09:36 All the AI researchers all knew each other from before.
0:09:38 They were in the common set of labs.
0:09:39 They had common lineages.
0:09:43 All the best AI founders, which is different from the researchers, have their own cluster.
0:09:45 And all the SaaS people have their own cluster.
0:09:51 And so it’s this really interesting almost self-aggregation effect of talent finding each other and then helping each other over time.
0:09:54 And it’s just fascinating how that works.
0:09:57 How do you think about that in an era of remote work?
0:10:04 Remote work is generally not great for innovation unless you’re truly in an online collaborative environment.
0:10:11 And the funny thing is that when people talk about tech, they would always talk about how tech is the first thing that could go remote because you can write code from anywhere and you can contribute from anywhere.
0:10:13 But that’s true of every industry, right?
0:10:14 You look at Hollywood.
0:10:19 You could make a movie from anywhere, like you film it off-site anyhow or on-site in different places.
0:10:20 You could write a script from anywhere.
0:10:22 You could edit the musical score from anywhere.
0:10:23 You could edit the film from anywhere.
0:10:27 So why is everything clustered in Hollywood?
0:10:29 Nobody would ever tell you, oh, don’t go to Hollywood.
0:10:32 Go to Boise and, you know, you could work in the movie industry.
0:10:33 Or finance.
0:10:37 You could raise money from anywhere, come up with your trading strategy from anywhere.
0:10:39 Everything in finance is in a handful of locations.
0:10:41 And so tech is the same way.
0:10:43 And it’s because there’s that aggregation of people.
0:10:51 There’s the people helping each other, sharing ideas, trading things informally, learning new distribution methods that kind of spread, learning new AI techniques that spread.
0:10:54 There’s money around it that funds it specifically so it’s easier to raise money.
0:10:58 There’s people who’ve already done it before who can help you scale once the thing is working.
0:11:04 That’s the common complaint I hear in Europe from companies started there: we can’t find the executives who know how to scale what we’re doing.
0:11:05 Oh, interesting.
0:11:09 And so I do think there are these other sort of ancillary things that people talk about.
0:11:13 The service providers, the lawyers who know how to set up startups, right?
0:11:16 Or the accountants who know how to do tax and accounting for startups.
0:11:18 Those things sound trivial, but they cluster.
0:11:22 Most people think the key to a successful business is the product.
0:11:26 But often the real secret is what’s behind the product.
0:11:28 The systems that make selling seamless.
0:11:34 That’s why millions of businesses, from household names to independent creators, trust Shopify.
0:11:37 I’m not exaggerating about how much I love these guys.
0:11:41 I’m actually recording this ad in their office building right now.
0:11:44 Shopify powers the number one checkout on the planet.
0:11:49 It’s simple, it’s fast, and with ShopPay, it can boost conversion rates up to 50%.
0:11:51 I can check out in seconds.
0:11:53 No typing in details.
0:11:54 No friction.
0:11:59 It’s fast, secure, and helps businesses convert more sales.
0:12:02 That means fewer abandoned carts and more customers following through.
0:12:08 If you’re serious about growth, your commerce platform has to work everywhere your customers are.
0:12:13 Online, in-store, on social, and wherever attention lives.
0:12:17 The best businesses sell more, and they sell with Shopify.
0:12:21 Upgrade your business and get the same checkout I use.
0:12:26 Sign up for your $1 per month trial at shopify.com slash knowledge project.
0:12:43 I think a lot about systems, how to build them, optimize them, and make them more efficient.
0:12:46 But efficiency isn’t just about productivity.
0:12:47 It’s also about security.
0:12:52 You wouldn’t leave your front door unlocked, but most people leave their online activity
0:12:54 wide open for anyone to see.
0:12:59 Whether it’s advertisers tracking you, your internet provider throttling your speed, or
0:13:00 hackers looking for weak points.
0:13:02 That’s why I use NordVPN.
0:13:05 NordVPN protects everything I do online.
0:13:11 It encrypts my internet traffic so no one, not even my ISP, can see what I’m browsing, shopping
0:13:12 for, or working on.
0:13:17 And because it’s the fastest VPN in the world, I don’t have to trade security for speed.
0:13:23 Whether I’m researching, sending files, or streaming, there’s zero lag or buffering.
0:13:27 But one of my favorite features, the ability to switch my virtual location.
0:13:32 It means I can get better deals on flights, hotels, and subscriptions just by connecting
0:13:33 to a different country.
0:13:39 And when I’m traveling, I can access all my usual streaming services as if I were at home.
0:13:45 Plus, Threat Protection Pro blocks ads, malicious links before they become a problem, and Nord’s
0:13:49 dark web monitor alerts me if my credentials ever get leaked online.
0:13:54 It’s premium cybersecurity for the price of a cup of coffee per month.
0:13:55 Plus, it’s easy to use.
0:13:58 With one click, you’re connected and protected.
0:14:04 To get the best discount off your NordVPN plan, go to nordvpn.com slash knowledgeproject.
0:14:08 Our link will also give you four extra months on the two-year plan.
0:14:11 There’s no risk with Nord’s 30-day money-back guarantee.
0:14:14 The link is in the podcast episode description box.
0:14:19 A big part of Y Combinator is sort of like helping everybody with that stuff.
0:14:23 Yeah, Y Combinator is a great example of taking out-of-network people, at least that
0:14:26 was the initial part of the premise, not the full premise, right?
0:14:29 Like people like Sam Altman or others who were very early in YC came out of Stanford, which
0:14:30 was part of the main hub.
0:14:34 But a lot of other people came out of universities that just weren’t on the radar for people who
0:14:36 tended to back things in Silicon Valley.
0:14:39 And so, you know, the early Reddit founders went to East Coast universities.
0:14:45 The Airbnb founders, two of them were out of RISD, the Rhode Island School of Design.
0:14:53 And so, YC early on was very good at taking very talented people who weren’t part of the core networks in Silicon Valley and basically
0:14:55 inserting them into those networks and helping them succeed.
0:14:58 Why do you think they’re still relevant today?
0:15:00 Why is YC still relevant today?
0:15:03 I think they’ve just done a great job of building sort of brand and longevity.
0:15:06 Gary, who’s taken over, is fantastic.
0:15:07 And so I think he brings a lot of that “let’s go back to first principles and really implement YC the way that, you know, we think it can really succeed for the future” energy.
0:15:18 And I think they do a really good job of two things.
0:15:21 One is plugging people in, as mentioned, particularly if you’re a SaaS company, you want to have a bunch of
0:15:23 customers instantly, your batch mates will help you with that.
0:15:32 But also, it teaches people to ship fast and to kind of force finding customers.
0:15:37 And so because you’re in this batch structure and you’re meeting with your batch every week
0:15:40 and you hear what everybody else is doing, you feel peer pressure to do it.
0:15:44 But also, it kind of shapes how you think about the world, what’s important, what to work on.
0:15:47 And so I think it’s almost like a brainwashing program, right?
0:15:49 Beyond everything else they do, which is great.
0:15:52 It sets a timeline that you have to hit and it brainwashes you to think a certain way.
0:15:58 One of the things that I see, which I think is maybe relevant, maybe not, you tell me, is
0:16:04 I like how it brings people together who are probably misfits or outliers in their own environment
0:16:08 and then puts them in an environment where ambition is the norm.
0:16:10 It’s not the outlier to have ambition.
0:16:12 Where shipping is the norm.
0:16:15 It’s not the outlier to ship.
0:16:21 And it sort of normalizes these things that maybe cause success or lead to an increased
0:16:22 likelihood of success.
0:16:26 It’s actually a very interesting question of what proportion of founders these days are
0:16:29 actually people who normally wouldn’t fit in, right?
0:16:34 So the sort of founder archetype of before it was rebellious people or people who could never
0:16:35 work for anybody else or whatever.
0:16:40 And then as tech has grown dramatically in market cap and influence and everything else,
0:16:45 it’s inevitable that the type of people who want to come out here and do things has shifted.
0:16:48 And then the perception of risk in startups has dropped a lot.
0:16:51 And so I actually think the founder mix has shifted quite a bit.
0:16:54 Like there isn’t as much quirkiness in tech.
0:16:55 And during COVID, it was awful.
0:16:56 It was very unquirky.
0:17:00 Because at that point, you know, there was a zero interest rate environment.
0:17:02 Money was abundant everywhere.
0:17:07 And the nature of people who joined or who showed up shifted.
0:17:11 And so I think we had two or three years where the average founder just wasn’t that great,
0:17:12 right?
0:17:13 On a relative basis to history.
0:17:18 And then as the AI wave was happening, you know, I started getting involved with a lot of
0:17:21 the generative AI companies maybe three-ish years ago, maybe three and a half years ago.
0:17:25 So before ChatGPT came out and before Midjourney and all these things kind of took off.
0:17:30 And the people starting those companies were uniquely good.
0:17:32 And you felt the shift.
0:17:38 You went from these kind of plain vanilla, me too, almost LARPers, to these incredibly driven,
0:17:45 mission-oriented, hyper-smart, very technical people who wanted to do something really big.
0:17:47 And you felt it.
0:17:48 It was a dramatic shift.
0:17:52 And if you look at it, there’s basically been three or four waves of talent coming through
0:17:53 the AI ecosystem.
0:17:55 And I should say gen AI because we had this whole wave.
0:17:59 We had 10 years, 15 years of other types of deep learning, right?
0:18:03 We had recurrent neural networks and convolutional neural networks and GANs and all these things.
0:18:08 And that technology basis fundamentally has different capabilities than this new wave.
0:18:14 And so there’s this paper in 2017 that came out of Google introducing the Transformer architecture.
0:18:19 And that is the thing that spawned this whole wave of AI right now that we’re experiencing.
0:18:20 And so it’s a new technology basis.
0:18:24 We took a step function and we’re doing new stuff that you couldn’t do before on the old
0:18:24 technologies.
0:18:29 That whole wave led to this really interesting set of companies.
0:18:33 And the first people in that wave were the researchers because they were closest to it.
0:18:38 And they could see firsthand what was actually happening in the technology, in the market, how they
0:18:39 were using it.
0:18:44 You know, the engineers at OpenAI used to go into the weights to query stuff, which then eventually
0:18:46 became, in some form, ChatGPT, right?
0:18:47 They were doing it before it existed.
0:18:51 There was also Meena at Google, which was basically an internal form of almost like ChatGPT.
0:18:55 So they kind of saw the future and they went to try and substantiate it.
0:18:59 And you could argue that the same thing happened in the internet wave in the 90s, right?
0:19:03 All the people working at the National Supercomputer Centers like Marc Andreessen and others saw the
0:19:04 future before anyone else.
0:19:06 They’re using email before anyone else.
0:19:08 They were browsing the internet before anyone else.
0:19:12 They were using FTP and file downloads and sharing music files before anyone else.
0:19:15 And so they knew what was coming, right?
0:19:17 They had a glimpse into the future.
0:19:20 It’s the old saying: the future is already here, it’s just not evenly distributed.
0:19:22 For AI, we had the same thing.
0:19:25 We had these researchers who could tangibly feel what was coming.
0:19:28 And so the first wave of AI companies was researchers.
0:19:30 The second wave was infrastructure people.
0:19:31 We’re not closest.
0:19:34 And in this current wave, we’re now at the application people, the people who are building
0:19:36 applications on top of the core technology.
0:19:38 What do you think is the next wave?
0:19:43 I think it’s going to be an ongoing wave of kind of everything, right?
0:19:46 There’s still a lot to build, but I think we’ll see more and more application level companies.
0:19:51 We’ll see fewer what are known as foundation model companies, the people building the open
0:19:55 AIs or Anthropics or some of the Google core technologies or X.AI.
0:19:57 There will be specialized versions of that, right?
0:19:58 That’s all the language stuff, right?
0:20:04 It understands what you say and it can interpret it and it can generate text for you and do all these
0:20:04 things, right?
0:20:07 That’s all these LLMs, large language models.
0:20:10 There’s going to be the same thing done for physics and material science.
0:20:11 We’ve already seen it happening in biology, right?
0:20:13 So at that layer, there’s a bunch of stuff.
0:20:14 There’s the infrastructure.
0:20:16 What is the equivalent of cloud services?
0:20:18 And then there’s the apps on top.
0:20:20 And then in the apps, you have B2B and then you have consumer.
0:20:23 And so I think we’re going to see a lot of innovation across the stack.
0:20:25 But I think this next wave is a mix of B2B and consumer.
0:20:31 And then I think the wave after that is very large enterprise adoption.
0:20:38 And so I think AI is dramatically underhyped because most enterprises have not done anything
0:20:38 in it.
0:20:42 And that’s where all the money is, all the changes, all the impact is, all the jobs, everything,
0:20:43 right?
0:20:45 It’s a big 80-20 rule of the economy.
0:20:48 And that’s coming, right?
0:20:49 And that hasn’t even hit yet.
0:20:56 Are there any historical parallels to anything that you can think of that map to artificial
0:20:57 intelligence or AGI?
0:21:06 I think the thing that people misunderstand about artificial intelligence is that, you know,
0:21:09 people are kind of viewing it as what you’re selling as like a cool tool to help you with
0:21:10 productivity or whatever it is.
0:21:14 I think in a couple of years, we’ll start thinking about it as we’re selling units of cognition,
0:21:16 right?
0:21:20 We’re selling bits of person time or person equivalent to do stuff for us.
0:21:27 I’m going to effectively hire 20 bot programmers to write code for me to build an app, or I’m going
0:21:34 to hire an AI accountant, and I’m going to basically rent time off of this unit of cognition.
0:21:39 On the digital side, it really is this shift from you’re selling tools to you’re selling
0:21:42 effectively white-collar work.
0:21:47 On the robotic side, you’ll probably have some form of like robot minutes or something.
0:21:52 You’ll probably end up with some either human form robots or other things that will be doing
0:21:54 different forms of work on your behalf.
0:21:57 And, you know, potentially you buy these things or maybe you rent them, you know, it’ll be
0:21:59 interesting to see what business models emerge around it.
0:22:01 What scares you about the future?
0:22:03 That’s a big question.
0:22:04 Along what dimension?
0:22:07 Wherever you want to take it.
0:22:08 Like what scares you about AI?
0:22:10 Do you have any fears about AI?
0:22:14 I think that I have opposing fears.
0:22:20 In the short run, I worry that there’s the real chance to kind of strangle the golden goose,
0:22:20 right?
0:22:28 I do think AI and this wave of AI is the single biggest potential driver of global
0:22:31 advancements in health and education and all the things that really matter fundamentally.
0:22:37 And there’s some really great papers from the 80s that basically show that one-on-one tutoring,
0:22:42 for example, will increase performance by one or two standard deviations, right?
0:22:44 You get dramatically better if you have a one-on-one tutor for something.
0:22:49 And if you actually look through history and you look at how Alexander the Great was tutored
0:22:54 by Aristotle and all these things, there’s a lot of kind of prior examples of people actively
0:22:56 doing that on purpose for their kids if they can afford it.
0:23:00 This AI revolution is a great example of something that could basically provide that for every child
0:23:04 around the world as long as they have access to any device, which is most people at this
0:23:04 point, right?
0:23:05 Globally.
0:23:10 So from an education system perspective, a healthcare system perspective, it’s a massive
0:23:10 change.
0:23:13 So in the short run, I’m really worried that people are going to constrain it and strangle
0:23:17 it and prevent it from happening because I think it’s really important for humanity.
0:23:23 In the long run, there’s always these questions of, you know, at what point do you actually consider
0:23:24 something sentient versus not?
0:23:25 Is it a new life form?
0:23:26 Like, is there species competition?
0:23:29 You know, there’s those sorts of questions, right?
0:23:30 In the very long run.
0:23:32 Without robots, you could say, well, you just unplug the data center.
0:23:33 Who cares?
0:23:34 You know, it doesn’t matter.
0:23:38 If you do have robots and other things, then it gets a little bit harder, maybe.
0:23:43 At what point do you think we’re going to, AI is going to start solving problems that we
0:23:48 can’t solve in the sense of a lot of what it’s doing today is organizing logic on a human
0:23:49 level equivalent.
0:23:50 It’s not being like…
0:23:52 No, it’s already surpassed us on many things, right?
0:23:57 Like, just even look at how people play Go now and the patterns they learned off of AI,
0:23:58 which can beat any person at Go.
0:24:03 I mean, gaming is a really good example of that, where every wave of gaming advancements where
0:24:07 you pitted AI against people, people said, well, fine, they beat people at checkers, but
0:24:09 they’ll never beat them at chess.
0:24:11 And then they beat them at chess and say, well, fine, chess, but they’ll never beat them at
0:24:12 Go.
0:24:13 They beat them at Go.
0:24:16 And they’re like, well, what about complex games where there’s bluffing?
0:24:17 They’ll never beat them at poker.
0:24:18 And then Noam Brown had his poker paper.
0:24:20 And they say, well, okay, poker.
0:24:22 Well, they’ll never beat them at things like diplomacy, where you’re manipulating people
0:24:23 against each other.
0:24:27 And then, you know, a Facebook team solved diplomacy, right?
0:24:31 And so gaming is a really great example where you have superhuman performance against every
0:24:31 game now.
0:24:35 And you see that in other aspects of things as well.
0:24:40 I guess where my mind was going is in terms of mathematical problems.
0:24:43 I mean, we’ve solved a couple maybe that we haven’t been able to solve, but we haven’t
0:24:51 made real leaps in biology or health or longevity, like where, you know, here’s the, not the solution
0:24:56 maybe to Alzheimer’s because that’s like a big leap, but maybe it’s like, you’re not looking
0:24:57 in the right area.
0:24:59 You need to research in this area more.
0:25:01 Like when is that sort of advancement coming?
0:25:02 Yeah, I think it’s a really good question.
0:25:06 I mean, AI is already having some interesting advancements in biology, right?
0:25:12 The Nobel Prize this past year in chemistry went to Demis Hassabis and a few other people who built
0:25:16 predictive models using AI about how proteins will fold, right?
0:25:20 And so I think it’s already being recognized as something that’s impacting the field at the
0:25:21 point where it gets a Nobel.
0:25:26 The hard part with certain aspects of biology and protein folding is a good counter example.
0:25:27 We actually have very good data.
0:25:31 You had tens of thousands or maybe hundreds of thousands of crystal structures.
0:25:34 You had solved structures for all these proteins and you could use that to train the model.
0:25:35 Right.
0:25:41 If you look at it, about half or more than half of all biology research in top journals is
0:25:42 not reproducible.
0:25:44 So you have a big data problem.
0:25:47 Half the data is false.
0:25:48 It’s incorrect.
0:25:49 Right.
0:25:53 And this is actually something that Amgen published a couple of years ago where they showed this
0:25:56 because they weren’t able to reproduce cancer findings in their lab because they’re trying
0:25:57 to develop a drug.
0:26:00 And they’re like, wait a minute, this thing we thought could turn into a drug isn’t real.
0:26:01 Right.
0:26:06 And so there’s this really big replication issue in certain sciences.
0:26:08 Isn’t that part of the advantage for AI then?
0:26:10 Like, I’m thinking out loud here.
0:26:10 Sure.
0:26:15 Like, if I uploaded all of the Alzheimer’s papers to AI.
0:26:16 Yeah.
0:26:18 And it would be like, these ones aren’t replicable.
0:26:20 There’s mathematical errors here.
0:26:21 This looks like fraud.
0:26:24 But all of these things have generated future research.
0:26:28 So what you’re doing is you’re being like, oh, you’ve spent billions of dollars on this.
0:26:33 That’s, like, statistically, probably not going to yield results.
0:26:35 You should focus your attention here.
0:26:37 And that would have a huge impact on…
0:26:37 Yeah.
0:26:40 I think there’s almost like three different things that are mixed in here.
0:26:42 One is just fraud.
0:26:44 You know, you fudged an image, you’re reusing it, whatever.
0:26:46 I think AI is wonderful for that.
0:26:51 And I actually think, and I’m happy to, if anybody who’s listening to this wants to get
0:26:55 sponsored, or maybe we should do a competition or something to basically build like fraud detectors
0:26:56 using AI or plagiarism detectors.
0:26:58 You could do it for liberal arts as well as sciences, right?
0:26:58 Yeah.
0:27:00 And I bet you’d uncover a ton of stuff.
0:27:05 Separate from that, there’s people publishing things that are just bad.
0:27:08 And the question is, is it bad because they ignored other data?
0:27:10 Did they throw out data points?
0:27:13 How would you know as an AI system, right?
0:27:15 That somebody threw out half their data to publish a paper.
0:27:21 And so there’s other issues around how science is done right now.
0:27:25 Or you just rush it and you have the wrong controls, but it still gets published because
0:27:26 it’s a hot field.
0:27:26 That happens a lot.
0:27:31 If you look during COVID, like there were so many papers that in hindsight were awful papers,
0:27:34 but they got rushed out because of COVID.
0:27:37 And unless somebody goes back and actually redoes the experiment and then publishes that
0:27:40 they redid it and it didn’t work, which nobody does because nobody’s going to publish it for
0:27:40 you.
0:27:42 How do you know that it’s not reproducible?
0:27:45 And so that’s part of the challenge in biology.
0:27:48 And so the biology problem isn’t, can an AI model do better?
0:27:49 I’m sure it could.
0:27:54 The biology problem is how do you create the data set that actually is clean enough and has
0:27:56 high enough fidelity that you can train a model that then goes and cleans everything
0:27:57 else up, right?
0:27:58 And it’s doable.
0:27:59 Like all these things are very doable.
0:28:00 You just have to go and do it.
0:28:01 And it’s a lot of work.
0:28:04 If you look at things like math and physics and other things like that, people are just
0:28:06 starting to train models against that now.
0:28:09 So I do think we’ll, in the coming years, see some really interesting breakthroughs there.
0:28:15 Do you think that’ll be rapid or do you, like how will those breakthroughs happen?
0:28:17 Yeah, it’s kind of the same thing.
0:28:22 You kind of need to figure out what’s the data set you’re using, what kind of model and
0:28:25 model architecture you’re using, because different architectures seem to work better or worse for
0:28:26 certain types of problems as well.
0:28:30 Like the protein folding ones have three or four different types of models that often get
0:28:32 mixed in, at least traditionally.
0:28:35 A lot of them have moved to these transformer backbones, but then they’re augmented by other
0:28:36 things.
0:28:40 So it’s a little bit of like, do you have enough and the right data?
0:28:43 Do you have the right model approach?
0:28:44 And then can you just keep scaling it?
0:28:48 Walk me through why I’m wrong here.
0:28:51 Like, I’m just, you know, what came to mind when you were saying this is like, we’re training
0:28:53 AI based on data.
0:28:55 So it’s like, here’s how we’ve solved problems in the past.
0:28:57 This is how you’re likely to solve it in the future.
0:29:02 But if I remember correctly, DeepMind trained Go by just being like, here are the rules.
0:29:06 We’re not actually going to show you people that have played before.
0:29:10 And that led to the creativity that we now see.
0:29:12 Yeah, that’s called self-play.
0:29:15 And as long as you have enough rules, you can do it.
0:29:18 You need a utility function you’re working against, right?
0:29:21 And so in the context of a game, it’s winning the game.
0:29:23 And there’s very specific rules of the game.
0:29:24 You know when to flip over the Go piece.
0:29:26 You know what winning means, right?
0:29:30 And so it’s easy to train against that because you have a function to select against.
0:29:31 This game you did well.
0:29:32 This game you did badly.
0:29:35 Here’s positive feedback or negative feedback to the model.
0:29:37 They’re starting to do that more and more.
0:29:40 So if you look at the way people are thinking about models now and scaling them, there’s three
0:29:41 or four components to it.
0:29:42 One is ongoing data scale.
0:29:44 Second is the training cluster.
0:29:46 People always talk about all the money they’re spending on GPUs.
0:29:48 The third is reasoning modules.
0:29:53 And that’s the new stuff from OpenAI in terms of o1 and o3 and all these things.
0:30:01 There’s other forms of inference-time optimizations and how you do them, and some
0:30:03 aspects eventually of this self-play.
0:30:09 And some of the places where that may really come into focus soon is coding because you
0:30:11 can push code and you can see if it runs and you can see what errors are thrown.
0:30:16 And there’s more stuff you can do in domains where you have a clear output you’re shooting
0:30:18 for and that you can test against it.
0:30:19 And there’s rapid feedback.
0:30:21 And there’s rapid feedback.
0:30:21 And that’s the key.
0:30:25 How quickly can you get feedback to keep training the system and iterating?
0:30:27 What happens when I give an AI a prompt?
0:30:30 Like what happens on the inside of that?
0:30:32 What’s the difference between a good prompt and a bad prompt?
0:30:37 Like does it basically take my prompt and break it into reasoning steps that a human would
0:30:38 use?
0:30:42 Like first I do this, second I do this, third I do this, and then I give the output.
0:30:47 And then the follow-on to this is like what can we do to better prompt AI to get better
0:30:47 outcomes?
0:30:48 Yeah, great question.
0:30:53 So a lot of the people working on agents have basically built what you’re describing, which
0:30:59 is something that will take a complex task, break it down into a series of steps, store those
0:31:01 steps, and then go back to them as you get output.
0:31:03 So you’re actually chaining a model.
0:31:07 You’re pinging it over and over with the output of the prior step and asking it now to do the
0:31:07 next step.
0:31:10 So one approach to that is you literally break it up into 10 pieces.
0:31:15 If it’s a simple problem and you’re just like write me a limerick with XYZ characteristics,
0:31:19 then the model can just do that in a single sort of call to the model.
0:31:24 But if you’re trying to do something really complex, you know, book me a flight or find
0:31:25 me and book me a flight to Mexico.
0:31:26 It’s like, okay, first I need to find the flight.
0:31:30 And so that means I need to go to this website and then I need to interact with the website
0:31:30 and pull the data.
0:31:32 Then I need to analyze that information.
0:31:34 And then I have to figure out what fits with your trip.
0:31:37 And then I, you know, I go through the booking steps and then I get the confirmation.
0:31:41 So it really depends on what you’re asking the model to do.
0:31:44 When I think of a model, though, I don’t think of an agent.
0:31:46 I just think, well, why can’t AI do that?
0:31:51 Like, why do I need a specific type of AI to book a flight to Mexico?
0:31:53 Why can’t ChatGPT just do it?
0:32:02 ChatGPT in its current form, or at least in the simplest form, is effectively interrogating
0:32:05 a mix of like a logic engine and a knowledge corpus, right?
0:32:10 It’s like a thing that will look at what it knows and based on that, provide you with some
0:32:11 output.
0:32:14 That’s a little bit different from asking somebody to take an action.
0:32:18 And that’s similar to if I was talking to you and I said, hey, where’s a nice place to
0:32:19 go?
0:32:23 And you’d say, oh, you should go to Cabo or you should go to wherever, right?
0:32:27 That’s different from me saying, hey, could you get me there, right?
0:32:30 And you have to go to the computer and load up the website and book it for me.
0:32:32 It’s the same thing for AI, right?
0:32:37 And so right now we have AIs that are very capable at understanding language, synthesizing
0:32:44 it, manipulating it, but they don’t have this remembrance of all the steps that they’ve
0:32:45 taken and will take.
0:32:49 And so you need to overlay that as another system on top of it.
0:32:53 And you see this a lot in the way your brain works, right?
0:32:56 You have different parts of your brain that are involved with vision and understanding
0:32:57 it.
0:32:59 You have different parts of your brain for language.
0:33:01 You have different parts of your brain for empathy, right?
0:33:05 You have mirror neurons that help you empathize with somebody or relate to them.
0:33:10 So your brain is a bunch of modules strung together to be able to do all sorts of complex
0:33:11 tasks, be they cognitive or physical.
0:33:16 And one could assume that over time you end up with roughly something like that as well
0:33:18 for certain forms of AI systems.
0:33:21 How are you using AI today?
0:33:23 I use it a lot.
0:33:32 I use it for everything from, you know, like I’ll go to a conference and I’ll dump the
0:33:36 names of the attendees in and ask like, who should I chat with based on these criteria?
0:33:38 And could you pull background on them?
0:33:41 You know, obviously a lot of people use it for coding right now or coding related tasks.
0:33:45 I use it for a lot of what are known as, like, regexes, regular expressions.
0:33:49 It’s like if I want to pull something out of certain types of data, I’ll do that sometimes.
0:33:53 So there’s all sorts of different uses for it.
0:33:56 What have you learned about prompting that more people should know?
0:34:03 I think a lot of people, and I’m by no means like a, you know, there’s these people whose
0:34:05 jobs are called prompt engineering and that’s all they do.
0:34:11 I think fundamentally a lot of it just comes down to like, what are you specifically asking
0:34:12 and can you create enough specificity?
0:34:16 And sometimes you can actually add checks into the system where you say, go back and double
0:34:19 check this just to make sure that you didn’t omit something because there are enough errors
0:34:22 sometimes depending on which model you’re using and for what use case and everything else that
0:34:28 if you put in simple safeguards of, hey, generate a table of XYZ as output, but then go back
0:34:31 and double check that these two things are true, I think it’s helped me clean up a lot of things
0:34:32 that would normally have been errors.
0:34:35 It’s almost like adding a test case.
0:34:36 Yeah, yeah.
0:34:40 Basically, if you think about it as like a smart intern, you know, often with your intern,
0:34:43 you say, okay, go do this thing, but why don’t you double check these three things about it?
0:34:47 And as the models get more and more capable, they’ll be less like an intern and more like
0:34:51 a junior employee, and then they’ll be like a senior employee, and then they’ll be like
0:34:54 a manager and they’ll kind of, you know, as the models get better and better and the
0:34:56 capabilities get stronger, you’ll see all these other things emerge.
0:34:59 Where do you see the bottlenecks today?
0:35:02 And like what comes to mind for me are different aspects of AI.
0:35:09 So you have, from going all the way up the stack, you have electricity, you have compute,
0:35:12 you have LLMs, you have data.
0:35:17 Where do you see the bottlenecks being, where’s the biggest bang for the buck?
0:35:19 Like what’s preventing this from going faster?
0:35:22 You know, it’s a really interesting question.
0:35:27 And I think there’s people who are better versed than I am in it because there’s this ongoing
0:35:30 question of when does scaling run out for which of those things, right?
0:35:34 When do we not have enough data to generate the next versions of models or do we just use
0:35:35 synthetic data and will that be sufficient?
0:35:37 Or how big of a training cluster can you actually get to economically?
0:35:42 You know, how do you fine-tune or post-train a model and at what point does that not yield
0:35:43 as many results?
0:35:46 That said, each one of these things has its own scaling curves.
0:35:48 Each one of these seems to still be working quite well.
0:35:52 And then if you look at a lot of the new reasoning stuff that OpenAI and others have been working
0:35:53 on, Google’s been working on some stuff here as well.
0:35:59 When you talk to people who work on that, they feel that there’s still enormous scaling loss
0:36:00 for that still left, right?
0:36:02 Because those are just brand new things that just rolled out.
0:36:06 And so these sort of reasoning engines have their own big curve to climb as well.
0:36:11 So I think we’re going to see two or three curves sort of simultaneously continue to inflect.
0:36:19 Is this the first real revolution where incumbents have an advantage?
0:36:23 And I say that because data costs money, compute costs money, power costs money.
0:36:24 Yeah.
0:36:30 And it sort of favors the Googles, the Microsofts, the people with a ton of capital.
0:36:31 Yeah.
0:36:36 I think in general, every technology wave has a differential split of outcome for incumbents
0:36:36 versus startups.
0:36:40 So the internet was 80% startup value.
0:36:41 It was Google.
0:36:42 It was Amazon.
0:36:45 You know, it was all these companies we now know and love.
0:36:46 Meta, you know.
0:36:54 And then mobile, the mobile revolution was probably 80% incumbent value or 90%, right?
0:36:59 And so that was mobile search was Google and mobile CRM was Salesforce and mobile whatever
0:37:00 was that app you were already using.
0:37:05 And the things that emerged during that revolution of startups were things that took advantage of the
0:37:07 unique characteristics that were new to the phone.
0:37:08 GPS.
0:37:09 So you had Uber.
0:37:11 Everybody has a camera.
0:37:12 You have Instagram, et cetera, right?
0:37:17 And so the things that became big companies in mobile that were startups were able to do
0:37:20 it because they took advantage of something new that the incumbents didn’t necessarily have
0:37:21 any provenance over.
0:37:26 Crypto was 100% or roughly 100% startup value, right?
0:37:29 It’s Coinbase and it’s the tokens and everything else.
0:37:33 So you kind of go through wave by wave and you ask, what are the characteristics that make
0:37:34 something better or worse?
0:37:38 And if you actually look at self-driving, which was sort of an earlier AI revolution in some
0:37:43 sense, the two winners, at least in the West, seem to be Tesla, which was an incumbent car
0:37:47 maker in some sense, by the point that they were willing to step out, and Google through
0:37:47 Waymo.
0:37:51 So two incumbents won in self-driving, which I think is a little bit under discussed because
0:37:54 we had like two dozen self-driving companies, right?
0:37:58 Wouldn’t that make sense, though, because they have the most data in the sense of like
0:38:04 Tesla acquires so much data every day and now the way that they’ve set up full self-driving,
0:38:07 my understanding is it’s gotten really good in the last six months.
0:38:12 One of the reasons is they stopped coding, basically, and they started feeding the data into AI and
0:38:15 having the AI generate the next version effectively.
0:38:19 Yeah, a lot of the early self-driving systems were basically people writing a lot of kind of
0:38:20 edge case heuristics.
0:38:23 So you’d almost write a rule if X happens, you do Y or some version of that.
0:38:26 And they moved a lot of these systems over to just end-to-end deep learning.
0:38:31 And so this modern wave of AI has really taken over the self-driving world in a really strong
0:38:33 way that’s really helped these things accelerate, to your point.
0:38:36 And so Waymo similarly has gotten dramatically better recently.
0:38:38 So I think all that’s true.
0:38:43 I guess it’s more of a question of when does that sort of scale matter and why wasn’t there
0:38:46 anybody who was able to partner effectively with an existing automotive company?
0:38:49 And what other things happened in the market?
0:38:52 For this current wave of AI, it really depends on the layer you’re talking about.
0:38:56 And I think there’s going to be enormous value for both incumbents and startups.
0:39:01 On the incumbent side, it really looks like the foundation model companies are either paired
0:39:04 up or driven by incumbents.
0:39:05 Maybe one or two kind of examples.
0:39:09 So, you know, OpenAI is roughly partnered with Microsoft.
0:39:10 Microsoft also has its own efforts.
0:39:13 Google is its own partner in some sense, right?
0:39:16 Amazon has partnered with Anthropic.
0:39:20 Obviously, Facebook has Llama, the open source model.
0:39:26 But I think for three of the four, and then there’s X.AI, which, you know, is Elon Musk’s just
0:39:32 sort of ability to execute in such an insane way that’s really driving it and access to capital
0:39:33 and all the rest.
0:39:38 But if you look at it, and I wrote a blog post about this maybe two, three years ago, which
0:39:40 is basically, what’s the long-term market structure for that layer?
0:39:46 And it felt like it had to be an oligopoly or, you know, at most an oligopoly.
0:39:48 And the reason was this point that you made about capital.
0:39:52 And back then, it cost, you know, tens of millions to build a model.
0:39:57 But if you extrapolated the scaling curve, you’re like, every generation is going to be a few
0:39:58 X to 10 X more.
0:40:02 And so, eventually, you’re talking about billions, tens of billions of dollars, not that many
0:40:03 people can afford it.
0:40:06 And then you ask, what’s the financial incentive for funding it?
0:40:10 And the financial incentive for the cloud businesses is their clouds, right?
0:40:14 If you look at Azure’s last quarter, I think it was like a $28 billion quarter or something
0:40:14 like that.
0:40:19 I think they said that 10 to 15% of the lift on that was from AI being sold on the cloud.
0:40:21 So, that’s what?
0:40:23 One and a half to three billion, a quarter, right?
0:40:28 So, the financial incentive for Microsoft to fund open AI is it feeds back into its cloud.
0:40:30 It feeds back in other ways, too, but it feeds back to its cloud.
0:40:36 And so, I don’t think it’s surprising that the biggest funders of AI today, besides sovereign
0:40:39 wealth, has been clouds because they have a financial incentive to do it.
0:40:40 And people really miss that.
0:40:45 So, I think that that is part of what really helped lock in this oligopoly structure early
0:40:49 is you had enormous capital scale going to a handful of the best players through these
0:40:49 cloud providers.
0:40:52 And so, the venture capitalists would put hundreds of millions of dollars into these companies.
0:40:54 The clouds put tens of billions in.
0:40:55 Yeah.
0:40:56 And that’s the difference.
0:41:04 And I guess the optimism there is that I can go use the full scale of AWS or Azure or
0:41:07 Google and just rent time.
0:41:09 So, I don’t need to make the capital investments.
0:41:10 I don’t need to run the data center.
0:41:11 I don’t need to.
0:41:13 Well, you could have done that either way, right?
0:41:16 You didn’t have to take money from them because they’re happy to be a customer.
0:41:17 That’s what I’m saying, right?
0:41:21 So, like the optimism is like you can compete with them now because you’re just competing
0:41:22 on ideas.
0:41:24 You have access to the structure.
0:41:25 Yeah.
0:41:25 Yeah.
0:41:28 And you would have done that no matter what, just given that everything moved to clouds,
0:41:30 like these third-party clouds that you can run on.
0:41:35 So, that’s enabling, but at least for these sort of language models, they’re increasingly
0:41:37 just a moat due to capital scale.
0:41:41 Do you think that we just end up with like three or four and they’re all pretty much equivalent?
0:41:43 Yeah, I’m not sure.
0:41:45 I think you can imagine two worlds.
0:41:47 World one is where you have an asymptote.
0:41:51 Eventually, things kind of all flatline against some curve because you can only scale a cluster
0:41:53 so much, you only have so much data or whatever.
0:41:56 In which case, eventually, things should converge really closely over time.
0:42:00 And in general, things have been converging faster than not across the major model platforms
0:42:01 already.
0:42:07 Where a second world is, if you think about the capability set built into each AI model,
0:42:12 if you have something that’s far enough ahead and it’s very good at code and it’s very good
0:42:16 at data labeling and it’s very good at doing a lot of the jobs that allow you to build the
0:42:20 next model really fast, then eventually you may end up with a very strong positive feedback
0:42:25 loop for whoever’s far enough ahead that their model always creates the next version of the
0:42:26 model faster than anybody else.
0:42:28 And then you maybe have liftoff, right?
0:42:32 Maybe that’s the thing that ends up dramatically far ahead because every six months becomes more
0:42:33 important than the last five years.
0:42:38 And so, there’s another world you could imagine where you’re in a liftoff scenario where there’s
0:42:41 a feedback loop of the model effectively creating its next version.
0:42:47 So, GPT-5 or 7 or whatever, GPT-7 would create GPT-8, which would help create GPT-9, which
0:42:48 would even faster create GPT-10.
0:42:54 And at that point, you have an advantage, but the advantage is expanding at the velocity at
0:42:55 which you’re creating the next model.
0:42:55 Correct.
0:43:00 Because GPT-10 perhaps is so much more capable than 9 that while everybody else is at 9, it’s already
0:43:00 building 11.
0:43:04 And it can build it faster, smarter, et cetera, than everybody else.
0:43:09 And so, it really comes down to what proportion of the model building task or model training
0:43:12 and building task is eventually done by AI itself.
0:43:16 Spring is here and you can now get almost anything you need delivered with Uber Eats.
0:43:17 What do we mean by almost?
0:43:20 You can’t get a well-groomed lawn delivered, but you can get chicken Parmesan delivered.
0:43:21 Sunshine?
0:43:21 No.
0:43:22 Some wine?
0:43:23 Yes.
0:43:29 What do you think of Facebook?
0:43:34 They’ve spent, I don’t know, 50, 60 billion and they’ve basically given it away to society.
0:43:35 Yeah.
0:43:35 Yeah.
0:43:39 I’ve been super impressed by what they’ve done with Llama.
0:43:40 I think open source is incredibly important.
0:43:43 And why is open source important?
0:43:45 It does a couple of things.
0:43:51 One is it levels the playing field for different types of uses of this technology and it makes
0:43:53 it globally available in certain ways that’s important.
0:43:59 Second, it allows you to take out things that you may not want in there and because it’s
0:44:01 open weights and it’s open source.
0:44:08 So, if you’re worried about a specific political bias or a specific cultural outlook, because
0:44:13 it’s really interesting if you look at the way people talk about norms and what should be
0:44:17 built into models and safety and all the rest, it’s like, who are you to determine all
0:44:22 of global norms with your own values, right?
0:44:25 That’s a form of cultural imperialism if you think about it, right?
0:44:28 You’re basically imposing what you think on everybody else.
0:44:32 And so, open source models give you a bit more leeway in terms of being able to retrain
0:44:39 a model or have it reflect the norms of your country or your region or whatever lens
0:44:40 you want to take on that.
0:44:42 So, I think it’s also important from that perspective.
0:44:49 As an investor, what’s the ROI on a $50 or $60 billion open source model?
0:44:53 How do you think through what Facebook is trying to do or accomplish?
0:44:57 Is it just like, I don’t want the competitors to get too far ahead?
0:45:00 I don’t know how Meta specifically is thinking about it.
0:45:03 So, I think I’d be sort of talking out of turn if I just made some stuff up.
0:45:09 I think that in general, there’s been all sorts of times where open source has been very important
0:45:10 strategically for companies.
0:45:15 And if you actually look at it, almost every single major open source company has had a
0:45:17 giant institutional backer.
0:45:20 IBM was the biggest funder of Linux in the 90s as a counterbalance to Microsoft.
0:45:27 And the biggest funders of all the open source browsers are Apple and Google with WebKit.
0:45:31 And you just go through technology wave after technology wave, and there’s always a giant
0:45:32 backer.
0:45:35 And maybe the biggest counter to that is Bitcoin and all the crypto stuff.
0:45:40 And you could argue that they’re their own backer through the token, right?
0:45:43 So, Bitcoin financially effectively has fueled the development of Bitcoin.
0:45:49 It’s kind of paid for itself in some sense as an open source tool or open source sort of
0:45:50 form of money.
0:45:52 You know, I don’t know why AI would be different.
0:45:58 I, a couple years ago, was trying to extrapolate who is the most likely party to be the funder
0:46:00 of open source AI.
0:46:04 And back then, I thought it would be Amazon, because at the time, they didn’t have a horse
0:46:07 in the race like Microsoft and Google, or maybe NVIDIA.
0:46:12 And Meta was kind of on the list because of all the money they have, and their prowess in
0:46:15 engineering and FAIR, and, you know, they have a lot of great things, but they weren’t the
0:46:17 one I would have guessed as the most likely.
0:46:19 They were on the list, but they weren’t the most likely.
0:46:23 And then there’s other players with tons of money, and tons of capabilities.
0:46:25 And the question is, are they going to do anything?
0:46:26 What does Apple do?
0:46:27 What does Samsung do?
0:46:31 You know, there’s like half a dozen companies that could still do really interesting things
0:46:32 if they wanted to.
0:46:33 And the question is, what are they going to do?
0:46:39 How would you think about sort of the big players and who is best positioned for the
0:46:41 next two to three years?
0:46:42 How would you rank them?
0:46:44 In terms of AI or in terms of other things?
0:46:46 In terms of AI.
0:46:47 Yeah.
0:46:53 Like who’s most likely to accrue some of the advantages of AI?
0:46:59 Yeah, it’s kind of hard because AI is the only market where the more I learn, the less I
0:47:00 know.
0:47:02 And in every other market, the more I learn, the more I know.
0:47:05 And the more predictive value, or the more I’m able to predict things.
0:47:06 And I can’t predict anything anymore.
0:47:09 You know, I feel like every six months, things change over so rapidly.
0:47:13 You know, fundamentally, there’s a handful of companies in the market that are doing very
0:47:14 well.
0:47:21 Obviously, there’s Google, there’s Meta, there’s OpenAI, there’s Microsoft, Anthropic and AWS,
0:47:24 xAI.
0:47:27 You know, Mistral has done some interesting things over time.
0:47:30 So I think there’s like a handful of companies that are the ones to watch.
0:47:32 And the question is, how does this market evolve?
0:47:33 Does it consolidate or not?
0:47:34 Like what happens?
0:47:37 How do you think about regulation around AI?
0:47:38 Yeah.
0:47:43 So there’s basically like three or four forms of AI safety that people talk about, and they
0:47:44 kind of mix or conflate them.
0:47:47 The first form of AI safety is almost what I call like digital safety.
0:47:48 It’s like, will the thing offend you?
0:47:51 Or will there be hate content or other things?
0:47:55 And there’s actually a lot of rules that already exist around hate speech on the internet or hate
0:47:58 speech in general or, you know, what’s free speech or not and how you should think about
0:47:58 all these things.
0:48:00 So I’m less concerned about that.
0:48:01 I think people will figure that out.
0:48:06 There’s a second area, which is almost like physical safety, which is will you use AI to
0:48:07 create a virus?
0:48:09 Will you use AI to derail a train?
0:48:10 You know, et cetera.
0:48:15 And similarly, when I look at the arguments made about how it will be used to create a biological
0:48:18 virus, et cetera, et cetera, like you can already do that, right?
0:48:24 The protocols for cloning and PCR and all this, it’s all on the internet.
0:48:25 It’s all posted by major labs.
0:48:26 It’s in all the textbooks.
0:48:30 Like that’s not new knowledge that people can’t just go and do right now if they really wanted
0:48:30 to.
0:48:33 So I don’t know why that matters in terms of AI.
0:48:41 And then the third area is sort of this existential safetyism, like AI will become self-aware and
0:48:42 destroy us, right?
0:48:45 And when people talk about safety, they mix those three things.
0:48:48 They conflate them and therefore they say, well, eventually maybe something terrible happens
0:48:50 here, so we better shut everything else down.
0:48:52 While other people are just saying, hey, I’m worried about hate speech.
0:48:56 And so I think when people talk about safety, they have to really define clearly what they
0:48:56 mean.
0:49:00 And then they have to create a clear view of why it’s a real concern.
0:49:04 It’s sort of like if I kept saying, I think an asteroid could at some point hit the earth
0:49:06 and therefore we better do X, Y, Z.
0:49:07 We should move the earth or whatever.
0:49:11 You know, it’s just at some point these things get a little bit ridiculous in terms of safetyism.
0:49:16 There’s actually a broader question societally of like why has society become so risk averse
0:49:17 in certain ways and so safety centric?
0:49:20 And it impacts things in all sorts of ways.
0:49:22 I’ll give you a dumb example.
0:49:28 After what age does the data suggest that a child doesn’t need a special seat?
0:49:29 They can just use a seatbelt.
0:49:33 I think it’s like 10 or 12, isn’t it?
0:49:37 Well, so in California, for example, the law is up until age eight.
0:49:37 Okay.
0:49:40 You have to be in a booster seat or a car seat or whatever.
0:49:46 If you actually look at crash data, real data, and people have now reproduced this across
0:49:48 multiple countries and multiple time periods,
0:49:49 it’s the age of two.
0:49:50 Oh, wow.
0:49:53 So for six extra years, we keep people in booster seats and car seats and all that,
0:49:55 at least against the data, right?
0:49:55 Okay.
0:49:58 The Freakonomics podcast actually had a pretty good bit on this.
0:50:02 And there’s like multiple papers now that reproducibly show this retrospectively.
0:50:03 You just look at all the crashes.
0:50:04 That’s crazy.
0:50:04 Yeah.
0:50:05 So why do we do it?
0:50:05 Safety.
0:50:07 But it’s not safe.
0:50:08 Exactly.
0:50:09 But it’s positioned as safe.
0:50:11 As a parent, of course, you want to protect your children.
0:50:11 No, seriously, right?
0:50:14 And so, but then it has other implications.
0:50:17 It’s like you can’t easily transport the kids in certain scenarios because you don’t have
0:50:22 the car seat or, you know, you can only fit so many car seats in a car and it’s a pain
0:50:22 in the butt.
0:50:24 And do you upgrade the car if you want more kids?
0:50:25 And can you afford it?
0:50:27 And, you know, so it has all these ramifications.
0:50:33 And it’s because I think, A, it’s lucrative for the car seat companies to sell more car seats
0:50:34 for longer, right?
0:50:35 You get an extra six years on the kid or whatever.
0:50:39 Parents will, of course, say, I want safety no matter what.
0:50:44 And certain legislatures are happy to just, you know, legislate it.
0:50:48 So I think there’s lots and lots and lots of examples of that in society if you start picking
0:50:51 at it and you realize it pervades everything.
0:50:52 It pervades aspects of medicine.
0:50:55 It pervades things like AI now.
0:50:56 It’s just, it’s everywhere.
0:51:03 There’s one in Ottawa that I see in the mornings: within, I don’t know,
0:51:04 five or six blocks of a school,
0:51:07 they basically have crossing guards everywhere now.
0:51:13 So basically, even for high schools, kids can’t walk to school on their
0:51:19 own, and you think, oh, well, how do you argue with that, right?
0:51:23 And then I was thinking about this the other day because I was driving and, you know,
0:51:27 I got stopped by one of these people, and I was like, we’re just teaching kids that
0:51:28 they don’t even have to pay attention.
0:51:30 They can look at their phone.
0:51:32 The crossing guard is going to save them.
0:51:36 And then if the crossing guard is not there? We’re not developing ownership or
0:51:37 agency in people.
0:51:39 How do you think about that?
0:51:44 I think it’s really bad for society at scale.
0:51:48 I mean, it’s kind of like, there was a different wave of this, which was, you know, 10, 15 years
0:51:53 ago with fragility and microaggressions and everything can offend you and you need to be super fragile
0:51:53 and all this stuff, right?
0:51:55 Which I think is very bad for kids.
0:51:57 And I think that has a lot of mental health implications.
0:52:03 The wave we’re in now, which is basically taking away independence, agency, risk-taking.
0:52:09 I think that has some really bad downstream implications in terms of how people act, what they consider
0:52:14 to be risky or not, and what that means about how they’re going to act in life and also their
0:52:15 ability to actually function independently.
0:52:16 So I agree.
0:52:20 I think, I think all those things are things that we’ve accumulated over the last few decades
0:52:21 that are probably quite negative.
0:52:27 You’re one of the most successful investors that a lot of people have probably never heard
0:52:27 of.
0:52:32 One of the things that you’ve said is that most companies die from self-inflicted wounds and
0:52:33 not competition.
0:52:38 What are the most common self-inflicted wounds that kill companies?
0:52:40 Yeah, I think there’s two or three of them.
0:52:43 You know, it depends on the stage of the company.
0:52:47 For a very early company, the two ways that they die is the founders start fighting and
0:52:51 the team blows up, or they run out of money, which means they never got to product market
0:52:51 fit.
0:52:54 They never figured out something that they could build economically that people would care
0:52:54 about.
0:52:58 So for the earliest stages, that’s, that’s roughly everything.
0:53:04 Every once in a while, you have some competitive dynamic, but the reality is most incumbent companies
0:53:05 don’t care about startups.
0:53:10 And startups have five, six years before an incumbent wakes up and realizes it’s a big deal and then
0:53:11 tries to crush them.
0:53:13 And sometimes that works.
0:53:16 Sometimes you just end up with capped outcomes.
0:53:19 So for example, you could argue Zoom and Slack got capped by Microsoft launching stuff into
0:53:24 Teams, in terms of taking parts of the market or creating a more competitive market dynamic
0:53:25 for them.
0:53:29 You know, the other types of self-inflicted wounds, honestly, sometimes people get very competitor
0:53:31 centric versus customer centric.
0:53:33 Go deeper on that.
0:53:35 I mean, there’s a lot of examples of that.
0:53:41 Sort of like if you focus on your, your competitor too much, you stop doing your own thing.
0:53:46 You stop building that thing the customer actually wants and you lose differentiation relative to
0:53:47 your competitor.
0:53:51 Or you start doing things that can hurt your competitor, but they don’t necessarily help
0:53:51 you.
0:53:54 And sometimes your competitor will retaliate.
0:54:00 An example of that would be in the pharmaceutical distribution world.
0:54:05 You know, 20 years ago, there was roughly three players that really mattered of any scale.
0:54:08 And they used to go after each other’s market share really aggressively, which eroded all the
0:54:10 pricing, which meant they were bad businesses.
0:54:16 And at some point, I think one of them decided to stop competing for share, but just protect
0:54:17 itself.
0:54:20 And then the others copied it and suddenly margins went way up in the industry, right?
0:54:24 They stopped being as focused on banging on each other and more just like, let me just
0:54:26 build more services for my customers and let’s just focus on our own set.
0:54:30 We’re all going to win a lot more that way, right?
0:54:33 In some cases, yeah, if you have an oligopoly market, that’s usually where it ends up.
0:54:38 Eventually, this is why people are so worried about collusion, right?
0:54:42 Eventually, the companies decide, hey, we should be in a stable equilibrium instead of beating
0:54:44 up on each other and shrinking margins.
0:54:49 Scaling a company often means scaling the CEO.
0:54:56 What have you learned about the ways that successful CEOs scale themselves and things that get in the
0:54:57 way?
0:54:58 Yeah, I think it’s two or three things.
0:55:02 One is figuring out who else you need to fill out your team with and how much can you trust
0:55:03 them and all the rest.
0:55:08 And so one piece of it is very innovative founder CEOs always want to innovate and so
0:55:10 they reinvent things that they shouldn’t reinvent.
0:55:14 Like sales is like effectively process engineering that’s been worked through for decades.
0:55:16 You don’t need to go reinvent sales.
0:55:18 You know, you just hire a sales team and it’ll work just fine.
0:55:21 So one aspect is getting out of your own way on reinvention.
0:55:23 There’s certain things you want to rethink, but many of them you don’t.
0:55:27 Part of it is hiring people who are going to be effective in those roles and more effective
0:55:28 than you might be.
0:55:31 Often you end up finding people who are complementary to you.
0:55:39 Now that really breaks down during CEO succession because what happens is often the CEO will promote
0:55:43 the person who’s their complement as the next CEO instead of finding somebody like them who
0:55:47 can innovate and push on people and drive new products and new changes.
0:55:52 And so often you see companies have a golden age under a founder and then decay.
0:55:56 And the decay is because the founder promoted their lieutenant who was great at operations
0:56:00 or whatever, but wasn’t a great product thinker or technology vision like themselves.
0:56:04 And so that’s actually a failure mode over the longer term.
0:56:08 You could argue Satya at Microsoft is a good example of somebody who has more of a founder mindset.
0:56:09 I’m going to reinvent things.
0:56:10 I’m going to rethink things.
0:56:11 I’m going to do these crazy deals.
0:56:14 They backed OpenAI at GPT-2, which was like a huge risk.
0:56:16 They’ve done all sorts of really smart acquisitions.
0:56:22 So that’s an example of a company that did a smart succession there in
0:56:25 terms of finding somebody who has a bit more of a product founder mentality.
0:56:35 You know, in terms of other ways that CEOs fail is they listen too much to conventional wisdom
0:56:37 on how to structure their team.
0:56:42 And really the way you want to have your team function at a large organization is based
0:56:43 on the CEO.
0:56:45 What does the CEO need?
0:56:46 What are the complements they need?
0:56:47 What is the structure they need?
0:56:52 And if you were to plop out that person and plop in a different CEO, that structure probably
0:56:53 shouldn’t work like half the time.
0:56:57 There’s some types of people where there’s lots of commonalities, particularly if it’s people
0:57:01 who came up the corporate ladder and they’re all used to doing things the same way.
0:57:05 But if you’re more of a founder CEO and you’re going to have your quirks and you’re going
0:57:08 to have your obsessions and you’re going to have all these things that founders often
0:57:11 have, you need an org structure that reflects you.
0:57:13 And so like Jensen from NVIDIA talks about this, right?
0:57:15 The claim is he has like 40 direct reports.
0:57:19 He claims that, you know, he doesn’t do many one-on-ones or things like that.
0:57:22 And the focus is more on finding very effective people who’ve been with him for a while and
0:57:23 who can just drive things, right?
0:57:25 And then he sort of deep dives in different areas.
0:57:31 That’s a very different structure from how Satya’s running Microsoft or Larry Ellison
0:57:32 has run Oracle over time.
0:57:36 Or, you know, you look at these other sort of giants of industry and management and everything
0:57:36 else.
0:57:40 And so I think you really need an org structure that reflects you.
0:57:43 Now, there’s going to be commonalities and there’s only so many reports most people can
0:57:44 handle and all the rest of it.
0:57:48 But I do think you kind of want to have the team that reflects your needs versus the generic
0:57:49 team that could reflect anybody’s needs.
0:57:54 Is that the problem with sort of a lot of these business leadership books that are written
0:57:59 about a particular person in style that they have and then people read them and they try
0:58:02 to implement them, but it’s not genuine to who they are?
0:58:03 I think that’s very true.
0:58:07 And it really depends on whether you’re talking about the generic case of, hey, it’s a big
0:58:12 company and you’re at a related large company that’s 100 years old that’s been run a certain
0:58:12 way.
0:58:17 Like, I wouldn’t be surprised if you could roughly interchange the CEOs of a subset of the pharma
0:58:19 companies in terms of the org structure.
0:58:23 They may not have the chemistry with the people or the trust or whatever, but like the org
0:58:25 structures are probably reasonably similar.
0:58:28 That’s probably pretty different than if you looked at, you know, how Oracle has been run
0:58:32 over time versus Microsoft over time versus Google over time versus whoever.
0:58:38 When you say that, I think the wording you use like conventional wisdom, CEOs should pay less
0:58:39 attention to conventional wisdom.
0:58:46 Do you mean that in the sense of, I guess, the nomenclature that Brian Chesky came out
0:58:47 with, founder mode?
0:58:55 Yeah, I think, um, I think we lived through a decade or so, maybe longer where a lot of
0:59:01 forces came into play in the workplace that were not productive to the company actually achieving
0:59:02 its mission and objectives.
0:59:08 And a lot of that was all the different forms of politics and bring your whole self to work
0:59:11 and all these things that people are talking about, which I don’t want somebody’s whole
0:59:11 self at work.
0:59:18 You know, I remember at Google, um, for Halloween, uh, and maybe we should edit this part out,
0:59:20 but there was somebody who would show up in assless chaps every Halloween.
0:59:23 And you’re like, I don’t want to see that.
0:59:24 Like I’m in a work environment.
0:59:26 Why is this, why is this engineer walking around like this?
0:59:26 Yeah.
0:59:27 Yeah.
0:59:30 And then the second you start bringing kids to work, you’re like, I sure as hell don’t
0:59:31 want this guy walking around.
0:59:31 Right.
0:59:32 Yeah.
0:59:33 And that’s bring your whole self to work.
0:59:34 Like, why would you do that?
0:59:37 You actually should bring your professional self to work.
0:59:40 You should bring the person who’s going to be effective in a work environment and can work
0:59:44 with all sorts of diverse people and be effective and doesn’t bring all their mores
0:59:47 and values and everything else in the workplace that don’t have a place in the workplace.
0:59:49 There’s a subset of those that do, but many don’t.
0:59:54 We lived through a decade where not only were those things encouraged, but the traditional
0:59:58 conventionalist executives brought that stuff with them.
1:00:00 And I think it was probably bad for a lot of cultures.
1:00:02 It defocused them from their mission.
1:00:04 It defocused them from their customers.
1:00:06 It defocused them from doing the things that were actually important.
1:00:11 And the first person to speak out against that was Brian Armstrong that I remember like
1:00:13 in a very public and visible way.
1:00:17 And then Tobi Lütke followed him not long after.
1:00:20 And they said, no, the workplace is not about that.
1:00:22 It’s about X, Y, and Z.
1:00:24 And if you don’t like it, like basically leave.
1:00:24 Yeah.
1:00:29 And was that the moment where we started to go back to founder mode effectively?
1:00:31 I think it took some time.
1:00:33 I think Brian was incredibly brave for doing that.
1:00:33 Totally.
1:00:35 And he got a lot of flack for it.
1:00:35 And I think it-
1:00:36 They tried to cancel him.
1:00:39 They tried to cancel him aggressively, which was sort of the playbook, right?
1:00:42 Oh, and this was happening inside of companies too, right?
1:00:44 You’d say something and you’d get canceled for it.
1:00:47 And so you can have a real conversation around some of these things.
1:00:48 And again, that just reinforced it.
1:00:52 And I think Brian stepping forward made a huge difference.
1:00:54 To your point, Tobi, I think, did it really well.
1:01:00 I still sometimes send the essay that he wrote for that to other people where he had a few
1:01:04 central premises, which is we have a specific mission and we’re going to focus on that.
1:01:05 We’re not focusing on other things.
1:01:07 We’re not a family.
1:01:07 We’re a team.
1:01:08 Yeah.
1:01:09 Right?
1:01:12 The family is like, hey, your uncle shows up drunk all the time.
1:01:14 You kind of tolerate it because it’s your uncle.
1:01:18 If somebody showed up drunk at work all the time, you shouldn’t tolerate that, right?
1:01:19 You’re not a family.
1:01:20 You’re a sports team.
1:01:22 You’re trying to optimize for performance.
1:01:26 You’re trying to optimize for the positive interchange within that team.
1:01:30 And you want people pulling in the direction of the team, not people doing their own thing,
1:01:31 which is a family, right?
1:01:35 And so I think there was a lot of these kind of conversations or discussions that were more
1:01:40 like it’s a family and bring yourself to work and all the holisticness of yourself.
1:01:44 And it’s actually, well, no, you probably shouldn’t show up at work drunk and, you know,
1:01:45 look at bad things on the internet.
1:01:49 You know, you should focus on your job and you should focus on good collaboration with your
1:01:50 co-workers and things like that.
1:01:57 You’re around a lot of outlier CEOs, not only in the context of you know them, but
1:01:58 you hang out with them.
1:01:59 You spend a lot of time with them.
1:02:03 What are sort of the common patterns that you’ve seen amongst them?
1:02:06 Are there common patterns or is everybody completely unique?
1:02:10 But I imagine that at the core, there’s commonality.
1:02:11 Yeah.
1:02:13 You know, this is something I’ve been kind of riffing on lately, and I don’t know if it’s
1:02:15 quite correct, but I think there’s like two or three common patterns.
1:02:19 I think pattern one is there are a set of people who are, and by the way, all these people
1:02:24 are like incredibly smart, you know, incredibly insightful, et cetera, right?
1:02:28 So they all have a few common things.
1:02:30 But I do think there’s two or three archetypes.
1:02:32 I think one of them is just the people who are hyper-focused.
1:02:34 They don’t get involved with other businesses.
1:02:36 They don’t do a lot of angel investments.
1:02:38 They don’t, you know, do press junkets that don’t make sense.
1:02:40 They just stay on one track.
1:02:44 And a version of that was Travis from Uber.
1:02:47 I knew him a little bit before Uber, and I’ve, you know, run into him once or twice since
1:02:50 then, but like, he was always just incredibly focused.
1:02:52 He used to be an amazing angel investor.
1:02:55 I think he made great investments, but he stopped doing it with Uber, and he just focused on Uber.
1:02:59 And as far as I know, he never sold secondary until he left the company, right?
1:03:02 He was just hyper-focused on making it as successful as possible.
1:03:04 So that’s one class of archetype.
1:03:12 There’s a second class, which I’d view as people who are equally smart and driven, but a bit more,
1:03:15 polymathic may be the wrong word, but they just have very broad interests, and they express
1:03:17 those interests in different ways while they’re also running their company.
1:03:22 And often they have a period where they’re just focused on their company, and then they
1:03:23 add these other things over time.
1:03:28 And so examples of that, I mean, obviously Elon Musk is now that, right?
1:03:29 In terms of all that.
1:03:35 Patrick Collison is that: he’s running a biology institute called Arc, or Silvana Konermann
1:03:38 and the other Patrick, Patrick Hsu, are running it alongside him.
1:03:44 Brian Armstrong is now running a longevity company in parallel to Coinbase, or he has somebody
1:03:45 running it.
1:03:51 So there’s a lot of these examples of people doing a second thing, a third thing, and doing it in other fields.
1:03:56 Honestly, that’s a little bit of a new development relative to what you were allowed to do before,
1:04:01 because there’s both activist investors who try to prevent that, and public markets in
1:04:02 particular.
1:04:07 But also, it was just a different mindset of how do I show impact over time?
1:04:12 Are these people going from the first one, hyper-focused, to this?
1:04:16 Or were they always sort of, I don’t want to use the word dabble because it really understates
1:04:19 how focused they are on their businesses.
1:04:24 But are they always like that, and as they get larger, it scales differently?
1:04:30 Or is it, no, we’ve gone from sort of the first, which is this hyper-focus, to the second?
1:04:36 I think it’s more like when you talk to them, the way that they think about the world and
1:04:40 the set of interests they have is a little bit different from the first group of folks.
1:04:43 And I’m not talking about Travis specifically, because I didn’t know him well enough to have
1:04:44 a perspective on that.
1:04:49 But I just mean more generally, I’ve noticed that they have this commonality of when you
1:04:55 talk to them very early, they’re like 20 years old or whatever, and you meet them, the set
1:04:57 of interests that they have is very, very broad.
1:05:02 And they tend to go very deep on each thing that they get interested in, whether it benefits
1:05:03 them or not.
1:05:04 They just go deep on it, right?
1:05:05 Because it’s interesting.
1:05:09 They’re driven by a certain form of interestingness, in addition to being driven by impact.
1:05:13 And then I think there’s a third set of people who end up with outsized successes.
1:05:16 And sometimes that’s just product market fit.
1:05:18 And then they grow into the role, you know?
1:05:24 And so there’s some businesses that just have either such strong network effects or just such
1:05:26 strong liftoff early on.
1:05:29 And they’re obviously very smart people and all the rest of it, but you don’t feel that
1:05:33 same drive underlying it or that same need to do big things.
1:05:34 It’s almost accidental.
1:05:37 And you sometimes see that.
1:05:39 Would you say that’s more luck?
1:05:41 I don’t know.
1:05:45 I mean, say somebody is really good at product market fit, but they’re not that aggressive.
1:05:47 And once they hit a certain level, they’re not that ambitious.
1:05:49 Part of it too is like, what’s your utility curve?
1:05:50 Like, what do you care about in life?
1:05:52 Do you care about status?
1:05:53 Do you care about money?
1:05:54 Do you care about power?
1:05:55 Do you care about impact?
1:05:57 Do you do things because it’s interesting?
1:05:58 Like, why do you do stuff?
1:06:04 And imagine people where that is a big part of everything they do, right?
1:06:07 Because I think the average person may have mixes of that, but they’re also just happy
1:06:08 going to their kids and hanging out, you know?
1:06:10 And like, it’s a different life, right?
1:06:16 Like, the average Google engineer is not going to be this insanely driven, hyper, you know,
1:06:17 hyper drive person anymore.
1:06:20 What do you think keeps people going?
1:06:25 I mean, a lot of people become successful and maybe they hit whatever number they have in
1:06:29 their head that they can like retire comfortably or live the life they want to live and they
1:06:31 become complacent.
1:06:32 Maybe not intentionally.
1:06:36 I mean, they’re not thinking that way, but they take their foot off the gas and, you
1:06:39 know, all of a sudden I’m focused on 10 different things instead of one thing.
1:06:45 And then there’s another subset of people that are like, they just blow right by that and
1:06:46 they keep going.
1:06:50 And whether it’s a hundred million or a billion or a 10 billion or, you know, in Elon’s case,
1:06:53 a hundred billion or more, but they keep going.
1:06:54 Yeah.
1:06:55 It’s back to what’s your utility, like, what do you care about?
1:06:56 What’s your utility function?
1:06:57 What’s driving you?
1:07:02 And based on what’s driving you, like the people that I know who have been very successful
1:07:03 and driven solely by money end up miserable.
1:07:07 Because they have money and then, and then what?
1:07:08 It’s never enough.
1:07:09 What do you do then?
1:07:10 Well, it’s not just never enough.
1:07:12 It’s just, what do you do?
1:07:13 What fulfills you?
1:07:15 You can already buy everything you could ever buy.
1:07:17 Like what fulfills you?
1:07:22 And you also see versions of this where you see people who make it and then they don’t know
1:07:23 what to do with themselves.
1:07:24 I think I mentioned this earlier.
1:07:28 There’s one guy I know who’s incredibly successful and he spends all his time buying domain names.
1:07:34 You’re like, well, is that fulfilling or, you know, it’s almost like what’s your meaning or purpose?
1:07:41 I feel like the people who end up doing these other things have some broader meaning or purpose driver even very early on.
1:07:43 And obviously people want to win and all the rest.
1:07:47 There’s this really good framework from Naval Ravikant.
1:07:53 And so in the 90s, John Doerr, who’s one of the giants, the legends of investing, used to ask founders,
1:07:54 are you a missionary or mercenary?
1:07:59 And of course, the question that you were expected to say is, I’m a missionary, right?
1:08:02 I’m doing it because it’s the right work to do and all this.
1:08:09 And Naval’s framework is like, when you’re young, of course, you’re at least half, if not more, mercenary.
1:08:10 Yeah.
1:08:11 You want to make it.
1:08:11 You’re hungry.
1:08:12 You don’t have any money.
1:08:13 You need to survive.
1:08:16 You know, you’re driven because of that in part.
1:08:23 And then in the middle phase of your career or life, you’re more of a missionary if you’re not a zero-sum person, right?
1:08:24 You suddenly can have a broader purpose.
1:08:25 You can do other things.
1:08:26 You can engage.
1:08:28 And then he’s like, late in your life, you’re an artist.
1:08:30 You do it for the love of the craft, right?
1:08:42 I much prefer that framework. The people that I see who do the most interesting, big things over time fall into that latter category, where at first there is some mercenary piece.
1:08:45 Of course, you want to have money to survive and all this stuff.
1:08:49 And then that morphs into you become more mission-centric.
1:08:52 And then over time, you just do it for the love of whatever the thing you’re doing is.
1:08:54 And those are the people that I see that become happy over time.
1:08:58 What’s the difference between success and relevance?
1:09:03 Yeah, it’s a great question because there’s lots of different ways to define success.
1:09:06 Success could mean I have a million Instagram followers.
1:09:09 It depends on your own version of success, right?
1:09:13 So, societally, one of the big versions of success is a big financial outcome.
1:09:16 One could argue a bigger version of that is like a happy family.
1:09:18 You know, like there’s lots of versions of success.
1:09:26 Relevance means that you’re somehow impacting things that are important to the world and people seek you out because of that.
1:09:28 Or alternatively, you’re just impacting things, right?
1:09:32 But usually, people end up seeking you out because of that for a specific thing.
1:09:37 And the amazing thing is that there’s lots and lots of people who’ve been successful who are no longer relevant.
1:09:44 You just look at the list of even the billionaires or whatever metric you want to use and like how many of those people are actually sought out.
1:09:45 Yeah.
1:09:47 Because they’re doing something interesting or important.
1:09:51 And so, there’s this interesting question that I’ve been toying with, which is,
1:09:55 are there characteristics to people who stay relevant over very long arcs of time?
1:09:59 People are constantly doing interesting things, right?
1:10:05 One could argue Sam Altman has sort of maintained that over a very long arc, between YC and the early things he was involved with on the investing side.
1:10:08 And then, of course, now OpenAI and other areas.
1:10:12 Patrick is obviously doing that between Stripe and Arc and other areas.
1:10:16 And there’s people with longer arcs than that, right?
1:10:21 Marc Andreessen invented the browser, or was one of the key people behind that.
1:10:26 And then started multiple companies, including Netscape, which was a giant of the internet.
1:10:28 And then started, you know, one of the most important venture firms in the world.
1:10:33 And so, that’s a great example of a very, very strong arc over time.
1:10:36 Or Elon Musk is a very strong arc over time, right?
1:10:38 From Zip2 to PayPal to all the stuff he’s done now.
1:10:41 So, the question is, what do those people have in common?
1:10:42 Peter Thiel, right?
1:10:48 Think of all the stuff he’s done across politics and the Thiel Fellows and the funds and Palantir and Facebook and all this stuff.
1:10:57 The commonality that stands out to me across all those people is they tend to be pretty polymathic.
1:10:58 So, they have a wide range of interests.
1:11:04 They tend to be driven by a mix of stuff, not just money.
1:11:07 So, of course, money is important and all the rest.
1:11:10 But I think for a subset of people, it’s interestingness.
1:11:11 For a subset, it’s impact.
1:11:12 For a subset, it’s power.
1:11:14 For whatever it is, but there’s usually a blend.
1:11:17 And for each person, there’s a different spike across that.
1:11:21 And the other, I think, commonality is almost all of them had some form of success early.
1:11:31 Because the thing that people continue to underappreciate is kind of like the old Charlie Mungerism that the thing he continues to underappreciate is the power of incentives, right?
1:11:34 The thing I continue to underappreciate is the power of compounding.
1:11:40 And you see that in investing and financial markets, but you also see that in people’s careers and impact.
1:11:46 And the people who are successful early have a platform upon which they can build over time in a massive way.
1:11:50 They have the financial wherewithal to take risks or fund new things.
1:11:53 And importantly, they’re in the flow of information.
1:11:58 You start to meet all the most interesting people thinking the most interesting things.
1:12:03 And you can synthesize all that in this sort of pool of ideas and thoughts and people.
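As a back-of-the-envelope sketch of that compounding point: the 15% annual rate and the time horizons below are invented assumptions, chosen only to show the shape of the effect.

```python
# Back-of-the-envelope compounding: two careers with the same annual
# "compounding rate" of capital, relationships, and information flow,
# but one gets its first success ten years earlier. Rates are invented.

def platform(years: int, rate: float = 0.15, start: float = 1.0) -> float:
    # Each year of access and credibility multiplies the existing base.
    return start * (1 + rate) ** years

early = platform(years=30)  # first success at the start of a 30-year arc
late = platform(years=20)   # the same person starting ten years later

print(f"early starter: {early:.1f}x, late starter: {late:.1f}x, "
      f"ratio: {early / late:.1f}x")
# At 15%/year, a ten-year head start ends up roughly a 4x difference.
```

Whatever the real rate is, the shape survives: an early platform compounds, because each year of access and credibility feeds the next.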
1:12:08 This is full circle back to almost where we started, right?
1:12:17 Like how important is that flow of information to finding the next opportunity, to capitalizing on other people’s mistakes, to staying relevant?
1:12:19 Yeah, there’s two types of information.
1:12:24 There’s information that’s hidden.
1:12:27 And there’s information that…
1:12:30 So I’ll give you an example, right?
1:12:36 When I started investing in generative AI, all these early foundation model things, et cetera, basically nobody was doing it.
1:12:39 And it was all out in the open, right?
1:12:41 GPT-3 had just dropped.
1:12:43 It was clearly a big step function from two.
1:12:46 If you just extrapolated that, you knew really, really interesting things were going to happen.
1:12:48 And people were using it internally in different ways at these companies.
1:12:54 And so it was in plain sight that GPT-3 existed out there, but very few people recognized that it was that important.
1:12:56 And so the question is why, right?
1:12:57 The information was out there.
1:13:05 There’s other types of information that early access to helps impact how you think about the world.
1:13:07 And sometimes that could just be a one-on-one conversation.
1:13:10 Or sometimes, again, they could be doing things out in the open.
1:13:16 And so, for example, all the different things that Peter Thiel talked about and cited, like, 10 years ago ended up being true.
1:13:19 Not all, but a lot of them, right?
1:13:21 So wait, let me go through some of these.
1:13:29 So there’s: I found information that is publicly available that you haven’t found.
1:13:33 There’s, I weigh the information differently than you do.
1:13:33 Yeah.
1:13:35 So I weigh the importance of it differently.
1:13:40 And then there’s access where I have access to information that you don’t have.
1:13:42 Are there other types of information advantage?
1:13:47 No, because I think the one where you interpret it differently that you mentioned has all sorts of aspects to that.
1:13:48 Go deeper on that.
1:13:50 Well, do you have the tooling to do it?
1:13:52 Do you need a data scientist, right?
1:13:53 It’s all the algorithmic trading stuff.
1:13:57 All the information’s out there, but can you actually make use of it?
1:14:00 There’s, do you have the right filter on it?
1:14:04 Do you pick up or glean certain insights or make intuitive leaps that other people don’t?
1:14:08 You know, there’s all the different, it’s sort of like when people talk about Richard Feynman, the physicist.
1:14:14 And they said, with other physicists who won Nobel Prizes, they’re like, oh yeah, I could understand how that person got there.
1:14:16 It’s this chain of logical steps and maybe I could have done that.
1:14:19 They’re like with Feynman, he just did these leaps and nobody knew how he did it.
1:14:26 And so I do think there’s people who uniquely synthesize information in the world and come to specific conclusions.
1:14:34 And those conclusions are often right, but people don’t know how they got there.
1:14:40 You’re bringing it back to clusters and all the stuff about information and how to think about it and how to interpret it.
1:14:41 It’s all about being in a cluster.
1:14:44 How do you go about constructing a better cluster?
1:14:55 Like if you take the presumption that there’s material that goes into my head, whether I’m reading, you know, conversing, or searching.
1:15:03 How do I improve the quality of the information, through a cluster or not, that my raw material is built on later?
1:15:05 Yeah, I think it’s a few things.
1:15:09 And I think different people approach these processes in different ways.
1:15:15 And this is back to: the best people somehow tend to aggregate, or maybe best is the wrong word.
1:15:24 There’s a bunch of people with common characteristics, a subset of whom become very successful, that somehow repeatedly keep meeting each other quite young in the same geography.
1:15:26 And again, it’s happened throughout history.
1:15:33 And so, A, there’s clearly some attraction between these people to talking to each other and hanging out with each other and learning from each other.
1:15:37 And sometimes you meet somebody and you’re like, wow, I just learned a ton off of this person in like 30 minutes.
1:15:43 And this was a great conversation versus, okay, yeah, that was nice to meet that person.
1:15:44 They’re nice or whatever, you know.
1:15:56 And I feel like a lot of folks who end up doing really big interesting things just somehow meet or aggregate towards these other people and they all tell each other about each other and they hang out together and all the rest.
1:16:00 And so, I do think there’s sort of self-attraction of these groups of people.
1:16:04 Now, the internet has helped create online versions of that.
1:16:12 There’s been a lot of talk now about these IOI or gold medalist communities where people do like math or coding competitions or other things.
1:16:17 Scott, the CEO of Cognition, is a great example of that where he knows a lot of founders in Silicon Valley.
1:16:19 And one of the reasons they all know each other is through these competitions.
1:16:25 And there’s a way to aggregate people growing up all over the country or all over the world who never would have connected.
1:16:26 And then they connect through these competitions.
1:16:29 And so, that’s become a funnel for a subset of people.
1:16:37 So, the move towards the internet, I think, has actually created a very different environment where you can find more like-minded people than you ever could before, right?
1:16:39 Because before, how would you find people?
1:16:42 And how would you even know to go to Silicon Valley?
1:16:46 Do you think it’s true that if I change your information flow, I can change your trajectory?
1:16:53 And if so, what are the first steps that people listening can take to get better information?
1:17:00 If you want to work in a specific area and be top of your game in that area, you should move to the cluster for whatever that is.
1:17:00 Yeah.
1:17:02 So, if you want to go into movies, you should go to Hollywood.
1:17:05 If you want to go into tech, you should go to Silicon Valley, if you want to, you know, etc.
1:17:11 And the whole, hey, you can succeed at anything from anywhere is kind of true, but it’s very rare.
1:17:13 And why make it harder for yourself?
1:17:14 Yeah.
1:17:15 Why play on hard mode?
1:17:15 Yeah.
1:17:19 How do you think about that in terms of companies and remote work?
1:17:31 Like, we were talking about this a little bit before we hit record in the sense of, you know, one of the things that people lose is the culture of the company and feeling part of something larger than themselves.
1:17:36 How does that impact the quality of work we do or the information flow we have?
1:17:42 There’s no more water cooler conversation where, like, hey, you know, in that presentation, you should have done this, not that.
1:17:42 Yeah.
1:17:43 No, that’s a great point.
1:17:44 I think it’s interesting.
1:17:53 If a company is really young and still very innovative, I think a lot of remote work tends to be quite bad in terms of the success of the company.
1:17:54 Now, that doesn’t mean it won’t succeed.
1:17:55 It just makes it much harder.
1:18:00 And a company I backed, I don’t know how long ago now, 14 years or something like that, was GitLab.
1:18:02 Which has done quite well.
1:18:03 It’s a public company now, et cetera.
1:18:08 And they were one of the very first remote first companies.
1:18:10 And so when I backed them, it was like four people or something.
1:18:11 I can’t remember, four or five people.
1:18:13 They were fully remote.
1:18:14 They stayed remote forever.
1:18:17 And they built a ton of processes in to actually make that work.
1:18:18 And they were brilliant about it.
1:18:24 And they actually have all this published on their website where you can go and you can read hundreds of pages about everything they’ve done to enable remote work.
1:18:30 Everything from, like, how they thought about salary bands based on location on through to processes and all the rest.
1:18:37 And it was a very quirky, it may still be, culture where I’d be talking to the CEO and he’d say, oh, this conversation is really interesting.
1:18:43 And he dropped the link to our Zoom into a giant group chat and random people just start popping in while we’re talking.
1:18:44 Oh, wow.
1:18:45 You know, and you’re like, who are these people?
1:18:48 Like, we’re just talking about should you do a RIF and, like, 30 people just joined.
1:18:49 Like, is this a good idea?
1:19:01 It was a very, and it probably still is, very innovative, very smart culture, very process driven, you know, very just excellent at saying, okay, if we’re going to be remote, let’s put in place every single control to make that work.
1:19:03 So they’re very smart about that.
1:19:06 I have not seen many other companies do anything close to that.
1:19:13 And so I think for very early companies, the best companies I know are almost 100% in person.
1:19:15 And there’s some counter examples of that.
1:19:16 And crypto has some nuances on that.
1:19:18 And, you know, which is a little bit different.
1:19:22 But for a standard AI, tech, SaaS, et cetera, that’s generally the rule.
1:19:29 As a company gets later, you’re definitely going to have remote parts of your workforce, right?
1:19:30 Parts of your sales team are remote.
1:19:32 Although really, they should be at the customer site, right?
1:19:35 Remote should mean customer site or home office or something, right?
1:19:37 It shouldn’t mean truly remote.
1:19:42 But, and you always, even 10 years ago or whatever, would make exceptions, right?
1:19:48 You’d say, well, this person is really exceptional and I know them well and they’re moving to Colorado and we’ll keep this person because we know that they’re, you know, as productive.
1:19:51 They’re more productive than anybody else on the team, even if they’re not going to be in the office every day.
1:19:57 Later stage companies, there’s this really big question of like, how much of your team do you want to be remote?
1:19:58 How many days a week?
1:20:07 And then is enforcing a no-remote policy just also enforcing that you’re prioritizing people who care about the company more than they care about other things?
1:20:07 Right.
1:20:12 And each CEO needs to come and make a judgment call about how important that is.
1:20:15 How much does that impact how they can participate in global talent?
1:20:17 Because that’s often the question or concern.
1:20:19 So there’s like a set of trade-offs.
1:20:26 I mean, the argument for it, I guess, is like it’s more flexible for employees if that is part of what you’re optimizing for.
1:20:31 But we can also hire world-class talent that we might not be able to hire otherwise.
1:20:31 Yeah.
1:20:34 And I don’t know if I 100% buy that, but it’s possible.
1:20:40 I’ve been in the sauna at the gym with a number of people on like Microsoft Teams calls.
1:20:42 Yeah, you can see people who are clearly not working.
1:20:50 Now, the flip side of that is, you know, there are certain organizations that you knew people weren’t working very hard at before things went remote, right?
1:20:58 Like some of the big tech companies before COVID, you’d go in and it’d be pretty empty until like 11 and then people would roll in for lunch and then they’d leave at like 2.
1:21:08 And so one argument I make sometimes is that big tech is effectively a big experiment in UBI, universal basic income, for people who went to good schools, right?
1:21:12 You’re literally just giving money to people for not doing very much in some cases.
1:21:19 Do you think that that’s starting to change and the complacency maybe that caused that is starting to go away as we get into this?
1:21:23 Like it seems like we had this, everybody was super successful.
1:21:26 They all had their own area, but now we have a new race.
1:21:27 Like we have to get fit again.
1:21:32 You know, it’s kind of like the person who goes to the gym and never breaks a sweat.
1:21:35 If you’re talking about fitness, you know, they lift away and they’re like, I’m going to get on my phone now.
1:21:37 That’s what I feel like has basically happened.
1:21:47 And so I think the reality is if you look at what Musk did at Twitter, where they cut 80% or whatever it was, I wouldn’t be surprised if you could do things that are pretty close to that at a lot of the big tech companies.
1:21:49 That’s fascinating.
1:21:57 One of the things that we talked about was sort of how the best in any field, there’s sort of like 20 people who are just exceptional.
1:21:58 Go deeper on that for me.
1:21:59 Yeah.
1:22:00 So we were talking about clusters, right?
1:22:01 So there’s geographic clusters.
1:22:03 Like, hey, all of tech is happening in one area.
1:22:07 And honestly, all of AI is happening in like, you know, a few blocks, right?
1:22:09 If you were to aggregate it all up.
1:22:14 So there’s these very strong cluster effects at the regional level.
1:22:20 And then as we mentioned, there’s groups of people who keep running into each other who are kind of the motive force for everything.
1:22:29 And if you look at almost every field, there’s at most a few dozen, maybe for very big fields, a few hundred people who are roughly driving almost everything, right?
1:22:36 You look at cancer research, and there’s probably 20 or 30 labs that are the most important labs where all the breakthroughs come out of it.
1:22:40 Not just that: the lineage of those labs, the people they came from, is in common.
1:22:47 And the people who end up being very successful afterwards all come from one of those, or mainly come from those same labs.
1:22:49 You actually see this for startups, right?
1:22:54 My team went back and we looked at where do all the startup founders come out of school-wise.
1:22:58 And three schools dominate by far in terms of big outsized outcomes.
1:23:01 Stanford is number one by far, and then MIT and Harvard.
1:23:07 And then there’s a big step down, and there’s a bunch of schools that have some successes, Berkeley and Duke and a few others.
1:23:10 And then there’s kind of everything else, right?
1:23:15 And so there are these very strong rules of like lineage of people as well, right?
1:23:19 And oddly enough, you see this in religious movements, right?
1:23:20 The lineage really matters.
1:23:23 Schools of yoga, the lineage really matters.
1:23:25 Like all these things, the lineage really matters.
1:23:30 And so what you find is that in any field, there’s a handful of people who drive that field.
1:23:32 And a handful, again, could be in the tens or maybe hundreds.
1:23:33 And that’s true in tech.
1:23:40 Like, you know, there was probably early on 20, 30, whatever, maybe 100 at most AI researchers who were driving much of the progress.
1:23:42 There’s a bunch of ancillary people, but there’s a core group.
1:23:45 That’s true in areas of biology.
1:23:46 That’s true in finance.
1:23:49 That’s, you know, and eventually most of these people end up meeting each other, right?
1:23:53 In different forms, and some become friends, and some become rivals, and some become both.
1:23:56 But it’s surprising how small these groups are.
1:24:03 And a friend of mine and I were joking that we must be in a simulation because we keep running into the same people
1:24:05 over the 10 or 20-year arc who keep doing the big things.
1:24:06 Yeah.
1:24:10 Does that mean those people are almost perpetually undervalued?
1:24:14 Especially if it’s not a CEO and they’re running their own show, if it’s a researcher.
1:24:22 If you take the hypothesis that maybe there’s only 20 people, 20 great investors, or, you know,
1:24:28 20 great researchers, or 20 great whatever, but they’re employees of somebody else,
1:24:30 then they’re perpetually undervalued?
1:24:35 Because it’s like, no matter how much I’m paying you, it’s almost not enough.
1:24:38 Because you’re going to drive this forward.
1:24:39 Yeah, it depends on how you define greatness.
1:24:40 Yeah.
1:24:42 If somebody is the world’s best kite flyer.
1:24:43 Yeah.
1:24:45 No, seriously, though, right?
1:24:48 Like, there’s going to be a handful of people who are the best at every single thing.
1:24:50 But there’s not a ton of economic value created by that.
1:24:52 Yeah, and so that’s the question, right?
1:25:00 And so, you know, part of the question is, what is the importance of each person relative
1:25:02 to an organization or field?
1:25:05 And then are they properly recognized or rewarded relative to those contributions?
1:25:06 And if not, why not?
1:25:07 And if so, then great.
1:25:10 And so I think there’s a separate question, right?
1:25:12 Of rewards, effectively.
1:25:16 And rewards could be status, it could be money, it could be influence, it could be whatever it is.
1:25:19 What else have you guys learned about investing in startups?
1:25:25 So you had these clusters like, oh, you know, most people come from Stanford, MIT, or Harvard.
1:25:26 Yeah.
1:25:30 What are the other things that you’ve picked up that you were like, oh, that’s surprising
1:25:34 or counterintuitive or challenges an existing belief that I had?
1:25:37 Oh, I mean, I’ll give you one that challenges and then I’ll give you one that I think is consistent.
1:25:40 Maybe I’ll start with a consistent one, which is back to clusters.
1:25:44 We take all of the market cap of private companies worth a billion dollars or more.
1:25:48 And every quarter or two, we basically look at geographically where are they based, right?
1:25:52 And traditionally, the US has been about half of that globally.
1:25:54 The Bay Area has been about half of that.
1:25:59 So 25% of all private technology wealth creation happens in one place, right?
1:26:00 In one city.
1:26:03 If you add in New York and LA, then you’re at like 40% of the world.
1:26:04 Wow.
1:26:05 Right?
1:26:06 And LA is mainly SpaceX and Anduril.
1:26:07 Yeah.
1:26:10 So it’s very concentrated, right?
1:26:14 That’s why when I see venture capitalists build these global firms with branches everywhere,
1:26:15 you’re like, why?
1:26:19 You know, like from a resource allocation perspective, unless you’re just trying to, you know, have
1:26:20 a specific footprint for some reason.
1:26:27 And if you look at AI, it’s like 80 to 90% of the market cap is all in the Bay Area.
1:26:29 Right?
1:26:30 And so it’s a super cluster.
1:26:33 And you see that going the other way.
1:26:37 Like for fintech, a lot of the value of fintech was split between New York and the Bay Area.
1:26:37 Yeah.
1:26:41 So one aspect of it is these things are actually more extreme than you’d think for certain areas.
1:26:50 And space and defense is roughly all, or was, Southern California until SpaceX moved some of its operations.
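Spelling out the arithmetic behind those shares (using the rough figures quoted above, not measured data):

```python
# Rough arithmetic behind the concentration figures quoted above.
# These are the speaker's approximate shares, not measured data.

world = 1.0
us = 0.5 * world     # the US: about half of global private $1B+ market cap
bay_area = 0.5 * us  # the Bay Area: about half of the US share

print(f"Bay Area share of the world: {bay_area:.0%}")  # ~25%

total_with_ny_la = 0.40  # adding New York and LA reaches ~40% of the world
ny_la = total_with_ny_la - bay_area
print(f"Implied NY + LA share: {ny_la:.0%}")           # ~15%
```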
1:26:53 The counterintuitive thing is more tactical things.
1:26:58 So, you know, there’s a few things that people say a lot in Silicon Valley that just aren’t correct.
1:27:05 So if you look, for example, there’s this thing that you should always have a co-founder or an equal co-founder.
1:27:11 And if you look at the biggest successes in the startup world over time, they were either solo founders or very unequal founders.
1:27:15 And there are counterexamples to that, of course, but that was Amazon, right?
1:27:16 Jeff Bezos was the only founder.
1:27:19 Microsoft, it was unequal.
1:27:21 And eventually the other founder left.
1:27:26 You know, you kind of go through the list and there aren’t that many where there was true equality, you know.
1:27:30 But it’s now kind of this myth that you should be equal with your co-founder.
1:27:32 And I think there’s negative aspects to doing that.
1:27:37 A second thing that’s a little bit counterintuitive is reference checks on founders.
1:27:42 So if you get a positive reference check on someone, then it’s positive.
1:27:46 If you get a negative reference check on a founder, it’s usually neutral.
1:27:50 Unless people are saying they’re ethically bad or there’s some issue with them or whatever.
1:27:52 But there’s two reasons for that.
1:27:55 One is I think product-market fit trumps founder fidelity.
1:27:58 And so like, you could be kind of crappy, but if you hit the right thing, you can do really well.
1:28:01 But the other piece of it is it’s contextual.
1:28:11 Like somebody who’s kind of lazy and not great in one environment may actually be much better when they’re responsible and they need to drive everything.
1:28:18 And, you know, as an example of that, there was somebody I worked with at Twitter who was a very nice person, but never really seemed that effective to me.
1:28:20 He was always kind of hanging out, drinking coffee, chatting.
1:28:24 And then a few years later, I met up with him and he was running a very successful startup.
1:28:25 And I said, what happened?
1:28:27 I mean, I said it nicer than that, right?
1:28:28 Yeah, of course.
1:28:29 Like, hey, like, it’s so interesting.
1:28:30 You built this great company.
1:28:31 Like, you know.
1:28:32 He said, you know what?
1:28:34 I finally feel like my ass is on the line.
1:28:36 And that’s why I’m working so hard.
1:28:37 And that’s why I’m so, you know.
1:28:43 Now, in general, I think that the true giant outsized-success archetype is somebody who can’t tolerate that.
1:28:44 Right.
1:28:44 Right.
1:28:46 They’re always on and they can’t help it.
1:28:52 But there are examples where the context of the organization and the context of your situation really shapes what you do.
1:28:59 When you invested in Anduril, you mentioned you had criteria and they checked it all.
1:28:59 Sure.
1:29:05 What was your mental model? Like, if I’m going to invest in a tech-forward defense company, it needs to have X, Y, Z.
1:29:06 What was that criteria?
1:29:18 Yeah, so Anduril happened in a unique moment in time where Google had just pulled out of Project Maven and defense had suddenly become very unpopular in Silicon Valley, and people were making arguments that ethically you shouldn’t support the defense industry.
1:29:26 All of which I thought was pretty ridiculous, because if you cared about Western values and you wanted to defend them, of course you needed defense tech.
1:29:34 So I started looking around to see who’s building interesting things in defense because if the big companies won’t do it, then what a great opportunity for a startup, right?
1:29:36 It seemed like a good moment in time.
1:29:45 And it felt like there were four or five things that you needed in order to build a next-gen defense tech company, because there was a bunch of defense tech companies that either never worked or stalled at small scale.
1:29:49 Number one is you needed a why now moment for the technology.
1:29:53 What is shifting in technology that the incumbents can’t just tack on, right?
1:30:00 Because the way the defense industry works is there’s a handful of players called primes who sell directly to the DoD and they subcontract out everything else, right?
1:30:10 And if you’re not a prime and you don’t have a direct relationship, then you end up in a bad spot in terms of being able to really win big programs and survive as a company or succeed.
1:30:14 So number one is what is the technology why now that creates an opening?
1:30:17 For Anduril, it was initially machine vision and drones, which were new things.
1:30:22 Two is, are you going to build a broad enough product portfolio that you can become a prime?
1:30:23 Right.
1:30:25 Which they did from day one.
1:30:32 Third is, do you have the connectivity and the ability to really focus on a faster sales cycle?
1:30:40 Fourth is, can you raise enough money that you’ll last long enough that you can put up with really long timelines to actually get to these big programs of record?
1:30:43 And I think Anduril did their first program of record in something like three and a half years.
1:30:44 It was remarkably fast.
1:30:48 I think it was the fastest program of record since the Korean War or something, which is super impressive.
1:30:55 And then lastly, the way that the business model for the defense industry works is this thing called cost plus.
1:30:56 Oh, yeah.
1:31:03 So you basically make, say, 5% to 12% on top of whatever your cost to build the product is.
1:31:04 And that includes your labor.
1:31:06 That includes every component.
1:31:10 And that’s why there’s a very big incentive in the defense industry to overrun on time.
1:31:11 Yeah.
1:31:14 Because you’ve charged 10% on that time, right?
1:31:16 So if something’s late, you make more money.
1:31:18 And not have a cost incentive at all.
1:31:19 You have no cost incentive.
1:31:24 That’s why you have a $100 screw, because you make $5 on the screw that costs $100 instead of using a $0.10 screw, right?
1:31:25 Yeah.
1:31:32 And so the cost plus model is extremely bad if you want an efficient, fast-moving defense industry, right?
1:31:36 And they were really focused on trying to create a more traditional hardware margin business,
1:31:44 where an example would be if Lockheed Martin sold a drone to the government for a million dollars and made 5% cost plus, they’d make $50K.
1:31:52 If Anduril sold a $100,000 drone with the same capabilities to the government and had a 50% hardware margin, they’d make $50,000 too.
1:31:54 But the government could buy 10 of them for the same price.
1:31:55 Yeah.
1:31:59 So the government gets 10 times the hardware or the capability set.
1:32:08 Anduril gets 10 times as much margin in total if, again, that structure works, and everybody basically wins, right?
1:32:11 And so I just thought that business model shift was really important.
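To make the arithmetic in this exchange concrete, here is a minimal sketch in Python of the two pricing models being contrasted. The function names and figures are illustrative, taken from the hypothetical numbers in the conversation above, not from any actual defense program.

```python
# A minimal sketch of cost-plus vs. fixed-margin pricing, using the
# illustrative numbers from the conversation (not actual program figures).

def cost_plus_profit(cost: float, plus_rate: float) -> float:
    """Profit under cost plus: a fixed percentage on top of cost,
    so profit grows whenever cost (or schedule) grows."""
    return cost * plus_rate

def fixed_margin_profit(price: float, margin: float) -> float:
    """Profit under a traditional hardware-margin model: a share of the
    sale price, so cutting cost is rewarded rather than penalized."""
    return price * margin

government_budget = 1_000_000

# Incumbent prime: one $1M drone at 5% cost plus.
prime_profit = cost_plus_profit(cost=1_000_000, plus_rate=0.05)  # $50,000

# Startup: $100K drones at a 50% hardware margin.
units = government_budget // 100_000  # 10 drones for the same budget
startup_profit = units * fixed_margin_profit(price=100_000, margin=0.50)

print(f"Prime:   1 drone,   ${prime_profit:,.0f} profit")
print(f"Startup: {units} drones, ${startup_profit:,.0f} profit")
# Prime:   1 drone,   $50,000 profit
# Startup: 10 drones, $500,000 profit
```

The point of the sketch: under cost plus, profit scales with cost, so overruns pay; under a fixed hardware margin, profit scales with units sold, so the same government budget buys ten times the capability while the vendor's total margin also grows tenfold.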
1:32:16 Why now, though, in the sense of why wouldn’t the defense industry encourage more competition?
1:32:19 They know they’re paying cost plus.
1:32:21 They know the screw shouldn’t be $100.
1:32:26 Like, why didn’t they encourage this way before Anduril?
1:32:32 Yeah, I think at the time, cost plus was viewed as the most fair version of it because you’re like, oh, just give me your bill of materials and I know exactly what it costs.
1:32:33 And then you’ll just get a fixed margin.
1:32:35 And so that’s more fair.
1:32:41 And from my budgeting perspective, I know exactly how much budget I need to ask for.
1:32:41 Yeah.
1:32:46 And I think in hindsight, maybe it worked in that moment in time, but it no longer seems applicable.
1:32:50 And then the other thing that’s happened in the defense industry is there’s been massive consolidation over the last 30 years.
1:32:56 And so a lot of the growth of these companies came through M&A, and so you had fewer and fewer players competing for the same business.
1:33:00 And so that also means that it’s back to the oligopoly market structure that we talked about earlier.
1:33:03 How do you see defense changing in the future?
1:33:07 Like, is it less about ships and more about cyber and drones?
1:33:16 And how do we see the future of defense spending in a world where what used to dominate was these billion-dollar ships?
1:33:28 And now we’re in a world of asymmetry where, you know, for a couple million bucks, I might be able to hire the best cyber attack team in the world, or I might be able to buy a thousand drones.
1:33:30 Or how do you think about that?
1:33:33 Like, how do you think about defense in the next five, 10 years?
1:33:44 Yeah, I mean, in general, defense is inevitably going to move to these highly distributed drone-based systems as a major component of any branch of the military.
1:33:54 And it’s not just because it’s faster, cheaper, et cetera, et cetera, but also there’s certain things that you can’t do with a human operator inside the cockpit.
1:34:05 So, for example, you have a plane, and the G-forces that a human-piloted plane can tolerate are much lower than if you’re just a drone and you don’t have to worry about people inside the…
1:34:09 Plus, we must be at a point where AI can outperform a human fighter pilot, I would imagine.
1:34:11 I haven’t kept up on defense.
1:34:20 Yeah, there’s a few different contracts, both in Europe and the U.S., that are moving ahead around autonomous flight and autonomous drones and all the rest of it, autonomous capabilities in general in the air.
1:34:31 You know, I think the thing that people have stuck to so far is if there’s any sort of decision that is involved with, like, killing somebody or hurting something, then you need a human operator to actually trigger it.
1:34:37 And so that way you’re not turning over control to a fully autonomous system, which I think is smart, right?
1:34:42 You don’t want the thing to do the targeting and go after the target and make all these mistakes, right?
1:34:44 You want a human to make that decision.
1:34:48 But we exist in a world where not everybody is going to follow those rules.
1:34:50 That’s true.
1:34:56 And then the question is, what’s the relative firepower of that group of people and how do you deal with them and, you know, what do you do to retaliate and everything else?
1:35:01 I mean, in general, one could argue warfare has gotten dramatically less bloody.
1:35:04 Oh, wait, go deeper on that.
1:35:17 Well, if you think about the type of warfare that happened 150 years ago, or imagine if some equivalent to the Houthis was constantly shooting at your ships 100 years ago, what do you think the response would have been?
1:35:19 Do you think you would have said, ah, don’t worry about it?
1:35:29 Obviously, we’ve become much more civilized in our approach and very thoughtful about the implications of certain ways that people used to fight battles and all the rest of it.
1:35:32 But the way that we deal with problems today is very different from how we used to deal with them.
1:35:38 Is there an equivalent to Anduril, but in the software space, from a defense perspective?
1:35:42 And I mean that as like cyber weapons or cyber defense.
1:35:43 Who’s the best?
1:35:45 Yeah, I’ve been looking around for that for a while.
1:35:49 I don’t think I’ve seen anything directly yet, but it may exist and I may just have missed it.
1:35:52 But I do think things like that are coming.
1:35:58 And you do see some AI security companies emerging, which are basically using AI to deal with phishing threats or other things.
1:36:03 You could argue Material Security is doing that, but there are people working across pen testing and other areas right now as well.
1:36:05 This has been a fascinating conversation.
1:36:09 We always end with the same question, which is what is success for you?
1:36:12 Yeah, you know, I’ve been noodling on that a lot recently.
1:36:25 And I think if I look at the frameworks that exist in certain Eastern philosophies or religions, it’s almost like there are these expanding circles that change with time as you go through your life, right?
1:36:35 Early on, you’re focused more on yourself and your schooling and then you kind of add work and then you add your family and community and then you add society.
1:36:40 And then eventually you become a sadhu and you go off and you meditate in a cave in the forest or whatever.
1:36:44 And different people weigh those different circles differentially.
1:36:49 And, you know, a big transition I’m making right now probably is I’ve been focused a lot on work and family.
1:36:55 And the thing I’m increasingly thinking about is, what are positive things I can do that are more society-level?
1:36:56 Thank you.
1:36:57 This was an awesome conversation.
1:36:58 Oh, no, thanks so much for having me on.
1:36:59 It was really great.
1:37:04 Thanks for listening and learning with us.
1:37:09 Be sure to sign up for my free weekly newsletter at fs.blog slash newsletter.
1:37:19 The Farnam Street website is also where you can get more info on our membership program, which includes access to episode transcripts, my repository, ad-free episodes, and more.
1:37:24 Follow me and Farnam Street on X, Instagram, and LinkedIn to stay in the loop.
1:37:27 Plus, you can watch full episodes on our YouTube channel.
1:37:31 If you like what we’re doing here, leaving a rating and review would mean the world.
1:37:35 And if you really like us, sharing with a friend is the best way to grow this community.
1:37:36 Until next time.
1:37:36 Thank you.
What if the world’s most connected tech investor handed you his mental playbook? Elad Gil, an investor behind Airbnb, Stripe, Coinbase and Anduril, flips conventional wisdom on its head and prioritizes market opportunities over founders. Elad decodes why innovation has clustered geographically throughout history, from Renaissance Florence to Silicon Valley, where today 25% of global tech wealth is created. We get into why he believes AI is dramatically under-hyped and still under-appreciated, why remote work hampers innovation, and the self-inflicted wounds that he’s seen kill most startups.
This is a masterclass in pattern recognition from one of tech’s most consistent and accurate forecasters, revealing the counterintuitive principles behind identifying world-changing ideas.
Disclaimer: This episode was recorded in January. The pace of AI development is staggering, and some of what we discussed has already evolved. But the mental models Elad shares about strategy, judgment, and high-agency thinking are timeless and will remain relevant for years to come.
Approximate timestamps: Subject to variation due to dynamically inserted ads.
(2:13) – Investing in Startups
(3:25) – Identifying Outlier Teams
(6:37) – Tech Clusters
(9:55) – Remote Work and Innovation
(11:19) – Role of Y Combinator
(15:19) – The Waves of AI Companies
(20:24) – AI’s Problem Solving Capabilities
(26:13) – AI’s Learning Process
(30:41) – Prompt Engineering and AI
(32:00) – AI’s Role in Future Development
(34:37) – AI’s Impact on Self-Driving Technology
(40:16) – The Role of Open Source in AI
(43:23) – The Future of AI in Big Players
(44:23) – Regulation and Safety Concerns in AI
(49:11) – Common Self-Inflicted Wounds
(51:34) – Scaling the CEO and Avoiding Conventional Wisdom
(55:21) – Workplace Culture
(58:39) – Patterns Among Outlier CEOs
(1:15:50) – Remote Work and its Implications
(1:18:47) – The Impact of Clusters and Exceptional Individuals
(1:25:41) – Investing in Defense Technology
(1:27:38) – Business Model Shift in the Defense Industry
(1:31:46) – Changes in Warfare
SHOPIFY: Upgrade your business and get the same checkout I use. Sign up for your one-dollar-per-month trial period at shopify.com/knowledgeproject
NORDVPN: To get the best discount off your NordVPN plan go to nordvpn.com/KNOWLEDGEPROJECT. Our link will also give you 4 extra months on the 2-year plan. There’s no risk with Nord’s 30-day money-back guarantee!
Newsletter – The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it’s completely free. Learn more and sign up at fs.blog/newsletter
Upgrade — If you want to hear my thoughts and reflections at the end of the episode, join our membership: fs.blog/membership and get your own private feed.
Watch on YouTube: @tkppodcast
Learn more about your ad choices. Visit megaphone.fm/adchoices