Monopolies vs Oligopolies in AI

Transcript
0:00:06 There’s only been one sin, and that one sin is zero-sum thinking.
0:00:09 We always worry about like, oh, is this defensible?
0:00:12 Oh, will this layer get margin?
0:00:13 Will this layer get value?
0:00:17 And the answer has kind of been unilaterally yes.
0:00:20 The answer has been every layer has gotten value.
0:00:21 Every layer has winners.
0:00:26 These markets are so large, and they’re growing so fast,
0:00:28 we’re actually seeing brand effects take place.
0:00:31 In this phase of model scaling,
0:00:36 a lot of the approaches to scaling don’t generalize.
0:00:42 This gives a ton of room for the application developers to build their own models.
0:00:47 I think that right now, open source is most dangerous
0:00:52 because China is better at it than we are.
0:00:57 Today on the podcast, we’re sharing a conversation from our friends at 20VC
0:00:59 with A16Z general partner, Martín Casado.
0:01:04 They cover the state of AI investing, why the real sin is zero-sum thinking,
0:01:07 how value is being created at every layer of the stack,
0:01:11 and the risks of monopolies versus the reality of concentrated markets.
0:01:13 Let’s get into it.
0:01:18 Martín, man, I love our conversations.
0:01:20 I was so excited when you said you’d join me again.
0:01:22 Thank you so much for doing this, man.
0:01:23 So excited to be here.
0:01:24 It’s great to see you.
0:01:26 Dude, I freaking hate these
0:01:29 "how did you get into venture" intro questions.
0:01:30 So I just want to dive right in.
0:01:32 It is a freaking nuts time.
0:01:39 So starting off, how do you evaluate where we’re at today in the AI investing landscape?
0:01:44 Peak hype cycle, great, super excited, both.
0:01:45 How do you evaluate it?
0:01:48 So I’m kind of of two minds.
0:01:56 One mind is, I do feel like my intuition doesn't really work like it has for the last 20 years.
0:02:09 It’s just the future is very uncertain, and one of the reasons is because this is really the first time software development and software creation is being disrupted.
0:02:12 And so on one hand, I was like, I don’t really know what to think.
0:02:20 On the other hand, observationally, there’s only been one sin, and that one sin is zero-sum thinking.
0:02:24 We always worry about like, oh, is this defensible?
0:02:26 Oh, will this layer get margin?
0:02:28 Will this layer get value?
0:02:31 And the answer has kind of been unilaterally yes.
0:02:34 The answer has been every layer has gotten value.
0:02:35 Every layer has winners.
0:02:39 Things that we thought were silly are making money.
0:02:40 It’s been solved.
0:02:43 There’s profitable companies.
0:02:45 I mean, the business case is there, et cetera.
0:02:49 And so I think the one sin is not playing the game.
0:02:53 Do you agree with the playing the game on the field sentiment?
0:02:57 When we look back at '21, you know, I remember everyone saying playing the game on the field.
0:02:59 I wish I hadn't played the game on the field.
0:03:04 To be transparent, Martín, do you agree that you have to play the game on the field in venture?
0:03:07 I think behavior should follow business.
0:03:08 It shouldn't follow marks.
0:03:11 And I think in 2021, behavior was following marks, right?
0:03:16 It was like the public markets just decided these companies were valued a whole bunch.
0:03:19 Tiger came in with a ton of money and deployed it a whole bunch.
0:03:24 And so I think behavior following investment marks is a bad idea.
0:03:33 But in this case, you have some of the fastest growing companies we’ve ever seen by users, by revenue.
0:03:37 I mean, the amount of value that’s kind of shifted to this is so significant.
0:03:39 And so I think investors’ behavior should follow that.
0:03:41 If not, I mean, what are we doing?
0:03:46 When you think about shifting value, again, I'm diving right in, but this is not our first rodeo.
0:03:51 Like, a lot of people have views on what you said about kind of the disruption of software development.
0:03:54 There is a ton of players in the Vibe coding space.
0:03:57 They are predominantly all sitting on top of Anthropic.
0:03:59 Claude Code is gaining more and more dominance.
0:04:05 How do you think about these providers’ reliance on a tool that could eventually shut them off?
0:04:08 There are two futures to code.
0:04:10 In one future, you've got Anthropic as a monopoly.
0:04:24 In another future, you have, let’s call it an oligopoly, or maybe even a bit more of a market of these coding models.
0:04:26 And they’re just very different futures.
0:04:28 And I think when you answer this question, you have to consider both of these.
0:04:36 I will say the timing of this conversation you and I are having right now is like pretty soon after Claude 4 launched.
0:04:37 And that’s like a major model launch.
0:04:40 And these models are so episodic.
0:04:42 Every time one launches, everybody’s like, it’s the future.
0:04:43 Everything’s going to happen.
0:04:47 Remember the whole Ghibli OpenAI launch?
0:04:49 And we’re like, oh, image is going to change forever.
0:04:52 And then it comes, we’re excited, and then it kind of passes.
0:04:54 And maybe that’ll happen here.
0:04:55 Maybe that won’t.
0:04:56 I don’t know.
0:05:00 But for sure, our perception is colored by that launch.
0:05:01 So let’s consider both of these.
0:05:02 I’m going to consider the first one.
0:05:11 So historically, models don’t really keep much of an advantage because they’re so easy to distill.
0:05:21 And so we've even in the last week seen launches of models, you know, Qwen and Kimi, that came out.
0:05:22 And they’re great.
0:05:24 And people like them, and they adopt them.
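For anyone who wants the mechanics behind "easy to distill": distillation trains a smaller student model to imitate a larger teacher's output distribution rather than hard labels, which is why a frontier model's behavior can leak into cheaper models. Below is a minimal sketch in Python/PyTorch of the classic temperature-scaled formulation (Hinton et al., 2015); the `student`, `teacher`, and `batch` objects are placeholders, not any lab's actual setup.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, batch, optimizer, T: float = 2.0) -> float:
    """One knowledge-distillation step: nudge the student's logits toward the teacher's."""
    with torch.no_grad():
        teacher_logits = teacher(batch)  # the frontier model's behavior, queried as training signal
    student_logits = student(batch)
    # KL divergence between temperature-softened distributions; T*T rescales the gradients
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point Martín is making falls straight out of the code: the teacher only needs to be queryable, not open, for much of its advantage to be copied.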
0:05:29 And in that world where you continue to have new models from different providers, you know, I would never count out Google.
0:05:30 Their coding models are fantastic.
0:05:35 The rumor is that GPT-5 coding is going to be great.
0:05:43 So in this world where you’ve got lots of models coming out from lots of providers, you need to have a consumption layer that’s independent, right?
0:05:56 And so then all of these companies are going to add that, you know, that consumption layer value, like, for example, to non-technical users or to Python users or to professional coders or whatever it is.
0:05:57 And that’s going to be a very healthy layer.
0:06:03 The other future: let's assume that Anthropic is just a monopoly on coding models.
0:06:13 And in that case, what you normally have in these situations is they will decide kind of where it's not profitable for them to enter, or they will change their business model.
0:06:19 Like, maybe they’re like, listen, we want to have the consumption layer, but we’re never going to be like an app dev tool company.
0:06:22 It’s just it’s a different sales motion, a different sales team.
0:06:27 And nobody knows where that stops, but they will put pressure on anybody that they view in their core focus.
0:06:32 And they will do whatever they can to either capture that margin or just capture that market share.
0:06:39 I just think it’s just the wrong time to have this conversation right after a major model launch.
0:06:42 Because like I said, these models are so episodic.
0:06:46 And we always think like we always assume every time a model launches, it’s going to be a monopoly.
0:06:48 And it just really hasn’t been the case.
0:06:55 Going to your zero sum thinking, if you were to put a bet on which future is more likely,
0:06:57 which future do you think is more likely?
0:06:58 Oligopoly.
0:07:02 This is how the cloud, well, this is how the cloud played out.
0:07:06 I think probably the best analog we have is the cloud, right?
0:07:12 You know, the other companies that are behind models can subsidize these things arbitrarily.
0:07:21 And they don’t have to do this in a way, you know, where they have the same economics as an independent company.
0:07:26 And so if you look at how the cloud, remember the cloud, AWS was like 70 or 80% market share early on.
0:07:28 Nobody thought they could ever catch up to them.
0:07:31 You know, they were the massive market leaders that created the category.
0:07:35 I mean, they had way more dominance than Anthropic has now.
0:07:38 And Microsoft and Google are like, you know, that’s an important big market.
0:07:39 We have to be in it.
0:07:41 And they just basically spun their way into it.
0:07:43 And then you ended up with an oligopoly on the clouds.
0:07:45 I see no reason.
0:07:47 I mean, Gemini 2.5 is a great model.
0:07:49 It’s a great model.
0:07:57 And if you actually look at it, you know, on price performance, I would say in many use cases, and it's the one that I actually use as my standard model, it's better than Anthropic.
0:08:00 For some use cases, if you actually, you know, take into account price performance.
0:08:03 And Google can arbitrarily subsidize that too.
0:08:05 Never, you know, count out OpenAI.
0:08:09 I mean, they started the party.
0:08:12 They haven’t had a major model release in a while, certainly around code.
0:08:13 So that’s going to show up.
0:08:22 And so I just feel like it’s, you know, the players, the money behind the players, the fact that these models distill, like this will end up in an oligopoly.
0:08:24 But I mean, that’s just my guess.
0:08:31 To what extent do you think the large model providers in 10 years time have already been created?
0:08:33 Or are they yet to be founded?
0:08:38 I think that you end up with models with different flavors.
0:08:41 And there’s going to be a lot of new flavor models that will come out.
0:08:47 You know, we haven't even seen them all yet. You know, Mira and Ilya are out there creating models.
0:08:50 I mean, you’ve got these very legit teams that were some of the pioneers.
0:08:57 You know, we're just starting to see models for the sciences.
0:09:06 And as you get more into kind of RL territory, these models really get a certain flavor.
0:09:08 They don’t generalize nearly as much.
0:09:12 And so, like, that’s going to naturally, from a technical perspective, fragment the models.
0:09:17 And so, I would say the core base model for, like, language, search, and code.
0:09:21 I mean, I think even code, actually, it’s still so early.
0:09:23 I mean, it’s very, very early in the super cycle.
0:09:28 In previous super cycles, remember, it took two or three generations for the winners to emerge.
0:09:30 I mean, Google was third-generation search.
0:09:34 Facebook was third-generation social networking, remember?
0:09:37 There was MySpace, and there was Friendster before that.
0:09:40 And so, I think there’s a lot of change.
0:09:41 There’s a lot of change to come.
0:09:49 But I do think that both Anthropic and OpenAI have done a remarkable job, remarkable, with brand independence and market share.
0:09:53 And so, I suspect there’ll continue to be stalwarts in the industry.
0:09:55 Are you in either of them?
0:09:57 We’re investors in OpenAI, yeah.
0:09:58 Got you.
0:09:59 Okay.
0:10:08 My question to you is, you know, fundamentally, there’s many, but do you think models are fundamentally good investments for venture firms?
0:10:18 When you look at employee stock compensation and the dilution that comes from it, and then the dilutive nature of the businesses, it’s a hard sell.
0:10:26 Okay, so, if there’s one thing I’ve learned, honestly, for anybody that’s listening to this, this would be worth, like, your time.
0:10:31 There is no one way to think of AI, and there is no, like, one way to think about models.
0:10:35 And the models themselves are entirely different businesses, depending on how you talk about the models.
0:10:38 So, to even answer that question, we have to tease apart what you mean by model.
0:10:53 So, for example, if you look at the diffusion models, like, say, ElevenLabs, Midjourney, Black Forest Labs, Ideogram, these are wonderful businesses that have great economics because the models are smaller.
0:10:57 The ecosystem isn’t subsidized in the same way, right?
0:11:04 Like, Google subsidizes language and code and video, but not speech, right?
0:11:11 And so, from an investor standpoint, these are clearly great investments, you know, if you just look at the metrics alone.
0:11:20 On the other hand, the frontier language space, it’s much more complicated because there’s so much subsidization, right?
0:11:27 You have Meta and Google, a bunch of Chinese players that are entering it.
0:11:37 So, for a subset of the players, and this is why it’s a tricky question, for a subset of the players, you’re like, yeah, clearly, these are the fastest growing companies we’ve ever seen.
0:11:37 There’s tons of value.
0:11:39 These are very valuable entities, right?
0:11:41 You know, Anthropic, OpenAI.
0:11:48 But at the same time, even three years in, there have already been a number of companies that, you know, have had to exit early.
0:11:57 And so, I would say it’s kind of a high-stakes game where the winners really win, but, like, it requires a lot of capital to enter the game.
0:12:02 And if you’re not in one of the leaders, like, that, you know, capital is forfeit.
0:12:05 We do a show every week with Rory O’Driscoll and Jason Lemkin.
0:12:14 And Rory, very aptly, I think, just said, listen, with the transition to AI, every investor has just accepted a willingness to go massively up the risk curve on investing.
0:12:16 Do you agree with that?
0:12:22 Well, I think it’s the requirement of the game.
0:12:29 It’s like, these are very capital-intensive companies to build.
0:12:31 You know, they have to get the capital from somewhere.
0:12:34 They’re also the fastest-growing companies.
0:12:39 And so, you know, for the winners, it’s justified.
0:12:45 And so, I think it’s not that investors are willing to go up.
0:12:47 I mean, we’d be very happy not to.
0:12:48 I mean, I know you would, right?
0:12:50 It would be great to have great returns with low risk.
0:12:55 But the nature of the system and the game which we’re playing requires it.
0:12:58 And this is what – this is, by the way, this is the dissonance in all of this.
0:13:06 It’s just so important to call out, which is, on one hand, you do have these great businesses that are very fast-growing.
0:13:10 And zero-sum thinking has been tremendously wrong.
0:13:13 I mean, NVIDIA is continuing to grow in value.
0:13:19 The hosting providers, which everybody wrote off as being kind of non-defensible business, continue to grow in value.
0:13:22 The model companies, which I can’t tell you how many investors wrote off the models.
0:13:25 I mean, this question has been around for three years.
0:13:26 They continue to grow in value.
0:13:28 So every layer of the stack continues to grow in value.
0:13:32 So on one hand, you’re like, it’s all working.
0:13:36 You should be in the leaders in every layer of the stack.
0:13:40 On the other hand, we’ve seen tons of wipeouts already for the non-leaders.
0:13:47 And so it’s almost this bipolar or paradoxical situation where you kind of have to play, but it’s very, very high risk.
0:13:55 And if you don’t play, I mean, you’re kind of missing one of the fastest growths in value that we’ve seen in, what, 20 years?
0:14:01 Do you think you see the concentration of value to one or two players across markets in every market?
0:14:04 Whether you look at voice, it's, you know, obviously ElevenLabs.
0:14:08 Whether you look at coding, it's kind of Replit and Lovable and OpenAI and Anthropic.
0:14:10 This is such a great question.
0:14:11 This is such a great question.
0:14:13 So here’s one thesis.
0:14:15 I mean, it’s so early, we don’t know.
0:14:17 And maybe in a month, all this gets proven wrong.
0:14:20 But we actually talk about this a lot internally.
0:14:21 And here’s one thesis.
0:14:25 And this is the one that I’m attached to, which is these markets are
0:14:28 so large and they’re growing so fast.
0:14:30 We’re actually seeing brand effects take place.
0:14:32 And we haven’t seen that since the internet.
0:14:38 And by brand effects, I mean, if you become the household name, you will get the adoption.
0:14:42 Because it just does not require a lot of education.
0:14:48 It does not require a lot of competitive discussion or competitive positioning in the field.
0:14:54 You know, I would say for many of these models, I mean, you know, is one better than the other?
0:14:55 Yeah, maybe, but they’re pretty close.
0:14:57 But like, people know ChatGPT.
0:14:58 It’s like, it’s a household name.
0:14:59 My mom knows ChatGPT.
0:15:05 Crazy, when you're like, honestly, why did I do Lovable?
0:15:07 For the exact same reason that ChatGPT wins.
0:15:09 I thought it was the consumer brand that would win.
0:15:10 A hundred percent.
0:15:13 And I just think these markets are so large.
0:15:14 Brand effects work.
0:15:15 I mean, let's talk about Midjourney.
0:15:18 Midjourney was the first that got above the quality bar.
0:15:22 It’s taken zero investment from institutions.
0:15:24 It’s still the market leader.
0:15:27 And it continues to do great.
0:15:33 And this is while a bunch of other people have entered the market.
0:15:37 And so I do think it’s not unreasonable to assume that these markets are very large.
0:15:40 Leaders are going to have brand monopolies and brand moats.
0:15:43 And they’ll be able to maintain them until things slow down.
0:15:48 And in general, I’ve found markets do this, which is when markets are expanding.
0:15:52 So markets tend to expand and then contract, right?
0:15:52 Think about cloud, right?
0:15:55 It was kind of like this funny thing and it became very massive.
0:15:56 And then, of course, it slows down.
0:15:58 When it slows down, then you have the consolidation.
0:16:02 And then, you know, competitive dynamics come in.
0:16:04 I mean, we're clearly in a massive market expansion phase.
0:16:05 It’s just very clearly the case.
0:16:11 And in which case, the leaders are going to continue to have, you know, a distribution advantage
0:16:13 just through brand recognition.
0:16:16 When does that tail off or does it not tail off?
0:16:21 When does the importance of brand and brand recognition dwindle and product prioritization
0:16:23 or product quality trump it?
0:16:28 I mean, I think it’s as soon as the market growth slows down.
0:16:35 You know, let’s, I mean, again, let’s take cloud as an example where…
0:16:37 Do you think these are actually tools of market…
0:16:38 Sorry, I’m so sorry to interrupt you, but market growth…
0:16:38 No, no, no.
0:16:44 Or actually just consumer intrigue, which is, there’s a lot of people who want to try building
0:16:47 a website on Replit or Lovable or Bolt or any of them.
0:16:50 There’s a lot of people who want to try voice with 11 Labs.
0:16:54 To what extent is it market intrigue versus expansion of market?
0:17:02 Well, I just think the expansion of market provides the dynamic so that you don’t saturate the user
0:17:04 with competing messages, right?
0:17:09 I mean, the idea of market expansion is the frontier continues to expand.
0:17:13 And the first thing the frontier hears is the household names.
0:17:15 And so the household names win.
0:17:21 And so I just think that that’s a, you know, that’s a natural artifact of expansion.
0:17:25 As soon as, like, the expansion slows, then that frontier is going to hear both names.
0:17:29 And then all of a sudden, now you’re in a discussion of which one to use and not to use.
0:17:34 And again, I think, like, for the longest time, when the cloud market was expanding,
0:17:35 everybody knew AWS.
0:17:36 It was the leader.
0:17:38 It was 70%, 80% market share.
0:17:43 And then as soon as that growth slowed down, then all of a sudden, market share started
0:17:44 to shift dramatically.
0:17:46 And it just wasn’t obvious.
0:17:46 Do you do GCP?
0:17:48 Do you do Azure, etc.?
0:17:55 But I would say that's less an artifact of the fact that Google and Microsoft decided to enter
0:17:59 the game, and much more that the market growth itself started to slow down.
0:18:03 So we see market growth slow down, and then we see the dispersion of value across players
0:18:04 more so.
0:18:04 That’s right.
0:18:12 So the market slows down, and once that happens, the frontier, it becomes more saturated, right?
0:18:18 Just because we’re not adding people as much, and so they will get more of the educated message,
0:18:21 and they’ll start making more decisions, and you can have more of a conversation.
0:18:27 Like, of course, Anthropic would love to have the same brand as ChatGPT as a household name,
0:18:33 but how do you reach that frontier, you know, if it’s growing that fast?
0:18:35 It’s just, it’s operationally tough to do.
0:18:38 Kind of the only way that you do it is just through brand recognition, which is kind of
0:18:40 this word-of-mouth-y type thing.
0:18:45 It’s like on every podcast, and, you know, the friends, and whatever.
0:18:49 And so I do think, I do think we’re seeing brand effects happen now.
0:18:51 And we saw these in the early internet.
0:18:54 The brand leader tends to get 80% of the market.
0:18:56 It just tends to break out Pareto for a while.
0:19:01 And then over time, it'll slow down, and these things even out based more on product differentiation.
0:19:04 How do you factor that into your thinking when investing today?
0:19:07 Well, you just try to invest in the leader.
0:19:10 And it’s worth paying up for the leader, honestly.
0:19:15 I mean, it’s, you know, so I think, for me, I ask two questions.
0:19:20 Question number one is, like, for the area that it’s focused on, is it the leader?
0:19:21 If it is, it’s definitely worth paying up.
0:19:28 And then the second one is, the story actually has been that in a competitive space,
0:19:30 almost everybody just found kind of a new, niche-y white space.
0:19:32 So let’s just take the example of OpenAI.
0:19:39 I mean, OpenAI was the first to code, right, with GitHub Copilot.
0:19:43 I mean, they provided the weights, as far as I know, and they lost that.
0:19:47 And they were the first to image with DALL-E, and they lost that.
0:19:53 And they were the first to video with Sora, and as far as I can tell, they lost that.
0:19:59 And yet, they’re still the massively dominant player in language, and continue to be so and will be so.
0:20:04 And arguably, that was the right thing for them, because that’s by far the largest market, by far.
0:20:08 And so OpenAI acted totally rationally and has the largest market.
0:20:13 But that gave the ability for mid-journey to take image or BFL to take image.
0:20:18 You know, Google seems to have grabbed video with Veo 3.
0:20:24 Code, I mean, on the model side, Anthropic has, you know, turned that into, you know, this wonderful business.
0:20:32 And so when markets expand, not only do you have these brand effects that we were talking about, they also tend to fracture a bunch.
0:20:35 And what seems to have been a sub-market will emerge as a leading market.
0:20:38 And you even see this kind of on the image side, right?
0:20:41 You’ve got a bunch of viable image players that focus on different things, right?
0:20:45 Like, Ideogram is great for designers, a professional design community.
0:20:51 BFL is the open source community that, you know, especially for developers that use these things in products.
0:20:58 And then Midjourney is for, you know, more of the fantasy, like, you know, also professional designers.
0:21:00 But it’s a very stylized kind of opinionated view.
0:21:03 And all of these are independent, you know, viable companies.
0:21:07 So I think we’re going to see fragmentation for quite a while before we see consolidation.
0:21:12 I need, the show is successful because I’m very open with my troubles.
0:21:14 I need your advice.
0:21:18 You know, A-Bridge in the US, I’m not sure if you’re in it, but I’m sure you know it.
0:21:24 Very simple, there’s a European player that does, like, medical transcription for nurses.
0:21:27 They went from one to eight million in a year.
0:21:29 And we’re looking at leading their A.
0:21:31 And I’m thinking exactly the same.
0:21:34 You're going up against Abridge because you're going to need to compete in the US.
0:21:36 This is going to be a big business.
0:21:41 Is that a losing game where you are a European competitor?
0:21:43 This is a great question.
0:21:53 So another very interesting thing that we haven’t seen in a very long time is we do have geographic biases showing up with AI.
0:21:57 And the regulatory environments are quite balkanized.
0:22:01 You know, there’s language and cultural biases that are also balkanized.
0:22:04 And so we’re actually seeing a lot of regional players show up.
0:22:06 And so I think it’s very legitimate.
0:22:11 Now, the thesis cannot be European company X wins the American market.
0:22:14 But I promise when it comes to AI, the European market is large enough.
0:22:16 I promise that.
0:22:23 And so I think a very legit thesis is, you know, this becomes a regional player in Europe and then maybe a portion of the US market.
0:22:28 Can I ask you, a lot of people denigrate these businesses that we’ve discussed because of their margins.
0:22:31 They’re simply pass through funnels to the large language models.
0:22:34 Do you think that is something that changes over time?
0:22:35 And it’s the same for all great businesses.
0:22:37 Uber started off with shit margins.
0:22:39 Now they have better margins.
0:22:40 Same thing.
0:22:45 I just don’t buy that these are endemic to the business model.
0:22:48 Like, this is certainly not my experience at all.
0:22:49 And so there’s always this question.
0:22:59 If you're a founder and you get access to, you know, relatively cheap private capital, and you can do a tradeoff between margins and distribution, and it's land-grab time, what would you do?
0:23:05 And the argument is the incremental user is someone you can monetize forever down the road.
0:23:08 And then if you don't get that user during the land grab, you can never monetize them.
0:23:13 The rational business decision is to sacrifice margin for distribution.
0:23:14 It’s just a rational business decision.
0:23:16 And we’ve seen this forever.
0:23:21 I mean, hell, the web wasn’t even monetized, right?
0:23:21 Literally.
0:23:23 I mean, like, this time we can actually monetize these things.
0:23:28 Forget, like, you know, break even or negative margins.
0:23:33 It was literally, like, massively negative because we didn't even have a business model until the advertisements came along.
0:23:38 So this is, like, the most rational thing that markets have been doing, at least tech markets, forever.
0:23:43 And it’s no different this time with AI.
0:23:51 I do think there’s a question of, okay, so if you do want to then turn on margins, how do you do it, right?
0:23:58 And then, of course, you'll either have to build a traditional moat, a two-sided marketplace, a brand moat.
0:24:02 And then there's, of course, the long tail kind of integration and domain understanding.
0:24:11 So, for example, let's say your healthcare company, if they really crack the European market and they understand all the regulation, like, Anthropic's not going to take the time to do that.
0:24:16 Or, you know, so there’s clearly pricing power you have on that side.
0:24:19 Or you have to do actual technical differentiation.
0:24:30 One thing that we’re learning is in this phase of model scaling, a lot of the approaches to scaling don’t generalize.
0:24:35 So if I want to be much better at, like, coding, I may not be so good at something else.
0:24:46 This gives a ton of room for the application developers to build their own models that service certain areas that the large models just aren’t focused on.
0:24:49 And so I think there's even a ton of room at the technical level to differentiate.
0:25:02 So my sense is, and this is, I mean, you know, I don’t want to talk too much about, you know, my portfolio and what I see just because there’s sensitivities around the number.
0:25:16 But in my experience, most of these companies that are, like, let’s say, break-even margins, it’s like a board-level specific choice to prioritize distribution, not just because this is systemically something they have to do.
0:25:16 We mentioned sovereignty.
0:25:22 I am intrigued how you think about safety and safety around AI and models.
0:25:25 You've had Vinod Khosla on; he'd be like, we have to lock this down.
0:25:29 If this was not locked down, it would be like nuclear secrets being handed out.
0:25:32 I remember then Mark came and was like, fuck that.
0:25:33 No way.
0:25:37 How do you feel about the future of safety within this landscape?
0:25:43 I mean, it’s crazy to have VCs talking against the open source, right?
0:25:44 I mean, Founders Fund did too.
0:26:02 And for me, it's just wild when, you know, pro-innovation sectors of the economy, academia too, have decided that, like, open, transparent innovation is somehow an antithesis of safety.
0:26:05 And I know that’s not what you asked, but, like, I just want to make the point.
0:26:10 It’s just we were in very bizarro land for a while, and it seems like we’re coming out of that now.
0:26:13 So let me just draw a bit of a parallel.
0:26:14 Do you think we’re coming out of that?
0:26:16 I think we’re moving more and more into that.
0:26:17 Great.
0:26:21 You know, Meta and Alex are going to turn Llama fully closed.
0:26:22 Great.
0:26:24 So let’s go back to that in just one second.
0:26:29 I want to answer the question that you – because you actually asked, like, a great question on how I view this.
0:26:31 And let’s go to whether we’re coming out or not.
0:26:33 So how do I think about safety?
0:26:40 So, you know, I was actually very, very close to security during the rise of the internet.
0:26:42 You know, I worked for the intelligence community.
0:26:47 I worked for Livermore National Labs.
0:26:52 And then, you know, when I did my PhD, a good 50% of my work was in security.
0:26:56 I taught, like, a cybersecurity policy course.
0:27:08 And the thing with the internet is you had these very specific examples of new types of attacks that, like, impacted nation states.
0:27:10 Like, critical infrastructure would go down.
0:27:13 You know, you'd have things like the Morris worm.
0:27:16 Like, you know, I mean, you had these really significant examples.
0:27:23 And that kind of kicked off this large discussion on how you handle it.
0:27:33 And it was so significant at the time that at the nation state level, you know, we started thinking that we have to actually change our doctrine.
0:27:36 You know, we were kind of this Cold War era, mutually assured destruction.
0:27:47 We had to change it to this notion of, like, defense asymmetry, which meant the more we relied on these things, the more vulnerable we were, right, as opposed to, like, a country that didn’t rely on them because you could be attacked.
0:27:50 And then, of course, kind of the whole terrorist information warfare stuff.
0:27:57 And so, the implications were so absolute, and you had so many proof points, and you could articulate them incredibly well.
0:28:07 And so, if you look at the AI stuff, I mean, for every computer system, you have security considerations.
0:28:14 But we’ve got this 30-, 40-year, very robust discourse around this that we can draw from and use from.
0:28:21 And the thing that I don’t understand is how all of a sudden we’ve decided that these are not computer systems.
0:28:23 They don’t obey the same laws.
0:28:30 And we have to kind of throw out everything that we’ve learned and kind of, like, revisit the discourse, even though we don’t even have the same proof points.
0:28:36 I mean, like, nobody can make a strong argument on asymmetry or the need to shift doctrine.
0:28:38 And if they can, let’s go ahead and have that discussion.
0:28:42 You know, I still have yet to see the dramatic new attack.
0:28:44 It’s going to come for sure, but we haven’t seen it yet.
0:28:50 And so, I just feel like the discourse around this is not in line with the reality.
0:28:52 It’s not in line with historical precedents.
0:29:01 And so, we should absolutely take these things seriously, but we should draw on the information that we’ve learned from in the past and the approaches we’ve taken in the past.
0:29:20 The last thing I’ll say on it is the biggest difference this time is in the past, the people creating the technology were kind of pro-tech, and the people that were, like, selling security solutions were, like, the fear mongers, right?
0:29:25 So, you’d have somebody create, like, the internet, and they’re like, this is safe and it’s great for everybody.
0:29:28 But then you’d have somebody to create a firewall and, like, oh, the internet’s dangerous.
0:29:30 Every sociopath is your next-door neighbor.
0:29:35 So, you had both the same voices, but in two different bodies based on interests.
0:29:38 The interesting thing this time is they’re in the same body.
0:29:42 So, the person that’s creating the thing is also like, oh, this thing is very dangerous.
0:29:49 I don’t recall the last time we had something like that, but it’s created a dynamic that’s just been very confusing for everyone.
0:29:58 Do you not think open source increases the opportunity set for hostile actors like China and Russia to harm us?
0:30:01 I mean, I think it’s tautologically true.
0:30:11 Like, I think tautologically you can say, do you believe computers and the availability of computers increase their ability to harm us?
0:30:15 And I would say absolutely computers and the availability of computers do.
0:30:16 I would say…
0:30:29 So, I think that right now, open source is most dangerous because China is better at it than we are.
0:30:36 And as a result of that, we’re seeing a proliferation of Chinese open source models everywhere.
0:30:42 Now, unfortunately, we don’t have control over Chinese regulation.
0:30:46 And so, I would say the answer is yes, because of China, not because of us.
0:30:52 And the right way for us to respond is to fuel our open source efforts against that.
0:30:55 So, let me just be very specific.
0:31:01 So, I think Chinese open source can be a national security issue, for sure.
0:31:07 And any of the software that’s produced by a nation state that we view, you know, quasi-adversarially.
0:31:15 The way that we combat that is we also are incredibly open and we also do a proliferation of technology.
0:31:25 What do you think we can learn from China, regulatory-wise, that would enable us to have the same or better open source ecosystems slash environments?
0:31:36 I mean, to me, this is, you know, the United States has a long history of being pro-innovation, pro-innovation for national security, pro-innovation for national defense.
0:31:39 I think we should be funding this stuff like crazy.
0:31:40 I think we should get the national labs involved.
0:31:42 We should get academia involved.
0:31:46 You know, we should make this a national priority, just like China does.
0:31:49 And we should just, you know, give a full-throated endorsement of all of this stuff.
0:31:50 I think we should do closed stuff.
0:31:52 I think we should do open stuff.
0:31:52 And we’ve done this forever.
0:32:01 You know, my first job out of college, this is, you know, 1999, was working at Lawrence Livermore National Labs on the ASCI program.
0:32:03 And what were we doing then?
0:32:06 We were, I mean, the broad program was simulating nuclear weapons.
0:32:08 I mean, this is what it was.
0:32:12 And a lot of the concerns we have today were concerns we had then around compute.
0:32:19 I mean, we actually stopped Saddam Hussein from, like, importing PlayStations because we were worried about, you know, using them for simulation.
0:32:22 We’d put export controls on the hardware.
0:32:34 And we'd say the same things, like, oh, you know, computers out there, they're going to enable, you know, the enemies and all sorts of stuff.
0:32:35 And this is like nuclear weapons.
0:32:37 This isn’t like some abstract AI thing.
0:32:40 This is like actual, actual on-the-ground weapons.
0:32:45 The posture that we took at the time, the conclusion is we’re just going to be the leaders in all of this stuff.
0:32:50 And we funded academia and we funded the labs and we won.
0:32:56 And we were able to control, like, the technical discourse of the planet going forward.
0:33:00 And this time, instead, we want to put our head in the sand and let somebody else do it.
0:33:03 And so, like, they’re going to learn from our, you know, our success.
0:33:07 And somehow, you know, we’re not.
0:33:15 Do Trump’s cuts to universities’ research labs not impact your ability to do what you just said?
0:33:18 Are you not actively going against what you should be doing?
0:33:28 I am very pro-investing in academia and in the national labs.
0:33:42 I think there’s always a political shift in money depending on what they view is in line with administration politics.
0:33:46 Like, I can't tell you, you know, I did my PhD at Stanford.
0:33:48 I’ve done a bunch of NSF grants.
0:33:53 I don’t remember ever somebody saying, we like indirect costs.
0:33:58 Every researcher, every professor, every single one was like, indirect costs are terrible.
0:34:03 Obama tried to get rid of indirect costs.
0:34:04 He’s like, you know what?
0:34:09 Universities, they have a tax-exempt status.
0:34:17 So, why don’t we just have them, you know, spend 5% of their endowments like any other tax-exempt organization?
0:34:22 And, you know, that will cover a lot of indirect costs.
0:34:23 And he couldn’t get it through.
0:34:27 So, this is kind of a bipartisan issue that is long-standing.
0:34:33 And I, and I mean, I would say that, like, a change is needed.
0:34:39 Now, you know, I think these things are very hard to implement.
0:34:44 But I would say, concretely, yes, we should invest in these things.
0:34:48 Yes, we need a shift in how funding happens.
0:34:51 I do think that, like, indirect costs have gotten way out of hand.
0:34:56 And until it was, like, Trump doing it, everybody that I know in academia totally agreed.
0:35:00 But yes, of course, change and shifts in funding will be disruptive.
0:35:01 And so, I think all things are true.
0:35:03 I just don't want, I just don't want to reduce it.
0:35:07 I don't want to reduce this to a simple, like, Trump does bad things, because I don't think that is the case.
0:35:11 Or to, you know, funding science is arbitrarily good, because I don't think that's the case either.
0:35:14 I mean, I definitely think we should fund as much or more.
0:35:19 I definitely think that a shift in funding and change to the system is needed.
0:35:23 And, you know, the right path through that is complex.
0:35:24 I don’t quite know it.
0:35:28 You very kindly said that I asked a good question on the reversion back to closed source.
0:35:28 Yeah.
0:35:32 When we mentioned Alex joining Meta, what it meant for Llama.
0:35:39 I said quite zero-sum-wise, to your point, we’re clearly seeing a movement back towards closed and away from open.
0:35:44 How do you see that, and do you disagree with my statement now, on the transition?
0:35:51 No, I think that’s, so, I agree on the ground 100% that I think we’re seeing a movement away from open source.
0:35:54 But the rhetoric around open source has shifted, right?
0:35:59 I mean, we just had the AI, what was the name of the bill that just came out?
0:36:05 I mean, it’s like the American AI policy and recommendations is a full-throated endorsement for open source.
0:36:09 So, I think discourse-wise, there’s more support for open source than ever before.
0:36:13 I think, ecosystem-wise, I think you’re right.
0:36:17 I do think it’s quite likely that we’re going to see less open source.
0:36:19 Now, listen, OpenAI has said that they’re going to open source.
0:36:20 That would be wonderful.
0:36:22 And if they do that, I think that would be very, very positive.
0:36:24 Do you think they will?
0:36:26 I just, I have no idea.
0:36:27 I hope so.
0:36:35 It would be very rational. But, I mean, here's the thing, maybe here's the crux: we say open source, but it's such a misnomer when it comes to AI.
0:36:43 I mean, the standard model of open sourcing AI is you open source the smaller model and you keep the more capable model closed source.
0:36:48 And it’s a way that you get distribution and brand recognition, but you don’t actually erode your business.
0:36:53 This has been very, very successful as a business model.
0:36:59 And unlike actual software open source, just because you release your model doesn’t mean somebody can replicate it.
0:37:02 Like, to replicate it, you’d have to, like, recreate the data pipeline and the training pipeline.
0:37:12 And so, you know, I think that there’s just, like, a lot of concern of investing, you know, hundreds of millions of dollars or billions of dollars to train something and then just giving all of that away.
0:37:19 But I feel very confident that the business justification is there and behavior will always follow business.
0:37:25 And we’re going to continue to see open source be a large part of the ecosystem.
0:37:29 And remember, historically, open source has only been about 20% of the total market value.
0:37:30 I would say it’s much higher than that for AI.
0:37:34 So, in a way, we’re doing better than software has historically.
0:37:38 What did you believe about the AI landscape that you now no longer believe?
0:37:40 We’ve touched on so many different elements.
0:37:43 My mindsets have changed around so many.
0:37:48 I mean, the one for me that I’ve just consistently got wrong is just how fast these coding models advance.
0:37:51 And this has probably just sunk cost fallacy.
0:37:53 My entire life, I’ve just been this nerdy program.
0:37:55 I’ve been programming since the 90s.
0:37:56 I mean, it’s like it’s my happy place.
0:38:03 And I just never thought that they would advance to the level that they have.
0:38:05 I mean, I still develop most evenings.
0:38:11 And it’s just, you know, instead of watching a sitcom, I just goof off and mostly writing, like, old video games or whatever just for fun.
0:38:13 Like, it’s silly stuff.
0:38:21 And I'm already at the point that I just couldn't go back to working without them.
0:38:23 And I’ve spent, you know, 30 years without them.
0:38:30 And it’s just their ability to offload all of the shit I didn’t want to learn is remarkable.
0:38:35 The thing that kept me away from code for a while, where I would kind of dabble with it
0:38:41 and then drop it, is, yeah, having to learn all of this, like, all these weird frameworks.
0:38:45 And, like, none of the knowledge is foundational.
0:38:49 It’s just like some fucking random dev came up with some weird way to do something.
0:38:54 And you’ve got to kind of learn, you know, some poor design decision to do it.
0:38:55 And none of it made any fucking sense.
0:39:03 And it just felt like you’re wasting your brain space on poor decisions made by random open source developers.
0:39:05 And that was programming in the past.
0:39:08 Like, programming, so let me just put it in context.
0:39:15 In the late 90s, programming was you download your IDE, you sit down to your computer, you program something.
0:39:21 And then it would turn into a binary, and then you’d run that binary.
0:39:25 And so, like, you could, like, really get a lot done just by sitting down and writing code.
0:39:34 You know, by, I would say, like, 2015 or so, you know, writing something is, like, you'd have to, like, fucking download, like, 50 million packages.
0:39:37 And, like, to run it, you’ve got to run some stupid dev server.
0:39:40 And to, like, actually have anybody else use it, you’ve got to, like, learn how to host it.
0:39:46 And, you know, like, it was a bunch of libraries that were, like, dealing with incompatibilities.
0:39:47 And all of it was a weird fucking platform.
0:39:51 So, like, 90% of your time had nothing to do with code.
0:39:54 Like, 90% of your time was just dealing with all the environment platform bullshit.
0:39:59 And so, what’s so nice now is you can just focus on your code.
0:40:08 So, like, now I literally just, I mean, I use Cursor, and I just have, like, I just have the AI tell me how to host the thing and tell me what package to use and whatever.
0:40:11 And I just strictly focus on what I want and the logic.
0:40:14 And so, it’s almost like it’s brought coding back.
0:40:17 And you can see this across the industry.
0:40:19 Like, all of, like, I’ve got, I mean, I grew up in the industry.
0:40:24 I know a bunch of very strong developers that have been developing for a very long time that have basically stopped.
0:40:28 They’re, like, running companies now or whatever, and they’re all back to programming at night.
0:40:37 And I really think that, you know how, like, there’s, like, the adage of, like, I don’t know, like, the old man that goes into the garage and, like, makes the train set for, like, nostalgic reasons.
0:40:46 I think, like, the modern version of it is these old systems programmers, like, you know, vibe coding at night just because it’s become pleasant again.
0:40:49 And so, I know you asked about the thing that’s kind of surprised me the most.
0:40:53 But I really think it’s such a marvel what these coding models are able to do.
0:40:55 And they add very real value.
0:41:01 Do you think they make 1X engineers 10X or 10X engineers 100X?
0:41:06 10X engineers 100X would be what I said.
0:41:07 But I don’t actually think it’s that.
0:41:10 I think they make 10X engineers 2X.
0:41:14 I would say every company I work with uses Cursor, right?
0:41:21 And then if I actually look at, has that increased the velocity of the products coming out?
0:41:26 I don't think that much, just because so much of it…
0:41:28 So what’s changing then?
0:41:30 Because dev productivity is going up.
0:41:33 So is the quality of product going up if the product release cadence isn’t?
0:41:37 I just think the things that are hard remain really hard.
0:41:46 And so, you know, like, let’s just talk about, like, creating a model.
0:41:52 So let’s say I’m creating a new model, a new frontier model, right?
0:41:56 And to create that new frontier model, I’ve got to collect data, and I’ve got to run a pipeline,
0:42:00 and I’ve got to, like, sit with my, you know, my Jupyter notebook, and I’ve got to, like,
0:42:02 look at the lost curves, and I’ve got to rerun it.
0:42:09 And, like, that’s just a lot of kind of experimentation and so forth.
0:42:12 You know, there’s no coding model that’s going to do that for you.
0:42:21 But if I wanted to create tests or a test suite or, you know, or visualization or write documentation,
0:42:23 it’s actually really good at that.
0:42:30 And so I would say that probably in the long run, having more robust, maintainable code bases
0:42:38 with less bugs is just as likely to be the impact as feature velocity.
0:42:41 Because, you know, in startups, again, I’m an infra guy.
0:42:42 This is probably different from apps.
0:42:45 Like, I’ve always thought apps had no technology to begin with.
0:42:49 Like, every time I look at vertical SaaS, I’m like, why do we even care about the technical team?
0:42:51 It’s fucking crud, man.
0:42:55 It's like, CRUD is, like, create, you know, read, update, delete.
0:42:56 It’s like, they all do the same thing.
0:42:58 They all just kind of look like a web app.
0:43:00 They’re all like, who cares about the technology?
0:43:01 The technology is simple.
0:43:03 These are all these kind of go-to-market things and whatever.
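To unpack "CRUD" for non-programmers: create, read, update, delete, the four operations behind virtually every form-driven web app. A deliberately trivial in-memory sketch in Python (the `patients` example is made up); a real app swaps the dict for a database, which is exactly his point about how little differentiated technology is involved.

```python
import itertools

class CrudStore:
    """Create / Read / Update / Delete over an in-memory table."""

    def __init__(self) -> None:
        self._rows: dict[int, dict] = {}
        self._ids = itertools.count(1)

    def create(self, **fields) -> int:
        row_id = next(self._ids)
        self._rows[row_id] = dict(fields)
        return row_id

    def read(self, row_id: int) -> dict:
        return self._rows[row_id]

    def update(self, row_id: int, **fields) -> None:
        self._rows[row_id].update(fields)

    def delete(self, row_id: int) -> None:
        del self._rows[row_id]

# Most vertical SaaS reduces to a domain-flavored version of this behind a web UI:
patients = CrudStore()
pid = patients.create(name="Ada", clinic="North")
patients.update(pid, clinic="South")
assert patients.read(pid)["clinic"] == "South"
```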
0:43:05 But infrastructure is different.
0:43:12 Infrastructure is like very real trade-offs in the design space that only somebody who understands
0:43:13 computer science would know.
0:43:19 So for infrastructure companies, I think it’s quite unlikely that AI will really help, like,
0:43:24 speed that up because it comes down to something that the developer has to decide on,
0:43:26 has to articulate the trade-offs.
0:43:30 But I do think it could really help with the development process so you have less bugs and
0:43:31 things like that.
0:43:36 And so I actually view it more as, like, a more robust development methodology than necessarily,
0:43:40 you know, something that speeds up the core product.
0:43:47 Given the kind of dev productivity changes that occur because of these tools, how does that
0:43:49 impact defensibility within companies today?
0:43:55 If time to copy, and Micha at Fiverr said this on the show, he said time to copy
0:43:56 has basically been reduced to nothing.
0:44:01 To what extent does that change defensibility for companies?
0:44:06 I mean, I still think we should just go back to the split between apps and infrastructure.
0:44:09 For apps, like, how long does it take to copy it anyways?
0:44:14 I mean, you know that there are entire companies that they’re, like, their stated purpose is just
0:44:18 to copy another company in the app space.
0:44:19 It’s just so easy to do.
0:44:22 I mean, there is no core technology for random app.
0:44:26 I mean, there’s no, like, differentiable technology for random app.
0:44:31 Let’s say that you’re creating, I don’t know, some healthcare vertical SaaS thing.
0:44:35 Like, you could contract, and you have been forever, the actual app.
0:44:39 I mean, the business is actually the long tail of understanding that domain.
0:44:42 So I just don’t think it changes that paradigm at all.
0:44:47 And then when it comes to core infrastructure, which is what I focus on, things like, think
0:44:54 like databases, foundation models, there’s no way that right now models can just copy.
0:44:58 And the reason that there’s no way is it’s not that the models aren’t capable of doing
0:44:59 the technology.
0:45:05 It’s just that there is a long tail of understanding of the trade-offs for the particular use case
0:45:05 and domain.
0:45:11 And because it’s a new market often, then you understand that through market exploration.
0:45:16 And so I just don't feel that it changes much. I think these models really help with the software development
0:45:22 process for, you know, non-deeply technical areas like apps.
0:45:24 Sure, they can help speed it up.
0:45:28 But over time, all of these reduce to a long tail understanding of the market.
0:45:31 I mean, Aaron Levie said it so beautifully.
0:45:36 I mean, do you know what the average, what do you think the average PR is, pull request
0:45:40 is for a production code base?
0:45:44 Like how many, how many lines of code is the average change that gets accepted?
0:45:47 What would you guess for like some production enterprise app?
0:45:50 I have no idea.
0:45:51 It’s two.
0:45:51 It’s two.
0:45:51 Yeah.
0:45:52 It’s very, very small.
0:45:55 It’s actually two, but let’s say it’s 12, right?
0:45:58 And what does that two- or 12-line change signify?
0:46:04 That two- or 12-line change signifies probably some learning in the field or some understanding of what is
0:46:04 needed.
0:46:11 And so the long tail, the thing that’s the hard thing is to understand the specific deployment
0:46:13 environment and market you’re going to.
0:46:14 That’s the hard thing.
0:46:15 The hard thing isn’t the two lines of code.
0:46:16 That’s actually quite easy.
0:46:23 And so in many ways, I would say, you know, the AI is getting rid of the middle, right?
0:46:29 Like, very new computer science, the models don't know how to do just because nobody's
0:46:29 done it before.
0:46:31 And that’s kind of pushing the state of the art.
0:46:36 And then in the app space, all of the hard stuff is the business anyways, right?
0:46:39 And this is why, like, the changes are very small and, like, you learn everything to go
0:46:43 to market, which the models don’t know just because you’re exploring a new market.
0:46:46 And it’s all the bullshit in the middle that they’re helping us with.
0:46:49 And so, you know, for me, it’s just kind of net accretive.
0:46:56 Do you think that CS holds the same weight as an educational discipline that it
0:46:58 always did and you would always recommend it?
0:47:03 Or does that change in a world that’s fundamentally more democratized in terms of creation, like
0:47:03 we discussed?
0:47:11 I mean, I feel very strongly that, like, if you care about building systems out of computers,
0:47:13 you have to understand how they work.
0:47:18 What do you think we do today, Martin, that we will look back on in five or 10 years’ time
0:47:20 and go, I can’t believe we did that.
0:47:21 It could be prompting.
0:47:24 It could be choose the model that we’re working on.
0:47:29 I find it ridiculous that we are supposed to choose which model, like Grok 3, Grok 4, Grok
0:47:32 5, Grok shopping, Grok weather.
0:47:33 What the fuck?
0:47:34 Just figure it out.
0:47:39 Well, I’m just taking it from a programmer’s view.
0:47:45 I mean, I just think hopefully we’ll just stop worrying about frameworks altogether and maybe
0:47:48 even languages, maybe even like a proto-language evolves.
0:47:52 And we can just focus on logic and fundamental trade-offs.
0:47:57 I mean, we’ve gotten into this very backwards world where these days programmers think about
0:48:00 all the non-fundamental stuff and they don’t think about the fundamental stuff.
0:48:02 Let me give you an example.
0:48:03 So I always worry.
0:48:05 This is going to be this weird philosophical rant.
0:48:09 But I always worried, you know, while I was doing grad school and when I was doing research
0:48:17 that we kind of entered a space where there’s so much research that has been done over the
0:48:19 years that you never know if you’re doing something new.
0:48:21 Like you just couldn’t do the literature search.
0:48:21 There’s so much.
0:48:26 And so like the entire industry just spent all of its time redoing research.
0:48:31 You know, it's like you're cleaning a room and you're trying
0:48:34 to, like, sweep out the dust, but rather than sweep it out the door, you're just kind of
0:48:35 moving it.
0:48:37 Like you’d move it to the bed or you move to the wall.
0:48:39 And then like, that’s all you do is just kind of sweep the dust around, but you never actually
0:48:40 get it out of the house.
0:48:42 That's what research felt like to me.
0:48:44 It was like, we’re in this mad delusion.
0:48:51 And on top of that, it also felt like many of the most important problems were kind of
0:48:52 between disciplines.
0:48:56 And so like, in order to even solve them, you just have to know too many things and we couldn’t
0:48:56 do that.
0:49:01 And so I just felt like there’s like the entire scientific industrial establishment was just
0:49:03 kind of redoing the same stuff.
0:49:09 And so in a way, I think AI has the ability to pull out of this mass craziness, this mass
0:49:14 ineffectiveness, which A, it’s very good at telling you if you’ve done it before, right?
0:49:15 You know, it’s very good at that.
0:49:17 It actually knows all the literature, knows all the history.
0:49:21 And it’s also very good at tying different disciplines, right?
0:49:22 It is an expert in all of these things.
0:49:28 And so I think we’ve been stuck in this morass and it’s a bit of a liberator.
0:49:31 So we can actually focus on the new problems and know we’re doing new things.
0:49:34 And so I’ve got this very optimistic view of where it’s pulling us.
0:49:39 And so I know it’s more of a philosophical answer to the question that you asked, but in
0:49:44 a way, I think it needed to happen to get to the next level of problems that we need
0:49:44 to solve.
0:49:49 In terms of societal implications there, I mean, the worst question ever is like, oh,
0:49:51 the job displacement question.
0:49:53 But I am intrigued.
0:49:54 Yeah.
0:49:58 Because in the one hand, I see intense job displacement happening faster than ever.
0:50:04 And then I’m also very aware of Brad Feld wrote a brilliant post where he basically said every
0:50:08 single cycle, every time we’ve always said, oh, what are we going to do?
0:50:10 Calculators, what are we going to do?
0:50:11 Computers, what are we going to do?
0:50:13 AI now, what are we going to do?
0:50:13 Yeah.
0:50:18 To what extent does this actually require the "what are we going to do?" versus another "for fuck's
0:50:19 sake"?
0:50:20 Don’t we see the pattern?
0:50:21 Yeah.
0:50:24 So I’m very sympathetic to concerns around job displacement.
0:50:27 And I think we should take them very seriously as a society.
0:50:29 Like I’m in no way libertarian.
0:50:33 I think that this is kind of where governments do step in and we do help out.
0:50:35 But first, we have to understand.
0:50:37 And it’s actually very unclear.
0:50:39 So let me tell you just a quick anecdote.
0:50:48 So, you know, my cousins are all pretty, I think high-end is the wrong term, but
0:50:50 they’re pretty established translators.
0:50:53 And they have been for a long time, multiple languages.
0:50:55 And, you know, they visited recently.
0:50:57 This is a husband and wife pair.
0:51:01 And they’re like, listen, like, you know, we have to change jobs because translation is
0:51:01 all going to AI.
0:51:08 And I asked, I said, you know, “So the jobs are going away?”
0:51:10 And they said, well, no, they’re shifting.
0:51:14 And now instead, we’ve got to like spot check these AIs.
0:51:18 And the only way we can hold it up to our standards is if we rewrite the entire thing, but they won’t
0:51:19 pay for that.
0:51:22 And, you know, by the way, they’re Italian,
0:51:25 so they speak this way, but they’re like, you know, “I can’t work on something without a soul.”
0:51:25 Right.
0:51:34 And I think that their dilemma is a good microcosm for the broader dilemma, which is, one thing
0:51:39 that’s very unique about AI is that today it actually requires a human handler.
0:51:42 I mean, they’re just so unpredictable.
0:51:48 You know, I mean, most of the use cases that we know, all the monetized use cases have a human
0:51:49 on the other side of it.
0:51:50 Right.
0:51:54 I mean, coding, you’ve got a professional coder; all the creative stuff,
0:51:57 you’ve got, you know, somebody doing all of the creation.
0:52:01 I mean, it’s kind of an enabler, and it’s a tool.
0:52:04 But the nature of what you do does shift.
0:52:08 And that’s very different than, for example, electricity, which doesn’t require a human.
0:52:13 It’s like, either you light the fire or there’s no fire to light.
0:52:18 And so, you know, I think we as a society need to understand the level of displacement.
0:52:19 We have to understand.
0:52:20 I think it’s very important that we do.
0:52:22 I think these are things that governments should get involved in.
0:52:26 I do just have to turn to your venture investing just before we do a quick fire.
0:52:29 Do you enjoy it as much as you did before?
0:52:31 It is a much faster landscape.
0:52:32 The money is much bigger.
0:52:35 Do you enjoy it as much as you did before?
0:52:40 I spoke to many of your founders, and they said that they didn’t think you enjoyed
0:52:45 the administrative work that you now have to do with the size and scale of Andreessen.
0:52:48 Oh, well, those are two different questions.
0:52:50 I love the investing.
0:52:51 I mean, the investing is great.
0:52:55 It’s just the most exciting time in the industry since the late 90s.
0:52:57 It’s great to be part of a super cycle.
0:52:57 I love it.
0:53:05 Actually, no, I really like the firm-building side too.
0:53:13 You know, I mean, frankly, I could do without the endless meetings, but I’ve
0:53:17 actually been pretty good at limiting those.
0:53:21 And so, no, no, I think this is actually the most exciting time to be in the industry and
0:53:21 venture.
0:53:23 Candidly, I’m not trying to bullshit.
0:53:26 No, no, dude, I’m a venture investor too.
0:53:28 I’m with you, and I say the same to our LPs.
0:53:34 Is your price elasticity more on deals because of the super cycle entry point that we’re in
0:53:38 or less because of the risk or uncertainty level that we’re in?
0:53:45 So philosophically, for me, I just think the market sets the price.
0:53:51 I just don’t have the hubris to think I can somehow outsmart the market or
0:53:54 that a single deal is going to bend to my will.
0:53:59 And so, I mean, philosophically, how we think about investing in general is…
0:54:00 Do you walk away because of price often?
0:54:03 Price, no.
0:54:03 Ownership, yes.
0:54:07 What is the ownership you need?
0:54:07 It all depends on the fund, the market, the size of the market, the understanding
0:54:15 of risk.
0:54:17 Everything comes down to ownership for us, not price.
0:54:22 I mean, you just can’t make the fund mechanics work, you know, if you don’t get the ownership.
0:54:27 Now, for very, very large markets that can obviously support very
0:54:31 large checks, we don’t care as much, but that tends to be growth territory anyways.
0:54:36 For early stage investments, you know, you kind of need to understand what the median outcome
0:54:42 is, and you have to be able to size the median outcome in a way that at least returns,
0:54:44 say, a fifth of the fund or half of the fund.
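A back-of-the-envelope sketch of the ownership math being described here, with hypothetical numbers: the $1.2 billion fund size is the figure Martín gives later for the infrastructure fund, while the target fraction and median exit value are illustrative assumptions, not numbers from the interview.

    # Hypothetical sketch of early-stage fund mechanics:
    # what ownership at exit does a single median outcome need
    # in order to return a given fraction of the fund?
    fund_size = 1_200_000_000      # $1.2B fund (figure given later in the interview)
    target_fraction = 0.2          # "returns, say, a fifth of the fund"
    median_exit = 3_000_000_000    # assumed median outcome for the space

    required_ownership = (target_fraction * fund_size) / median_exit
    print(f"Ownership needed at exit: {required_ownership:.1%}")  # prints 8.0%

Under these assumptions, a smaller median outcome raises the required ownership proportionally, which is why the conversation keeps coming back to ownership rather than price.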
0:54:46 Is that not the joy of being at Andreessen?
0:54:52 You can take a 5% ownership on first check because you can size up into the next and size
0:54:52 up into the next.
0:54:56 Is it not my challenge that I have to get as much as possible on the seed or the A?
0:55:00 So the way that I view it is a bit different, which is, I think there are
0:55:03 two legit ways of investing now that have emerged.
0:55:08 One of them is you’re very much a specialist and you’ve got a special network, special value.
0:55:12 You understand a special slice of the market.
0:55:15 Like, you’re very, very much a specialist.
0:55:23 And that is kind of how you win deals, get the ownership, keep the ownership, and then
0:55:24 make your company successful.
0:55:30 The other one is, and I wouldn’t say it’s like an AUM thing, but it’s like you have all
0:55:36 of the products so that you can be adaptive in the market, because, you know, I’ve been doing
0:55:37 this for 10 years.
0:55:41 The strategy that works has shifted this entire time.
0:55:45 Sometimes it’s early, sometimes it’s mid-stage, sometimes it’s collaborating with growth.
0:55:49 And honestly, sometimes it’s credit.
0:55:52 We don’t have a credit fund, but I can understand why people do it.
0:55:59 And so the market is competitive and everybody’s scrambling for deals.
0:56:07 And if you don’t have the different funds or products to offer, then often that’s kind
0:56:10 of where people are going to squeeze you out or get alpha, et cetera.
0:56:14 And so I think that for the game that we play, it’s very, very important that you
0:56:20 have all of these funds and the ability to enter at all stages for exactly that reason.
0:56:22 And so again, I don’t think it’s a you-and-me thing.
0:56:27 I think you play a very different game than we do, because I do think that on one side,
0:56:31 like, you know, you have to go very specialized, very focused, very early, where for us, you
0:56:37 know, we’re trying to find out what is the right time to enter,
0:56:39 to get the ownership that we need.
0:56:44 What’s the size of fund that you primarily invest out of day to day?
0:56:45 I know you have flexibility.
0:56:46 1.2 billion.
0:56:49 So I run the infrastructure fund, which is a $1.2 billion fund.
0:56:53 So my challenge here is your cost of capital is just so much less than mine.
0:57:00 Your ability to bluntly put a larger check in with much more confidence is there, because
0:57:06 I’m investing out of a $275 million Series A fund and a $125 million seed fund.
0:57:10 It’s just much more meaningful dollars for me than it is for you, which will affect
0:57:10 my willingness.
0:57:11 Yeah.
0:57:16 Well, my challenge is, like, we have to live with these investments forever, and conflicts
0:57:18 are very, very difficult for us to do.
0:57:21 And so we don’t enter very often at the stage that you do for this reason.
0:57:22 I mean this respectfully.
0:57:27 Everyone chastises Andreessen for their conflicts and for investing in many conflicting companies.
0:57:28 Do you think that’s unfair?
0:57:34 It’s so hard to keep your nose clean on this one because especially with a shift towards
0:57:38 AI, companies pivot all the time after you invest.
0:57:42 Like, I don’t recall ever intentionally investing in a conflict.
0:57:48 In fact, I would say the number one reason we don’t, actually, that’s not
0:57:48 true.
0:57:51 One of the top reasons we don’t invest in companies is because of conflicts.
0:57:52 I mean, we do it.
0:57:53 I mean, I just did it.
0:57:55 I mean, just recently, I can’t say the name of the company.
0:57:57 We didn’t invest because it was a hard conflict.
0:58:01 And even though, by the way, the portfolio company was not doing the thing,
0:58:02 it was on the roadmap.
0:58:03 And the founder called me.
0:58:05 He’s like, Martin, you just can’t invest in this company.
0:58:05 I said, okay.
0:58:08 So I think we try our best.
0:58:10 Okay, sorry, sorry, just to push back on you there.
0:58:14 If it’s not on the roadmap, I’m really sorry, founder.
0:58:16 I have as much faith and conviction in you as possible.
0:58:20 But if it’s not on the roadmap, I’m not having you tell me how to do my job.
0:58:23 So here’s my talk track.
0:58:25 And it’s evolved over the years.
0:58:29 And I stole this from Chris Dixon, which is I say, listen, you have one mortal enemy.
0:58:32 You choose whoever that mortal enemy is.
0:58:33 And whoever it is, I’m with you.
0:58:34 We’re going to go kill that mortal enemy together.
0:58:37 But you get one, you don’t get an arbitrary number of mortal enemies.
0:58:39 And so in this case, I’m like, listen, is this it?
0:58:40 Is this your one mortal enemy?
0:58:42 And the founder said, yes, this is the one mortal enemy.
0:58:43 I’m like, all right, fuck them.
0:58:44 Let’s go kill them.
0:58:46 And that’s it.
0:58:52 Now, listen, we have a number of companies where they pivot
0:58:55 midstream and they start competing after we’ve invested.
0:58:57 It happens all the time.
0:59:01 And we also do have the venture and the growth fund.
0:59:07 And we try to minimize conflicts there, but sometimes they happen, you know, just very different
0:59:09 stage companies, very different teams working on it.
0:59:14 But I would say that we try very, very hard to steer away from conflicts.
0:59:19 Given, as you said, the volume of pivots that occur today, and given your entry
0:59:24 point: I always advocate wholeheartedly for being 98% founder.
0:59:30 And then you have wonderfully smart people like Elad Gill who wholeheartedly advocate for being
0:59:31 market first.
0:59:38 How do the pivot frequency and the experiences you’ve had impact your prioritization mechanism
0:59:39 around where you spend time?
0:59:45 So I don’t want to speak for Elad, but that’s not my experience working with him.
0:59:46 And I’ve done many deals with him.
0:59:49 Elad is very, very focused on the founder.
0:59:55 I think the one thing I would say is he’s very good with founder market fit, maybe the
0:59:56 best in the industry.
1:00:00 I have a huge respect for how Elad invests.
1:00:01 Unpack that.
1:00:04 Why and how does he do founder market fit that’s the best?
1:00:08 He will find a market that he really likes.
1:00:11 And sometimes it’s like even a fast follow market, right?
1:00:20 Like, you know, and then he will find who he thinks is a great founder for that market.
1:00:25 And so he’s very good at like this kind of boy band construction based on the market.
1:00:31 The primary point I want to make is very much in his investment cycle, the founders have always
1:00:31 mattered.
1:00:32 In all of this, he’s followed on deals
1:00:33 I’ve done, and I’ve followed on deals
1:00:35 he’s done; we’ve done a bunch of deals together.
1:00:40 I’ve never gotten the impression otherwise.
1:00:43 I mean, I’ve actually always gotten the impression that the founders are the primary
1:00:47 decision once he’s chosen the market.
1:00:48 So I would say it’s a primary concern for him.
1:00:54 When you have misjudged a founder, what did you not see that you should have seen?
1:00:57 So can I answer your previous question?
1:00:59 Cause you’re like, okay, so how do we think about it?
1:01:05 So we think about it very, very simply, which is, the only sin in investing, and
1:01:06 I’ve sinned so much,
1:01:11 the only sin in investing is missing the winner.
1:01:16 Like, it’s fine to invest in a category that doesn’t work.
1:01:17 It’s fine to lose money.
1:01:21 But, like, if you choose the wrong company, that’s not okay.
1:01:26 And listen, it’s just so hard to get it right all of the time.
1:01:31 And so the way that we view it is we just look for, you know, what are viable spaces?
1:01:34 And it’s determined viable because…
1:01:37 Someone said to me the other day, I’m so sorry to interrupt you, that at Andreessen, you get
1:01:41 killed for choosing the wrong company while being right about the space.
1:01:45 You won’t get killed if you were just wrong about a space.
1:01:45 Correct.
1:01:46 That’s exactly right.
1:01:47 Yeah.
1:01:47 Yeah.
1:01:52 So the view is, like, there’s basically no amount of work you can
1:01:54 do to determine if a space is going to work or not.
1:01:59 I mean, that’s just, you know, that’s like weather prediction. But given a set of companies,
1:02:02 you can actually do the work to understand which one of those is the best.
1:02:04 Now, we have gotten it wrong.
1:02:06 Do you think you can?
1:02:09 The question is, can you beat the market with that strategy?
1:02:10 Yes.
1:02:11 I think you can beat the market.
1:02:14 No, I do not think that you can unequivocally tell the best.
1:02:18 Can you beat the expectation of the market by running this strategy?
1:02:20 I would say yes.
1:02:23 Can you specifically pick the winner every time?
1:02:24 Absolutely not.
1:02:25 Clearly not.
1:02:29 When, most poignantly for you, did you pick the right market
1:02:30 but pick the wrong horse?
1:02:36 You know, I just don’t want to call out any specific company.
1:02:38 Fair enough.
1:02:41 When you think about, you mentioned sins there,
1:02:45 what was a big sin that comes to your mind?
1:02:48 Well, yeah, I mean, I can answer the opposite.
1:02:52 There’s a bunch of markets that just haven’t really worked.
1:02:57 Like, um, you know, the entire streaming market has been very, very tough.
1:03:03 Like, say, the data streaming market; it’s just turned out to be a subset of the batch
1:03:04 analytics market.
1:03:09 And so, you know, maybe ClickHouse; Aaron Katz is doing phenomenally with it, and
1:03:11 I’m not an investor, but he’s doing phenomenally.
1:03:13 But that may be the one breakout since Confluent.
1:03:17 But like, that’s just been a very, very tough space historically, whether you’re at the dashboard
1:03:21 layer, the transformation layer, the feature store layer.
1:03:24 It’s like, there’s been entire spaces where we played multiple bets where, like, it just
1:03:26 didn’t work out.
1:03:30 And so many, many times we’ll invest in a space where just none of them work.
1:03:34 You know, I will tell you, there’s definitely been companies we invested in where, at the time,
1:03:36 the company was the very, very clear leader.
1:03:40 And then something happens, some macro shift, some, you know, something else happens.
1:03:45 And, and, uh, you know, I think that’s just how the game goes.
1:03:47 And, um, you’ve probably heard this.
1:03:51 I mean, the thing with actually having a strategy like that is if you’re trying to scale a venture
1:03:56 firm, you just need something that you can articulate and teach other people.
1:04:03 I just find it hard that if you pick the right market and the wrong horse, bad, bad, Martin.
1:04:03 Yeah.
1:04:06 But if you don’t pick the right market, fine.
1:04:13 To me, some points need to be given for the insightfulness to pick the right market, and
1:04:16 some forgiveness shown for that.
1:04:17 It’s fucking hard to pick the horse.
1:04:20 I’d almost fire the one who picked the wrong market entirely.
1:04:21 Where was your insight at least?
1:04:22 Yeah.
1:04:26 And this is why you run your own venture firm and you can have whatever strategy you want.
1:04:27 I just…
1:04:28 Is that not moronic?
1:04:29 No.
1:04:30 No, no, it’s not.
1:04:33 I just think it’s philosophically different on the approach.
1:04:33 Right.
1:04:38 And so I actually don’t believe you can predict the future of technology adoption.
1:04:40 It’s a very tough thing, right?
1:04:43 I mean, you don’t know what a big company is going to do that can wipe out an entire market.
1:04:46 You don’t know what innovation will wipe out entire markets.
1:04:47 This happens all the time.
1:04:52 I mean, you could argue that AI is really invalidating tons of markets, and I don’t think
1:04:53 anybody could have seen that coming.
1:05:00 But if you have, say, 10 companies that have some traction, and you can talk to the
1:05:02 founders, you can diligence the teams, you can diligence the market, you can diligence
1:05:04 the project, you can diligence the technical approach.
1:05:08 I think you could just say something a lot more concrete than, you know, is some future
1:05:10 innovation going to wipe out this entire market?
1:05:18 Do you think it’s paradoxical or opposing to believe both that AGI will be dominant and
1:05:23 present in a set time period and, at the same time, to be investing in enterprise SaaS?
1:05:25 I don’t know.
1:05:28 I mean, I would say humans are AGI and we still invest in enterprise SaaS.
1:05:37 This is the problem: everybody somehow thinks that AGI just means, like, unlimited
1:05:41 power, and anything I want to disappear in the future disappears.
1:05:42 Like, come on, you’re AGI.
1:05:45 I’m AGI.
1:05:48 We invested in enterprise SaaS.
1:05:52 I think to be honest, Sam Altman sets the definition of what AGI is.
1:05:55 So whatever he and Microsoft decide is AGI will be AGI.
1:05:57 Dude, I want to do a quick fire round.
1:05:59 So I say a short statement, you give me your immediate thoughts.
1:06:00 Yeah?
1:06:01 Yeah.
1:06:04 What’s one of the most overhyped AI categories today?
1:06:05 ASI.
1:06:12 What’s one of the worst VC takes on AI you’ve heard recently?
1:06:14 Open source is bad for national security.
1:06:19 What one founder would you back in any category?
1:06:21 Whatever they did, I just want to wire them the money.
1:06:23 Michael Truell.
1:06:28 Why, specifically?
1:06:30 I’ve worked with him for a year.
1:06:31 He’s just remarkable.
1:06:32 He’s…
1:06:34 What makes him remarkable?
1:06:43 It’s just so rare that I’ve found a founder who knows…
1:06:46 He has three things.
1:06:48 He knows what he wants.
1:06:51 He’s got an intuition that’s impeccable.
1:06:55 And he listens incredibly well and gathers information.
1:06:59 And it’s a very, very potent combination.
1:07:02 And then, of course, he’s incredibly smart and he’s got great product taste.
1:07:08 What’s your favorite trait in yourself that has been most impactful to your own success?
1:07:11 Deep-seated anxiety from being poor?
1:07:15 Seriously.
1:07:16 I agree.
1:07:19 I mean, listen, I grew up like, you name it.
1:07:21 Food stamps, dirt road.
1:07:24 Like, I mean, I come from Montana.
1:07:24 It’s so funny.
1:07:26 People hear the name Martin and they’re like, oh, he must be…
1:07:28 And then, you know, I was actually born in Spain, so I’m a Spanish citizen.
1:07:32 So they’re like, you know, he must be some, like, sophisticated European.
1:07:35 I’m like, motherfucker, dude, I grew up on a dirt road in Montana.
1:07:37 Like, when there was hunting season, my school shut down.
1:07:40 Like, I’m like a Western country boy.
1:07:47 And so, you know, listen, I had a great family.
1:07:48 I didn’t have any of those hardships.
1:07:50 I had a wonderful, educated family.
1:07:53 And so, like, you know, we kind of muddled our way through.
1:08:00 But, you know, you go through that and you see how hard your parents work and whatever.
1:08:04 You just don’t take anything for granted.
1:08:09 And, you know, listen, I had a very successful outcome from selling a company.
1:08:13 And I could have retired on that day.
1:08:18 And I still have not stopped; I haven’t not worked since basically forever.
1:08:21 Now, listen, I’ll take, like, a week off while I have a job.
1:08:24 But I’ve never not had a job in, what, 20 years.
1:08:26 It’s just…
1:08:28 Did that day feel fucking awesome?
1:08:33 Coming from a dirt track and food stamps, as you said,
1:08:35 you can retire today.
1:08:36 I know you didn’t.
1:08:38 But did it feel as good as you thought it would?
1:08:41 You know, it’s kind of an interesting thing.
1:08:44 No, I mean, it was very bittersweet.
1:08:48 I think actually selling companies is very bittersweet for any founder, right?
1:08:50 It’s like, you know, it’s a death in a way.
1:08:54 I mean, you know, you spend so much time with something and then it shifts.
1:08:57 But here’s the interesting thing.
1:09:02 And maybe this is kind of advice to other founders, which is, you always think about
1:09:06 that thing you’ll do when you, like, you know, make the $100 million
1:09:09 or whatever: you know, “I’m going to go do that thing.”
1:09:12 But you only think about that thing in the most stressful times.
1:09:15 So my thing was, my cousin’s a movie director.
1:09:17 His name is Vincenzo Natali, pretty legit guy.
1:09:20 And I was like, you know what I’m going to do?
1:09:25 As soon as, you know, the money hits the bank, I’m going to drive down to Hollywood
1:09:29 and I’m going to help him make movies and be an actor and just kind of be one of those people.
1:09:36 And so, you know, it happened, the wire hit, and I was driving down the 5.
1:09:39 And I’m like, what the fuck am I doing?
1:09:41 Like, I love technology.
1:09:43 I love my job.
1:09:44 I don’t know.
1:09:45 I hate Hollywood.
1:09:48 I don’t, I have nothing in common with these people.
1:09:52 You know, I probably got two hours out of town and I just turned my car around and came right
1:09:58 on back because I was like, you know, you only have those visions at the most stressful time.
1:10:02 And when you’re not stressed, you realize that there’s something that brought you to this
1:10:05 place and it’s genuine interest and genuine love of it.
1:10:11 And so my only advice to other people going through this is just don’t use
1:10:15 those dreams that you concocted when you were really in the pressure cooker, like not
1:10:18 sleeping, your relationships falling apart, that whole thing.
1:10:21 That’s not the thing you’re going to want to do at steady state.
1:10:26 You’re probably where you are for the love of it, and letting that go tends
1:10:28 to be pretty disastrous for some people.
1:10:31 Was making money or having money what you thought it would be?
1:10:33 You know, I had to play all of these tricks.
1:10:35 I actually borrowed one, which was very helpful.
1:10:41 Which was, so I just had a hard time spending money, just cause, like,
1:10:45 I mean, look, for me, you know, when I got into the Stanford PhD
1:10:51 program, this is so embarrassing, but like, um, we always thought like $20 was like a lot
1:10:52 of money growing up.
1:10:54 Like, you know, and we’d call it like the yuppie food stamp.
1:10:56 Cause it was like 20 bucks.
1:11:00 And, uh, I remember I was going to go to Bytes Cafe and I was going to pay
1:11:05 with a $20 bill, because, like, that’s kind of like some stamp of, like,
1:11:06 having money.
1:11:10 So I was just, you know, I was just so naive to all of these things.
1:11:16 Um, and so, like, it was just very hard for me to spend, even once, you
1:11:19 know, I’d made generational wealth.
1:11:22 And so I talked to a friend of mine who went through a similar thing.
1:11:23 He’s like, you know what I did?
1:11:26 He said, I came up with, let’s, you know, let’s call him Brad.
1:11:32 I came up with a Brad coin, and the Brad coin, let’s say I’m worth, you know, 10 times more
1:11:34 than, like, an average rich person.
1:11:37 So the Brad coin is worth, you know, 10 times more.
1:11:39 So I buy things in Brad coins.
1:11:46 And so if it’s, you know, let’s say it’s a business class flight, right?
1:11:50 I mean, that’s $10,000, but in Brad coins, it’s only 1,000, and 1,000 sounds a lot better
1:11:51 than 10,000.
1:11:52 So I feel good.
1:11:56 So I actually had to adopt a lot of these mechanisms, where, like, I’ll make a Martin
1:11:57 coin and it’s worth this much money.
1:11:58 What got worse with money?
1:12:04 This is something I have to deal with all the time, but, like, I mean, my wife forces
1:12:08 me to keep it real and she just won’t abide by any of that shit.
1:12:11 So, man, I got three fucking dogs that are crazy.
1:12:13 Like, she doesn’t like help in the house.
1:12:15 Like, I drive a fucking Volkswagen.
1:12:21 We have three chickens in the back, you know, I’m like fucking schlepping the kid all the
1:12:22 time.
1:12:25 I mean, like, listen, man, if it were me, I would be living your life, man.
1:12:31 I’d be like a hundred percent, you know, in New York in the penthouse with
1:12:37 the private jet, and instead I’m in a fucking Volkswagen with three dogs and a messy house
1:12:37 and no help.
1:12:41 So, yeah.
1:12:43 Dude, you’re so whipped.
1:12:47 You know, it’s not even that, right?
1:12:51 It’s like, you know, I mean, this is what marriage is, man.
1:12:52 Like, you know.
1:12:54 What’s your biggest lessons on marriage?
1:12:55 For me, I’m 29.
1:12:58 I got a great relationship, but not quite there yet.
1:13:01 What would you tell me about greatness in marriage that I should know?
1:13:05 Well, listen, I got it wrong once.
1:13:07 I’m not sure I’m the right guy to ask here.
1:13:10 Like, my startup was really tough.
1:13:12 You know, it was really tough.
1:13:15 And I think that burned through my first marriage, and she was great.
1:13:17 Yeah, fuck, dude, I’m the wrong guy to ask.
1:13:20 I’m really the wrong guy to ask.
1:13:25 I mean, I will say something, which is a different question than you asked,
1:13:33 but I think it’s important, which is, I have found that men in particular that have stable
1:13:39 relationships just do a much better job at work.
1:13:41 They’re just much more stable.
1:13:44 I think the best founders I have tend to have families, et cetera.
1:13:50 And, um, I do think, again, you know, I don’t want to make it a gender thing.
1:13:50 Maybe it’s not.
1:13:51 It may just be my observation,
1:13:55 I work with a lot of men, that families are really, really good for men,
1:13:57 even though they can be a pain in the ass.
1:14:05 And so I just think the only high-level view is, it’s just, these things
1:14:06 are super important.
1:14:10 And so whoever you have, and you’re working at it, it’s an important thing
1:14:14 that really is keeping you grounded.
1:14:19 I mean, in my case, listen…
1:14:22 You got chickens, baby.
1:14:25 I mean, you know, it’s like, what does Zorba the Greek say?
1:14:27 It’s the full catastrophe.
1:14:30 I know it’s the only way I can do what I do.
1:14:32 There’s no other way.
1:14:32 Right.
1:14:35 I mean, the level of pressure, the amount of work that I do,
1:14:38 I mean, I probably work, all in, 80 to a hundred hours a week.
1:14:40 I’ve been doing it for 10 years.
1:14:46 I mean, the amount of demands, it’s very, very hard to do without, like,
1:14:48 you know, support and grounding.
1:14:53 And so, you know, in a way, again, I’m not the right person to ask, like, how do
1:14:53 you treat your wife?
1:14:56 Like, whatever, I’m a fucking autistic nerd.
1:15:02 Like I have no idea, but I do know that these things are incredibly important for, for us
1:15:04 and, and you should value them and treat them as such.
1:15:10 If you think about Andreessen in 10 years’ time, where do you think Andreessen will be then?
1:15:16 Like, 10 years ago, when you remember it, it was a fucking different firm, amazing
1:15:20 and innovative in its own time, but from where it is now, night and day.
1:15:22 Where is the 10-year Andreessen in 2035?
1:15:31 The most remarkable thing about the firm, in my opinion, is that it’s able to evolve and
1:15:33 adapt very aggressively because of the way it’s structured.
1:15:36 I mean, Mark and Ben really are the top of the firm.
1:15:37 They really are.
1:15:40 And I think it’s a feature, not a bug.
1:15:46 And I think it’s very, I mean, it’s kind of a historical quirk that VC was created around
1:15:47 a partnership model.
1:15:50 Like that’s the same thing you’d use for a dentist office or a law firm.
1:15:57 And I think there’s positives in that there’s a bunch of different agendas that kind
1:16:02 of sit at the same level, but for decision velocity and disruptive change,
1:16:03 it’s death.
1:16:06 And so I think that that’s a massive benefit to the firm.
1:16:10 I’m just delighted that this is the way it is, because they can make these big, aggressive moves.
1:16:12 So I don’t know what it’s going to look like in 10 years.
1:16:17 I guarantee it’s going to look different as it evolves with the landscape.
1:16:20 Martin, I so appreciate you, dude.
1:16:21 You are fantastic.
1:16:21 You’re open.
1:16:22 You’re honest.
1:16:27 I love the last 15 minutes there, but I really appreciate you, man.
1:16:27 Yeah.
1:16:28 Likewise.
1:16:29 Harry, always a pleasure.
1:16:30 You’re the best.
1:16:34 Thanks for listening to the A16Z podcast.
1:16:39 If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com slash
1:16:40 A16Z.
1:16:42 We’ve got more great conversations coming your way.
1:16:44 See you next time.
1:16:50 This information is for educational purposes only and is not a recommendation to buy,
1:16:53 hold, or sell any investment or financial product.
1:16:57 This podcast has been produced by a third party and may include paid promotional advertisements,
1:17:00 other company references, and individuals unaffiliated with A16Z.
1:17:06 Such advertisements, companies, and individuals are not endorsed by AH Capital Management, LLC,
1:17:08 A16Z, or any of its affiliates.
1:17:13 Information is from sources deemed reliable on the date of publication, but A16Z does not
1:17:14 guarantee its accuracy.

In this interview from the 20VC podcast, Martin Casado (a16z General Partner) joins Harry Stebbings to unpack the state of AI, the rise of coding models, the future of open vs. closed source, and how value is shifting across the stack.

Martin offers a candid view of the opportunities and dangers shaping AI and venture capital today.

 

Resources: 

Find Martin on X: https://x.com/martin_casado

Find Harry on X: https://x.com/harrystebbings

More about 20VC:

Subscribe on YouTube: https://www.youtube.com/@20VC

Subscribe on Spotify:

https://open.spotify.com/show/3j2KMcZTtgTNBKwtZBMHvl?si=85bc9196860e4466&nd=1&dlsi=d1dbbc6a0d7c4408

Subscribe on Apple Podcasts:

https://podcasts.apple.com/us/podcast/the-twenty-minute-vc-20vc-venture-capital-startup/id958230465

Visit their Website: https://www.20vc.com

Subscribe to their Newsletter: https://www.thetwentyminutevc.com/

Follow 20VC on Instagram:  https://www.instagram.com/20vchq/#

Follow 20VC on TikTok: https://www.tiktok.com/@20vc_tok

 

Stay Updated: 

Let us know what you think: https://ratethispodcast.com/a16z

Find a16z on Twitter: https://twitter.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Subscribe on your favorite podcast app: https://a16z.simplecast.com/

Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
