AI transcript
0:00:14 And if we’re going to make a departure from a posture that was developed from 40 years, we better have a pretty damn good reason.
0:00:17 Today, a new frontier of scientific discovery lies before us.
0:00:21 You can sometimes judge a book by its cover, and I think this is a strong start.
0:00:25 The conversation around AI regulation in the U.S. has changed dramatically.
0:00:30 Just a year ago, the loudest voices were calling to pause or shut down open-source AI.
0:00:33 Today, the U.S. is pushing to lead the global race.
0:00:38 So what changed? And what does it mean for innovation, competition, and the future of open-source?
0:00:49 I’m joined by A16Z general partners Martin Casado and Anjane Midha to unpack the new AI action plan, the politics behind it, and the implications for builders and policymakers alike.
0:00:51 Let’s get into it.
0:01:07 As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
0:01:12 Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
0:01:19 For more details, including a link to our investments, please see A16Z.com forward slash disclosures.
0:01:27 So we’re talking a week or two after the action plan has been announced.
0:01:28 Looks like we’ve come a long way.
0:01:29 Yeah.
0:01:33 You guys have been on the front lines for years now in this discourse, fighting to make this possible.
0:01:38 Why don’t we trace where we’ve been so that we could then understand how we got here and where we’re going?
0:01:45 I mean, under the Biden administration, we had the executive order, which was basically the opposite of what we’re seeing today.
0:01:47 I mean, it was trying to limit innovation.
0:01:49 It was doing a bunch of fear-mongering.
0:01:55 But to me, what was even more striking was not regulators being regulators.
0:01:55 Right.
0:01:56 You’d expect that.
0:02:04 But if you remember, Ange, and this is why we got involved, is you would have these politicians, you know, making recommendations, which is fine.
0:02:04 You’d expect that.
0:02:06 But nobody was saying anything.
0:02:08 You know, it was like academia was silent.
0:02:08 Right.
0:02:10 The startups were silent.
0:02:14 And if anything, like the technologists were kind of supporting it.
0:02:19 So we were in the super backwards world where it was like innovation is bad or dangerous and we should regulate it.
0:02:20 We should pause it.
0:02:28 You know, there was this discourse and it was like somewhat fueled by tech as opposed, you know, and then nobody was going against it.
0:02:30 And so I think today we should definitely talk about the action plan is great.
0:02:36 But we should also talk about how the entire industry has kind of come around to say like, listen, we need to keep these things in check.
0:02:37 We need to be sensible.
0:02:37 Yeah.
0:02:38 I mean, pause AI.
0:02:39 That was two years ago.
0:02:43 Remember the big sort of, you know, all the CEOs signed this petition.
0:02:45 Oh, yeah.
0:02:48 I think that was the last AI action summit, right?
0:02:49 The one before Paris.
0:02:51 Guys, there’s been so many of these.
0:02:52 Yeah, I must try.
0:02:53 I must believe I must try.
0:02:53 No, no.
0:02:58 Remember, like, what was Dan Hendricks’s CAIS?
0:02:59 What was California AI?
0:03:01 The Center for AI Safety.
0:03:02 Center for AI Safety.
0:03:03 That’s it.
0:03:04 The nonprofit.
0:03:04 Yeah, yeah, yeah.
0:03:04 Yeah.
0:03:12 And then they got like all of these like people to sign this list, you know, like when you need to worry about the existential risk of AI.
0:03:13 And like that was the mood.
0:03:17 It was almost like, like, can I just do something by contrast, right?
0:03:21 So I was, you know, there during kind of like the early days of the web and the internet.
0:03:27 And at that time, you actually had examples of the stuff being dangerous, right?
0:03:29 Like Robert Morris, like, let out the Morris worm.
0:03:30 It took down critical infrastructure.
0:03:32 We had, so we had new types of attacks.
0:03:33 We had viruses.
0:03:34 We had worms.
0:03:36 We had critical infrastructure.
0:03:38 We actually had a different doctrine for the nation.
0:03:41 We said, you know, the more we get on the internet, the more vulnerable we are.
0:03:44 So instead of like mutually assured destruction, we have this notion of asymmetry.
0:03:47 So there was all of these great examples of why should we be concerned?
0:03:49 And what did everybody else do?
0:03:50 Pedal to the metal.
0:03:52 Invest more technology.
0:03:52 This is great.
0:03:56 And so like, you know, we were still at the time, like we wanted the internet.
0:03:57 We wanted to be the best.
0:03:58 We wanted to build it out.
0:04:04 You know, the startups were all over it and coming into this AI stuff two years ago, it
0:04:07 was the opposite, which is like, there were the concerns with new technology, which you
0:04:08 always have.
0:04:12 But like, there are very few voices that were like, actually, it’s really important we invest
0:04:13 in this stuff.
0:04:16 And so that’s kind of, to me, the bigger change is this more cultural change.
0:04:17 I think that’s right.
0:04:24 There was a moment in, I think it was last summer, where somebody sent you and me a link
0:04:26 to the SB 1047 bill.
0:04:31 And I remember Martin and I reacting like, there’s no way this is going to get any steam.
0:04:36 What was absurd to us, I think, was that it made it through the House and the Senate.
0:04:40 And it was on its way to a final vote and would have become law, one signature from the governor
0:04:41 later.
0:04:45 And I think there was this escalation where I realized, I think my view is that technologists
0:04:49 like to technology and politicians like to policy.
0:04:52 And we pretend like these two things are in different worlds.
0:04:55 And as long as these two worlds don’t collide and the engineers get to like build interesting
0:05:01 tech and there’s no sort of like self-own too early.
0:05:03 We generally trust in our policymakers.
0:05:09 And that changed completely, I think, last summer, which is a really weird cultural shift, which
0:05:09 is, no, no, no.
0:05:13 A lot of the policymakers who actually, I think, were quite open about the fact they didn’t know
0:05:17 much about the technology because it was moving so fast, still felt like something had to be
0:05:18 done.
0:05:19 Therefore, this is something.
0:05:20 Therefore, it must be good.
0:05:25 And that this was this, I think the most egregious example of this being adversarial was SB 1047.
0:05:26 I don’t agree.
0:05:32 But that culture shift was one from let’s let the tech mature and then decide how to
0:05:37 regulate it later to like before let’s try to regulate it in its infancy was like a massive,
0:05:39 I think, shift in my head.
0:05:40 But let’s just talk about how bad it was.
0:05:42 You had VCs.
0:05:49 Like their entire job is investing in tech, talking against open source.
0:05:53 You know, like Vinod, Founders Fund, they’re like, open source AI is dangerous.
0:05:55 It gives China the advantage.
0:06:01 And there was just some sort of prognostication that if we didn’t do open AI, like the Chinese
0:06:03 would somehow forget math and not be able to create models.
0:06:09 And then you forward by a year and they’ve got the best models by far and we’re way behind.
0:06:14 So it was like the people that are supposed to be protecting the U.S. innovation brain trust
0:06:17 were somehow on the side of the let’s slow it down.
0:06:23 And I think that now there’s this realization of actually China is really good at creating
0:06:24 models and they’ve done a great job.
0:06:28 We’ve kind of hamstrung ourselves from whatever discussion we were having.
0:06:29 And I think you’re right.
0:06:35 I think it was just like it’s good to be concerned about the dangers and job risks, but it has
0:06:36 to be a fulsome discussion.
0:06:37 You need both sides.
0:06:39 And when you and I jumped in, it just didn’t feel fulsome at all.
0:06:44 It was like one side was dominant and there’s almost no one on kind of the pro tech, pro
0:06:45 innovation, pro open source side.
0:06:47 I just think it didn’t feel grounded in empirics, right?
0:06:48 Well, certainly not that.
0:06:50 It came from, you know…
0:06:54 So what is the steel man of the critique of open source that they were making a couple
0:06:54 years ago?
0:06:56 That, you know, this is like a nuclear weapon.
0:06:58 Would you open source your nuclear weapon plans?
0:06:59 Would you open source your F-16 plan?
0:07:05 So the idea was that somehow like this was like, you know, nuclear weapons are not dual
0:07:06 use.
0:07:07 Nuclear energy is dual use, right?
0:07:09 An F-16 is not dual use.
0:07:11 Like a jet engine is dual use.
0:07:16 But a lot of the analogies that were used at the time were something that, you know, if
0:07:18 you squint one way, parts of it are dual use.
0:07:20 They could be used for good or for bad.
0:07:23 But like the examples were clearly the weapons.
0:07:24 And that’s what they would say.
0:07:26 They would say, listen, these things are incredibly dangerous.
0:07:28 Would you open source like whatever the plan is for an F-16?
0:07:34 And then, you know, the other side would slowly decided like this conversation is ridiculous.
0:07:35 We’ve got to go ahead and set up.
0:07:40 It says, you know, no, you would not do this for an F-16 because that is a fighter, you know,
0:07:41 jet.
0:07:44 However, like a lot of the technologies used to build it.
0:07:48 Yes, this is, you know, fundamental.
0:07:51 It’s not like people aren’t going to figure it out anyways.
0:07:54 And we need to be the leader, just like we were the leader in nuclear.
0:07:58 And we were, then by the way, in nuclear, like if you go historically, when that came
0:08:00 out, we invested incredibly heavily in it.
0:08:06 The things that we thought were proximal to weapons, of course, we made sensitive, you
0:08:10 know, but this, you know, all the universities were involved, like the entire country had
0:08:12 the discourse and that just wasn’t what was happening.
0:08:13 I think that’s true.
0:08:17 They were basically like there was a substantive argument against open source and there was
0:08:18 an atmospheric one.
0:08:23 And the substantive one was like the one Martin mentioned that the technology was being
0:08:25 confused for the applications, right?
0:08:30 And all the worst case outcomes of the applications or misuses were then being confused.
0:08:31 But they were also theoretical too.
0:08:32 It’s even worse than that.
0:08:36 It was like, you’re right in what you’re saying, but it was like, this could potentially create
0:08:37 bioweapons.
0:08:37 It was funny.
0:08:40 We got a bioweapon expert and he’s like, well, not really.
0:08:43 I mean, like the difference between like a model in Google is almost nothing.
0:08:47 But, you know, like that was used as this, you know, straw person argument and then it
0:08:49 could hack into a whole bunch of stuff like nobody had ever done it before, but it was
0:08:50 theoretical.
0:08:52 So it was like these theoretical arguments that were very specific.
0:08:54 Versus abroad technology.
0:08:55 That was one.
0:08:59 And then the atmospherics were, there was a famous former CEO who went up in front of
0:09:03 Congress and literally in a testimony said, the U.S. is years ahead of China.
0:09:08 And so since these are nuclear weapons and their misuses were being confused with the technology
0:09:12 and we’re so far ahead, let’s lock it down so we can maintain that lead.
0:09:16 And therefore our adversaries will never get their hands on it, which were both just fundamentally
0:09:21 wrong for the reason Martin said, like substantively, AI was not introducing new marginal risks.
0:09:24 So if you did an eval on how much easier it is.
0:09:25 Well, at least not identified at the time.
0:09:26 Not at the time.
0:09:32 I mean, you would go to Don Song, who is like a safety researcher, McCarthy genius fellow at
0:09:33 Berkeley.
0:09:36 And you’d say, what are the marginal risks of AI?
0:09:37 She’d say, great question.
0:09:38 We should research that.
0:09:39 That should be a good research problem.
0:09:44 The world expert on this question was like, this is a very important, but it’s an open
0:09:44 research statement.
0:09:45 Yeah.
0:09:49 So, so, so no empirical evidence at the time that AI was creating net new marginal risks
0:09:53 and just factual inaccuracies that we were ahead of China, because if you just paid attention
0:09:57 to what’s happening, DeepSeq had already started to publish a fantastic set of papers, including
0:10:00 DeepSeq Maths V2, which came out last summer.
0:10:03 And you’re like, okay, obviously these guys are clearly close to the frontier.
0:10:05 They’re not years behind.
0:10:09 And so when R1, DeepSeq R1 came out earlier this year, you know, a lot of Washington was
0:10:10 like shocked.
0:10:10 Oh my God.
0:10:12 They’re like, how did these folks catch up?
0:10:13 They must’ve stolen our weights.
0:10:17 No, actually it’s not that hard to distill on the outputs of our labs.
0:10:20 Have you actually looked at the author list of any paper in AI?
0:10:22 Like where do you think these people come from?
0:10:26 So, so I think those two things were, I’ve, it felt like we were being gaslit constantly
0:10:29 because both the content and the atmosphere were just wrong.
0:10:29 Yep.
0:10:29 Yep.
0:10:29 Yep.
0:10:32 Maybe one question for the smartest people or the most sober people who were against it
0:10:35 is like, maybe they were asking, where should the burden of proof be?
0:10:38 Because it’s hard to prove that there is risk, but it’s also hard to prove that there isn’t
0:10:39 risk.
0:10:43 And so this question of what’s risky, is it riskier to just go full steam ahead or is it
0:10:47 riskier to kind of slow down until we better understand sort of these models, you know,
0:10:48 interpretability, et cetera.
0:10:52 I mean, I think it’s really important to ground these hypothetical discussions on what we’ve
0:10:53 learned as an industry.
0:10:56 I mean, the discourse around tech safety has been around for 40 years.
0:11:00 So we went through it with compute, like remember when we’re like, okay, Saddam Hussein shouldn’t
0:11:04 have PlayStations because you can use GPUs to simulate nuclear weapons.
0:11:10 That was actually a pretty robust and real discussion, but that did not stop, you know, us from having
0:11:12 other people create chips or video games, right?
0:11:15 I mean, we went through the internet, we went through cloud, we went through mobile.
0:11:19 And so we’ve been through all of these tech waves and we’ve learned how to have this discussion
0:11:23 in a way that for the United States interest balances these two things.
0:11:26 And, you know, listen, we’ve had kind of areas that were very sensitive to national
0:11:26 governance.
0:11:30 Think about like Huawei and Cisco, for example, and we as a nation did start to put in kind
0:11:32 of import and export restrictions as a result.
0:11:40 And so I just feel these almost platonic, you know, polemic questions like the one that you
0:11:43 just posed aren’t rooted in 40 years of learning.
0:11:50 So all I ask is if we’re going to make a departure from a posture that was developed from 40 years,
0:11:52 we better have a pretty damn good reason.
0:11:58 And if we don’t have a good reason, then I think we should probably learn from that experience.
0:12:01 Yeah, I think extraordinary claims require extraordinary evidence.
0:12:05 And so the burden of proof should be on the party making the extraordinary claims.
0:12:09 And if there’s a party who’s going to show up and say, you know, AI models are like nukes
0:12:15 and California should start imposing downstream liability on open source developers for open sourcing
0:12:18 the weights, that’s a pretty high claim to make.
0:12:21 And so you should have like exceptional proof if you want to change the status quo.
0:12:27 And the status quo is you do not hold scientists liable for downstream uses of their technology.
0:12:28 That’s absurd.
0:12:32 That’s a great way to shut down the entire innovation ecosystem and start throwing literally
0:12:33 like researchers in jail.
0:12:33 We don’t want that.
0:12:35 We want them to be trying to push the frontier forward.
0:12:40 And I just don’t think that the tall claims were not being followed up by tall proof.
0:12:43 When we’re talking about open source, are we all talking about the same thing?
0:12:46 Meaning are there degrees of open source or is it kind of just like a binary?
0:12:50 Open weights, I think, was the primary contention, which is that if somebody put out an open,
0:12:55 the weights of a model and a bad guy took those weights, fine-tuned it, did something really
0:13:01 terrible two years later, the SB 1047 regime proposed that the original developer of the
0:13:05 weights and that they put out basically as free information should be held liable, which was absurd.
0:13:06 Right, right.
0:13:07 So I think a weight-
0:13:09 Yeah, I mean, I just want to make sure we’re very clear because, like, you know, people
0:13:10 jump on top of these things.
0:13:10 Right.
0:13:11 What he’s saying is correct.
0:13:18 So basically, if the weights were over a certain size and there was a mass casualty event.
0:13:22 I think catastrophic harm was the word used, but there were no real-
0:13:23 No, it was mass casualty.
0:13:25 There were so many versions-
0:13:26 Actually, I don’t know which version you’re talking about.
0:13:27 Yes, yes.
0:13:28 But I remember we actually looked it up.
0:13:35 The legal definition was three or more people were killed or the medical system was overwhelmed,
0:13:38 which there was actually precedence of this, including like a car crash.
0:13:38 Right.
0:13:39 Right.
0:13:42 And there was actually precedence of this happening in a rural area, which basically doesn’t have
0:13:43 any sort of capacity.
0:13:50 And so, you know, basically it would move the conversation to the courts and outside of
0:13:56 policy, which is, again, historically, we’ve taken a policy position on these things, which
0:14:01 follows precedence that we understand, you know, to make sure that we don’t introduce externalities,
0:14:06 like, for example, allowing, you know, China to race ahead with open source, which is, you
0:14:07 know, which has happened.
0:14:12 And the key thing is by moving it to the courts, even if you don’t, you could, you, one could
0:14:15 argue, oh, Anj, but like, sure, it’s moving to the courts.
0:14:16 That means it’s open for debate.
0:14:19 It’s not clear that open weights are going to be regulated with liability.
0:14:21 The point is that that creates a chilling effect.
0:14:25 The chilling effect is the idea that when, when our best talent is considering.
0:14:26 I could be sued.
0:14:29 Like, I’m, I, like, you know, I’m a random kid in Arkansas developing something.
0:14:33 Like, I don’t want to be in a world where it can be resolved in the cars.
0:14:34 Right.
0:14:35 Hey, I can’t even afford, you know, whatever.
0:14:42 And in a situation where you have an entire nation state backed entity like China may actually
0:14:43 doing the opposite of a chilling effect, right?
0:14:46 Surging a race to the frontier.
0:14:50 Why on earth would we want, you know, that there’s this meme of a guy on a bike and he
0:14:54 picks up a stick and puts it into his front and topples forward.
0:14:57 That’s the effect of a chilling, that, that is what chilling effect is, right?
0:14:59 At a time when your, your primary adversary is, is racing.
0:15:04 So let’s trace how the conversation has changed because we don’t see Vinod tweeting about open
0:15:04 source anymore.
0:15:07 Obviously, open ad has changed your tune, especially right now.
0:15:09 What, um, is it really just deep seek?
0:15:14 Is that, or how do you trace kind of how, how the sentiment shifted on open source?
0:15:15 Let’s, let’s go through a few theories.
0:15:17 I’m not really sure what happened.
0:15:27 I almost felt like it was almost culturally in vogue to be a thought leader on the negative
0:15:28 externalities of tech.
0:15:31 And it kind of started with Bostrom, but it was picked up by Elon.
0:15:37 It was picked up by, um, uh, Moskowitz.
0:15:42 I mean, a bunch of like these intellectuals that like we all respect and still do.
0:15:46 I mean, they’ve, they’re just really the titans of our industry and our era.
0:15:51 They were asking these very interesting intellectual questions around like, do we live in a simulation?
0:15:55 What happens if AI can recursively, uh, self-improve?
0:15:59 And then actually, you know, they created whole kind of cultures and online social discourse around
0:15:59 this stuff.
0:16:06 And so I think to no small part, that became a bit of a runaway train and it’s just catnip
0:16:11 to policymakers, you know, and so I, I think part of it is like people didn’t really realize
0:16:16 that this has become so real because of course, GPT two comes out and then three comes out and
0:16:17 like, oh, this stuff’s amazing.
0:16:18 And somehow it got conflated.
0:16:24 So I think part of it is just a path dependency on, on where we came from, which is kind of
0:16:25 the legacy of Bostrom.
0:16:26 I think that was part of it.
0:16:31 I think the ungenerous approach would be, would be that there was a lot of discourse
0:16:35 is awesome, but a lot of the people pushing the discourse were first order thinkers.
0:16:37 They weren’t doing the math on, wait, wait a minute.
0:16:42 If policymakers who have no, um, background in front area, which by the way, nobody does
0:16:48 because this space is only three, four years old, start to take discourse as canon, which
0:16:49 is a big difference.
0:16:51 Then what happens, what are the second and third order effects?
0:16:55 And the second and third order effects that are that you start making laws that are really
0:17:03 hard to undo and, and start mistaking interesting thought experiments as the basis for policy.
0:17:06 And once that happens, those of us who’ve look, law, law is basically code.
0:17:08 Code is, code is hard to refactor.
0:17:10 Law is like impossible to refactor.
0:17:14 And so I think the second and third order, third order effects was that were of a lot of
0:17:18 well-intentioned folks, for example, in the existential risk community saying, look, if you’re
0:17:22 intellectually honest about the rate of progress of AI, it’s not crazy to say that there
0:17:24 are some existential risks on the technology.
0:17:24 It’s non-zero.
0:17:25 Sure.
0:17:26 Yes, that is true.
0:17:30 But then to then say that that threshold is high enough to start introducing Nash, sweeping
0:17:36 changes in regulation to the way we create technology, that leap, I don’t think a lot of the early
0:17:38 proponents of that technology realized they would do that.
0:17:43 In fact, I think Jack Clark, who runs Policy Philanthropic, literally tweeted like towards the
0:17:45 end of the SB 1047 saga.
0:17:50 He was like, I guess we, we should have, we didn’t realize the impact of how far this could
0:17:51 have gone.
0:17:56 And I think to those of us who had interacted with DC before and regulation before it, like
0:18:01 the second and third order, third order effects were much more discernible or legible.
0:18:04 And then I think what DeepSeq did was just made it super legible to everybody else.
0:18:09 So I think they were already, like, I think DeepSeq was the catalyst, but it wasn’t like
0:18:14 there was a step, it didn’t change the reality that the second and third order effects of policymakers
0:18:19 confusing sort of like discourse for fact were always going to be terrible.
0:18:22 I just think it brought to light something a lot of us were already saying, which is we’re
0:18:26 in, we’re in a race with adversaries and that should be the calculus we, the calculus we should
0:18:27 be working backwards from.
0:18:27 Yeah.
0:18:31 There was, there was always this prevailing view, which has turned out to be so wrong from
0:18:36 really well-intentioned people, which was like, it’s going to be regulated anyways.
0:18:40 If it looks like we’re self-policing, we can dictate, you know, how that happens.
0:18:41 Right.
0:18:47 Um, and unfortunately that just turned out not to be true because, you know, whatever self-policing
0:18:50 we seem to be doing scared the shit out of people and they ended up like, and then of
0:18:54 course I would say very opportunistic elements in tech decided to use that for whatever agenda
0:18:55 that they had.
0:18:57 And so it kind of, it got away from us.
0:18:57 And so.
0:19:00 Mark had this sort of the Baptist bootlegger.
0:19:01 Yes.
0:19:02 I was just going to say exactly.
0:19:03 True believers.
0:19:07 And then, um, sort of people who use the sort of that thinking for, to support their own
0:19:07 ends.
0:19:09 And it, and it seems like that’s changed even just on the company.
0:19:12 But the, but the reality is, I think it was driven.
0:19:14 I think the majority of people are, are neither.
0:19:15 Yeah.
0:19:20 The majority of people are pragmatists that are not trying to take advantage of the system
0:19:24 that think, well, maybe if we have this discourse, it’s an honest discourse and then we’ll self-police.
0:19:30 And then I just feel like the silent majority was not part of the discussion.
0:19:34 Maybe the biggest change now is like, those people are there.
0:19:35 Like the founders are there.
0:19:36 Academia is there.
0:19:36 VCs are there.
0:19:40 Now, now the people that are not either Baptist or bootleggers are driving the discussion,
0:19:42 which actually is independent of the action plan itself.
0:19:45 I feel much more in a better position now.
0:19:48 Like for example, there’s still a bunch of stupid regulation that’s popping up, but I’m
0:19:51 not calling Ange at night and like, we have to do something now.
0:19:55 Cause I feel like, okay, there’s actually representation that’s sensible where at the time there was
0:19:55 none.
0:19:56 Right.
0:19:59 And I think to move, you know, to the, to the action plan, I think this is a great, like
0:20:01 if you read the first page, right.
0:20:02 What a marked shift.
0:20:06 The fact that the coauthors include technologists.
0:20:06 Yeah.
0:20:07 Right.
0:20:11 It’s, and I think that was the core problem is DC is a system, like a self-contained system
0:20:13 and the values is self-contained system.
0:20:17 And I think a lot of the people here were assuming best intentions over here and vice versa.
0:20:23 And what happened is a few bad actors essentially use that arbitrage opportunity to represent Silicon
0:20:25 Valley’s views incorrectly in DC.
0:20:30 And when we saw some of the legislation, we had policymakers calling us up and saying, wait,
0:20:35 you guys aren’t happy with 1047, but the guys, your, the other tech people were calling us
0:20:37 and saying, you’d love more of this kind of regulation.
0:20:38 We would say, what other tech people?
0:20:41 And it turns out we are not one homogenous group.
0:20:45 Little tech is extraordinarily different from big tech, which is extraordinarily different
0:20:47 from the academic communities.
0:20:52 And I think one of the things we had to contend with was like, we used to be one shared culture.
0:20:58 And then when tech grew, we actually, there are some major differences in the valley, at
0:20:59 least between parties.
0:21:00 We’re not one tech ecosystem anymore.
0:21:03 We have different interests and DC hadn’t updated that.
0:21:07 And I think what’s amazing about the action plan is it’s written by people who have bridged
0:21:12 both with enough representation across like the four or five different subcultures within
0:21:13 tech who have different interests.
0:21:13 Great.
0:21:14 I think that’s new.
0:21:14 Yeah.
0:21:20 Going back to open source, why don’t you talk a little bit about just sort of the, how different
0:21:24 companies, help us make sense of how different companies have thought about it or from a sort
0:21:26 of a business strategy perspective?
0:21:30 You know, maybe we saw Meta with maybe the first big open source push.
0:21:32 You know, OpenAI has sort of evolved there too.
0:21:36 I’ve seen even Anthropix seems to be evolving their dialogue a little bit.
0:21:41 How should we think about open sources as a business strategy in terms of what’s changed here and why?
0:21:47 Oh, look, I don’t think this is, this part is actually, is playing out beautifully along the
0:21:51 same trend lines of all previous computing infrastructure, databases, analytics, operating
0:21:52 systems like Linux.
0:21:57 The way it works is the closed source pioneers the frontier of capabilities.
0:21:58 It introduces new use cases.
0:22:01 And then the enterprises never know how to consume that technology.
0:22:05 And when they do figure out eventually that they want cheaper, faster, more control, they
0:22:10 need somebody like a Red Hat to then introduce them and provide solutions and services and packaging
0:22:13 and forward deployed engineering and all of that around it.
0:22:16 And which is why the arc generally in enterprise infrastructure has been closed source wins
0:22:21 applications and open source tends to do really well in infrastructure, especially in large
0:22:25 government customers, the regulated industries where there’s a bunch of security requirements,
0:22:26 things need to run on prem.
0:22:28 The customer needs total control over it.
0:22:31 Broadly, you could call that the sovereign AI market right now.
0:22:35 Lots of governments and lots of legacy industries are going, wait, this open source thing is really
0:22:36 critical to us.
0:22:40 So I think whereas two, three years ago, it was open source was viewed as like this like
0:22:44 largely philosophical endeavor, which it is.
0:22:46 Open source has always been political and philosophical by definition.
0:22:49 But now there’s an extraordinary business case for it, which is why I think you’re seeing
0:22:53 a lot of startups and companies also changing their posture because they’re going, wait a
0:22:57 minute, some of the largest customers in the world, enterprise customers happen to be governments
0:23:01 and happen to be legacy industries and fortune 50 companies and they want stuff on prem.
0:23:03 And that’s when you go adopt open source.
0:23:05 I say, I think there’s been a business shift as well.
0:23:05 I don’t know if you’d agree.
0:23:06 Yeah, this is great.
0:23:07 I, so I totally agree.
0:23:10 I, I do think it’s interesting to have a conversation where it’s the same and where it’s different.
0:23:14 Like everything I said is exactly right, which is we have a very long history with
0:23:18 open source and it’s a very useful tool for businesses, but also for research and academia,
0:23:19 et cetera.
0:23:21 But let’s just talk about businesses and startups, right?
0:23:22 It’s a great way to get a distribution advantage.
0:23:26 It’s a great way to enter a market where you’re not an incumbent and you’re a startup.
0:23:31 So it’s just kind of one of the tools for building in software that’s been used.
0:23:34 And open source has been used in a very similar way, right?
0:23:35 I mean, you can use it for recruiting.
0:23:36 You can use it for brand.
0:23:38 You can use it to get distribution.
0:23:39 And we see all of that.
0:23:43 But there’s something that’s unique about AI that software doesn’t have.
0:23:48 And like, we’re seeing very viable business models come out of it that don’t have the
0:23:49 limitations of traditional software.
0:23:51 And this is for two reasons.
0:23:55 One of them is like, open weights is not the ability to produce the weights.
0:23:58 But open software is the ability to produce the software.
0:24:02 Like if you give me open software, I can compile it, I can modify it, whatever.
0:24:03 But giving open weights, you don’t have it.
0:24:06 You don’t have the data pipeline, you know, when you’re talking about open weights.
0:24:12 So you don’t actually enable your competitors in the same way Opus Office enables it.
0:24:12 So that’s one.
0:24:17 The second one is this is very nice business model that’s kind of a piece dividend to the
0:24:22 rest of the industry, which is you open, you produce open weights to your smaller models
0:24:23 that anybody can use.
0:24:28 But the larger model you keep internally, which is actually also more difficult to operationalize
0:24:29 for inference, right?
0:24:30 I mean, there’s kind of good reasons to do this.
0:24:38 And then you charge for the largest model and then, you know, the smaller open models
0:24:40 you use for band or distribution or whatever.
0:24:45 And so I feel like it’s actually almost an evolved from a business strategy and an industry
0:24:47 perspective version of open source for these reasons.
0:24:53 I think it’s the AI flavor of open core, which was historically a theoretically, was supposed
0:24:57 to be a theoretically sort of sustainable model for open source software development, which
0:25:00 was really hard to implement because of the reasons Martin said, where once you gave away
0:25:02 the code, it was really hard for you to protect your IP.
0:25:05 But with weights, you can contribute something to the research community.
0:25:07 You can give developers control.
0:25:10 You can allow the world to red team it and make it more secure while you’re still able to
0:25:14 actually, because of the way distillation works and some of the ways like post training
0:25:18 works, you can still actually hold on to some of the core IP, which then allows you
0:25:19 to build a viable, sustainable business.
0:25:23 And that is unique about open, but also you have the data pipelines, you have the data.
0:25:27 Like, I mean, nobody else could, just because I give you the weights, doesn’t mean you can
0:25:29 recreate the model.
0:25:31 Like you could distill it to a subset model.
0:25:34 There’s a bunch of stuff you can do, but not necessarily recreated.
0:25:38 And so I, listen, having been kind of a student of open source business models for 20 years
0:25:44 and have watching, you know, it, it shaped the way that the, the industry has adopted and
0:25:45 built software.
0:25:50 We’re actually think that the, the AI one is, is more beneficial to the companies doing
0:25:51 it for sure.
0:25:55 But as a result of that, we’re going to continue to see a lot of it.
0:26:00 And so I think we should just kind of assume that open source is part of it and every country
0:26:00 is going to do it.
0:26:05 And one of the best things about this current AI action plan is it acknowledges that and it
0:26:10 wants to incent the United States to be the leader in it, which is such a traumatic shift
0:26:11 from where we were this time last year.
0:26:16 Yeah, there’s sort of an ecosystem mindset that people who, if you’ve worked in any kind
0:26:21 of developer business, which Marty and I unfortunately have spent, you know, way too long doing, you
0:26:25 know, working on dev infrastructure and dev tools, but you sort of internalize this idea
0:26:32 that when, if you, it’s often, you have to often sort of trade off short-term revenue for
0:26:33 long-term ecosystem value.
0:26:34 Right.
0:26:39 And I think what this, the action plan shows is that yes, in the short term, it may seem
0:26:44 like we’re giving away IP to the rest of the world by open sourcing weights and showing the
0:26:46 rest of the world how to create reasoning models and all of this stuff.
0:26:51 But in the longterm, if every other major nation is running their entire AI ecosystem on the
0:26:56 back of American chips and American models and American post-training pipelines and American
0:27:01 RL techniques, then that ecosystem win is orders of magnitude more valuable than any short-term
0:27:09 sort of give of IP, which anyway, as we saw with DeepSeq, that marginal headstart is minimal.
0:27:10 Okay.
0:27:13 So just to close the loop on open source, over the next several years, how do you predict
0:27:16 open source and closed source will intersect?
0:27:17 Like what will the industry look like?
0:27:18 Well, I think these are two different markets.
0:27:18 Yeah.
0:27:22 I mean, literally the requirements of the customers are completely different, right?
0:27:26 So if you’re a developer, you’re building an application and you happen to need the latest
0:27:30 and greatest frontier capabilities today, you have a different set of requirements than
0:27:36 if you’re a nation state deploying like a chat companion for your entire employee base of
0:27:37 like 7,000 government employees.
0:27:42 And you need, and the product requirements, the shape of how you provide those, do you deploy
0:27:43 them?
0:27:46 The infra, the service, the support, and then the revenue models are completely different.
0:27:51 And so often I think people don’t realize that closed source and open source are not just
0:27:54 differences in technology, but completely different markets altogether.
0:27:56 They serve different types of customers.
0:28:01 And so, and I think if you believe AI is this sort of explosive new platform shift, then
0:28:02 there’ll be winners in both.
0:28:07 I do think what we need to contend with is that it seems like it’s getting harder and
0:28:12 harder to be a category leader if you don’t enter fast.
0:28:16 Like the speed at which a new startup is able to enter the open source or the closed source
0:28:19 market and create a lead is absurd, right?
0:28:23 We both have the chance to work with founders who are, I mean, literally, you know, 20 something
0:28:27 year olds out of college, two years out of college building revenue run rate businesses
0:28:32 in the tens to hundreds of millions of dollars serving both of these markets expanding like
0:28:32 this.
0:28:40 And so I think the biggest mistake is to confuse these two markets as one and to do the classic
0:28:45 like, oh, let’s wait to see how they evolve because the pace at which a new entrant is able
0:28:47 to actually create a lead in the category is quite stunning.
0:28:49 Let’s go into the action plan.
0:28:52 What are our biggest reflections from it?
0:28:53 Where are we most excited?
0:28:58 If you look at the quote that they start with, I wanted to read it out because I thought it
0:28:59 was pretty poignant.
0:29:04 It was, today, a new frontier of scientific discovery lies before us.
0:29:07 And I thought that first opening line was fantastic.
0:29:11 Out of all the things they could have said, you know, they could have said we’re in a nuclear,
0:29:16 we’re in an arms race, which sure, the first page, the title says winning the race.
0:29:20 But if you actually start reading the document, the first sentence is a quote from the president
0:29:23 that says, today, a new frontier of scientific discovery lies before us.
0:29:26 And I love that they led with something inspirational.
0:29:27 Yeah.
0:29:31 Because ultimately, the technology has to confer some benefits on humanity.
0:29:39 And I personally, I just love the fact that we are just starting to explore what these frontier
0:29:43 models mean for scientific discovery in physics, in chemistry, in material science.
0:29:48 And we need to inspire the next generation to want to go into those areas because it’s hard.
0:29:50 It’s really hard to do AI in the physical world.
0:29:54 You have to literally hook up wet labs and start doing experiments in an entirely new way.
0:29:57 And you need people who are excited not only about wanting to do machine learning work,
0:30:04 but also the hard work of being lab technicians and running experiments and literally pipetting
0:30:07 new materials and chemistry.
0:30:13 And that, I think, was missing in a lot of the discourse under the previous administration.
0:30:16 So you can sometimes judge a book by its cover.
0:30:18 And I think this is a strong start.
0:30:21 And now I think we should actually dive into some of the bullets.
0:30:26 OK, so the other one that I thought was a huge omission is there’s basically no real mention
0:30:29 of academia, investing in academia.
0:30:31 Like, there’s some oblique references to it.
0:30:38 But it’s just been such a mainstay of innovation, computer science, of the last 40 years, not having
0:30:39 a major part of it.
0:30:40 I think it’s a shame.
0:30:44 And I understand that right now there’s kind of a standoff between higher ed and the administration.
0:30:45 And I get it.
0:30:48 And I actually think that both sides actually have fairly reasonable points.
0:30:55 But, you know, to have a major tech initiative without including academia, it just feels like
0:31:00 we’re, you know, what is it, fighting a battle with a hand tied behind our back, like some
0:31:01 aphorism.
0:31:06 This is a good problem to have, which is that I think it’s extremely ambitious.
0:31:10 It’s a little bit light on execution details, right?
0:31:11 Which is what happens next.
0:31:18 So a good example of that is I think I do think directionally it was great that they said
0:31:26 we need, let’s read this bullet point on build an AI evaluations ecosystem.
0:31:32 I love that because it acknowledges that, hey, before we start actually passing grant proclamations
0:31:37 of what these models are risky or whether these models are dangerous or not, let’s first even
0:31:39 agree on how to measure the risk in these models.
0:31:41 Before jumping the gun.
0:31:45 That part, I think, in addition to the open source bullet, was probably, I thought, the
0:31:48 most sophisticated thinking I’ve seen in any policy document.
0:31:50 And look, the reality is America leads the way.
0:31:55 And so every other, you know, within 24 hours of this dropping, Marty and I were getting texts
0:32:00 and messages from folks in many other governments around the world going, what do you guys think?
0:32:05 And I, and I, it was not hard for me to endorse it and tell them, like, look at it as a reference
0:32:12 document because there are things here that arguably are more sophisticated than policy experts, even
0:32:14 in Silicon Valley would recommend.
0:32:17 because building an AI evaluations ecosystem is not easy.
0:32:22 And they, I think, lay out a pretty thoughtful proposal on, on the fact that that’s important.
0:32:23 Now, the question is how?
0:32:28 And I think that’s what we have to help DC with the hard work of like implementing this stuff.
0:32:34 But the vibe shift going from, let’s not jump the gun on saying these models are dangerous.
0:32:39 Let’s first talk about building a scientific grounded framework on how to assess the risk in these
0:32:42 models to me was, was not at all a given.
0:32:44 And I was really excited about that.
0:32:45 Yeah.
0:32:49 There’s been a lot of focus in the last few years by, by several companies, but also by the
0:32:52 brand industry around this idea of alignment.
0:32:53 Right.
0:32:58 Have we made any progress on alignment or what is your assertive perspective of what are they
0:32:58 trying to do?
0:32:59 Is that a feasible goal?
0:33:03 Help us understand what they’re trying to solve for.
0:33:09 So at an almost tautological level, alignment’s an obvious thing you’d want to do.
0:33:11 I have a purpose.
0:33:12 I want to align the AI to this purpose.
0:33:22 And it turns out these models are problematic, generally unruly, chaotic, whatever adjectives
0:33:23 you want to use.
0:33:29 And so like, you know, understanding how to better align them to any sort of stated goal
0:33:31 is, is very obviously a good thing.
0:33:36 And so I think we’d all agree that alignment to whatever the goal is to make it more effective
0:33:39 that, that goal and do that thing is good, especially given these models who have, tend
0:33:40 to have a mind of their own.
0:33:51 The subtext that certainly I bristle to is that, is that the people doing the alignment are
0:33:57 somehow protecting the rest of us from whatever they think their ideal is as far as, you know,
0:34:01 dangers to me or thoughts I shouldn’t have or information I shouldn’t be exposed to.
0:34:05 Which is why I think we need to be, even when we come up with policy, we need to be very
0:34:13 careful not to impose like a different set of, you know, ideological rules on top of these.
0:34:17 I just, I just think like alignment is something we should all understand.
0:34:21 Actually aligning them to me is, is, is kind of where I take issue from any sort of kind
0:34:22 of top-down mandate.
0:34:24 Um, I, I agree.
0:34:29 And I think, you know, there’s a, there’s a quote from a researcher, um, who, which, which
0:34:35 I think is very accurate, which is you, you’ve got to think about these AI systems as, as almost
0:34:39 biological systems that are grown, not coded up, right?
0:34:43 Because sure, they express a software, but in many ways, when you’re training a model, it
0:34:48 is, you are actually growing in, in this environment of, of a bunch of prior history and
0:34:49 training data, et cetera.
0:34:52 And often empirically, you actually don’t know what the capabilities of the model are until it’s
0:34:53 actually done training.
0:34:57 So I think that’s a useful analogy where I think that falls down is when people go, oh,
0:35:03 well, if it’s, if we can’t align it because we actually don’t know, it’s a biological mechanism
0:35:05 until it’s grown up, you don’t know what its risks are and so on.
0:35:12 Then we can’t deploy these AI models in mission critical places until we’ve solved, let’s say
0:35:16 the black box problem, the mechanistic interpretability problem, which is, can you trace deterministically
0:35:17 why a model did something?
0:35:21 We’ve made a lot of advances as a space in the last few years, but it still remains a research
0:35:21 problem.
0:35:26 But that doesn’t mean just because you, you don’t understand the true mechanism of the
0:35:28 system doesn’t mean you don’t unlock its useful value.
0:35:35 If you look at most general purpose technologies in history, electricity, nuclear fusion, like
0:35:40 we, there, there are many examples of technologies where we knew they were complex systems and we
0:35:44 didn’t truly understand at an atomistic level or mechanistic level, how they work, but we still
0:35:45 use them.
0:35:47 I mean, we don’t understand how the internet works.
0:35:51 I mean, like there’s a whole research of network measurements trying to find out what the heck
0:35:52 the internet was going to do.
0:35:53 Is it going to have congestion collapse?
0:35:57 I mean, like, you know, any complex system has states that you just don’t understand.
0:36:01 Now, let’s now say these models more so than many and the implications are very real, but
0:36:03 like we, we know how to deal with ambiguity.
0:36:04 We don’t even know how our own brains work.
0:36:05 No.
0:36:07 Her consciousness.
0:36:08 Yeah.
0:36:10 And, but we, we don’t stop working with other human beings.
0:36:11 Unfortunately, we’re stuck with them.
0:36:13 So we have no option on that one.
0:36:14 Yeah.
0:36:15 Totally.
0:36:17 I mean, I think to extend that analogy, what do you do?
0:36:19 You, you’re like, okay, I don’t know how a brain works.
0:36:21 It’s got a bunch of risks.
0:36:25 This person may be crazy, but I still want to unlock all the beautiful benefits of the
0:36:27 big, beautiful brains that humans have.
0:36:29 And so you develop education.
0:36:33 You, you send kids to school and you teach them values and then you send them off to college
0:36:35 and then they get to learn something specific.
0:36:36 And then you get to test them in the real world environment.
0:36:41 They get a resume and they get work experience and they get to prove that they actually are within
0:36:43 a risk-based framework manageable and so on.
0:36:46 And that, and that as a society has unlocked human capital, right?
0:36:50 Like the great, arguably the greatest technology we’ve had in, you know, 500 years of modern
0:36:51 industrial innovation.
0:36:57 So I think what I hate about the alignment discourse is it sometimes confuses the, the
0:37:01 fact that we don’t understand the system for the fact that then we can’t use it.
0:37:06 And I think, I don’t think we’ve, we’ve like for a long time, I think mechanistic
0:37:09 interpretability, which is kind of like some folks would say is the holy grail is like
0:37:14 being able to re reverse engineer why a model does something is still a research problem,
0:37:19 but that doesn’t mean we haven’t made progress on how to use unaligned models or to improve
0:37:23 alignment to a point where they’re useful in massive ways like software engineering.
0:37:28 I think what I, what the smartest might say is it’s not that, um, it’s really just what’s
0:37:28 the rush.
0:37:33 Like, uh, you know, maybe let’s focus on like, uh, you know, integrating all the capabilities
0:37:38 we already have before, you know, pushing the frontier to which then it ends well, but the
0:37:39 arms raised, et cetera.
0:37:42 Like there’s, there’s a risk of slowing down too, that maybe isn’t fully appreciated.
0:37:47 Until we’ve solved cancer every month that we’re not rushing to the frontier of accelerating
0:37:52 biological discovery or scientific progress is a month that millions of people are suffering
0:37:55 from disease that we could be solving with AI.
0:37:58 We don’t talk about the opportunity cost of slowing down the frontier.
0:38:04 I mean, this is the thing with all of these, like there, there’s, there’s always this kind
0:38:05 of reverse question on innovation.
0:38:09 And they say, well, okay, it’s like, it’s like the Bostrom urn experiment, you know, the,
0:38:14 the, his kind of whole urn hypothesis, like there’s an urn of innovation and you pull up
0:38:14 balls.
0:38:17 One of us, a black ball that destroys everything, right?
0:38:20 Like, so eventually you’ll draw that ball.
0:38:21 So why would you ever do innovation?
0:38:22 Like that is the thought experiment.
0:38:27 And, and, and the answer is so simple, which is, it just turns out that it’s much more dangerous
0:38:29 not to pull out balls than pull out balls.
0:38:30 Like that’s always the answer.
0:38:33 So like when people ask PDoom, so what is the PDoom?
0:38:36 The answer is not like 0.1 or 0 or a hundred.
0:38:42 The answer is the PDoom without AI is actually quite a bit greater than the PDoom with AI.
0:38:44 And the, what’s the rush?
0:38:48 The answer is the same thing, which is clearly if you ignore exactly what I’m saying.
0:38:54 If you ignore the benefits of technology, then you would say, if it’s all negative, no rush
0:38:55 at all, right?
0:38:59 The reality is, is the benefits are, they’re so dramatic and they’re so obviously dramatic
0:38:59 now.
0:39:03 Thank God we’ve got a year’s worth of, of, of data on this stuff.
0:39:05 Like they’re clearly economically beneficial.
0:39:10 They’re clearly beneficial, uh, beneficial, uh, um, expanding a number of areas of like core
0:39:15 science that the rush is, is getting to the next set of solutions.
0:39:21 Um, you know, as opposed to being afraid of, you know, set of problems that we still can’t
0:39:25 clearly articulate and listen, as soon as we do understand marginal risk and we do have
0:39:27 these, we absolutely should address those directly.
0:39:31 Which is, which again, the action plan does a great job of, of, of penciling this out.
0:39:35 I mean, it does want to explore implications on jobs, implications on defense, implications
0:39:38 on, you know, alignment.
0:39:40 Like, and that’s exactly where we should be in the exploration phase.
0:39:44 Do we have a definition of, of marginal risk that, or a perspective of how to think about
0:39:45 that, that, that idea?
0:39:51 Well, let’s just be clear what we’ve been by marginal risk, which is, um, computer science
0:39:52 or, uh, computer systems are risky.
0:39:54 Network systems are risky.
0:39:56 Stochastic systems are risky.
0:40:02 We’ve been, we’ve got decades of, you know, ways of thinking about measuring, regulating,
0:40:06 changing common behavior based on this type of risk.
0:40:10 And so the question is, can you take all of that apparatus that’s been hard won and
0:40:11 apply it to AI?
0:40:16 If so, like, A, we know it’s effective because we’ve used it before and we’ve got a lot of
0:40:18 experience with it and, and B, it’s ready to be done.
0:40:22 Or is there a different type of risk that’s not endemic on those systems?
0:40:27 In which case we’ll have to come up with something that new, which is you go down that exploration.
0:40:29 Maybe it works, maybe it doesn’t work, et cetera.
0:40:29 Right.
0:40:33 So that like, um, uh, that’s what marginal risk is.
0:40:38 And I just think that the problem is, is if you don’t, if you don’t know what it is, how
0:40:39 are you going to define a solution?
0:40:42 I think that’s right.
0:40:46 I mean, philosophically, the idea is if you’re going to say we need new solutions, then you
0:40:52 need to articulate why the problem is new and why our solutions that work really great
0:40:55 are not, are no longer, uh, sufficient.
0:40:56 Right.
0:41:00 And I think it’s, it’s almost obvious when you state it, but this was the state of the
0:41:05 world a year ago that we were having to like look around the room and say, can I raise my
0:41:05 hand?
0:41:09 Why are we introducing net new liabilities and new laws that we’ve never had to do before?
0:41:13 If you can’t articulate why there are no new problems to solve, if it ain’t broken, why
0:41:14 are you trying to fix it?
0:41:19 Um, and so marginal risk is, I think a slightly just more technical way to say we have the tools
0:41:21 to manage risk.
0:41:22 We don’t need new ones.
0:41:26 And if you think we need new ones, then Hey, just take a minute to articulate to us.
0:41:28 Why is there anything else you wanted to make sure we got to?
0:41:30 Otherwise, let’s, I think it was great.
0:41:33 Time to put the action plan into action.
0:41:33 Excellent.
0:41:35 Martin, Ange, thanks so much for coming to the podcast.
0:41:36 Thank you.
0:41:36 Thanks for having us.
0:41:41 Thanks for listening to the A16Z podcast.
0:41:46 If you enjoyed the episode, let us know by leaving a review at rate this podcast.com slash
0:41:47 A16Z.
0:41:49 We’ve got more great conversations coming your way.
0:41:50 See you next time.
0:41:53 Bye.
a16z General Partners Martin Casado and Anjney Midha join Erik Torenberg to unpack one of the most dramatic shifts in tech policy in recent memory: the move from “pause AI” to “win the AI race.”
They trace the evolution of U.S. AI policy—from executive orders that chilled innovation, to the recent AI Action Plan that puts scientific progress and open source at the center. The discussion covers how technologists were caught off guard, why open source was wrongly equated to nuclear risk, and what changed the narrative—including China’s rapid progress.
The conversation also explores:
- How and why the AI discourse got captured by doomerism
- What “marginal risk” really means—and why it matters
- Why open source AI is not just ideology, but business strategy
- How government, academia, and industry are realigning after a fractured few years
- The effect of bad legislation—and what comes next
Whether you’re a founder, policymaker, or just trying to make sense of AI’s regulatory future, this episode breaks it all down.
Timecodes:
0:00 Introduction & Setting the Stage
0:39 The Shift in AI Regulation Discourse
2:10 Historical Context: Tech Waves & Policy
6:39 The Open Source Debate
13:39 The Chilling Effect & Global Competition
15:00 Changing Sentiments on Open Source
21:06 Open Source as Business Strategy
28:50 The AI Action Plan: Reflections & Critique
32:45 Alignment, Marginal Risk, and Policy
41:30 The Future of AI Regulation & Closing Thoughts
Resources
Find Martin on X: https://x.com/martin_casado
Find Anjney on X: https://x.com/anjneymidha
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.