The Little Tech Agenda for AI

AI transcript
0:00:05 There have been these big institutional players in DC and in the state capitals for a very long time.
0:00:09 There wasn’t anyone who was actually advocating on behalf of the startups and entrepreneurs,
0:00:14 the smaller builders in the space. They’re trying to build models that might compete with Microsoft
0:00:19 or OpenAI or Meta or Google. For those companies, what are the regulatory frameworks that would
0:00:23 actually work for them as opposed to making that competition even more difficult than it already is?
0:00:28 Regulate use, do not regulate development, somehow is interpreted as do not regulate.
0:00:36 I actually can’t think of a single example across the portfolio in which we are arguing for zero regulation.
0:00:42 Who’s speaking up for startups in Washington, DC? Today, I’m joined by Matt Perault,
0:00:48 Head of AI Policy at a16z, and Collin McCune, Head of Government Affairs at a16z,
0:00:52 to talk about the Little Tech Agenda, a framework designed to ensure that regulation doesn’t just
0:00:57 work for the giants, but also for the five-person teams trying to build the next breakthrough. Their
0:01:04 approach: regulate harmful use, not development. We’ll cover federal versus state rules, open source,
0:01:08 export controls, and what smart preemption could look like. Let’s get into it.
0:01:13 Collin, Matt, welcome to the podcast.
0:01:15 Thanks so much. Thanks for having us.
0:01:19 So there’s a lot we want to get into around AI policy. But first, I want us to take a step back
0:01:25 and reflect a little bit. We had publicly announced the Little Tech Agenda in July of last year.
0:01:30 There’s a lot that’s happened since. Why don’t we first take a step back, Collin, and talk about what the
0:01:32 Little Tech Agenda is and how it came to be at the firm?
0:01:36 Yeah. I mean, look, a ton of credit to Mark and Ben for having sort of the vision on this.
0:01:43 I think certainly when I first started here, when I arrived, we started advocating on behalf of
0:01:50 technology interests, on technology policy. And I think what we realized was there have been these big
0:01:54 institutional players that have been in DC and in the state capitals for a very long time.
0:01:59 Some of them have done a lot of really good work on behalf of the entire tech community.
0:02:05 But there wasn’t anyone specific who was actually advocating on behalf of what we call
0:02:10 little tech, which I think in my mind are the startups and entrepreneurs, the smaller builders
0:02:15 in the space. And I think beyond that, what we realized was, well, they’re not always 100% aligned
0:02:20 with what’s going on with the big tech folks. And that’s not necessarily always a bad thing or a good
0:02:25 thing. But I think that was the whole impetus of this. You know, how are we going to think about
0:02:29 positioning ourselves in DC and the state capitals in terms of our advocacy on these issues?
0:02:36 And how do we differentiate between sort of the big tech folks, who come with their certain
0:02:41 degrees of baggage? From the left and the right, right? And the smallest of the small.
0:02:44 So that was really sort of the basic impetus of this. For me, it was actually sort of
0:02:49 almost a recruiting vehicle. So when it hit in July, I was not yet at the firm. I started in November.
0:02:54 And when I first read the agenda, it sort of transformed the way that I looked
0:02:58 at the rooms that I would sit in, where there would be policy conversations, where all of a sudden,
0:03:02 you could see essentially an empty seat. And Little Tech’s not there. You know,
0:03:06 there would be conversations where people would say, and in this proposal, we want to add this
0:03:09 disclosure requirement. And then we’ll have companies do a little bit more and a little bit more.
0:03:13 And when you’ve read the Little Tech agenda, all of a sudden, you start thinking, how is this
0:03:17 going to work for all the people who aren’t in the room? And so for me, the question, thinking
0:03:22 about coming into this role at the firm, was: is this a voice, is this a part of the community, I want to
0:03:26 advocate for and think about? And when you start looking at the policy debate from a perspective
0:03:30 of Little Tech, and you see how many of the conversations don’t include a Little Tech
0:03:34 perspective, it becomes, from my point of view, very compelling to think about how I can advocate
0:03:36 for this part of the internet ecosystem?
0:03:36 Right.
0:03:41 And Collin, why don’t you outline some of the pillars of the Little Tech agenda, or some of the things that
0:03:45 we focus the most on, and maybe how it differentiates from sort of big tech more broadly?
0:03:50 Yeah. I mean, well, just from a firm perspective, right? Obviously, we’re verticalized.
0:03:53 You know, we all live and breathe this. And I think that that’s been a very, very competitive
0:03:57 advantage for us on the business side. But I also think it’s a competitive advantage for us on the policy side,
0:04:03 too, right? Obviously, Matt leads our AI vertical, and he’s sort of our AI policy lead. We have a huge
0:04:07 crypto effort. We have a major effort around American dynamism. And part of that is sort of defense
0:04:11 procurement reform, which is something that the United States has needed forever and ever.
0:04:16 We have, you know, other colleagues who work on the bio and health team. And they’re fighting
0:04:21 for everything from, you know, FDA reform to PBMs. There’s a whole vertical there that they’re
0:04:27 working on. We’re working a lot on fintech-related issues. And then, you know, there are just the classic
0:04:34 tech-related internet entrepreneurs coming up. What does that relate to? There’s a lot of tax
0:04:38 issues that come along with it. And then, of course, obviously, there are the venture-specific
0:04:42 things that we have to deal with. But look, I think I try and think about this from a basic
0:04:48 point of view, which is just like, if you’re a small builder, what are the things that should
0:04:52 differentiate you between someone who’s a trillion-dollar company and you have hundreds of thousands of
0:04:58 employees, right? If you’re five people and you’re in a garage, how are you supposed to be able to
0:05:03 comply with the same things that are built for thousand-person compliance teams? Like, it’s just not
0:05:07 the same thing. Right. And like, there are categories and categories that, you know, Matt and I are
0:05:12 dealing with on a regular basis. But that’s probably the main pillar, which is five-person
0:05:17 versus trillion-dollar company, not the same thing. It’s made my job actually really hard in certain
0:05:23 ways since I started at the firm, because the kinds of partners that you want within our portfolio
0:05:28 often don’t exist, in that, like, a lot of the companies don’t have a general counsel.
0:05:32 Yeah. They don’t have a head of policy. They don’t have a head of communications. And so the
0:05:37 kinds of people who typically sit at companies thinking all day about, like, what is this state
0:05:42 doing in AI policy? What is this federal agency doing in terms of rulemaking? They’re not at startups that
0:05:48 are just a couple of people and engineers trying really hard to build products. Those companies face
0:05:53 this incredibly daunting challenge. I mean, it seems so daunting to someone like me, who’s non-technical
0:05:57 and has never worked at a startup. They’re trying to build models that might compete with
0:06:02 Microsoft or OpenAI or Meta or Google, and that is unbelievably challenging in AI. You have to have
0:06:07 data. You have to have compute. There’s been a lot written about the cost of AI talent recently. It’s
0:06:12 incredibly, incredibly daunting. And so the question that Collin and I talk about all the time is for
0:06:17 those companies, what are the regulatory frameworks that would actually work for them as opposed to making
0:06:21 that competition even more difficult than it already is? Yeah. Well, yeah, one of the principles I’ve heard
0:06:26 you guys hammer home is we want a market that’s competitive where startups can compete. We don’t
0:06:30 want a monopoly. We don’t even want oligopolies, you know, a cartel-like system. And that doesn’t
0:06:36 mean no regulation, because that, as we’ve seen, could be destabilizing too. But it means smart
0:06:41 regulation that enables that competition in the first place. Yeah. So I think one of the things that’s been
0:06:47 surprising to me to learn about venture is the time horizon that we operate in. So our funds are 10-year
0:06:54 cycles. So we’re not looking to spike an AI market tomorrow and have a good year or a good six months or a good
0:07:00 two years. We’re looking to create vibrant, healthy ecosystems that result in long-run benefits for people and long-run
0:07:07 financial benefits for our investors and for us. And that means having a regulatory environment that facilitates
0:07:14 healthy, good, safe products. I mean, if people have scammy, problematic experiences with AI
0:07:18 products, if they think AI is bad for democracy, if they think it’s corroding their communities,
0:07:24 that’s not in our financial interest. That’s not good for us. And so that really animates the kind of
0:07:31 core component of the agenda, which is not trying to strip all regulation, but instead focusing on
0:07:35 regulation that will actually protect people. And we think that there are ways to do that without making it
0:07:39 harder for startups to compete. Yeah, to Matt’s good point, I walk into a lot of lawmaker offices.
0:07:44 You know, it sounds like I’m pitching my book, but I genuinely say, like,
0:07:50 Our interests are aligned with the United States of America’s interests. Yeah. Because the people that we’re funding
0:07:55 are on the cutting edge. They’re the people who are going to build the companies that are going to drive the jobs.
0:08:00 They are going to drive the national security components that we need. And they’re also going to drive the economy.
0:08:07 And like, we want to see them build over a long time horizon. And like, that is exactly how we should be
0:08:10 building policy in the United States. Of course, in like half the offices I walk into, it’s, all right,
0:08:17 great. Get that guy out of here. 99.9% of people we talk to think that all we want is no regulation.
0:08:23 And yeah, that’s despite both of us writing and speaking extensively about the importance
0:08:27 of good governance for creating the kind of markets that we want to create. Collin can speak more to
0:08:31 it in crypto, I’ve learned a lot from our crypto practice, because the idea there is you really need
0:08:36 to separate good actors from bad actors and ensure that you take account of the differences. And it’s
0:08:42 true in AI as well. If we don’t have safe AI tools, if there is absolutely no governance,
0:08:46 that’s not going to create a long run healthy ecosystem that’s going to be good for us and
0:08:50 good for people throughout the country. I actually can’t think of a single example
0:08:55 across the portfolio in which we are arguing for zero regulation.
0:09:00 The core component of our AI policy framework, which was developed before my time, I wish I
0:09:05 could take credit and I can’t, is focused on regulating harmful use, not on regulating development.
0:09:11 And that sentence, regulate use, do not regulate development, somehow is interpreted as do not
0:09:16 regulate. And people just omit, for some reason, the part about focusing on regulating
0:09:21 harmful use. And that in our view is robust and expansive and leaves lots of room for policymakers
0:09:25 to take steps that we think are actually really effective in protecting people. So regulating
0:09:29 use means regulating when people use AI to violate
0:09:34 consumer protection law, when they use AI in a way that violates civil rights law at the state and
0:09:39 federal level, or when they use AI to violate state or federal criminal law. So there’s an enormous amount of action there
0:09:45 for lawmakers to seize on. And we really want that to be like an active component of the governance agenda
0:09:49 that we’re proposing. And for some reason it’s all passed over and the focus is just on don’t
0:09:53 regulate development. I don’t exactly understand why that ends up being the case.
0:09:57 – Easy headline. – So there’s been a lot that’s happened in AI policy and I want to get to it,
0:10:02 but first, perhaps Matt, you can trace the evolution a bit over the last few years. I believe there was
0:10:05 a time where we were like pattern matching with social media regulation a bit. Why don’t you trace
0:10:08 some of the biggest inflection points, kind of the debates over the last few years, and we’ll get to
0:10:13 today, maybe, Collin? – I think we have to play a little bit of history. And I want to get to,
0:10:17 you know, sort of a point that I think is the really critical point of what we’re all facing here.
0:10:23 For us, for me, I would say from a policy and government affairs perspective, this conversation
0:10:31 started in early 2023. That was sort of like the starting gun. It sort of puttered along and
0:10:39 became more and more real over time. But in the fall of 2023, so almost exactly to the day two years ago,
0:10:44 there was a series of Senate hearings in which, you know, some major CEOs from the AI space came and
0:10:51 they testified. And I think that the message that folks heard was, one, we need and want to be
0:10:56 regulated, which I think remains true today. That’s obviously, you know, what Matt and I are
0:11:05 working on on a regular basis. But I think included in some of that testimony was a lot of speculation
0:11:12 about the industry that led to and sort of absolutely jump-started this whole huge wave
0:11:19 of conversation around the rise of Terminator, you know, go hug your families because we’re going to
0:11:25 all be dead in five years. And that spooked Capitol Hill. I mean, they absolutely freaked out about it.
0:11:29 And look, rightfully so. You have these really important, powerful people who are building this
0:11:32 really important, powerful thing. And they’re coming in, they’re going to tell you that, you know,
0:11:37 everyone’s going to die in five years, right? That’s a scary thing for people to hear. And,
0:11:43 oh, by the way, we want to be regulated. Which, you know, look, that starting gun,
0:11:50 I think, moved us in hyperspeed into this conversation around, how do we lock this down? How do we regulate
0:11:55 it very, very, very quickly? I think that led to the Biden executive order, which, you know,
0:12:03 we’ve publicly sort of, you know, denounced in certain categories. That executive order led
0:12:08 to a lot of the conversation that I think we’re having in the states, a lot of the, you know,
0:12:15 sort of bad bills that we’ve seen come through the states. And I think it also led to a number
0:12:20 of federal proposals that we’ve seen that have not been very well thought through also. And look,
0:12:24 you know, I think people are kind of sitting around, they’re like, oh, well, you know,
0:12:29 was it just like, you know, some testimony from the CEOs that did this? And the answer to that is no.
0:12:35 You know, from my point of view, and look, they deserve a lot of credit, I think the
0:12:43 effective altruism community, for 10 years, backed by large sums of money, was very, very effective
0:12:52 at influencing think tanks and nonprofit organizations in DC and the state capitals to sort of push us in
0:12:59 a direction where people are very fearful of the technology. And that has shaped,
0:13:05 significantly shaped the conversation that we’re having throughout DC and the state capitals and
0:13:10 candidly on a global stage. You know, the EU AI Act, we’re public on that.
0:13:16 There are a lot of very, very problematic provisions in there. All of this banner of safetyism came from
0:13:21 the 10-year head start that these guys have had. So that’s kind of a bit
0:13:28 of the history. But as an aside to this, I always just have to smirk or, you know,
0:13:33 smile to try and laugh it off. But I mean, when people are writing these articles about the fact that
0:13:38 the AI industry is, you know, pumping all this money into the system, certainly like I’m not suggesting
0:13:42 that there’s not money in the system. We’re obviously active on the political and policy side.
0:13:47 We’re, you know, we’re not hiding that, but it is dwarfed by the amount of money that is being
0:13:52 spent and has been spent over a 10-year window. And candidly, I mean, the reason that Matt
0:13:57 and I have jobs is because we are playing catch-up. We are here to try and make sure that people
0:14:02 understand what is actually going on in this conversation and be a counterforce to this
0:14:08 group of people and this idea, this ideology that has been here for a long period of time.
0:14:13 So that’s kind of the briefer on this. Yeah. I mean, and companies,
0:14:19 I think, were ready to consider some policy frameworks that I think were probably
0:14:24 really going to be challenging for the AI sector in the long run. Right. And I think that’s because
0:14:33 I was at Meta, then Facebook, starting in 2011 and through 2019. And so after really like 2016,
0:14:37 there was aggressive criticism of tech companies, and the general framing was, you’re not
0:14:43 being responsible and regulation needs to catch up; governance of social media is behind where
0:14:49 the products are. And whatever you think about that, that was really the kind of strong view in the
0:14:54 ecosystem: that the lack of governance has allowed problematic things to happen.
0:15:02 And so I think when AI was starting to accelerate, and you had certain sort of prevailing
0:15:06 political interests that I think were driving the conversation, companies rushed to the table.
0:15:11 And I think it was a group of three, five, seven companies who went into the White House
0:15:16 and negotiated voluntary commitments. Yeah. I mean, we don’t even have to make the argument about the
0:15:21 importance of representing little tech when you see that there is a set of companies who
0:15:28 negotiated an arrangement for what it would look like to build AI at the frontier, with all the current
0:15:32 developers who weren’t those companies, and all future startups, not represented at the table.
0:15:39 Yeah. I think that is why, like, we started to think about the value of having more dedicated
0:15:43 support around AI policy because clearly the views of little tech companies aren’t represented in the
0:15:50 conversation. Yeah. Well, let me just add one thing to this. It’s Mark and Ben’s
0:15:55 story. They’ve told it many times. I was in the meeting as well, you know, and
0:16:00 everything they’ve said has been a hundred percent true and accurate. But there was a
0:16:05 prevailing view among very, very powerful people in the previous administration
0:16:13 that there were going to be only two or three major companies able to compete in the AI landscape.
0:16:20 And because that was the case, they needed to be basically locked down and put in this incredibly
0:16:25 restrictive regime from a policy and regulatory perspective. And they were going to be kind of like
0:16:30 this entity that was kind of like an arm of the government. And I think that that was
0:16:37 the most alarming thing that I think we had heard from the administration, on top of an incredibly
0:16:42 alarming series of events that happened on the crypto side, including sort of wanting to eradicate it
0:16:49 off the face of the planet, it seemed like. So I think that all led to kind of the position that we’re in
0:16:54 now, and certainly to Matt’s hiring and, you know, us building out the team, et cetera.
0:17:01 So that narrative is clearly like a very alarming, maybe the most alarming version of this. But even
0:17:04 since I’ve been in this role, I’ve heard other versions of it where people will say, oh, don’t
0:17:08 worry about this framework. It just applies to three or five companies, or it just applies to five to seven
0:17:13 companies. And I think they mean that to provide comfort to us. Like, oh, this isn’t going to cover a lot of
0:17:18 startups, but the view of the AI market where there are only a small number of companies building at the
0:17:22 frontier, that’s not the vision for the market that we have. We want it to be competitive
0:17:27 and diverse at the frontier. And the policy ideas that were coming out of the period that Collin’s
0:17:32 talking about were dramatically different from where they are today in a way that I think like some people
0:17:38 have even like lost sight of exactly where we were a couple of years ago. There were ideas being proposed
0:17:44 by not just government, but industry, to require a license to build frontier AI tools and for it to
0:17:48 be regulated like nuclear energy, which is not just ahistoric for software development.
0:17:53 Yeah, right. Unprecedented. Yeah. Yeah. And for it to be regulated like nuclear energy, with an
0:18:01 international-level, nuclear-style regulatory
0:18:05 regime to govern it. And we’ve moved. Like, no matter what you think about the right level of
0:18:09 governance, there are not a lot of people now who are saying that what we need is a licensing regime where
0:18:13 you literally apply for permission from the government to build the tool. But that wasn’t
0:18:17 that far in the rearview mirror. Yeah. And look, we were also talking about bans on open source.
0:18:22 Yeah. And we’re still kicking around that idea at the state level. And look, for those of us
0:18:27 who live and breathe the tech stuff on a daily basis, this, you know,
0:18:32 sounds insane and crazy. But, you know, just to make it a little bit more real:
0:18:42 nuclear policy in the United States has yielded two, three new nuclear power plants in the
0:18:47 50-year period since these organizations were started. And look, some people are
0:18:51 pro-nuclear and some people are anti-nuclear; I don’t want to get into that debate. The point, though,
0:18:57 is that that was not the intended policy of the United States of America. That was the effect of putting
0:19:04 together this agency and what has come from that. And I think, you know, look, if we do the same
0:19:09 thing to AI, had we done the same thing in AI in that period of time, then you don’t have the medical
0:19:14 advancements. You don’t have the breakthroughs. You don’t have all of the things that come from this
0:19:19 that are incredible. But beyond that, we lose to China. Yeah.
0:19:24 Full stop. You lose to China. And then our greatest national security threat becomes the one who has
0:19:28 the most powerful technology in the world. Right. And I think the early concern
0:19:31 on the open source side was that we would be somehow giving it to China, but then we’ve seen with
0:19:36 DeepSeek, et cetera, that they just have it anyways. Yeah. Yeah, exactly. Right. Exactly. You know,
0:19:40 the idea that we could lock this down, I think, you know, I mean, Mark and Ben have talked
0:19:45 about this. I mean, I think they’ve debunked that a number of times. Yeah. Just to understand: for the
0:19:50 previous administration, what was their calculus? Was it that they were true believers in the fears?
0:19:54 Was it that there was some sort of political benefit to having the views that they had,
0:19:58 especially on the crypto side? I don’t understand what the constituency for an anti-crypto
0:20:03 stance is. How do you make sense of sort of the players or the intentions or motivations? Just
0:20:08 to understand sort of the calculus there. Yeah. You know, I mean, look,
0:20:12 I think that’s a really hard one to answer and I’m not sure I can pretend to be completely in their
0:20:18 minds. I think there’s a couple of different competing forces here. Like one is, you know,
0:20:24 what are the constituencies that support that sort of administration? What are the constituencies
0:20:31 that support that side of the aisle? And I think that especially over the last 10 to 15 years,
0:20:35 there has been a very, very heavy focus on consumer protection, consumer safety, which is, I think,
0:20:39 look, a very important thing. And we’re obviously in alignment on that. I think everyone should be
0:20:42 in alignment: you have to protect consumers, you have to be able to protect the American public.
0:20:50 But I think that a lot of that conversation has been weaponized. I think that it is a
0:20:56 big-time moneymaker. I think a lot of these groups either get backing from very, very wealthy
0:21:03 special interests or they are small-dollar fundraising off of quick hits like, you know,
0:21:08 AI is coming for your jobs, donate $5, and we’ll make sure that we
0:21:13 take care of this in Washington for you. And, you know,
0:21:19 it’s a pretty easy manipulation tactic, used by a bunch of people, but I think
0:21:25 that that holds very seriously true. Right. And I think, you know, the other
0:21:31 thing here is the old saying: personnel is policy. And I think
0:21:37 a lot of the individuals that were in very senior decision-making roles within that White House and
0:21:43 that administration came from this sort of consumer protection background;
0:21:49 that was their constituency. They were put in this position to come after private enterprise. Like,
0:21:54 you know, that was the goal. Like, there’s this whole idea out there,
0:21:59 I think among some of those folks, and Senator Warren has, you know, proposed this many
0:22:06 times, that if you’re not going after and getting people on a
0:22:11 regular basis in the private sector, then you’re not working hard enough. And I just,
0:22:17 you know, I think that that is probably like the second thing. And then like the third is
0:22:26 just, we’re at this very weird moment where being a builder and being in private enterprise is a bad
0:22:32 thing to some policymakers. You know, you’re not doing good because you’re earning a profit.
0:22:37 And, you know, they certainly won’t say that, but the activities and the things that they’re doing
0:22:44 are a hundred percent aligned with that type of idea. So, you know, I think that’s the basic
0:22:45 crux of it.
0:22:54 I think the things that motivated that approach were done in good faith. And I think
0:22:59 it’s what you alluded to earlier, which was, I don’t share this view, but there are a lot of people
0:23:05 who believe that social media is poorly regulated. And that because policymakers were asleep at the wheel,
0:23:11 we woke up at some point, I don’t know, sometime in the 2014 to 2018 period, and realized that we had technology
0:23:16 that we thought was actually not good for our society. And whether or not you think
0:23:21 that that’s true, I think that has been a widely held view. It’s a held
0:23:25 view on the right and on the left; it’s a bipartisan view. And so I think when this new technology came on
0:23:30 the scene, this was a do-over opportunity for policymakers, right? Like, we can get this right,
0:23:35 when we didn’t get the last thing right. And so I understand that motivation. It makes a lot of
0:23:42 sense. I think the thing that we strongly feel is that the set of policy ideas that came out of that
0:23:48 good-faith belief were not the right policy ideas to either protect consumers or lead to a
0:23:54 competitive AI market. Many of the politicians who were pushing concepts that would have
0:24:01 really put a stranglehold, I think, on AI startups and would have led to more monopolization of a
0:24:05 market that already tends toward monopoly because of the high barriers to entry, those same
0:24:09 politicians three years before had been talking about how problematic it was that there wasn’t more
0:24:13 competition in social media. And then all of a sudden they’re behind, you know, a licensing regime,
0:24:17 and I don’t think there’s much economic evidence that licensing is pro-competitive.
0:24:23 It typically is the opposite. The disagreement is less with the core feeling, that we want to protect
0:24:28 people from harmful uses of this technology, and more with the policy concepts that came out of that
0:24:33 feeling, which we think would have been disruptive in a problematic way to the future of the AI market.
0:24:39 Yeah. Anecdotally, it seemed from afar that some of the concerns early on were almost
0:24:44 pattern-matched to social media, like around disinformation or even DEI concerns.
0:24:48 And then, you know, people were trying to make sure the models were compatible with
0:24:52 sort of the, you know, speech regime at the time. But then it kind of
0:24:56 shifted to, oh wait, are there more existential concerns, around jobs, or is AI even
0:25:02 like nukes, in the sense of people doing harm with it or AI itself doing harm? But it seemed to
0:25:05 escalate a bit, you know, maybe in line with that testimony that you alluded to.
0:25:10 I experienced it as feeling like the goalposts always move. And one of the things
0:25:15 that I started asking people, when I was really trying to settle into this
0:25:20 regulate-use-not-development policy position, is, what do we miss? Like, if we regulate use primarily
0:25:24 using existing law, what are the things that we miss? And I haven’t gotten very many clear answers
0:25:31 to that, right? Like, you can’t do illegal things in the universe, and you also can’t use AI to do illegal
0:25:35 things. And typically, when people list out the set of things that they’re most concerned about
0:25:40 with AI, they’re typically things that are covered by existing law. Probably not
0:25:46 exclusively. Right. But primarily. And so that at least seems like a good starting point. Some of
0:25:50 the other issues that I think are like understandably ones that we should be concerned about have a range
0:25:55 of different considerations associated with them. Like, if you’re concerned about misinformation,
0:25:59 or speech that you think might not be true or might be problematic, there are significant
0:26:03 constraints on the government’s ability to regulate that. Yeah. The First Amendment imposes
0:26:08 pretty stringent restrictions, and I think for very good reason, because you don’t want the
0:26:12 government to dictate the speech preferences and policies of private speech platforms, for the most
0:26:18 part. And so those issues might be concerns, but they’re not necessarily areas, I think,
0:26:23 where you want the government to step in and take strong action. And so I think there are
0:26:27 things that we should probably do as a society to try to address those issues, but government
0:26:31 regulation maybe isn’t the primary one. And again, most of the things that people are most
0:26:36 concerned about, like real, real use of the technology for clear cognizable real world harm,
0:26:43 existing law typically covers it. I have a theory on this. So I think everything that Matt just said is
0:26:46 spot on. But, you know, then you’re kind of sitting around and you’re kind of
0:26:53 scratching your head. It’s like, okay, well, if use covers it, and there hasn’t been an
0:27:00 incredibly fair rebuttal as to why use is not enough as the focus on the policy and regulatory
0:27:05 side, what’s the answer? I think we’re experiencing sort of this, I don’t know if
0:27:10 it’s a phenomenon, but we’re experiencing this pattern on the crypto side too, which is, we’re
0:27:16 having a very, very spirited debate on the crypto side of things on how to regulate these
0:27:19 tokens and how you launch a token in the United States, as a security or as a commodity. And this
0:27:23 is sort of this age-old debate that’s, you know, plagued traditional securities law for
0:27:30 years, but also certainly the crypto industry. But what we have found is there are a
0:27:37 number of people who have entered this debate who are actually trying to get at the underlying
0:27:42 securities laws. Like, they want to reform securities laws. They don’t want to reform crypto
0:27:48 laws that involve securities. And this is their only venue by which they can enter that conversation.
0:27:54 Because there’s no will from the Congress or from policymakers to go and
0:28:00 overhaul the securities laws right now. You know, it’s just not there. But what is moving is crypto.
0:28:04 So there are all these people that are now trying to enter this debate, like,
0:28:08 oh, we should relook at this. And, well, this doesn’t have anything to do with it; we shouldn’t
0:28:12 be entering this conversation. Yet they’re still pushing. Right. And that’s kind of muddied the water.
0:28:16 I think a very similar thing is actually happening on the AI side, which is, you know,
0:28:22 there are a number of members of Congress that feel like, well, we missed it on the ’96 Telecom Act.
0:28:29 Like, we didn’t do a good enough job back then. So we need to right those wrongs
0:28:35 through the venue of an AI policy conversation. Right. Because if you think about it,
0:28:41 assume that use doesn’t go far enough for someone. Right. And this is the same conversation
0:28:45 that we’re having in California right now or in Colorado right now: if use does not go far enough,
0:28:51 okay, well, then it would be really, really simple if you could have a privacy conversation around this,
0:28:55 if you could have an online content moderation conversation, an algorithmic bias conversation
0:29:00 around it. You could do all of that, wedge it through AI. And then, assuming AI is actually going
0:29:06 to be the thing that we all think it’s going to be, now you’ve put basically a regulatory
0:29:10 funnel on the other side. Like, you’ve put a mesh screen where everything has to run through AI,
0:29:13 and therefore it runs through this regulatory proposal you put together.
0:29:17 Yeah. The thing that I’ve really been wrestling with in the last few weeks
0:29:23 is whether those kinds of regimes are actually helpful in addressing the harm that they purport
0:29:27 to want to address. And Colorado is a really good example. So there are all these bills that have
0:29:31 been introduced at the state level. Colorado is the only one that’s passed so far. It sets up
0:29:37 this regime where you basically have to decide, are you doing a high-risk use of AI or a low-risk use
0:29:41 of AI? And this would be startups that don’t have a general counsel, don’t have a head of policy,
0:29:45 having to hire an outside law firm to figure out high risk versus low risk. And then if you’re high risk,
0:29:49 you have to do a bunch of stuff, usually impact assessments, sometimes audit your technology
0:29:54 to try to anticipate, is there going to be bias in your model in some form, which maybe an impact
0:29:58 assessment helps you figure that out a little bit, but it’s probably not going to eliminate
0:30:03 bias entirely. It certainly isn’t going to like end racism in our society.
0:30:12 In Colorado now, the governor and the attorney general have put pressure on the legislature to
0:30:16 roll back this law because they think it’s going to be problematic for AI in Colorado. And so there
0:30:20 was just a special session there to consider various different alternatives. One of the alternatives that
0:30:28 was introduced proposed codifying that the use of AI to violate Colorado’s anti-discrimination statute
0:30:33 is illegal. That’s consistent with the regulate-harmful-use framing that we’ve talked about. And
0:30:39 instead of having this amorphous process where maybe you address bias in some form, maybe you don’t,
0:30:45 this goes straight at it. It’s not a bank shot. It goes straight at it: if someone uses AI in a way that violates
0:30:50 anti-discrimination law, that could be prosecuted. The attorney general could
0:30:56 enforce. And I still don’t understand why that approach is somehow less compelling than
0:31:00 this complex administrative paperwork approach. I think it’s kind of the reason that Collin’s describing,
0:31:06 which is, like, people want a different bite at the apple of bias, I suppose. But it’s not
0:31:11 clear to me that it’s actually the best way to effectuate the outcomes that you want, as opposed to
0:31:15 just criminalizing or creating civil penalties for the harm that you can see clearly.
0:31:23 It’s also, I mean, in policymaking and bill writing, it’s really, really easy to come up with bad ideas.
0:31:23 Yeah.
0:31:28 It’s easy, right? Because they’re not well thought through. The first thing that comes to your head, someone publishes a paper on
0:31:35 something, here we go. It takes real hard work to get something that actually works. And then it’s even harder to
0:31:39 actually go through a political and policy negotiation with a diverse set of stakeholders and actually land the plane on
0:31:44 something. Yeah. I think that’s part of the reason that people think that we are anti-governance,
0:31:49 because, I mean, Collin, again, he lived this history; I’m coming in late to it. But like,
0:31:55 as we were ramping up our policy apparatus, these were the ideas in the ecosystem: licensing,
0:32:02 nuclear-style regulation, FLOPS-threshold-based disclosures, really complicated transparency regimes,
0:32:07 impact assessments, audits, which are a bunch of ideas that we think are not going to help protect
0:32:13 people and are going to make it really hard for low-resource startups. And so we’ve been trying to say,
0:32:17 no, no, no, don’t do that. And so that sounds like deregulate. But for whatever reason,
0:32:22 it’s been hard so far to shift toward, here’s another set of ideas that we think would be compelling
0:32:25 in actually protecting people and creating stronger AI markets.
0:32:32 Right now we don’t see, you know, terrorists or criminals being aided, you know, 1000x by AI
0:32:36 in performing terrorism or crime. Like, when I ask people, what are you truly scared about?
0:32:40 Like, give me a concrete scenario. People, you know, they’ll be like, oh, what about like bioterrorism or
0:32:45 something? Or what about, you know, cybersecurity, you know, theft or something? We seem very far away from
0:32:51 that. Is there any amount of development, you know, in the next few years, any amount of breakthroughs
0:32:58 where you might say, oh, you know, maybe use isn’t enough? Or do we think that will always be the case?
0:33:04 I think it’s conceivable. I mean, and I think we’ve been open about that. Like, we think existing law is a good
0:33:10 place to start. It’s probably not where we end. So Martin Casado, one of our general partners, wrote a great piece on
0:33:15 marginal risk in AI, basically saying that when there’s incremental additional risk, we should
0:33:19 look for policy to address that risk. And so the situation you’re describing, I think, might be that.
0:33:24 I think what you’re getting at is a really important question about just potential significant harms that
0:33:31 we don’t yet contemplate. We get asked often about our regulate use, not regulate development framework.
0:33:35 Are you just saying that we should address issues after they occur? And I understand why that’s a concern.
0:33:41 Like, there might be future harms, and wouldn’t it be nice if we could prevent them in advance?
0:33:47 But that is how our legal system is designed. And typically, when you talk to people about ways that
0:33:54 you could try to address potential criminal activity or other legal violations ex ante before they occur,
0:33:59 that’s really scary to people. Like, Eric, what if we just learned a lot of information about you and then
0:34:03 predicted the likelihood that you might do something unlawful in the future? And if we think it’s exceeded
0:34:07 a certain threshold, then we’re going to go and try and take action against you before you’ve done it,
0:34:12 so that we can prevent future crime. You’re laughing because it’s laughable. We don’t want
0:34:19 that kind of ex ante surveillance, both because it feels invasive, but also because it
0:34:24 often is ineffective. Like, we might run some test that shows that maybe you’re
0:34:29 likely to be predisposed to some kind of criminal activity, but we don’t know that you’re going
0:34:34 to do it until you’ve done it. And so I think that kind of approach, again,
0:34:41 I think it’s motivated by a really valid concern and a valid desire to prevent harm.
0:34:45 What if we could prevent harm before it’s occurred? The challenge is that the regulatory framework, I think,
0:34:50 probably won’t do that. It probably won’t have the effect of preventing harm. And there are all these costs
0:34:53 associated with it, mainly from our perspective, inhibiting startup activity.
0:35:00 Yeah. Mark once told me his joke on a podcast, which is, a man goes to the
0:35:05 government and says, I go to the government because I have this big problem. Now I
0:35:12 get a lot of regulation. Now I have two problems. Okay. Let’s talk about the state of AI policy today.
0:35:17 There’s a lot that’s happened in the last few months with the moratorium, the action plan. What are some of the
0:35:20 things that we’re excited about right now? What are some of the things we’re less excited about
0:35:25 right now? Why don’t we give a breakdown of where we’re at right now? So I think, given what
0:35:28 Collin described about where things were a couple of years ago, it’s great to see
0:35:33 the federal government, certainly the executive branch, but not just the executive branch, I think
0:35:39 this is in Congress too, on both sides of the aisle, being supportive of frameworks that we think are much better
0:35:46 for little tech. So trying to identify areas where regulatory burden outweighs value and where we can
0:35:51 right-size regulation to make it easier for AI startups. As Collin said, support for open source:
0:35:54 we were in a really different place on that a couple of years ago. Now it seems like there’s much more
0:35:58 consensus. And again, it actually spanned the end of the last administration and the current
0:36:04 administration, around the value of open source for competition and innovation. The
0:36:10 national AI Action Plan also had great stuff in it about thinking through the balance
0:36:13 between the federal government and state governments, which is something that we’ve done a lot of
0:36:17 thinking about. There’s an important role for each, but we think the federal government
0:36:22 should really lead regulation of the development of AI; states should police harmful conduct within
0:36:26 their borders. And I think there’s stuff in the action plan that would try to ensure those respective
0:36:31 roles. There’s also a lot of stuff in the action plan that wasn’t really talked about much, that wasn’t
0:36:37 sort of the headline-grabbing stuff, that I thought was incredibly compelling, in terms of, again,
0:36:42 trying to create a future for AI that just works better for more people. And a really good
0:36:47 example is the stuff on worker retraining that focused on different programs that
0:36:53 could help workers if they’re displaced as a result of AI, as well as monitoring AI markets and labor
0:36:58 markets to make sure that we understand when there are significant labor disruptions. So I think it sort
0:37:02 of gets at a point that you were alluding to a couple of minutes ago about like, what happens when
0:37:06 there’s something really disruptive in the future? Can you predict with certainty that there won’t be
0:37:06 this crazy disruptive thing? And no, we can’t. There might be significant labor disruption.
0:37:13 Others at the firm have talked extensively about how there are always worries about
0:37:17 labor disruptions when new technology is introduced. Typically there are increases in
0:37:25 productivity that end up being good for labor overall. We think that’s the direction of travel, but you
0:37:30 never know. We can’t predict it with certainty. And so I think it’s a really strong step to try to just
0:37:35 monitor labor markets to see what the disruption might look like so that we’re set up to take strong policy
0:37:40 action in the future. Can I just say one thing about the AI action plan? Sure. And I don’t
0:37:47 want to just juxtapose this to what we saw under the Biden administration; there was an incredible amount of
0:37:52 activity in the Biden administration and an incredible amount of activity under the Trump administration. But,
0:37:57 you know, look, I kind of view these executive orders and these plans that come out from an
0:38:02 administration as very, very important. And some of them have true policy. They direct the
0:38:07 agencies to do things, to come out with reports and undertake rulemakings and things like that.
0:38:15 But from an AI action plan perspective, for me, it was so significant because I think it turned the
0:38:22 conversation on its head. Before, it was, we have to focus only on safety with a splash of
0:38:28 innovation. Yeah. And now it is, we understand how important this is from a national security
0:38:33 perspective. We understand how important this is from an economic perspective. We need to make sure
0:38:39 that we win while keeping people safe. Yeah. Right. And that dynamic and that shift
0:38:44 of rhetoric is incredibly important because what that does is it signals to the rest of the world,
0:38:48 it signals to other governments that this is the position of the United States and will be the position
0:38:54 for the next three and a half years. And it signals the position of the United States to the Congress.
0:38:59 So when the Congress is looking at potentially taking up pieces of legislation or taking actions
0:39:04 or even committee hearings, which, you know, for the broad base of what we’re talking about are fairly
0:39:11 insignificant, all of that is sort of kept in mind. So now the conversation has shifted significantly and
0:39:18 that is really, really important. Speaking of winning, Collin, I’m curious for your thoughts on AI
0:39:23 policy vis-a-vis China, whether it’s export controls or any other, you know, issues we care about.
0:39:27 Yeah. I mean, well, look, first and foremost, we’ve talked about it already. I mean,
0:39:33 we have to win. Right. And I think that that is the main thrust of
0:39:39 a lot of what we’re doing here and a lot of the way that we think about this from a firm perspective.
0:39:46 You know, I think the first thing is making sure that the founders and the builders can build appropriately,
0:39:52 with appropriate safeguards and an appropriate regulatory structure. The second is how do we win and make
0:39:56 sure that America is the place where AI is probably the most functional and foundational,
0:40:03 vis-a-vis China. You know, I think that there has been a long conversation
0:40:07 about the diffusion rule that came out from the Biden administration, specifically on export controls.
0:40:17 Many, I think, panned that proposal. A lot of people suggested it was
0:40:22 probably too restrictive, that it wasn’t the right way to think about things. I think, you know, we have spent
0:40:29 most of our time, with Matt leading this effort, specifically focused on
0:40:35 how we are regulating the underlying models, and hopefully how we are regulating the use of these models,
0:40:42 versus specifically sort of on the export control piece. What I will say, though, is that
0:40:47 some of the proposals that came out from the Biden administration, some of the proposals that
0:40:50 we’ve seen at the state level, and some of the proposals that we’ve seen at the congressional
0:40:58 level, from a federal standpoint, that dealt specifically with export controls on models themselves, are very concerning.
0:40:58 Yeah.
0:41:05 And we’re still kind of having this conversation. There is a policy set
0:41:08 that has been kicked around for a while. It’s called outbound investment policy, which is basically about
0:41:15 how much U.S. money from the private sector is flowing into Chinese companies. And, very noble,
0:41:21 laudable, you know, super supportive of that concept. You know, we are a very sort of
0:41:26 America-first sort of organization here. We’re investing primarily in American companies and American
0:41:34 founders. So, you know, we’re very supportive of it. But then you sort of
0:41:42 edge into the idea that we might inadvertently ban U.S. open source models from being
0:41:49 exported out of the country, when, like, by definition of open source, there are no walls around
0:41:54 these types of things. So that’s one of the areas that we’ve been very, very focused on. And I think
0:42:00 it’s obviously very important to make sure that we don’t have these very powerful technologies,
0:42:06 U.S.-made technologies, in the hands of our Chinese counterparts, with the PLA and the CCP using
0:42:12 them against us. But I also think that we need to make sure that we’re not extending too far
0:42:18 and limiting the power of open source technologies to be able to kind of be the platform around the world.
0:42:24 You know, the final point that I’d make here is we do all ultimately and fundamentally have a decision to
0:42:31 make as, you know, the U.S., which is, do we want people using U.S. products across the world,
0:42:35 which helps for a whole bunch of different reasons, certainly on soft power and from a national security
0:42:40 perspective? Or do we want people to use Chinese products? The more that we lock down
0:42:46 American products, the more the Chinese will enter those markets and sort of take a land grab
0:42:46 in that space.
0:42:51 Can you get into what happened with the moratorium and the fallout that ensued?
0:42:56 I think this one is a bit complicated. There was a perception about the moratorium when it came out
0:43:02 that it would have prohibited all state law from existing for a 10-year window. Obviously,
0:43:06 that’s a long period of time. I’m not sure we would necessarily completely agree with that policy stance.
0:43:12 That, from our point of view, is a misinterpretation, for a whole bunch of different reasons, of
0:43:18 what the language actually said. But, you know, sometimes in D.C., a lot of times in D.C., perception is reality.
0:43:26 And that kind of took hold. But I also think that, you know, there are strong competing
0:43:33 forces, like we’ve discussed, right, from, I think, the Doomer crowd or the safety crowd, that were
0:43:39 very, very anti, that used all of the tentacles they’ve spread out over the last decade to try
0:43:45 and move in and kill this. I think they also were successful in leveraging some other industries
0:43:49 to come in and try and kill this thing as well.
0:43:55 And look, you know, by virtue of the vehicle, the underlying procedural vehicle, this reconciliation
0:44:01 package that it was moving in, it was a partisan exercise. It was going to be Republicans versus Democrats,
0:44:08 and that was that, right? And there was nothing, not even a prominent AI policy that was going to be
0:44:13 dropped into a reconciliation package, that was ever going to drag Democratic votes over, because it was such
0:44:18 a big sort of Christmas-tree-style thing that had all kinds of tax reform positions,
0:44:26 et cetera. And if you’re in one of those situations, the margins on the votes become
0:44:33 very, very, very small. So all it took was, you know, one or two Republican senators hitching their
0:44:40 wagon to some of these ideas that were out there to tank this thing, right? And look, I think that’s
0:44:46 going to be a situation that you’re going to fight in any sort of political, policy, or legislative
0:44:50 outcome, any sort of issue that you’re going to be running within the Congress.
0:44:58 Right. But more so than anything, and we heard this repeatedly from a whole bunch of different
0:45:02 people, and this is what we’ve also experienced: the industry was just not organized well enough.
0:45:06 Right. And that’s not just the industry. It’s also the people who care about this thing that aren’t
0:45:12 actually industry stakeholders. The stakeholders who were pro some level of moratorium or some level
0:45:19 of preemption were just not organized. And I think that that was, you know, both an eye-opening moment,
0:45:25 but also an important moment, because I think what we have done in the,
0:45:31 you know, three, four months since this thing went down is we’ve taken a long, hard look at what
0:45:37 we need to do collectively, as a coalition, to be able to be in a better position next time we’re there.
0:45:41 And so what does that look like? Right. I mean, first and foremost, it comes with
0:45:46 writing, doing podcasts, talking about these things, talking about the details of what’s
0:45:53 actually in these proposals and what it actually means for states and the federal government to make
0:45:57 sure that we’re fighting through the FUD that’s coming through because it’s always going to be there.
0:46:03 There’s misrepresentation all over the field. The second piece is, let’s all get on the
0:46:07 same page, which I think we’ve worked very hard to do, and where we can find alignment,
0:46:13 I think we’ve found that alignment, between big, medium, and little. And then I think the third and
0:46:18 probably the most important is what are we doing on sort of the political advocacy side to make sure
0:46:25 that we have the appropriate tools to be able to push forward in a way that ensures that America
0:46:30 continues to lead and that we don’t lose out on this race to China. And that’s, you know, part of the
0:46:36 reason that we have recently announced our donation to the Leading the Future PAC, which will have, you
0:46:41 know, several different entities underneath it, and which I think is designed to sort of be that political
0:46:47 center of gravity in the space. And that will fight at the federal level and the state and local
0:46:52 levels. So we’re happy to be a part of it. And we expect, you know, there will be others that join
0:46:58 this sort of common-cause fight on the AI side. If we could wave a wand, what would we like to see done at
0:47:03 the state level versus the federal level, and how should we think about that interplay
0:47:08 compared to where we’re at now? Yeah. So I think the helpful answer here
0:47:12 comes from the constitution. Constitution actually lays out a role for the federal government and a
0:47:18 role for state governments. Federal government takes the lead in interstate commerce. So governing a
0:47:23 national AI market and governing AI development, we think is primarily Congress’s role.
0:47:29 Sometimes when people say that, I think what other people hear for some reason is that
0:47:35 states should do nothing. And we've tried very hard to be very deliberate in not saying
0:47:40 that, and in making clear that states have an incredibly important role to play in policing harmful conduct
0:47:45 within their jurisdictions. So criminal law is a perfect example. There is some criminal law at
0:47:49 the federal level, but the bulk of criminal law is at the state level. Like, when you think about routine
0:47:55 crimes, if you were going to prosecute a perpetrator, it's likely that
0:47:59 that would occur under state law. And so to the extent we want to take account of
0:48:06 local activity where there's criminal conduct involved, and we want to make
0:48:09 sure that the laws are robust enough to protect people from that activity, that's going to be
0:48:17 primarily state law. Oddly enough, as Colin is describing, this isn't the
0:48:21 delineation that we've started out with. There are a lot of state laws that have sort of taken the approach,
0:48:27 sometimes explicitly, of: Congress hasn't acted, so we have a responsibility to act. And that's
0:48:32 true to some extent, like states can act within their constitutional lane.
0:48:37 Some of what states have done has gone outside that lane. And so we actually just this week
0:48:43 released a post on potential dormant Commerce Clause concerns associated with state laws. And the
0:48:49 basic idea there is that there's a constitutional test that says that states cannot excessively burden
0:48:55 out-of-state commerce when that burden greatly exceeds the in-state local benefits.
0:49:00 And so courts actually weigh that; there's a balancing test: do the costs to out-of-state
0:49:06 activity significantly outweigh the benefits on the local side? And we think that at
0:49:10 least for some of the proposals that have been introduced, it's likely that they won't pass that test,
0:49:14 that the benefits are somewhat diminished relative to what the proponents think they are,
0:49:19 and that the costs are significant. Like, the cost to a developer in Washington state
0:49:25 of complying with a law that's in California or a law that's in New York is going to be significant.
0:49:30 And so our hope, I think, is not that the dormant Commerce Clause ends up serving as a function that
0:49:37 makes it hard for states to enact laws, but that it actually just serves as a guidepost for states
0:49:41 around the kinds of laws that they might actually introduce. And I think it pushes in the direction
0:49:46 that's consistent with our agenda, which is to take an active role in legislating and
0:49:52 enforcing laws that are focused on harmful use. Looking at the next six months to a year,
0:49:55 what are the issues that we're most focused on, or that we think are going to,
0:50:00 you know, be playing a role in the conversation? Yeah, I think it's first and foremost
0:50:05 some level of federal preemption. And I want to be very specific about this. Again,
0:50:10 to Matt's point, we're not talking about preempting all state law. We're talking about
0:50:16 making sure that we have a federal framework specifically for model regulation and,
0:50:23 hopefully, for how the models can be used. Right. I think that's going to be so,
0:50:30 so critical because, just like any other technology, AI can't live under a 50-state
0:50:35 patchwork. And that's been the biggest issue that we've been fighting over the
0:50:43 last year and a half or so. So I think that there are some other
0:50:49 sorts of policy sets that will be handled beyond that, that can kick into things like
0:50:54 workforce training. I think there are some literacy things that should be coming up. Obviously,
0:50:59 there's a huge, robust conversation around data centers and energy that I think will be really,
0:51:04 really important. But above all, I think most of our time and energy will be focused on trying to
0:51:09 have some level of federal standard here, to try and draw the dividing line between the federal and
0:51:12 state governments, which I think Matt has already done a ton of great work on.
0:51:19 Yeah, I think this is just a super exciting policy moment for AI. Over the last couple of
0:51:22 years, there were a bunch of ideas that were proposed. And for the reasons that we've
0:51:28 discussed, we think those ideas fall short, both in terms of protecting consumers and in terms of ensuring
0:51:34 that there's a robust startup ecosystem. Most of those laws, I think, have actually not succeeded in
0:51:39 passing. There were a number of laws introduced at the state level in this past year's
0:51:44 legislative sessions that we thought had a strong likelihood of passing, and I think to date,
0:51:49 none of them have passed. Colin has also been building out the expertise and skillset and
0:51:54 capacity on his team. We just hired Kevin McKinley to lead our work in state policy. And he, I think,
0:52:00 will help us to take a real affirmative position in the legislative sessions ahead on what might actually
0:52:04 be AI policy that's good for startups. So instead of being in the position of saying no,
0:52:07 because we’re sort of starting late and kind of with one hand behind our back,
0:52:13 I think we're in a position to really try to articulate and advance a proactive agenda
0:52:19 in AI that's compelling. I think Colin hit the main parts of it: ensuring proper roles for the
0:52:24 federal and state governments, focusing on regulating harmful use, not development. And there are specific
0:52:29 things that you can do there in terms of increasing capacity in enforcement agencies, making clear that
0:52:35 AI is not a defense to claims brought under existing criminal or civil law, and technical
0:52:40 training for government officials to make sure that they can identify and prosecute cases where
0:52:44 AI is used in a harmful way. And then all this infrastructure and talent stuff that Colin's
0:52:51 describing: workforce retraining, AI literacy. We've also given some thought to the idea that has been
0:52:57 articulated by a number of lawmakers, and was in the national AI action plan, of creating a central resource
0:53:00 housed in the federal government, and you could also do it in state governments as well,
0:53:05 that lowers some of the barriers to entry for startups, you know, compute costs and
0:53:11 data access. And we think that's really compelling in terms of ensuring that startups
0:53:15 can compete. And that idea, like many of these, is bipartisan. It's been supported by the current
0:53:20 administration, and it was supported by leading Democrats over the last couple of years. So that's the kind
0:53:26 of thing that we are hoping, when we have the room and position to really advocate for an
0:53:29 affirmative agenda, will get some traction in policy circles.
0:53:36 We're not always in 100% alignment with other people in the industry, you know? And I think
0:53:41 that's, you know, big, medium, little, across the board. And there are other groups, sort of
0:53:46 like consumer advocacy groups, that obviously feel differently about these things. I think for the
0:53:52 most part, the industry is generally aligned on some level of a federal standard here, and on understanding
0:53:57 that the thing, again, that won't work is a 50-state patchwork. Yeah. And I think that that's
0:54:02 super, super important, because I think for the first time you actually have this sort of alignment
0:54:06 there. And if you have that sort of alignment, that's the kind of momentum that you can use to actually
0:54:10 push things over the finish line and get something done. And I think, look, the Trump
0:54:15 administration, to their credit, has also been incredibly supportive of this idea too.
0:54:21 That's an incredibly important point. One criticism, usually raised in sort of an
0:54:26 implicit way, is: hey, you're the little guys, but often you align with the big
0:54:31 guys. So aren't you just in favor of a deregulatory agenda that works for big
0:54:35 tech? And one of the things that I think is really extraordinary about the Little Tech Agenda is that it's
0:54:41 really nonpartisan and doesn't take a position on big or little. It basically says: here's the agenda.
0:54:46 And when you agree with us, we’ll support you. And when you disagree with us, we’ll oppose you.
0:54:51 And that's not party line. It's not big, little. And so I think what we saw in
0:54:58 the phase that Colin was referring to, kind of initially in the recent set of AI policy, was a
0:55:04 phase of divergence between big and little over the licensing regime. Bigs were sort of pushing it; little was
0:55:08 concerned about it. Then there was a period of convergence. And I think actually,
0:55:13 if you look at, like, the national AI action plan comments across a range of different providers,
0:55:18 as Colin's saying, a lot of them had some core similarities. So lots of large companies
0:55:23 have advocated for federal preemption. We don't oppose that just because big companies are advocating for
0:55:27 it. We think that that's good for startups. I think it's possible, and I'm curious, I mean,
0:55:33 this is really something Colin understands in a way that I don't, how the political
0:55:37 chips will fall. I think it's possible we're in a period of some divergence. And one thing that we hear
0:55:42 repeatedly, which is sort of funny, is people will bring us stuff and they'll say, industry agrees
0:55:46 with this, so we expect you to agree. The industry has already agreed; you can't disagree.
0:55:51 And we say, the big parts of the industry have agreed, but sometimes we agree with them
0:55:55 and sometimes we have different views. And so when we disagree, it's not because we're trying to, like,
0:55:59 blow up a policy process or make it difficult for lawmakers who are trying to move
0:56:03 something forward. It's because when we're looking at it, we're looking at it through this particular
0:56:08 lens. And I think, I hope it's not the case, but I think there might be more fracturing in the
0:56:11 months ahead. Yeah, I agree with you on that. And by people, he means lawmakers,
0:56:16 just to be specific. Yes. That’s a great place to wrap. Colin, Matt,
0:56:18 thanks so much for coming on the podcast. Thanks very much.
0:56:26 Thanks for listening to the A16Z podcast. If you enjoyed the episode, let us know by leaving a review
0:56:32 at ratethispodcast.com slash A16Z. We’ve got more great conversations coming your way. See you next time.
0:56:37 As a reminder, the content here is for informational purposes only.
0:56:52 Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
0:57:00 For more details, including a link to our investments, please see A16Z.com forward slash disclosures.
0:57:08 Thank you.

Who’s speaking up for startups in Washington, D.C.?

In this episode, Matt Perault (Head of AI Policy, a16z) and Collin McCune (Head of Government Affairs, a16z) unpack the “Little Tech Agenda” for AI: why AI rules should regulate harmful use, not model development; how to keep open source open; the roles of the federal government vs. the states in regulating AI; and how the U.S. can compete globally without shutting out new founders.

 

Timecodes: 

0:00 – Introduction 

1:12 – Defining the Little Tech Agenda

4:40 – Challenges for Startups vs. Big Tech

6:37 – Principles of Smart AI Regulation

9:55 – History of AI Policy & Regulatory Fears

19:26 – The Role of Open Source and Global Competition

23:45 – Motivations Behind Policy Approaches

26:40 – Debates on Regulating Use vs. Development

35:15 – Federal vs. State Roles in AI Policy

39:24 – AI Policy and U.S.–China Competition

40:45 – Current Policy Landscape & Action Plans

42:47 – Moratoriums, Preemption, and Political Dynamics

50:00 – Looking Forward: The Future of AI Policy

56:16 – Conclusion & Disclaimers

Resources: 

Read the Little Tech Agenda: https://a16z.com/the-little-tech-agenda/

Read ‘Regulate AI Use, Not AI Development’: https://a16z.com/regulate-ai-use-not-ai-development/

Read Martin’s article ‘Base AI Policy on Evidence, Not Existential Angst’: https://a16z.com/base-ai-policy-on-evidence-not-existential-angst/

Read ‘Setting the Agenda for Global AI Leadership’:

https://a16z.com/setting-the-agenda-for-global-ai-leadership-assessing-the-roles-of-congress-and-the-states/

Read ‘The Commerce Clause in the Age of AI’:

https://a16z.com/the-commerce-clause-in-the-age-of-ai-guardrails-and-opportunities-for-state-legislatures/

Find Matt on X: https://x.com/MattPerault

Find Collin on X: https://x.com/Collin_McCune
