California’s Senate Bill 1047: What You Need to Know

AI transcript
0:00:07 The cost to reach any given benchmark of reasoning or capability is dropping by about 50 times every five years.
0:00:17 The definitions for what was dangerous in the Cold War became obsolete so fast that a couple of decades later when the Macintosh launched, it was technically a munition.
0:00:26 Great technologies always find their way into downstream uses that the original developers would have had no way of knowing about prior to launch.
0:00:35 No rational startup founder or academic researcher is going to risk jail time or financial ruin just to advance the state of the art in AI.
0:00:39 There’s no chance we’d be here without open source.
0:00:49 The state of California ranks as the fifth largest economy in the world, and on a per capita basis, the Golden State jumps all the way up to number two.
0:01:00 Now, one of the drivers of those impressive numbers is, of course, technology, with California being the home of all but one of the FAANG companies and a long, long tail of startups.
0:01:09 But something happened recently that has the potential to dislocate the state’s technical dominance and set a much more critical precedent for the nation.
0:01:14 On May 21st, the California Senate passed Bill 1047.
0:01:31 This bill, which sets out to regulate AI at the model level, wasn’t garnering much attention until it slid through an overwhelming bipartisan vote of 32 to 1 and is now queued for an assembly vote in August, which, if passed, would cement it into law.
0:01:34 So here is what you need to know about this bill.
0:01:40 Senate Bill 1047 is designed to apply to models trained above certain compute and cost thresholds.
0:01:52 The bill also makes developers both civilly and even criminally liable for the downstream use or modification of their models, by requiring them to certify that their models won’t enable, quote, “hazardous capability.”
0:01:57 The bill even expands the definition of perjury and could result in jail time.
0:02:05 Third, the bill would result in a new Frontier Model Division, a new regulatory agency funded by the fees and fines on AI developers.
0:02:10 And this very agency would set safety standards and advise on AI laws.
0:02:13 Now, if all of this sounds new to you, you’re not alone.
0:02:20 But today you have the opportunity to hear from a16z General Partner Anjney Midha and Venture Editor Derrick Harris.
0:02:30 Together, they break down everything the tech community needs to know right now, including the compute threshold of 10 to the power of 26 FLOPs being targeted by this bill.
0:02:39 Whether a static threshold can realistically even hold up to exponential trends in algorithmic efficiency and compute costs, historical precedents that we can look to for comparison,
0:02:47 the implications of this bill on open source, and the startup ecosystem at large, and most importantly, what you can do about it.
0:02:55 Now, this bill really is the tip of the iceberg with over 600 new pieces of AI legislation swirling in the United States today.
0:03:08 So if you care about one of the most important technologies of our generation and America’s ability to continue leading the charge here, we encourage you to read the bill and spread the word.
0:03:15 As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax or investment advice,
0:03:21 or be used to evaluate any investment or security and is not directed at any investors or potential investors in any a16z fund.
0:03:27 Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast.
0:03:39 For more details, including a link to our investments, please see a16z.com/disclosures.
0:03:49 Before we dive into the substance of the California Senate Bill 1047, can you start with giving your high level reaction to the bill and maybe give listeners a sense of why it’s such a big deal right now?
0:03:52 No shock, disbelief.
0:04:05 It’s hard to overstate just how blindsided startups, founders, the investor community that have been heads down building models, building useful AI products for customers, paying attention to what the state of the technology is,
0:04:07 and ultimately just innovating at the frontier.
0:04:12 These folks, the community broadly feels completely blindsided by this bill.
0:04:22 When it comes to policymaking, especially in technology at the frontier, the spirit of policymaking should be to sit down with your constituents, startups, founders at the frontier, builders,
0:04:24 and then go solicit their opinion.
0:04:37 And what is so concerning about it right now is that this bill, SB 1047, was passed in the California Senate with a 32 to 1 overwhelming vote, bipartisan support.
0:04:44 And now it’s headed to an assembly vote in August, less than 90 days away, which would turn it into law.
0:04:49 And so if it passes in California, it will set the precedent in other states.
0:05:02 It will set a nationwide precedent and ultimately that’ll have rippling consequences outside of the US to other allies and other countries that look to America for guidance and for thought leadership.
0:05:09 And so what is happening here is this butterfly effect with huge consequences on the state of innovation.
0:05:20 There’s a lot to get into with the proposed law and some of its shortcomings or oversights, but the place I want to start is that both SB 1047 and President Biden’s executive order from last year
0:05:36 establish mandatory reporting requirements for models that are trained, and this is a little difficult to say, bear with me listeners, on 10 to the 26 integer or floating-point operations, or FLOPs, of compute.
0:05:41 So can you explain to listeners what FLOPs are and why they’re significant in this context?
0:05:47 Right. So FLOPs in this context refers to the number of floating point operations used to train an AI model.
0:05:55 And floating point operations are just a type of mathematical operation that computers perform on real numbers as opposed to just integers.
0:06:02 And the amount of FLOPs used is a rough measure of the computing resources and complexity that went into training a model.
0:06:08 And so if models are like cars, FLOPs might be the amount of steel used to make a car, to borrow an analogy.
0:06:12 It doesn’t really tell you much about what the car can and cannot do directly.
0:06:19 But it’s just one way to kind of measure the difference between the steel required to make a sedan versus a truck.
0:06:26 And this 10 to the 26 FLOP threshold is significant because that’s how the bill is trying to define what a covered model is.
0:06:33 It’s an attempt to define the scale at which AI models become potentially dangerous or in need of additional oversight.
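For a rough sense of what a number like 10 to the 26 means in practice, here is a minimal sketch using the common rule of thumb that dense-transformer training takes roughly 6 FLOPs per parameter per training token; the rule of thumb and the example model sizes are illustrative assumptions, not figures taken from the bill or from this conversation.

```python
# Rough intuition for the 10^26 FLOP threshold using the common approximation
# that dense-transformer training costs ~6 FLOPs per parameter per token.
# The model sizes and token counts below are illustrative assumptions only.

THRESHOLD_FLOPS = 1e26

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * parameters * tokens

examples = {
    "7B params, 2T tokens": training_flops(7e9, 2e12),      # ~8.4e22
    "70B params, 15T tokens": training_flops(70e9, 15e12),   # ~6.3e24
    "1T params, 20T tokens": training_flops(1e12, 20e12),    # ~1.2e26
}

for label, flops in examples.items():
    print(f"{label}: {flops:.1e} FLOPs -> above threshold: {flops > THRESHOLD_FLOPS}")
```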
0:06:46 And this all starts from the premise that foundation models trained with this immense amount of computation are extremely large and capable to the point where they could pose social risks or harm inherently if not developed carefully.
0:06:57 But tying regulations to some fixed FLOP count or equivalent today is completely flawed because algorithmic efficiency improves, computing costs decline.
0:07:09 And so models that take far fewer resources than 10 to the 26 FLOPs will match the capabilities of a 10 to the 26 FLOP model of today within a fairly short time frame.
0:07:16 So this threshold would quickly expand to cover many more models than just the largest, most cutting edge ones being developed by tech giants.
0:07:21 It will basically cover most startups in open source too within a really short amount of time.
0:07:35 And so while today in 2024, realistically, only a handful of the very largest language models like GPT-4 or Gemini and other top models from big tech companies are likely to sit above that 10 to the 26 FLOP threshold.
0:07:43 In reality, most open source and academic models will soon be covered by that definition as well.
0:07:50 This would really hurt startups, it would burden small developers, and ironically it’s going to reduce the transparency and collaboration around AI safety
0:07:53 by discouraging open source development.
0:08:04 What we see frequently is people in labs going out there and saying we’re going to build big state-of-the-art models that cost less to train, that use fewer resources, that use more data or different types of data.
0:08:08 There are all these different knobs to pull to get performance out of these models.
0:08:14 Seems like you could have this sort of performance for a fraction of the cost in a small number of years.
0:08:22 Right, so that all comes down to two key trends. One, the falling cost of compute and two, the rapid progress in algorithmic efficiency.
0:08:28 Empirically, the cost per FLOP for GPUs is halving roughly every two to two and a half years.
0:08:39 And so this means that a model that costs about $100 million to train today would only cost about $25 million in about five years and less than $6 million in a decade.
0:08:44 Just based on hardware trends alone, just Moore’s Law. But that’s not even the whole story, right?
0:08:52 Algorithmic progress is also making it dramatically easier to achieve the same benchmark performance with way less compute rapidly.
0:09:03 And so when you look at those trends, we observe that the compute required to reach a given benchmark of reasoning or capability is decreasing by half about every 14 months or less.
0:09:17 So if it takes $100 million worth of FLOPs to reach some given benchmark today, in five years, it would only take around $6 million worth of FLOPs to achieve that same result, just considering the algorithmic progress alone.
0:09:28 Now, when you put these two trends together, it paints a pretty stunning picture because the cost to reach any given benchmark of reasoning or capability is dropping by about 50 times every five years.
0:09:38 And so that means that if a model costs $100 million to train to some benchmark in 2024, by 2029, it will probably cost less than $2 million.
0:09:40 That’s well within a startup budget.
0:09:51 And by 2034, a decade, that cost will drop to somewhere between $40,000 and $50,000, putting it within the reach of literally millions of people.
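As a back-of-the-envelope check on those figures, here is a minimal sketch that compounds the two trends just described: hardware cost per FLOP halving roughly every two to two and a half years, and the compute needed to hit a given benchmark halving roughly every 14 months. The exact multiplier depends on where you land within those quoted ranges, which is why the combined effect gets rounded to roughly 50 times per five years in the conversation.

```python
# Back-of-the-envelope compounding of the two cost trends discussed above.
# The halving periods and the $100M baseline are the approximate figures
# from this conversation, not precise measurements.

def decay(initial: float, years: float, halving_period_years: float) -> float:
    """Value remaining after `years` if it halves every `halving_period_years`."""
    return initial * 0.5 ** (years / halving_period_years)

baseline_cost = 100e6  # ~$100M frontier-scale training run today

for years in (5, 10):
    hw_factor = decay(1.0, years, 2.5)        # hardware: cost per FLOP halves ~every 2.5 years
    algo_factor = decay(1.0, years, 14 / 12)  # algorithms: required FLOPs halve ~every 14 months
    combined_cost = baseline_cost * hw_factor * algo_factor
    print(f"After {years} years: hardware alone -> ${baseline_cost * hw_factor / 1e6:.1f}M, "
          f"algorithms alone -> ${baseline_cost * algo_factor / 1e6:.1f}M, "
          f"combined -> ${combined_cost / 1e6:.2f}M "
          f"(~{1 / (hw_factor * algo_factor):.0f}x cheaper)")
```

With the specific halving periods picked here, the combined factor comes out somewhat steeper than 50x; picking the gentler ends of the quoted ranges brings it in line with the rounded figure above.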
0:09:58 And despite these clear trends, the advocates for the bill seem to be overlooking or underestimating this rapid progress.
0:10:05 Some folks are suggesting that, oh, these smaller companies might take 30 years or more to reach this 10 to the 26 FLOP threshold.
0:10:09 But as we’ve just discussed, that’s a pretty serious overestimation.
0:10:19 So even assuming a model costs $1 billion to train to that level today, it’s going to cost as little as $400,000 in just a decade.
0:10:25 And it is easily within the range for most small businesses who are going to then have to grapple with compliance and regulation and so on.
0:10:37 And so look, the bottom line is that given the breakneck pace of progress and compute costs and efficiency, we can expect smaller companies and academic institutions to start hitting these benchmarks in the very near future.
0:10:47 Yeah, I think it’s a relevant touch point to remind people that a smartphone today, like an iPhone 15 has more FLOPs, more performance than a supercomputer did about 20 years ago.
0:10:52 Like the world’s fastest supercomputers, your iPhone can do more FLOPs than that.
0:11:00 The Apple Macintosh G4, I think back in 1999 had enough computing power that it would have been regulated as a national security threat.
0:11:03 So these numbers, these are very much sliding scales to your point.
0:11:13 That’s right. That’s right. That’s a great historical example. I think there was this 1979 Export Administration Act that the US had written in the Cold War era in the 70s.
0:11:23 And the definitions for what was dangerous in the Cold War became obsolete so fast that a couple of decades later when the Macintosh launched, it was technically a munition.
0:11:34 So we’ve been here before and we know that when policymakers and regulators try to capture the state of a current technology that’s dramatically improving really fast, they become obsolete incredibly fast.
0:11:35 And that’s exactly what’s happening here.
0:11:48 The other thing is, at the time we’re recording this, there are some proposed amendments floating around to SB 1047, one of which would limit the scope of the bill to applying, again, only to models trained at that compute capacity.
0:11:54 And additionally, that also cost more than $100 million to train.
0:12:03 So what’s your thought on that? And again, if we attach a dollar amount to this, doesn’t it make the compute threshold kind of obsolete?
0:12:13 Yeah, so this $100 million training cost amendment might seem like a reasonable compromise at first, but when you really look at it, it has the same fundamental flaws as the original FLOP threshold.
0:12:22 The core issue is that both approaches are trying to regulate the model layer itself, rather than focusing on the malicious applications or misuses of the models.
0:12:30 Generative AI is still super early, and we don’t even have clear definitions for what should be included when calculating these training costs.
0:12:38 Do you include the data set acquisition, the researcher salaries? Should we include the cost of previous training runs or just the final ones?
0:12:42 Should human feedback for model alignment expenses count?
0:12:46 If you fine-tune someone else’s model, should the cost of the base model be included?
0:12:59 These are all open questions without clear answers, and forcing startups, founders, and academics to comply with legislative definitions for these various cost components at this stage would place a massive burden on these smaller teams.
0:13:04 Many of whom just don’t have the resources to navigate these super complex regulatory requirements.
0:13:13 Plus, when you just look at the rapid pace of model engineering, these definitions would need to be updated constantly, which would be a major drain on innovation.
0:13:26 So when you combine that ambiguity with the criminal and monetary liabilities proposed in the bill, as well as the broad authority they’re trying to give to the new frontier model division, which is sort of like a DMV for AI models that they’re proposing,
0:13:31 which can arbitrarily decide these matters, the outcome is clear, right?
0:13:41 Most startups will simply have to relocate to more AI friendly states or countries while open source AI research in the US will be completely crushed due to the legal risks involved.
0:13:47 So in essence, the bill is creating this disastrous regressive tax on AI innovation.
0:13:54 Large tech companies that have armies of lawyers and lobbyists will be able to shape the definitions to their advantage.
0:13:59 While smaller companies, open source researchers and academics will be completely left out in the cold.
0:14:10 It’s almost like saying we’ve just invented the printing press and now we’re only going to let those folks who can afford $100 million budgets to make these printing presses decide what can and cannot be printed.
0:14:16 It’s just blatant regulatory capture and it’s one of the most anti competitive proposals I’ve seen in a long time.
0:14:25 And what we should be focusing on instead is regulating specific high risk applications and malicious end users.
0:14:29 That’s the key to ensuring that AI benefits everyone, not just a few.
0:14:40 Now, you mentioned that the purported goal of some of these bills, 1047 in particular, is to protect against what you might call catastrophic harms or existential risks from artificial intelligence.
0:14:50 But I’m curious, do you think, I mean, are the biggest threats from LLMs really weapons of mass destruction or bio weapons or autonomously carrying out criminal behavior?
0:15:04 I mean, if we’re going to regulate these models, I mean, should we not regulate use cases that are like proven in the wild and can actually do real damage today versus hypothetically at some point, these things could happen?
0:15:22 Absolutely. I mean, basically what we have is a complete over-rotation of the legislative community around entirely nonexistent concerns of what is being labeled as AI safety, when what we should be focusing on is AI security.
0:15:32 These models are no different than databases or tools in the past that have given humans more efficiency, better ways to express themselves.
0:15:34 They’re really just neutral pieces of technology.
0:15:44 Now, sure, they may be allowing bad actors to increase the speed and scale of their attacks, but the fundamental attack vectors remain the same.
0:16:02 It’s spear phishing, deepfakes, it’s misinformation, and these attack vectors are known to us and we should focus on how to strengthen enforcement and give our country better tools to enforce those laws in the wake of increasing speed and scale of these attacks.
0:16:10 But the attacks themselves, the attack vectors haven’t changed. It’s not like AI suddenly has exposed us to tons of new ways to be attacked.
0:16:15 And that kind of threat is just so far off and frankly unclear, and today largely in the realm of science fiction.
0:16:29 And so the safety debate often centers around what is called existential risk or these models autonomously going rogue to produce weapons of mass destruction or the Terminator Skynet situation where they’re hiding their true intentions from us.
0:16:38 And sure, maybe there’s some theoretically tiny likelihood that that happens many, many, many years from now, but exceptional claims require exceptional evidence.
0:16:50 And so the real threat here is from us not focusing on the misuses and malicious users of these models and putting the burden of actually doing that on startups, on founders and engineers.
0:16:51 Right.
0:17:00 And to your point, even if a model made it marginally easier to learn, let’s say how to build a bio weapon, like one, people know how to do that today.
0:17:11 We have all of these things. We have labs dedicated to all of these things. You still need to get materials to carry out these attacks and there are regulations around acquiring those materials and databases around who’s buying what.
0:17:17 Yes, it does seem like the existing legal framework for some of these major threats is very robust.
0:17:18 Exactly.
0:17:24 What we really need is more investment in defensive artificial intelligence solutions, right?
0:17:41 What we need is to arm our country, our defense departments, our enforcement agencies with the tools they need to keep up with the speed and scale at which these attacks are being perpetuated, not slowing down the fundamental innovation that can actually unlock those defensive applications.
0:17:51 And look, the reality is America and her allies are up against a pretty stiff battle from adversarial countries around the world who aren’t stopping their speed of innovation.
0:17:56 And so it’s almost an asymmetric warfare against ourselves that’s being proposed by SB 1047.
0:18:00 Yeah, I’m certain there are governments that would in fact fund those hundred million dollar models.
0:18:01 Well north of that, right?
0:18:02 Yeah.
0:18:10 And we have increasing evidence that this is happening and that our national security actually depends on improving and accelerating open source collaboration.
0:18:25 And just two months ago, the Department of Justice revealed and published a public investigation, the conclusion of which was that a Google engineer was boarding a plane to China with a thumb drive with frontier AI hardware schematics from Google.
0:18:30 This was a nation state sponsored attack on our ecosystem.
0:18:39 And the only defense we have against that is actually making sure that innovation continues at breakneck speed in the country, not adding more burden to model innovation.
0:18:52 The other thing that SB 1047 would do, which we haven’t really touched on, is impose liability, civil and in some cases criminal, on model developers.
0:19:03 For the civil liability part, if they build a model that’s covered by this bill, they need to be able to prove with beyond reasonable assurance or whatever the language is that this could not possibly be used for any of these types of attacks.
0:19:11 And also they have to be able to prove that no one else could come along and say fine tune their model and use it for some sort of attack, right?
0:19:22 So that’s a whole new level to be on the hook for money as an individual or jail time as an individual for building this model and not making it quote unquote safe enough.
0:19:24 Oh no, you’re absolutely right.
0:19:34 The idea of imposing civil and criminal liability on model developers when downstream users do something bad is so misguided and such a dangerous precedent.
0:19:42 First off, the bill requires developers to prove that their models can’t possibly be used for any of the defined hazardous capabilities.
0:19:47 But as we just discussed, these definitions are way too vague, ambiguous and subject to interpretation.
0:19:53 How can a developer prove a negative, especially when the goalposts keep moving?
0:19:55 It’s an impossible standard to meet.
0:20:04 Second, the bill holds developers responsible for any misuse of their models, even if that misuse comes from someone else who’s fine tuned or modified the model.
0:20:05 It’s ridiculous.
0:20:11 It’s like holding car manufacturers liable for every accident caused by a driver who’s modified their car.
0:20:15 So it’s an absurd standard that no other industry is held to.
0:20:21 The practical effect of these liability provisions will be to drive AI development underground or offshore.
0:20:29 No rational startup founder or academic researcher is going to risk jail time or financial ruin just to advance the state of the art in AI.
0:20:35 They’ll simply move their operations to a jurisdiction with a more sensible regulatory environment and the US will lose out.
0:20:36 Period.
0:20:40 The worst part, these liability provisions actually make us less safe, not more.
0:20:49 By driving AI development into the shadows, you lose the transparency and open collaboration that’s essential for identifying and battle-hardening vulnerabilities in AI models.
0:20:53 What we need is more open source development, not less.
0:21:03 So while the bill sponsors may have good intentions, imposing blanket liability on model developers for hypothetical future misuse is the exact opposite of what we need.
0:21:04 Right.
0:21:10 Supporters might argue, well, let’s put someone behind bars for lying to the government about the capabilities of their models.
0:21:14 But again, like you might not know the capabilities of your models, right?
0:21:17 Or what a downstream user could do with that.
0:21:20 I wanted to ask you too, because you’ve built startups, you invest in startups.
0:21:31 I mean, can you walk through like the kind of wrench this type of compliance would throw into whether it’s the finances or the operation or just the general way that startups and innovative companies work?
0:21:32 Oh, yeah.
0:21:36 Look, I love California and that’s why I’m fighting so hard for this.
0:21:39 I did my undergraduate and graduate work here in the Bay.
0:21:41 I founded my first company here.
0:21:44 I sold that to another California company.
0:22:03 And over the last decade plus that I’ve been here, it’s only become more and more clear to me that a huge part of what makes the entire startup ecosystem even work is the ability for founders to take bold technology risks without having to worry about the kinds of ambiguity and liability risks that this bill is proposing.
0:22:13 When we first started Ubiquity6, my last company, the goal was to empower developers to use our computer vision pipeline for all kinds of new use cases that we hadn’t even imagined.
0:22:19 We had some idea of what people would do with it originally: augmented reality applications.
0:22:31 But after we’d launched it, we found millions of users who used our 3D mapping technology for entirely new kinds of uses from architecture and robotics to VFX and entertainment that we hadn’t even considered.
0:22:45 And so the whole engine and the beauty of platform businesses is that developers can focus on developing general and highly flexible technology and then just let the market figure out entirely new niche use cases at scale.
0:22:57 And this is true of almost every great AI business I’ve either worked with directly or invested in, right, whether it was Midjourney in image generation, Anthropic in language models, or ElevenLabs in audio models.
0:23:14 Great technologies always find their way into downstream uses that the original developers would have had no way of knowing about prior to launch, and to burden that process with the liability of this bill, of saying that developers have to somehow, prior to launch,
0:23:31 demonstrate beyond any shred of reasonable doubt (again, a completely ambiguous standard in the bill) that these uses were known about, that their risks were understood, and that exhaustive safety testing had been done to make sure none of these things would be possible,
0:23:33 would just completely kill that engine.
0:23:44 If we went back in time and this bill passed as currently envisioned, as much as I hate to say it, there’s no chance I would have founded my company in California.
0:23:53 Speaking of startups, that’s to say nothing about open source projects and open source development, which have been like a huge driver of innovation over the past couple of decades.
0:24:00 We’re talking about very, very bootstrapped, skeletal budgets on some of these things, but hugely, hugely important.
0:24:07 Oh, fundamentally, I don’t think the current wave of modern generative scaling laws based AI would even exist without open source, right?
0:24:18 If you just go back and look at how we got here, transformers, kind of the atomic unit of how these models learn, were an open source, widely collaborated-on development, right?
0:24:26 In fact, it was produced at one lab, Google, and through open publishing and collaboration allowed another lab, OpenAI, to actually continue that work.
0:24:29 And there’s no chance we’d be here without open source.
0:24:32 The downstream contributions of open source continue to be massive today.
0:24:43 When a company like Mistral or Facebook open sources models and releases their weights, that allows other startups to then pick up on those investments and build on top of them.
0:24:48 It’s like having the Linux to the closed-source Windows operating systems.
0:24:57 It’s like having the Android to the closed-source iOS, and without those, there’s no chance that the speed at which the AI revolution is moving will continue.
0:24:59 Certainly not in California, and probably not in the United States.
0:25:02 Open source is kind of the heart of software innovation.
0:25:10 And this bill slows it down, has a chilling effect on open source by putting liability on the researchers and the builders pushing open source forward.
0:25:11 Yes.
0:25:17 And the other thing about open source is, I guess this is true of any model theoretically, but the idea if someone takes it and builds on it, right?
0:25:21 In AI, in generative AI or foundation models, you would call that fine tuning, right?
0:25:24 Where you retrain a model to your own purposes using your own data.
0:25:29 And again, this bill would, as written, impose liabilities again on the original developers.
0:25:35 If someone is able to fine tune their model to perform theoretically some sort of bad act, right?
0:25:46 I mean, how realistic is it for someone to even build a model that would be resistant or resilient against these types of fine tuning attacks or optimizations for lack of a better term?
0:25:47 Yes.
0:25:49 So this is another can of worms as well.
0:25:56 Again, a symptom of the root cause of this bill’s flawed premise of regulating models instead of misuses.
0:26:09 So in the current bill draft, the language says that these restrictions and regulations will extend to a concept of a derivative model, which is a model that is a modified version of another model, such as a fine tuned model.
0:26:15 So if someone makes a derivative model of my base model that’s harmful, I am now liable for it.
0:26:25 It’s akin to saying that if I’m a car manufacturer and someone turns a car I made into a tank by putting guns on it and shoots people with it, I should get thrown in jail.
0:26:29 The definition of what a derivative model is also super vague.
0:26:35 And so now the bill sponsors are considering an amendment that says, oh, let’s add a compute cap to this definition.
0:26:39 And they’ve decided to pick 25%, which is quite arbitrary.
0:26:51 And to say if somebody uses more than 25% of the compute that the base model developer used to fine tune a model, then it’s no longer a derivative model and you’re off the hook for it as the base model developer.
0:26:54 Well, that’s absolutely nonsensical as well.
0:27:11 As some great researchers like Ion Stoica at Berkeley have shown, it takes an extremely small amount of compute to fine-tune a model like Vicuna, where with just 70,000 ShareGPT conversations, they fine-tuned LLaMA to become one of the best open source models at the time,
0:27:18 showing it really doesn’t take much compute or data to turn a car into a tank, to borrow an analogy.
0:27:25 And so like with the 10 to the 26 compute threshold issue we discussed earlier, this is just another arbitrary magic number.
0:27:36 The bill authors are pulling it out of thin air to try and define model-layer compute dynamics that are so early and so fast-changing that it’s absolute overregulation and will kill the speed of innovation here.
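To make that 25% cap concrete, here is a minimal sketch of the rule as described above; the function name, the threshold parameter, and the FLOP counts are hypothetical and purely for illustration, and the Vicuna-style figure is an order-of-magnitude guess rather than a measured number.

```python
# Hypothetical illustration of the proposed "derivative model" compute cap
# described above. All names and numbers here are illustrative assumptions.

def is_derivative_model(base_training_flops: float,
                        fine_tune_flops: float,
                        cap: float = 0.25) -> bool:
    """Under the amendment as described, a fine-tune still counts as a 'derivative
    model' (leaving the base developer liable) if it uses no more than `cap`
    (25%) of the compute used to train the base model."""
    return fine_tune_flops <= cap * base_training_flops

base_run = 1e26        # a base model right at the bill's covered-model threshold
tiny_fine_tune = 1e21  # a Vicuna-style fine-tune, orders of magnitude smaller

print(is_derivative_model(base_run, tiny_fine_tune))  # True: base developer stays on the hook
print(is_derivative_model(base_run, 0.3 * base_run))  # False: >25% of base compute, off the hook
```

The Vicuna example above is the point: meaningful capability changes can come from fine-tuning runs that are a vanishingly small fraction of the base training compute, so under this definition almost every real-world fine-tune would stay classified as a derivative model and keep the original developer liable.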
0:27:45 All right, so you’ve alluded to this, but I wanted to ask directly, if we say not all regulation is bad, if you were in charge of regulating AI, how would you approach it?
0:27:52 Or how would you advise lawmakers who feel compelled to address what seemed like concerns over AI, what would be your approach?
0:27:59 The non-negotiable here really should be zero liability at the model layer, right?
0:28:08 What you want to do is target misuses and malicious users of AI models, not the underlying models and not the infrastructure.
0:28:21 And that’s the core battle here. I think that’s the fundamental flaw of this bill is it’s trying to regulate the model and infrastructure and not instead focus on the misuses and malicious users of these models.
0:28:40 And so over time, I think it would prove out that the right way to keep the US at the frontier of responsible, secure AI innovation is to actually focus on the malicious users and misuses of models, not slow down the model and infrastructure layer.
0:28:50 We should focus on concrete AI security and strengthening our enforcement and our defenses against AI security attacks that are increasing at speed and scale.
0:28:58 But fundamentally, these safety concerns that are largely science fiction and theoretical are a complete distraction at the moment.
0:29:03 And lastly, we have no choice but to absolutely accelerate open source innovation.
0:29:11 We should be investing in open source collaboration between America and our allies to keep our national competitiveness from falling behind our adversarial countries.
0:29:23 And so the three big policy principles I would look for from regulators would be to regulate and target misuses, not models; to prioritize AI security over safety; and to accelerate open source.
0:29:34 But the current legislation is absolutely prioritizing the wrong things and is rooted in a bunch of arbitrary technical definitions that will be outmoded, obsolete and overreaching fairly soon.
0:29:40 One might say we should regulate the same way we regulate the internet, which is to say, let it thrive.
0:29:46 It really is tantamount to saying we’ve barely just invented the printing press or we’ve barely just invented the Model T Ford car.
0:30:04 And now what we should immediately do is try to rush and prevent future improvements to cars or to the printing press by largely putting the responsibility for any accidents that happen from people irresponsibly driving the car out on the streets on Henry Ford or on the inventors of the printing press.
0:30:11 So then the final question here, taking everything into account, what can everyday listeners do about this, right?
0:30:22 I mean, if I’m a founder, if I’m an engineer, if I’m just concerned, what can I do to voice my opinion about SB 1047 about frankly any regulation coming down the line?
0:30:24 How should people think about making their voice heard?
0:30:26 Yeah, so I think three steps here.
0:30:28 The first would be to just read the bill.
0:30:30 It’s not very long, which is good.
0:30:33 But most people just haven’t had a chance to actually read it.
0:30:46 Step two, especially for people in California, the most effective way to have this bill be opposed is for each listener to call their assembly rep and tell them why they should vote no on this bill in August, right?
0:30:47 This is less than 90 days away.
0:30:57 So we really don’t have much time for all of the assembly members to hear just how little support this bill has from the startup community, tech founders, academics.
0:30:59 And step three is to go online.
0:31:10 You know, make your voice heard on places like Twitter, where it turns out, you know, a lot of both state level and national level legislators do listen to people’s opinions.
0:31:17 And so, look, I think if this bill passes in California, it sure as hell is going to create a ripple effect throughout other states.
0:31:19 And then this will be a national battle.
0:31:37 If you liked this episode, if you made it this far, help us grow the show, share with a friend, or if you’re feeling really ambitious, you can leave us a review at ratethispodcast.com/a16z.
0:31:42 You know, candidly producing a podcast can sometimes feel like you’re just talking into a void.
0:31:47 And so if you did like this episode, if you liked any of our episodes, please let us know.
0:31:49 I’ll see you next time.
0:31:51 (upbeat music)

On May 21, the California Senate passed Senate Bill 1047 (SB 1047).

This bill – which sets out to regulate AI at the model level – wasn’t garnering much attention, until it slid through an overwhelming bipartisan vote of 32 to 1 and is now queued for an assembly vote in August that would cement it into law. In this episode, a16z General Partner Anjney Midha and Venture Editor Derrick Harris break down everything the tech community needs to know about SB-1047.

This bill really is the tip of the iceberg, with over 600 new pieces of AI legislation swirling in the United States. So if you care about one of the most important technologies of our generation and America’s ability to continue leading the charge here, we encourage you to read the bill and spread the word.

Read the bill: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047
