AI transcript
0:00:08 is the most harmful to little tech.
0:00:18 When all your marketing team does is put out fires, they burn out. But with HubSpot,
0:00:24 they can achieve their best results without the stress. Tap into HubSpot’s collection of AI tools,
0:00:30 Breeze, to pinpoint leads, capture attention, and access all your data in one place.
0:00:36 Keep your marketers cool and your campaign results hotter than ever. Visit hubspot.com/marketers to
0:00:47 learn more. Hey, welcome to the Next Wave Podcast. I’m Matt Wolf. I’m here with Nathan Lanz. And
0:00:53 today we’ve got a really important episode. Today, we’re talking with Anjney Midha, and he’s a general
0:01:00 partner over at a16z. He was on the ground floor for companies like Midjourney, and Luma, and
0:01:06 Anthropic, and some of the biggest AI companies in the world. And today, his message is really
0:01:12 important. He’s talking about legislation that they’re trying to pass right now in California
0:01:21 that could kill AI. And this bill is called SB 1047, and it will really, really hinder AI
0:01:26 progress if it gets passed. And in this conversation, we’re going to talk to him about what this bill
0:01:33 is, why you should care about this bill, some better options for AI regulation, and what we
0:01:38 can all do about it to make sure that the right regulations get passed and the wrong regulations
0:01:45 do not. So this is a fascinating conversation with a lot to learn. So let’s jump right in with
0:01:51 Anjney Midha. Welcome to the show. Anjney, thanks so much for joining us. We’re really excited
0:01:57 to talk to you about AI and AI regulation and AI investments. So thanks for joining us today and
0:02:02 how you doing? I’m doing great. Thanks for having me. Why don’t we start by getting a little bit of
0:02:08 background? Can you break down what is SB 1047, like in layman’s terms? What do we need to know
0:02:14 about it? Yeah, so look, basically SB 1047 is a proposed law. It’s a California state bill
0:02:20 that’s making its way through the California legislature right now. That’s part of a much
0:02:26 broader wave of 750 or so new pieces of AI legislation that have been proposed in the US
0:02:35 since Biden signed the AI executive order late last year. But this one is the most
0:02:42 harmful to Little Tech. Little Tech is startups, open source researchers, academia. Unlike many
0:02:50 of those well-intentioned bills, which say, “Hey folks, AI is a powerful new useful technology,
0:02:56 and like many useful technologies like electricity or the internet, which has good and bad uses,
0:03:01 we should be thoughtful about how this technology is used. We should punish bad actors for doing
0:03:09 bad things with that neutral technology.” Unlike that rational approach, this bill is drafted to
0:03:15 attack underlying model researchers, scientists, and developers. And among other things, it’s trying
0:03:22 to place civil and criminal liabilities on developers of AI models, as opposed to focusing
0:03:29 on the malicious users of those models. So as proposed by this bill, overseeing these new laws
0:04:33 would be a Frontier Model Division, which is kind of like a new DMV they want to form,
0:03:39 a new regulatory agency that would have the power to propose requirements on startups,
0:03:44 on researchers, on academia that would dictate if a researcher or an engineer could ultimately
0:03:50 be thrown in jail or not. Now, it’s so crazy that when this bill was proposed amongst tons and tons
0:03:55 of other bills, most people read it and said, “Okay, crazy bills like this get proposed all
0:04:02 the time. This is never going to get anywhere.” But the California Senate passed SB 1047 in May,
0:04:09 32 to 1. And so this bill is now slated for a California Assembly vote in August,
0:04:17 less than 60 days away. If passed, we are one signature from Gavin Newsom away from cementing
0:04:24 this into California law. And so this is an incredibly dangerous piece of well-intentioned,
0:04:32 but incredibly misguided regulation that is trying to make AI safer by focusing on the underlying
0:04:35 model instead of the malicious misuses, which is really where we should be focusing.
0:04:41 If that’s passed, I don’t see how you build an AI startup in California. Why would you take that
0:04:47 risk? You’d go to Texas or somewhere else. I mean, no rational AI researcher or scientist
0:04:53 is going to risk being thrown in jail just to pursue their research in California.
0:05:01 I think that if I had to be really sympathetic, I think he’s probably trying to elevate
0:05:09 the attention that AI gets, but the drafting is so misinformed that it’s
0:05:14 completely divorced from the way these models are actually researched, trained, and developed in
0:05:20 the real world. When it comes to frontier AI research, this is a local legislator who has no background
0:05:27 in AI development or technology development. And frankly, I believe
0:05:34 the most real-world experience the bill’s co-authors have in a lab that has
0:05:38 actually developed and productionized these models is a four-month internship at Google.
0:05:45 One of the co-authors of the bill is a well-meaning think tank that is staffed by a
0:05:53 couple of high school researchers or something. From a substantive analysis, we’ve put out tons
0:05:59 and tons of critiques of the substantive pieces of the bill, but the process around this bill,
0:06:05 oh my god, I mean, it’s just a joke. That’s what ultimately led us to launching this website,
0:06:15 Stop SB 1047, two weeks ago, because after many attempts to provide the senator and his team with
0:06:22 feedback on the problems with the bill and how to address it, and being ignored, our founders and
0:06:27 the Little Tech community just got super frustrated when every new revision of the bill just ignored
0:06:35 all that feedback and instead made the bill even worse. With this August vote deadline looming,
0:06:41 it’s just become so important and urgent to amplify those voices that are being ignored,
0:06:45 right? Like startups, researchers, the open-source community at large to voice their concerns.
0:06:53 We just wanted to amplify those concerns that Scott Wiener’s team has been ignoring. I have
0:06:56 learned now, even if you’re not interested in politics, politics takes an interest in you.
0:07:02 A lot of legislators, I think, especially in other states, are being quite thoughtful about
0:07:06 saying, “You know what? We’re open to feedback. Give us feedback,” and they’re making revisions to
0:07:11 the bill that actually address that feedback. That’s not the case here, right? This is a process
0:07:17 that’s been led by a legislator who keeps saying, “I’m open-minded to feedback.” It takes a bunch
0:07:21 of founders’ time and companies’ time, and then when you see the new draft, it addresses none of
0:07:27 the core issues. For a theoretical example of what this could mean, and you can definitely
0:07:35 correct me if I’m wrong, but let’s say there’s an open-source model out there that some people
0:07:40 developed and they put online, made it open-source. Somebody else grabs that open-source model,
0:07:44 uses it to hack into a government system. I don’t know. Something like that. They use the model
0:07:52 to do some bad actor stuff. The people who made the model are just as liable as the people who
0:07:58 actually did the hacking, right? That’s right. If you read the bill, what the bill is saying is,
0:08:05 if you open-source a model that meets some criteria that they put, which is completely arbitrary,
0:08:08 we can get to that in a second. But if you open-source a covered model,
0:08:14 you have to certify that this model cannot be used for any catastrophic harms,
0:08:20 and if somebody downstream picks up your model and does something bad with it, fine-tunes it,
0:08:23 changes it, modifies it in ways that you didn’t control and does something bad,
0:08:28 you are liable as the open-source developer for the harm they did. They’re placing this perjury
0:08:34 penalty on that developer, and you might go, “Okay, Anj, well, perjury is kind of,
0:08:41 that’s pretty severe. If you’re guilty of perjury, you get thrown in jail.” Yes, you do. What this
0:08:48 bill is proposing is that if an open-source model developer fails to certify appropriately, and
0:08:52 by the way, there’s no real definitions proposed yet. All they’re saying is this new agency will
0:08:56 have the full rights to determine these definitions in the future. You could potentially be held liable.
0:09:00 I think that’s just crazy.
0:09:05 Yeah, definitely. So, is this only open-source, or does this apply to the closed-source models as
0:09:12 well? They’re proposing civil and criminal liabilities on all model developers, open-source,
0:09:20 closed-source. So, how is this disproportionately affecting the smaller businesses
0:09:26 building open source more than it is the OpenAIs, Microsofts, and Googles of the world?
0:09:33 Oh, this is classic regressive tax, right? If you just think about a concept of regressive tax
0:09:38 versus a progressive tax, a regressive tax is something that disproportionately hits
0:09:45 less-resourced people harder than people with more resources, right? And the way they’ve drafted
0:09:52 the bill, by putting all of this burden on definitions that have no precise meaning today,
0:09:58 what’s going to happen is, if this bill passes, this agency is going to get lobbied by Big Tech,
0:10:03 who has armies and armies of lawyers and compliance experts to shape the definitions
0:10:09 in their favor. And tiny startups, open-source researchers, academic labs, who don’t have all
0:10:14 those resources will just be left out in the cold. We’ve seen this happen with multiple industries,
0:10:16 and that’s what’s going to happen here as well.
0:10:20 So, it actually sort of helps some of these bigger companies with the regulatory capture that,
0:10:24 you know, they’re not outright saying they’re going for, but they’re probably going for, right?
0:10:30 100%. Let’s take one example from the bill, where the sponsor,
0:10:37 the bill sponsor, Scott Wiener, keeps saying, “Oh, look, my definition of what a covered model is
0:10:42 only applies to Big Tech companies because it only gets triggered by $100 million training
0:10:47 threshold.” Okay, well, hold on a second. The Big Tech company’s training budgets are in the
0:10:51 billion. So, first of all, if all you cared about was really just attacking and regulating Big Tech,
0:10:57 you would start your bill with the number B for billion, right? Number two, what even is a training
0:11:03 budget? There’s no such canonical definition today. This space is so early that, you know,
0:11:08 if I sampled the 16 different AI model startups that I’ve invested in over the last three years
0:11:12 for their definition of training, every single one has a slightly different meaning, right?
0:11:17 Pre-training versus post-training versus fine-tuning versus computing latent representations,
0:11:25 like past training runs. If I took Llama, if I’m a startup and I took Llama 3,
0:11:31 which costs, call it, you know, about $100 plus million to train, and then I fine-tuned it,
0:11:36 does their training expenditure apply to mine too? The bill’s authors have proposed
0:11:41 zero definitions around these
0:11:44 pretty important issues, right? Do you think that’s purposeful? Because, like, obviously,
0:11:48 if you leave it vague like that, that gives them so much power and control over all of this, right?
0:11:55 Look, I think there’s the generous interpretation and the, you know, the less generous one. The
0:11:59 generous one, you know, there’s this idea of Occam’s razor, right? The simplest explanation is usually
0:12:04 the right one. When I first read the bill, I was so worked up, I was like, wow, this has been
0:12:13 maliciously vague, right, to put this burden on model developers. When I then looked at the bill’s
0:12:18 authors and their backgrounds, then I realized that they just don’t know what they’re talking about,
0:12:25 right? I mean, I kid you not, there’s literally no one beyond, I think, one researcher on
0:12:32 that team who spent four months inside of a lab as an intern. They don’t have any experts on the
0:12:36 drafting team who’ve actually trained models, who’ve deployed them, who’ve worked for an extended
0:12:40 amount of time at startups that are frontier model companies. I mean,
0:12:43 I think they’re well-intentioned. I wish I could tell you they had the competence to have done
0:12:48 this maliciously. I think there’s good reason to believe they’re just way in over their heads with
0:12:54 no real-world experience here. Right, right. Don’t attribute to malice what you can, you know,
0:12:58 explain with ignorance or whatever. I don’t remember the exact quote, but that seems to be
0:13:05 what’s going on here. Right. We’ll be right back, but first, I want to tell you about another great
0:13:10 podcast you’re going to want to listen to. It’s called Science of Scaling, hosted by Mark Roberge,
0:13:16 and it’s brought to you by the HubSpot Podcast Network, the audio destination for business
0:13:21 professionals. Each week, host Mark Roberge, founding chief revenue officer at HubSpot,
0:13:25 senior lecturer at Harvard Business School, and co-founder of Stage 2 Capital,
0:13:30 sits down with the most successful sales leaders in tech to learn the secrets,
0:13:35 strategies, and tactics to scaling your company’s growth. He recently did a great episode called
0:13:40 How Do You Solve for Siloed Marketing and Sales, and I personally learned a lot from it.
0:13:45 You’re going to want to check out the podcast, listen to Science of Scaling wherever you get your
0:13:57 podcasts. I’m curious. If you were an advisor to help with creating some regulation, are there
0:14:01 things that you believe should be regulated, or do you think it should just be open door,
0:14:07 let’s just push forward, accelerate at all costs, or are there some areas where you’re like,
0:14:14 okay, these are areas I think should be regulated? Oh, I’m absolutely in favor of regulation. Let’s
0:14:19 make it clear. Models are powerful tools, like electricity; they can be used for good and bad,
0:14:26 and we should focus on preventing people from doing bad things with them. But this approach to
0:14:31 regulating the underlying technology and placing burdens on researchers instead of placing the
0:14:36 burdens on the misuses of the models is completely misguided. I have an issue with this particular
0:14:39 piece of legislation. I don’t have a problem with regulation, especially regulation that’s
0:14:46 thoughtful, that’s drafted in partnership with industry, that puts America first, that doesn’t
0:14:53 just hand away our entire AI startup industry to China. Yes, I’m absolutely in favor of regulation.
0:15:00 If you were asking me, if you were drafting legislation with policymakers to make
0:15:03 sure AI is developed safely and responsibly, what would you prioritize? I’d probably look
0:15:08 for three basic principles in that drafting. One, focus on the misuses, not the models.
0:15:13 Right? Focus on the malicious users, not the underlying infrastructure.
0:15:19 The second would be to prioritize concrete security problems over these sort of
0:15:26 super theoretical borderline sci-fi, doomsday terminator scenarios that they’re calling AI
0:15:32 safety. Our most pressing safety issue is not a model that autonomously goes rogue
0:15:38 and launches a cyber attack on our power grid. That’s the plot line of a Schwarzenegger movie.
0:15:42 Right? What is happening, and I know this because our portfolio companies are being
0:15:46 attacked by this, we get approached by law enforcement agencies all the time, is in fact
0:15:53 good old-fashioned spear phishing, misinformation attacks, and identity theft, where these
0:16:00 attacks are increasing in speed and scale because bad actors are using AI tools. It’s the same attack
0:16:06 vectors. We have laws that say these are illegal. We don’t need more laws to say these should be
0:16:12 even more illegal. What we do need is laws to bolster enforcement, invest in defensive tools
0:16:16 that our agencies can then use to fight this increasing speed and scale of AI-enabled attacks. That’s
0:16:20 the problem we should be focusing on. Right? Anyway, that’s the second thing. Let’s prioritize
0:16:27 concrete AI security over sort of doomsday safety scenarios that have almost zero empirical evidence
0:16:33 that these will ever come to pass. Then I think the third thing I would do is to really prioritize
0:16:40 open-source development in the United States to maintain the competitive edge we have globally.
0:16:46 Right? Because us placing these burdens on our startups, our open-source researchers,
0:16:51 our universities is not slowing down China. They’re full steam ahead. But if you prevent our
0:16:55 open-source ecosystem from collaborating, from putting up models that people can research,
0:16:58 can fine-tune, can red team to make them more secure, you’re going to hurt us,
0:17:03 and you’re hurting U.S. national competitiveness, nobody else’s, while everybody
0:17:08 else races ahead. Those are sort of the three simplest principles. We provided that feedback
0:17:15 ad nauseam, to be honest, to the senator, but none of the amendments to the bill have addressed
0:17:20 these core issues. Right, right. Speaking of China, I don’t know if you saw the news today,
0:17:26 but it looks like OpenAI is going to be banning ChatGPT in China. It looks like this is possibly
0:17:29 in collaboration with the U.S. government, or at the direction of the U.S. government.
0:17:36 So I do wonder if we’re going to end up in a scenario where OpenAI and the other major AI
0:17:41 players who are closed-source, if they’re already in collaboration with the U.S. government behind
0:17:46 the scenes. Someone from the NSA, I think, recently joined OpenAI’s board, and
0:17:52 Mira Murati, the CTO, openly said in an interview that they collaborate with the U.S.
0:17:56 government in terms of showing them the new models before they come out. I do wonder if
0:18:00 that’s going to lead to a world where, yeah, the closed-source models, they’re collaborating with
0:18:04 the U.S. government because the U.S. government sees this as a national security issue. It’s an asset
0:18:09 to the U.S., but also it’s a security threat. It’s a risk as well. And they might actually end up
0:18:14 pushing that there should be no open source because of that. I’m almost kind of with what
0:18:18 Founders Fund is saying, and that’s one area where I’m a little bit conflicted,
0:18:22 because I saw some of the people at Founders Fund saying that open-source AI can be dangerous,
0:18:25 right? And that’s actually going to help China. So, I’d love to hear your thoughts on that,
0:18:29 like how kind of a16z is more on the side of open source, and it seems like Founders
0:18:35 Fund is slightly against open-source for AI. Look, I think any arguments that claim
0:18:44 that open-source AI is a threat to national security are either, frankly, misinformed, in that
0:18:50 they’re just coming from a place of not knowing the true state of reality on the ground
0:18:54 or they’re malicious, in that they’re designed to hold the United States back.
0:19:00 And let me explain what I mean there. Number one is a very, I think,
0:19:08 misinformed understanding of the state of information security at the best labs, right?
0:19:16 There’s this idea we have that closed-source labs are so protective and secretive of their
0:19:23 weights that China doesn’t have them, and we somehow have this amazing competitive advantage
0:19:30 over China. For over 10 years now, the Chinese government has had a state-sponsored program
0:19:37 to infiltrate targets of valuable IP development in the United States, and it’s not AI-specific,
0:19:44 this is in all kinds of industrial processes. It is a nationally sponsored strategy by the
0:19:52 government of China to exfiltrate valuable IP from the United States to China. And while the FBI
0:19:57 and other enforcement agencies can’t comment on ongoing investigations, I will tell you that you
0:20:00 don’t have to look too far to find public evidence that this is already happening at the
0:20:07 frontier labs. Just two months ago, there was an engineer from Google who was caught by the FBI
0:20:13 boarding a plane to China with TPU schematics on a thumb drive. We’re not talking sophisticated
0:20:23 exfiltration here, guys, a thumb drive, okay? So, number one, I think any national
0:20:32 security game theory that folks are relying on must take into account the reasonable likelihood
0:20:37 that frontier model labs in the United States are already infiltrated by adversarial nation states.
0:20:42 Frankly, I think there’s good evidence already from our enforcement agencies, and ongoing
0:20:49 investigations that will soon become public will make that clear. But you just have to
0:20:55 go read the news to know that this is happening. So, number one, any national security strategy
0:21:01 that says, oh, we’re ahead and they can’t get our weights from closed-source labs, is already
0:21:06 giving away the game. Okay, so let’s start from an operating assumption that at best, we are at
0:21:12 par with them where they have our frontier developments today. I’m not even sure we can
0:21:19 claim we’re ahead. Let’s just say the goal is to remain at parity, right? The idea that open source
0:21:27 is somehow going to give away our national competitiveness fails to take into account
0:21:33 that the way we got to the frontier in the first place was through collaboration between researchers
0:21:40 of different labs, right? And the current big tech argument that, oh, open-sourcing our weights will
0:21:47 allow adversarial countries to get them does only one thing and one thing only. It allows them to
0:21:54 stop having to publish their research, and to have a convenient excuse to tell
0:21:58 their best researchers, who, by the way, want to publish their research, that they can’t. The way the best AI researchers
0:22:09 get more sort of feedback on their research is by presenting openly. The scientific process
0:22:13 is you put out your research, you share about it publicly, other people then provide feedback,
0:22:20 and then you improve, right? That entire process of open collaboration at the frontier of AI is
0:22:27 about to basically be all but dead. And one of the biggest ways that we have shot ourselves in
0:22:33 the foot is by preventing academia in the United States from contributing to that research, right?
0:22:40 Open source, for example, is today the only way that allows frontier university labs
0:22:47 to contribute to research at all. If Llama 3 was not open sourced, if Mistral was not open sourced,
0:22:56 Stanford, Berkeley, MIT, like these institutions, the postdocs, the PhDs there would have zero way
0:23:03 of contributing to AI research. And so I think if you believe that the public university system
0:23:08 and open collaboration between labs is critical to keeping our national competitiveness ahead,
0:23:15 then turning off open source is a great way to keep us behind, right? Especially at a time when
0:23:20 our labs are already infiltrated. So if the enemy already has our best and then we’re slowing down
0:23:26 our best, the most likely steady state is that we lose our national competitiveness and we fall
0:23:34 behind, right? So, as for the people advocating for these restrictions, who are
0:23:38 arguing that open source is bad for national security, frankly, a lot of them just don’t know
0:23:42 what they’re talking about because I don’t think they’re investors in enough frontier labs at this
0:23:47 point. And frankly, I just think there’s a bunch of people who are culturally misguided because
0:23:54 they think that these doomsday scenarios are more realistic than they really are.
0:23:59 So I have one sort of last question about SB 1047. It’s a little bit of like a devil’s
0:24:06 advocate question. So if this is like a California bill, you know, when people just argue, well,
0:24:11 just go do your research outside of California. Like, I don’t know, just I’m curious your thoughts
0:24:19 on that. Yeah, it’s a good question. So unfortunately, the drafting of the bill was amended to be
0:24:26 even more clear that the bill stretches across state lines. You know, up until last week,
0:24:30 there was some debate like, oh, Anj, it doesn’t say that this applies outside of California.
0:24:35 But the bill’s authors went out of their way to make it clear that the statute would
0:24:44 reach across state borders. Oh my god. So really, I mean, I believe the bill’s authors have
0:24:50 actually been promoting that as a feature of the bill, not a bug. So this legislation is nationwide,
0:24:54 whether we like it or not, or they’re proposing it to be nationwide. So I think the most
0:25:01 likely scenario will be that our best researchers, our best teams will move offshore to this emerging
0:25:08 kind of region across the world that I’m calling an AI sanctuary. Basically, there’s sort of three
0:25:15 things you need now as a world-class startup or a model research team. You need cheap electricity,
0:25:23 right, cheap, abundant, sort of sustainable, clean electricity to run the massive amounts of
0:25:27 compute you need to train these models. And the last thing is you need regulatory
0:25:32 certainty and protection to train these models where you’re not being as a researcher, you’re
0:25:38 not being held with civil and criminal liabilities. And you know what, frankly, I have been shocked
0:25:46 by how many, you know, nations have reached out since we started publicly speaking about the
0:25:51 bill saying, hey, please send us your best and brightest. We will gladly protect them without
0:25:57 regulations. And I think that will mean that our best companies go offshore to places that are
0:26:01 offering them cheap and abundant energy, compute and regulatory protection.
0:26:06 Yeah. So let’s talk about what people can do. If, you know, you’re listening to this and you’re
0:26:11 going, OK, yeah, this definitely sounds like we need to stop this from happening, what can we do
0:26:18 about it? Yeah, I’m glad you asked. So, StopSB1047.com. It’s a public website that’s a hub where researchers,
0:26:22 academics, and anybody else concerned about the impact of the bill can go and write to their
0:26:28 legislators. So if you oppose the bill, please visit the site. We’ve got a templatized letter that
0:26:34 you can then customize for yourself and send it to your assembly representative. We have a list
0:26:39 there where you can easily pick who your representative is. We released the website
0:26:47 last week. And in the first four days, we had the community send over 375 letters to the assembly.
0:26:52 And so this is an important issue that a lot of people, a lot of startups, a lot of academics,
0:26:58 and a lot of open source researchers are concerned about. But we need to get the word out to even
0:27:03 more people. We have less than 60 days before the final assembly vote on this proposed law.
0:27:07 So please tell others about the site, share the information, raise awareness among those who’ll
0:27:12 be impacted by this bill. We basically think Little Tech deserves to have its voices heard.
0:27:15 And so if you visit the website, we make it super simple for you to understand
0:27:19 how this bill impacts you if you’re Little Tech and how to take action, which is to send a letter
0:27:26 to your representative and make your voice heard. You know, the message about helping small startups,
0:27:30 like I personally feel that, but I think a lot of people are probably going to resonate more with
0:27:34 the fact that this is very important for the future of America. It’s kind of appropriate
0:27:40 that this is like right after July 4th, when we’re celebrating America. And personally,
0:27:45 that’s the thing that inspires me: this is very important. Like if we just like hand AI
0:27:50 dominance to China, like the reason America has been so successful is that we were, you know,
0:27:55 we were the leaders in culture for a long time with like entertainment. We were leaders in technology,
0:28:02 internet. And so that’s why freedom has spread around the world. And if we don’t win in AI,
0:28:04 it’s probably going to be the opposite of freedom that’s spreading around the world
0:28:10 through China. And so I think it’s a big problem. And the opposite of freedom when you
0:28:16 have AI can be quite scary, when you can use AI to mass control people. And so I believe
0:28:21 that it’s really important that America wins this. And I think that more people probably will
0:28:24 resonate with that kind of message versus, you know, obviously, Silicon Valley people,
0:28:28 we’re like, yeah, we want to support the startups. Right. But for a lot of other people, I think that’s
0:28:34 a more powerful message. I think you’re absolutely right. And there’s a professor at
0:28:42 Berkeley, Ion Stoica. He testified in Sacramento last week saying that this bill, well, this bill
0:28:45 is called the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
0:28:50 In its current form, it would do the opposite. It will hurt the innovation in California,
0:28:55 and it will result in a more dangerous rather than a safer world. And he goes on to explain that
0:29:02 first if SB 1047 passes, when it comes to open source models, he predicts that within one year,
0:29:08 we will all use open source models developed overseas, likely in China. Why? Because this law
0:29:13 will discourage building open source models in California, and likely the United States,
0:29:18 and Chinese open source models are already very competitive. And three of the top six
0:29:24 open source models are already Chinese, according to the Berkeley LMSYS Chatbot Arena evaluation.
0:29:29 The second is that if SB 1047 passes, then California will lose its competitive edge when it
0:29:35 comes to AI. Because as a researcher in a fast moving field, you don’t want to be constrained by
0:29:39 such limitations. So you just go elsewhere, where you can do your best research. So more and more
0:29:44 PhD students of Chinese origin will just go back to China, while others might consider going to
0:29:51 places like, you know, other adversarial countries, where they can enjoy huge funding for their
0:29:57 research. And this is already happening, according to him. And he’s a leading academic at one of the
0:30:02 preeminent American university labs. And so when he’s saying it, we really have to, I think,
0:30:08 sit up and pay attention. And then the last thing he did talk about is how SB 1047 incentivizes
0:30:13 companies that sell to enterprises to move out of California, since most of their enterprise
0:30:18 customers already have headquarters, you know, out of California. And so for the
0:30:22 California market, they will have to basically provide inferior models
0:30:27 to conform with SB 1047, which will mean that the state will turn from a leader to a laggard.
0:30:33 And that’s not a future any of us wants. And so, you’re right, Nathan, that this is
0:30:39 not just a California issue. This is an America issue. And I don’t think enough people across
0:30:44 the U.S. realize just how dangerous this piece of legislation is for all of America. And I think
0:30:49 more people should be talking about it the way you are. I think on that note, that’s
0:30:56 probably how we’ll wrap up the episode. But everybody can head over to StopSB1047.com to
0:31:02 get more details about the bill as well as more details about how to help prevent this bill from
0:31:06 actually getting passed. And Anjney, thank you so much for hanging out with us today. This has
0:31:10 been a fascinating discussion. And I think it’s really going to open a lot of people’s eyes. I
0:31:14 don’t think a lot of people even realize that this is kind of happening behind the scenes. It
0:31:18 doesn’t seem like it’s getting a lot of publicity right now. So I think it’s important that we
0:31:22 have these discussions and let people know that this is happening. This is what the California
0:31:27 government’s shooting for. So if you like what we’re getting out of AI right now, and you like
0:31:32 the progress we’ve seen, we need to do something about this. So I appreciate you sharing all your
0:31:36 thoughts and all the details about this, because I do think it’s going to be eye-opening to a lot of
0:31:41 people. Oh, thank you guys. I’m a huge fan of the pod, and, you know, helping us spread the word and
0:31:48 get the message about the cause out is deeply appreciated. So thank you. Absolutely. Amazing.
0:32:04 Thanks again. All right. Thanks, guys.
Episode 15: Is the future of AI development under threat due to new legislation? Matt Wolfe (https://x.com/mreflow) and Nathan Lands (https://x.com/NathanLands) are joined by Anjney Midha (https://x.com/AnjneyMidha), a General Partner at a16z and a prominent voice in the tech community and advocate against SB 1047.
In this episode, Anjney Midha dives deep into the potential ramifications of California’s proposed bill, SB 1047, on the tech industry, startups, and researchers. The discussion covers why regulations could force AI companies to leave California, how the bill might favor big tech over smaller developers, and the broader implications for America’s leadership in AI. Anjney also introduces StopSB1047.com, a platform to raise awareness and mobilize opposition to the bill.
Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd
—
Show Notes:
- (00:00) Proposed bill aims to hold AI developers accountable.
- (05:29) Launching website to stop SB 1047, frustration.
- (07:21) Open source model certification liability issue explained.
- (09:57) Sponsor questions bill’s effect on tech companies.
- (15:59) OpenAI may ban ChatGPT in China.
- (17:14) Open source AI not a national security threat.
- (23:31) Global legislation may lead to AI sanctuary.
- (26:06) Small startups need support to prevent AI dominance.
- (27:57) Chinese dominance in open source AI models.
—
Mentions:
- Anjney Midha: https://a16z.com/author/anjney-midha/
- a16z: https://a16z.com/
- StopSB1047.com: https://www.stopsb1047.com/
- Ion Stoica: http://people.eecs.berkeley.edu/~istoica/
—
Check Out Matt’s Stuff:
• Future Tools – https://futuretools.beehiiv.com/
• Blog – https://www.mattwolfe.com/
• YouTube- https://www.youtube.com/@mreflow
—
Check Out Nathan’s Stuff:
- Newsletter: https://news.lore.com/
- Blog – https://lore.com/
The Next Wave is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Darren Clarke // Editing by Ezra Bakker Trupiano