David Sacks: AI, Crypto, China, Dems, and SF

AI transcript
0:00:03 The Europeans, I mean, they have a really different mindset for all this stuff.
0:00:10 When they talk about AI leadership, what they mean is that they’re taking the lead in defining the regulations.
0:00:14 You know, they get together in Brussels and figure out what all the rules should be, and that’s what they call leadership.
0:00:17 It’s almost like a game show or something. They do everything they can to strangle them in their crib.
0:00:23 And then if they make it through, like, a decade of abuse as small companies, then they’re going to give them money to grow.
0:00:29 Ronald Reagan had a line about this, which is: if it moves, tax it. If it keeps moving, regulate it. If it stops moving, subsidize it.
0:00:33 The Europeans are definitely at the subsidizing stage.
0:00:38 AI and crypto now sit at the center of the global race for technological and economic leadership.
0:00:45 Today, you’ll hear from David Sacks, Marc Andreessen, and Ben Horowitz on what it takes for America to stay ahead.
0:00:51 We discussed the Trump administration’s new approach to AI and crypto policy, the balance between innovation and regulation,
0:00:57 and how the U.S. can lead on energy, chips, and open source while avoiding the mistakes of over-regulation.
0:00:58 Let’s get into it.
0:01:04 David, welcome to the a16z podcast. Thanks for joining.
0:01:05 Yeah, good to be here.
0:01:11 So, David, you’re the AI and crypto czar. Why don’t you first talk about why it makes sense to have those as a portfolio?
0:01:15 What do they have to do with each other? And then I’ll have you lay out what’s the Trump plan on those two categories and how we’re doing.
0:01:19 Well, there are two technologies that I guess are relatively new.
0:01:22 And so there’s a lot of fear of them.
0:01:26 And I think people don’t actually know that much about them.
0:01:27 They don’t really know what to make of them.
0:01:33 I think that from a policy standpoint, and we can talk about the similarities and differences, the approaches are a little different.
0:01:38 I think with crypto, the main thing that’s needed is regulatory certainty.
0:01:44 All the entrepreneurs I’ve talked to over the years, they all say the same thing, which is just tell us what the rules are.
0:01:47 We’re happy to comply, but Washington won’t tell us what they are.
0:01:58 And in fact, during the Biden years, you had an SEC chairman who took an approach, which I guess has been called regulation through enforcement, which basically means you just get prosecuted.
0:02:08 They don’t tell you what the rules are, you just basically get indicted, and then everyone else is supposed to divine what the rules are as you get prosecuted and fined and imprisoned.
0:02:10 So that was the approach for several years.
0:02:16 And as a result of that, basically the whole crypto industry was in the process of moving offshore.
0:02:21 And America, I think, was being deprived of this industry of the future.
0:02:34 And so President Trump, during his campaign last year, he gave a now-famous speech in Nashville in which he declared that he would make the United States the crypto capital of the planet, and that he would fire Gensler.
0:02:36 That was like the big applause line.
0:02:36 I think he—
0:02:38 I applauded.
0:02:43 He’s talked about how surprised he was at what a big ovation he got at that.
0:02:45 So he said it again, and the crowd erupted again.
0:03:01 But in any event, he promised basically to provide this clarity so that the industry would understand what the rules are and be able to comply; in turn, that should provide greater protection for consumers and businesses, everyone who’s part of the ecosystem, and make America more competitive.
0:03:06 So I think the mandate on crypto, in a way, is pro-regulation.
0:03:09 It’s basically we want to put in place regulations.
0:03:14 In a way, AI is kind of the opposite, where I think the Biden administration was too heavy-handed.
0:03:19 They were starting to really regulate this area without even understanding what it was.
0:03:24 No one had really taken the time to understand how AI was even being used, what the real dangers were.
0:03:26 There was this intense fear-mongering.
0:03:38 And as a result of that, the approach of the Biden administration was, they were in the process of implementing very heavy-handed regulations on both the software and hardware side.
0:03:39 And we can drill into that.
0:03:46 I think that with the Trump administration, the approach has been that we want the United States to win the AI race.
0:03:48 It’s a global competition.
0:03:52 Sometimes we mention the fact that China is probably our main competitor in this area.
0:04:01 They’re the only other country that has the technological capability, the talent, the know-how, the expertise to beat us in this area.
0:04:03 And we want to make sure the United States wins.
0:04:08 And, of course, in the U.S., it’s not really the government that’s responsible for innovation.
0:04:09 It’s the private sector.
0:04:11 So that means that our companies have to win.
0:04:16 And if you’re imposing all sorts of crazy burdensome regulation on them, then that’s going to hurt, not help.
0:04:26 So the president gave a, I think, very important AI policy speech a couple months ago on July 23rd, where he declared in no uncertain terms that we had to win the AI race.
0:04:29 And he laid out several pillars on how we do that.
0:04:35 It was pro-innovation, pro-infrastructure, which also means pro-energy and pro-export.
0:04:37 And we can drill into all those things if you want.
0:04:38 But that was the top line.
0:04:44 And so I think that, again, with AI, the idea is kind of like, how do we unleash innovation?
0:04:48 And I think with crypto, it’s been more about how we create regulatory certainty.
0:04:50 But, you know, in terms of my role, like, why am I doing both?
0:04:53 I mean, I think the common denominator is just, again, these are new technologies.
0:04:59 They both obviously come from the tech industry, which has a very different culture than Washington does.
0:05:05 And I kind of see it as my role to help be a bridge between what’s happening in Silicon Valley and what’s happening in Washington.
0:05:22 And helping Washington understand not just the policy that’s needed or the innovation that’s happening, but also kind of culturally, what makes the tech industry different and special and how that needs to be protected from a government doing something excessively heavy-handed.
0:05:28 So, David, you know, we’re going to talk a lot about AI today, but just on crypto, I’ve had this interesting experience this year, kind of after the election.
0:05:31 Kind of people adjusted to the change of government.
0:05:39 And I’ve had this discussion with a number of people, let’s say, in politics, who were previously anti-crypto, who have been trying to figure out how to kind of get to a more sensible position.
0:05:47 And then also, actually, people in the financial services industry who kind of followed it from a distance and maybe participated in the various debanking things without really understanding what was happening.
0:05:52 But the common denominator has been, they’re like, Mark, I didn’t really understand how bad it was.
0:05:59 I basically thought you guys in tech were basically just whining a lot and pleading as a special interest and kind of doing the normal thing.
0:06:08 And I figured the horror stories were kind of made up, people getting prosecuted and entrepreneurs getting their houses raided by the FBI and like the whole panoply of things that happened.
0:06:12 And now, in retrospect, now that I go back and look, I’m like, oh, my God, this was actually much worse than I thought.
0:06:13 Do you have that experience?
0:06:18 And as you’re in there and kind of as you now have a complete view of everything that happens, do you think people understand actually how bad it was?
0:06:20 I mean, I think it’s a great point.
0:06:21 I mean, I didn’t really know either.
0:06:22 You kind of heard generally.
0:06:25 I mean, we knew that there was debanking going on.
0:06:30 And by the way, it wasn’t just crypto companies that were being debanked, but their founders were being debanked personally.
0:06:34 So if you were the founder of a crypto company, you couldn’t open a bank account.
0:06:35 I mean, that’s a huge problem.
0:06:36 It’s like, how do you transact?
0:06:37 How do you make payments?
0:06:39 How do you pay people?
0:06:41 I mean, it basically deprives you of a livelihood.
0:06:42 It’s a very extreme form of censorship.
0:06:44 So that was definitely happening.
0:06:48 And then, of course, you have all the prosecutions that the SEC was behind.
0:06:49 So, yeah, it was really bad.
0:06:54 I remember back in, I think it was in March, we had a crypto summit at the White House.
0:07:02 And one of the attendees said that a year ago, I would have thought it was more likely that I’d be in jail than that I’d be at the White House.
0:07:05 And so it was a really big milestone for the industry.
0:07:09 They’d never, ever received any kind of recognition like that.
0:07:13 The idea that this was even an industry that you would do an event at the White House.
0:07:18 I mean, at a minimum, I think crypto was seen as very déclassé.
0:07:21 But in any event, yeah, no, it’s been a huge shift.
0:07:23 I mean, we basically have stopped that.
0:07:43 One of the things that is very different between crypto and AI that we’ve noticed is that on the crypto front, everybody just wanted rules.
0:07:56 And the industry was relatively unified, whereas in AI, we’ve seen very interesting kind of calls coming from inside the house with certain companies really going for regulatory capture.
0:08:02 People who have early leads saying, let’s cut off all new companies from developing AI and so forth.
0:08:03 What do you make of that?
0:08:04 And where do you think that’s going?
0:08:07 I think it’s a very big problem.
0:08:13 I actually recently criticized one of our AI model companies for engaging in a regulatory capture strategy.
0:08:14 Yes.
0:08:16 It’s a very fair criticism, by the way.
0:08:18 It is very fair.
0:08:20 And actually, of course, they denied it.
0:08:22 And then, should I tell the story?
0:08:47 I mean, rarely do you get vindicated on X so thoroughly and completely as I did on this, because after this company, which was basically Anthropic, denied it, what happened is that Jack Clark, who’s a co-founder and head of policy for Anthropic, gave a speech at a conference where he compared fear of AI to a child seeing monsters in the dark, or thinking there were monsters in the dark.
0:08:49 But then, you turn the lights on and the monsters are there.
0:08:53 I thought that was such a ridiculous analogy.
0:08:55 I mean, it’s basically puerile.
0:09:01 I mean, it’s so childish as to be almost self-indicting, because you’re basically admitting the fear is made up, not real.
0:09:05 In any event, so I said, well, this is like fear-mongering and part of the regulatory capture strategy.
0:09:07 And of course, they denied it.
0:09:17 But then, a lawyer who was in the crowd at his speech said, well, yeah, but Jack’s not telling you what he said during the Q&A, where he basically admitted what Anthropic was really doing.
0:09:23 Things like SB 53, which is supposedly just about implementing transparency:
0:09:32 he admitted that was just a stepping stone to their real goal, which was to get a system of pre-approvals in Washington before you can release new models.
0:09:38 And he admitted, as part of the Q&A, that making people very afraid was part of their strategy.
0:09:43 So again, just as much of a smoking gun as you could ever get in a spat on X.
0:09:55 But the reason why I think that that approach is so damaging is that the thing that’s really made, I think, Silicon Valley special over the past several decades is permissionless innovation.
0:09:56 Right.
0:09:59 It’s the two guys in a garage can just pursue their idea.
0:10:03 Maybe they raise some capital from angels or VC firms.
0:10:06 Basically, people who are willing to lose all of their money.
0:10:09 And these are people who are young founders.
0:10:11 They could also be a future dropout in a dorm room.
0:10:13 And they’re able just to pursue their idea.
0:10:27 And the only reason that I think has happened in Silicon Valley, whereas you look at industries like, I don’t know, like pharma or healthcare or defense or banking or these highly regulated industries where you just don’t see a lot of startups, is because they’re all heavily regulated.
0:10:31 Which means you have to go to Washington to get permission to do things.
0:10:40 And the thing I’ve seen in Washington is just that, you know, the approvals get set up for reasons, but those reasons very quickly stop mattering.
0:10:47 And it just matters, like, how good your government affairs team is at navigating through the bureaucracy and figuring out how to get those approvals.
0:10:51 And it’s not something that your typical startup founders are going to be good at.
0:10:57 It’s something that big companies get good at because they’ve got the resources, and that’s exactly what regulatory capture means.
0:11:08 So the whole basis of Silicon Valley success, the reason why it’s really the crown jewel of the American economy and the envy of the rest of the world.
0:11:12 We see all these attempts by all these other countries to create their own Silicon Valley.
0:11:15 The reason that’s the case is because of permissionless innovation.
0:11:24 And what is being contemplated and discussed and implemented with respect to AI is an approval system for both software and hardware.
0:11:26 And this is not theoretical.
0:11:27 This has already been happening.
0:11:37 On the hardware side, one of the last things that the Biden administration did the last week of the Biden administration was impose the so-called Biden diffusion rule,
0:11:50 which requires that every sale of a GPU on Earth be licensed by the government, which is to say pre-approved, unless it fits into some category of exception.
0:11:58 So, basically, the overall idea is that compute is now going to be a licensed and pre-approved category.
0:12:00 We rescinded that.
0:12:10 And then on the software side, like I said, I mean, the goal very clearly is to start with these reporting requirements to the government, to the states.
0:12:17 And then where that ramps up to is you have to go to Washington to get permission before you release a new model.
0:12:23 And, you know, this would drastically slow down innovation and make America less competitive.
0:12:27 I mean, you know, these approvals can take months.
0:12:28 They can take years.
0:12:34 When a new chip is released every year and we have license requests that have been sitting in the hopper for two years,
0:12:38 I mean, the requests are obsolete by the time they finally get approved.
0:12:45 And that would be even more true with models where, you know, the cycle time is, you know, like three or four months for a new model.
0:12:53 I mean, and what exactly is a bureaucracy in Washington going to know about this technology
0:12:56 that they’re going to be in a good position to approve in any event?
0:12:58 But this is what is being contemplated right now.
0:13:05 And I think it would be a disaster for Silicon Valley, for innovation, and therefore for American competitiveness.
0:13:13 And I think we will lose the AI race to countries like China if, you know, this is the set of rules that we have.
0:13:20 Yeah, one of the really diabolical things about their argument is, if they really believed there was a monster,
0:13:24 then why are they buying GPUs, like, at a rate faster than anybody?
0:13:32 And then the other thing that we know from being in the industry is their reputation is they have literally the worst security practices
0:13:34 in the entire industry with respect to their own code.
0:13:40 So if you were building this monster, the last thing you’d want to do is, like, leave a bunch of holes around for people to hack it.
0:13:42 So they don’t believe anything they’re saying.
0:13:47 It’s, like, completely made up to try and maintain their lead.
0:13:56 Well, I think it’s a heady drug to basically say that, you know,
0:14:01 we’re creating this new superintelligence that could destroy humanity,
0:14:07 but we’re the only ones who are virtuous enough to ensure that this is done correctly.
0:14:07 Right?
0:14:09 And I think that, you know.
0:14:10 It’s a good recruiting tool.
0:14:11 Yeah.
0:14:12 Join the virtuous team.
0:14:13 Yes.
0:14:14 I think that’s right.
0:14:21 But, yeah, I think of all the companies,
0:14:25 that particular one has been the most aggressive in terms of the regulatory capture
0:14:28 and pushing for these regulations.
0:14:32 And just, I mean, let’s just bring it up a level.
0:14:33 Just doesn’t have to be about them.
0:14:40 There’s now something like 1,200 bills going through state legislatures right now to regulate AI.
0:14:48 25% of them are in the top four blue states, which are California, New York, Colorado, and Illinois.
0:14:52 Over 100 measures have already passed.
0:14:56 I think three of them just got signed in the last month in California alone.
0:14:59 I’ll tell you, just let me tell you what Colorado did.
0:15:05 Actually, Colorado, Illinois, and California have all done some version of a thing called
0:15:11 algorithmic discrimination, which I think is really troubling in terms of where it’s headed.
0:15:19 What this concept means is that if the model produces an output that has a disparate impact
0:15:23 on a protected group, then that is algorithmic discrimination.
0:15:27 And the list of protected groups is very long.
0:15:29 It’s more than just the usual ones.
0:15:37 So, for example, in Colorado, they’ve defined people who may not have English language proficiency
0:15:39 as a protected group.
0:15:43 So, I guess if the model says something bad about, you know, illegal aliens,
0:15:46 then that would be, you know, that would basically violate the law.
0:15:51 I don’t know exactly how model companies are even supposed to comply with this rule.
0:15:56 I mean, presumably, discrimination is already illegal.
0:15:59 So, if you’re a business and you violate the civil rights laws and you engage in discrimination,
0:16:01 you’re already liable for that.
0:16:06 You know, there’s no reason, if you happen to, you know, make that mistake and
0:16:11 you use any kind of tool in the process of doing it, to go after the tool
0:16:15 developer, because we can already go after the business that’s made that decision.
0:16:20 But the whole purpose of these laws is to get at the tool.
0:16:27 They’re making not just the business that is using AI liable, they’re making the tool developer liable.
0:16:32 And I don’t even know how the tool developer is supposed to anticipate this because how do you know
0:16:34 all the ways that your tool is going to be used?
0:16:41 How do you know that this output, you know, especially if the output is 100% true and accurate
0:16:48 and the model is just doing its job, you know, then how are you supposed to know that that output
0:16:52 was used as part of a decision that had a disparate impact?
0:16:53 Nevertheless, you’re liable.
0:16:58 And the only way that I can see for model developers to even attempt to comply with this
0:17:06 is to build a DEI layer into their models that tries to anticipate, could this answer have a disparate
0:17:06 impact?
0:17:12 And if it does, we either can’t give you the answer, or we have to sanitize or distort the answer.
0:17:18 And, you know, you just take this to its logical conclusion and we’re back to, you know, woke AI,
0:17:23 which, by the way, was a major objective of the Biden administration, that Biden executive
0:17:28 order on AI that we rescinded as part of the Trump administration had something like 20
0:17:29 pages of DEI language in it.
0:17:34 They were very much trying to promote DEI values, they called it, in models.
0:17:37 And then we saw what the results of that were.
0:17:42 You know, we saw the whole black George Washington thing where history was being rewritten in real
0:17:46 time because somebody built, you know, a DEI layer into the model.
0:17:55 And, you know, I almost feel like the term woke AI is insufficient to explain what’s going
0:17:56 on because it somehow trivializes it.
0:17:59 I mean, what we’re really talking about is Orwellian AI.
0:18:09 You know, we’re talking about AI that lies to you, that distorts answers, that rewrites history
0:18:13 in real time to serve the current political agenda of the people who are in power.
0:18:16 I mean, it’s very Orwellian.
0:18:21 And we were definitely on that path before President Trump’s election.
0:18:23 It was part of the Biden EO.
0:18:27 We saw it happen, you know, in the release of that first Gemini model.
0:18:32 That was not an accident that, you know, that those distorted outputs came from somewhere.
0:18:45 So, you know, to me, the biggest risk of AI actually is not the one described by James Cameron.
0:18:46 It was described by George Orwell.
0:18:49 You know, in my view, it’s not the Terminator.
0:18:57 It’s 1984 that, you know, that as AI eats the internet and becomes the main way that we interact
0:19:06 and get our information online, that it’ll be used by the people in power to control the information we receive,
0:19:12 that it’ll contain an ideological bias, that essentially it’ll censor us.
0:19:17 All that trust and safety apparatus that was created for social media will be ported over to this new world of AI.
0:19:20 Mark, I know that you’ve spoken about this quite a bit.
0:19:21 I think you’re absolutely right about that.
0:19:27 And then on top of that, you’ve got the surveillance issues where, you know, AI is going to know everything about you.
0:19:30 It’s going to be your kind of personal assistant.
0:19:37 And so it’s kind of the perfect tool for the government to monitor and control you.
0:19:41 And to me, that is by far the biggest risk of AI.
0:19:44 And that’s the thing we should be working towards preventing.
0:19:52 And the problem is a lot of these regulations that are being whipped up by these fear-mongering techniques,
0:20:00 they’re actually empowering the government to engage in this type of control that I think we should all be very afraid of, actually.
0:20:07 Sam Altman earlier this week said that by 2028, he expects to have automated researchers.
0:20:14 I’m curious for your read on the state of AI model development, or just progress in general,
0:20:16 and what do you think are the implications?
0:20:21 Some people have been, you know, saying that AGI is two years away,
0:20:26 sort of the AI 2027 paper, or Leopold Aschenbrenner’s Situational Awareness paper.
0:20:30 I’m curious kind of what’s your reading of the state of play in terms of AI development
0:20:32 and what are the implications from that?
0:20:39 So my sense is that people in Silicon Valley are kind of pulling back from the, let’s call it, imminent AGI narrative.
0:20:43 I saw Andrej Karpathy gave an interview where now, all of a sudden,
0:20:48 he’s re-underwritten this, and he says AGI is at least a decade away.
0:20:53 He’s basically saying that, you know, reinforcement learning has its limits.
0:20:54 I mean, it’s very useful.
0:20:57 It’s the main paradigm right now that they’re making a lot of progress with.
0:21:01 But he says that actually the way that humans learn is not really through reinforcement.
0:21:08 We do something a little different, which I think is a good thing because it means that human and AI will be synergistic, right?
0:21:15 I mean, the AI is understanding if it’s based on RL will be a little different than the way that we intuit and reason.
0:21:20 But in any event, I sense more of a pullback from this imminent AGI narrative.
0:21:23 You know, the idea that AGI is two years away.
0:21:33 Of course, it’s like kind of unclear what people mean by AGI, but it’s kind of, you know, was used in this like scary way that it’s kind of the super intelligence that would grow beyond our control.
0:21:42 I feel like people are pulling back from that and understanding that, yes, we’re still making a lot of progress and the progress is amazing.
0:21:47 But at the same time, you know, what we mean by intelligence is multifaceted.
0:21:54 And it’s not like, you know, there’s progress being made along some dimensions, but it’s not along every dimension.
0:22:14 And so, therefore, I think, I mean, I’ve described the situation we’re in right now as a little bit of a Goldilocks scenario where, you know, the extremes would be, on one side, the scary Terminator situation, imminent superintelligence that’ll grow beyond our control.
0:22:17 And the other narrative you hear in the press a lot is that we’re in a big bubble.
0:22:19 So, in other words, the whole thing is fake.
0:22:23 And the media is basically pushing both narratives at the same time.
0:22:27 But in any event, I think that the truth is more in the middle.
0:22:31 It’s kind of a Goldilocks scenario where we’re seeing a lot of innovation.
0:22:35 I think the progress is impressive.
0:22:40 I think we’re going to see big productivity gains in the economy from this.
0:22:46 But I like the observations that Balaji made recently where he said there’s a couple of things that really struck me.
0:23:01 One was, AI is polytheistic, not monotheistic, meaning instead of just one all-knowing, all-powerful God, what we’re seeing is a bunch of smaller deities, more specialized models.
0:23:11 You know, it’s not that sort of, we’re not on that kind of recursive self-improvement track just yet.
0:23:15 But, you know, we’re seeing many different kinds of models make progress in different areas.
0:23:26 And then the other one was just his observation that AI was middle-to-middle, whereas humans are end-to-end, and therefore the relationship is pretty synergistic.
0:23:28 And I think that’s right.
0:23:33 I mean, I think all those observations resonate with me in terms of where we’re at right now.
0:23:48 Yeah, and that’s very consistent with what we’re seeing as well, where, you know, ideas that we thought would for sure get subsumed by the big models are becoming amazingly differentiated businesses.
0:24:00 Just because the fat tail of the universe is very fat, and you need really kind of specific understanding of certain scenarios to build an effective model.
0:24:02 And that’s just how it’s going.
0:24:06 You know, no model has just, like, figured out how to do everything.
0:24:10 Yeah, I mean, and the models work best when they have context, you know.
0:24:21 And the more, I mean, we’ve all seen this, the more general your prompt, the less likely it is that you’re going to be able to, you know, get a great response.
0:24:35 And, I don’t know, if you tell the AI, you know, something very general, like, what business can I create to make a billion dollars, it’s not going to give you something actionable, you know.
0:24:46 You have to get very specific about what you’re trying to do, and it has to have access to relevant data, and then it can give you some specific answers to a prompt.
0:24:53 And I think this is, you know, partly Balaji’s point, which is, you know, the AI does not come up with its own objective.
0:24:55 You know, it needs to be prompted.
0:24:56 It needs to be told what to do.
0:25:00 We’ve seen no evidence that that’s, at this stage, that that’s changing.
0:25:07 We’re still at step zero in terms of AI kind of, you know, somehow coming up with its own objective.
0:25:14 And as a result of that, you know, the model has to be prompted, and then it gives you an output, and that output has to be validated.
0:25:18 You have to somehow make sure it’s correct, because models can still be wrong.
0:25:26 And more likely, you have to iterate a few times, because it doesn’t give you exactly what you want, so now you kind of reprompt.
0:25:27 And we’ve all had this experience, right?
0:25:38 This is why, like, the chat interface is so necessary, is because it takes you a few times to kind of iterate, to get to the output that actually has value for you.
0:25:42 Again, you know, the humans are end-to-end, and the AI is middle-to-middle.
0:25:48 I just don’t, you know, we haven’t seen any evidence that that fundamental dynamic is changing.
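To make that loop concrete, here is a minimal sketch of the prompt, validate, reprompt cycle being described. This is an illustration, not any vendor’s actual API: call_model and looks_valid are hypothetical stand-ins, and the point is just that the human supplies the objective at the start and the validation at the end, while the model handles the middle.

```python
# Minimal sketch of the prompt -> validate -> reprompt loop described above.
# call_model and looks_valid are hypothetical stand-ins, not a real vendor API.

def call_model(prompt: str) -> str:
    """Stand-in for any chat-completion call; wire up a real provider here."""
    raise NotImplementedError

def looks_valid(output: str) -> bool:
    """Stand-in for the human (or scripted) check that the output is usable."""
    return bool(output.strip())

def iterate_on_task(task: str, max_rounds: int = 4) -> str:
    # The human sets the objective; the model never picks its own.
    prompt = task
    output = ""
    for _ in range(max_rounds):
        output = call_model(prompt)
        # The human is "end-to-end": the output has to be validated
        # before it has any value.
        if looks_valid(output):
            return output
        # Otherwise, reprompt with feedback, narrowing the context each
        # round; this back-and-forth is what the chat interface is for.
        prompt = (f"{task}\n\nYour previous attempt:\n{output}\n\n"
                  "That wasn't quite right; please revise.")
    return output  # best effort after max_rounds
```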
0:25:53 I mean, I’d love to hear what you guys think about this.
0:25:55 I mean, we’re obviously at the outset of agents.
0:26:00 And, you know, with agents, you can give them an objective, and then they’ll be able to take on tasks on your behalf.
0:26:05 But I suspect that the agents will work better as well when they have a much more narrow context.
0:26:10 They’re much less likely to go off the rails, start going in weird directions.
0:26:19 If you give it, like, a very broad task, it’s just not likely to completely figure it out before it needs human intervention.
0:26:23 But if you give it something very narrow to do, then it’s much more likely to be successful.
0:26:32 So, you know, I would just guess, like, okay, you tell the AI, you know, sell my product.
0:26:38 You know, it’s very unlikely that it’s going to figure out, like, what that means and how to do that.
0:26:48 But if you’re a sales rep and you’re using the AI to help you, there’s probably a very specific task that you can tell it to do, and it would be much more successful doing that.
0:26:52 So I just tend to think, I mean, this also kind of speaks to the whole job loss narrative.
0:26:58 I just think that this is going to be a very synergistic tool for a long time.
0:27:02 I don’t think it’s going to wipe out human jobs.
0:27:06 I don’t think the need for human cognition is going away.
0:27:13 It’s something that, you know, we’ll all use to kind of get this big productivity boost, at least for the foreseeable future.
0:27:15 I mean, I don’t know.
0:27:18 I don’t know if anyone can, any of us can predict what’s going to happen beyond five or 10 years.
0:27:20 But, I mean, that’s just what I’m seeing right now.
0:27:21 I don’t know.
0:27:21 I’m curious.
0:27:23 What are you guys seeing at this front?
0:27:25 Yeah, I’m generally consistent with that.
0:27:26 Things are improving.
0:27:34 So, like, on agents, the early agents, the longer the running task, the more they would go, like, completely bananas and off the rails.
0:27:37 People are working on that.
0:27:41 I do think, like, everything’s working better in a context.
0:27:44 At least from what we’ve seen, that will continue.
0:27:53 And even, you know, to your point on, like, super smart models, there’s, like, a dozen video models out there.
0:27:59 And there’s not one that’s the best at everything or even close to the best at everything.
0:28:03 There’s, like, literally a dozen that are all the best at one thing.
0:28:13 Which is a little surprising, at least to me, because you would think, you know, just the sheer size of the data would be an advantage.
0:28:18 But even that hasn’t quite proven out.
0:28:22 You know, it is, like, depending on what you want.
0:28:23 Do you want a meme?
0:28:25 Do you want a movie?
0:28:27 Do you want, you know, an ad?
0:28:30 Like, it’s all very, very different.
0:28:36 And I think this gets to your main point. And Mark Zuckerberg said something that I really liked.
0:28:37 He’s like, intelligence is not life.
0:29:05 And these things that we associate with life, like, we have an objective, we have free will, we’re sentient, those just aren’t part of a mathematical model that is, you know, searching through a distribution and figuring out an answer or even, you know, a model that, you know, through a reinforcement learning technique can kind of improve its logic.
0:29:05 So, it’s just, like, the comparison to humans, I think, just falls short in a lot of ways, is what we’re seeing.
0:29:15 You know, we’re just different.
0:29:18 You know, and the models are very good at things.
0:29:20 They’re better than humans at things already, many things.
0:29:20 The other thing I’d bring up related to this, which I think is a little orthogonal but also quite related, is: is the future of the world going to be one or a small number of companies, or, for that matter, governments or super AIs, that kind of own and control everything?
0:29:39 And sort of all the value rolls up into a handful of entities.
0:29:41 And, you know, and there you get into this.
0:29:47 There’s, like, the hyper-capitalist version of it where a few companies make all the money or there’s, like, the hyper-communist version of it where you have total state control or whatever.
0:30:01 You know, or is this a technology that’s going to diffuse out and be, like, in everybody’s hands and be a tool of empowerment and creativity and individual effort, you know, expressiveness and as a tool for basically everybody to use?
0:30:13 And I think one of the really striking things about this period of time, and you being in this role, is that this is the period in which scenario number two is, I think, very clearly playing out, which is that this is the time in which AI is actually hyper-democratizing.
0:30:17 I think AI has actually hyper-democratized.
0:30:23 It has spread to more individuals, both in the country and around the world, in the shortest period of time of any new technology, I think, in history.
0:30:31 You know, we’re something like 600 million, you know, users today, rapidly on the way to a billion, rapidly on the way to five billion, you know, kind of across all the consumer products.
0:30:35 And then the best AIs in the world are in the consumer products, right?
0:30:45 And so, if you use, you know, current-day ChatGPT or Grok or any of these things, like, you know, I can’t spend more money and get access to a better AI; it’s in the consumer products.
0:30:51 And so, just in practice, what you have, like, playing out in real time is this technology is going to be in everybody’s hands.
0:31:05 And everybody is going to be able to use it to, you know, optimize the things that they do, have it be a thought partner, have it be, you know, an assistant for building companies, starting companies, or creating art, or doing all the things that people want to do.
0:31:11 You know, my wife was just using it this morning to design a new entrepreneurship curriculum for our 10-year-old, right?
0:31:15 Right, like, you know, like, literally, it’s like, oh, wow, that’s like a really great idea.
0:31:21 And it took her a couple hours, and she has, like, a full curriculum for him to be able to start his first video game company.
0:31:23 And here’s all the different skills that he needs to learn.
0:31:23 And here’s all the resources.
0:31:26 And, like, that’s just a level of capability.
0:31:36 I mean, to, you know, to have done that without these modern consumer AI tools, you know, you’d have to go, like, hire a, you know, education specialist or something, you know, which is basically impossible to do that kind of thing.
0:31:40 And, you know, everybody has these stories now in their lives and among people they know.
0:31:45 So, I think we have, like, a lot of proof that the track this is on is that this is going to be in everybody’s hands.
0:31:47 And, in fact, that is going to be a really good thing.
0:31:51 And I think, David, I think you guys are really playing a key role in making that happen.
0:32:01 I think it’s so important that this technology remain decentralized because the kind of the Orwellian concern is kind of the ultimate centralization.
0:32:06 And, fortunately, so far what we’re seeing in the market is that it’s hyper-competitive.
0:32:10 There’s five major model companies all making huge investments.
0:32:19 And the benchmarks, the model performance evaluations, are relatively clustered.
0:32:22 And there’s a lot of leapfrogging going on.
0:32:29 So, you know, Grok releases a new model, it leapfrogs ChatGPT, but then ChatGPT releases something new, they leapfrog.
0:32:31 So, they’re all, like, very competitive and close to each other.
0:32:33 And I think that’s a good thing.
0:32:39 And it’s the opposite of what was predicted, you know, through this, like, imminent AGI story,
0:32:50 where the storytelling there was that one model would get a lead, and then it would direct its own intelligence to making itself better.
0:32:55 And then, so, therefore, its lead would get bigger and bigger, and you kind of get this recursive self-improvement.
0:32:58 And pretty soon, you’re off to the singularity.
0:33:00 And we haven’t really seen that.
0:33:05 You know, we haven’t seen one model completely pull away in terms of capabilities.
0:33:07 And I think that’s a good thing.
0:33:11 And so, Erik, to your point about this narrative about the virtual AI researcher,
0:33:20 that was one variant of this sort of imminent AGI narrative: the steps would be, you know, models get smarter,
0:33:28 the models create a virtual AI researcher, and then you get a million virtual AI researchers, and then, you know, it’s the singularity.
0:33:34 And I think just the sleight of hand in that is, what is a virtual AI researcher, right?
0:33:39 It’s like a very easy thing to say, but, like, what does that really mean?
0:33:44 And, you know, Balaji’s point about, you know, AI is still middle-to-middle.
0:33:45 It’s not end-to-end.
0:33:51 So, an AI researcher is end-to-end; there are, like, things the person has to figure out.
0:33:53 They’ve got to set their own objective.
0:33:56 They’ve got to be able to pivot in ways that AI can’t.
0:34:01 You know, like, is it really, is that really feasible to create a virtual AI researcher?
0:34:07 I think there’s, like, parts of the job that, you know, AI could get really good at, or even better than humans.
0:34:12 But probably that tool has to be used by a human AI researcher.
0:34:12 So, I guess the argument, I suspect, could be sort of circular, in the sense that you might need AGI to create a virtual AI researcher, as opposed to the other way around.
0:34:26 And if that’s the case, you know, you’re not going to just get, like, the singularity.
0:34:33 So, I’m a little bit skeptical of that claim.
0:34:35 You know, we’ll see.
0:34:38 But Sam says he could do it by 2028?
0:34:40 I mean, I guess we’ll see in three years.
0:34:45 I think all those claims tend to be, like, recruiting ideas as opposed to actual predictions.
0:34:48 He’s not the first to mention that idea.
0:34:50 Other model companies have been promoting it.
0:34:52 But, you know, Leopold’s mentioned that, too.
0:34:54 You know, we’ll see.
0:35:04 But I suspect that that’s what’s wrong with that argument, which is that a virtual AI researcher requires AGI.
0:35:10 And so, the idea that you’re going to get AGI through a virtual AI researcher is backwards.
0:35:11 But we’ll see.
0:35:12 You know, we’ll see.
0:35:19 David, you and the administration, I think, have also been very supportive of open source AI, which I think also dovetails into this, in terms of, like, the market being very competitive.
0:35:20 Yes.
0:35:23 Do you want to spend a moment on what you guys have been able to do on that, and how you think about it?
0:35:31 Yeah, I mean, open source is very important, because I just think it’s synonymous with, you know, freedom.
0:35:33 I mean, software freedom.
0:35:38 You basically could run your own models on your own hardware and retain control over your own information.
0:35:51 And, by the way, this is what enterprises typically do all the time; you know, about half the global data center market is on-prem, meaning enterprises and governments create their own data centers.
0:35:52 They don’t go to the big clouds.
0:36:04 And, by the way, I’ve got nothing against the hyperscalers, but, you know, people like to run their own data centers and, you know, maintain control over their own data and that kind of thing.
0:36:07 And I think that will be true for consumers to some degree as well.
0:36:12 So, I do think it’s an important area that we should, you know, want to encourage and promote.
0:36:20 The irony right now in the market is that the best open source models are Chinese.
0:36:24 And it’s sort of a quirk, right?
0:36:26 It’s the opposite of what you’d expect.
0:36:31 You’d expect, like, the American system would promote open and somehow the Chinese system would promote closed.
0:36:35 That has kind of, you know, ended up being a little backwards.
0:36:38 I think there are, like, good reasons for it.
0:36:50 It could just be kind of a historical accident, in the sense that the DeepSeek founder was, like, very committed to open source, and that kind of, like, got things started that way.
0:36:53 Or it could be part of a deliberate strategy.
0:37:03 If you’re China and you’re trying to catch up, open source is a really good way to do that because you get all the non-aligned developers to want to help your project, which they can’t do, you know, with a closed project.
0:37:05 So, it’s a great strategy for catching up.
0:37:21 And then also, if you think that your business model, you know, as a company or as a country is, let’s say, scale manufacturing of hardware, then you would want the software part to be free or cheap because it’s your complement, right?
0:37:23 So, you try to commoditize your complement.
0:37:29 And I don’t know, whether it’s by accident or part of design, that seems to be what the Chinese strategy has been.
0:37:36 I think that the right answer for the U.S. in this is to, you know, encourage our own open source.
0:37:41 I mean, I think it’d be a great thing if we saw more open source initiatives get going.
0:37:52 I guess there’s one promising one called Reflection, which was founded by former, you know, engineers from Google DeepMind.
0:37:56 So, I hope we see more open source innovation in the West.
0:37:58 But look, I think it’s very important.
0:37:59 It’s critical.
0:38:02 And like I said, in my view, it’s synonymous with freedom.
0:38:04 And it’s definitely not something we want to suppress.
0:38:08 Now, just back to the closed ecosystem for a second.
0:38:12 It’s true we have five major competitors there, and they’re all spending a lot of money.
0:38:27 I do worry a little bit that at some point in time that the market consolidates and we end up with, you know, like a monopoly or duopoly or something like that, as we’ve seen in other technology markets.
0:38:30 We saw this with search and, you know, and so on down the line.
0:38:37 And I just think that it would be good if this market stayed more competitive than just one or two winners.
0:38:42 And I don’t really know what to do about that.
0:38:43 I’m just making that observation.
0:38:53 And I do think that having open source as an option always then ensures that even if the market does consolidate, that you do have an alternative.
0:39:06 And it’s an alternative that’s more fully within your control as opposed to a large corporation or, you know, or the deep state, you know, working with that corporation.
0:39:19 As we saw in the Twitter files that, you know, the deep state was working with all these, you know, social media companies, you know, and implementing much more widespread censorship than I think any of us thought possible.
0:39:29 So we’ve seen evidence in the past and, again, in the social networking space about how the government could get involved in nefarious ways.
0:39:39 And it would be good to have alternatives so that, so, you know, to prevent that or to make it less likely that that scenario comes about with AI.
0:39:40 Yeah.
0:39:47 Well, as you know, we and others are very aggressively investing in new model companies of many kinds, including new foundation model companies.
0:39:55 And then also, you know, as you probably know, there are a whole bunch of new open source efforts that are not yet public that, you know, hopefully will bear fruit over the next couple of years.
0:40:01 So I think that, at least in the medium term, I think we’re looking at an explosion of model development as opposed to consolidation.
0:40:03 And then, you know, we’ll see what happens from there.
0:40:04 Yeah.
0:40:05 That’s really good to hear.
0:40:16 I mean, I think, you know, if we assess kind of the state of the AI race vis-a-vis China, the only area where we appear to be behind is open source models.
0:40:17 Yeah.
0:40:20 I think, you know, if you don’t care whether it’s open or closed, I think we have the lead.
0:40:21 Yeah.
0:40:27 I think our top model companies are ahead of the top Chinese companies, although they’re quite good.
0:40:31 But just this narrow area of open source seems to be where they have an advantage.
0:40:37 So it’s great to hear that you guys are seeing, you know, a lot more efforts coming to market.
0:40:38 Yeah.
0:40:38 Yeah.
0:40:38 There’s more coming.
0:40:39 Yeah.
0:40:39 Good.
0:40:40 Yeah.
0:40:40 Yeah.
0:40:41 Definitely more coming.
0:40:46 Peter Thiel quipped, you know, many years ago that he thought, you know,
0:40:51 crypto would be libertarian or decentralizing and that AI would be communist or centralizing.
0:41:00 And I think one thing we’ve perhaps learned is that technology isn’t deterministic, and that there are a set of choices that determine whether these technologies are decentralizing or centralizing.
0:41:06 And maybe we could use that as a segue to go deeper into the state of the race with China.
0:41:11 Maybe, David, you could lay out the sort of what’s most important to get right.
0:41:13 You’ve already indicated, you know, open source is one example.
0:41:17 You know, you alluded earlier to sort of our strategy as it relates to chips.
0:41:25 You know, some people say that, yes, it’s a good idea to do what we’re doing because it’ll, you know, benefit domestic semiconductor production.
0:41:30 Other people say, oh, well, you know, some of these companies say chips are their biggest, you know, limiting factors.
0:41:32 And so are we enabling them in some way?
0:41:36 Why don’t you talk about our sort of state of play and then our strategy?
0:41:37 Yeah.
0:41:43 So, you know, when we talk about winning the AI race, sometimes we say we’re in a race against China.
0:41:50 Sometimes we just leave it a little bit more vague because I don’t think we should become overly obsessed with our competitors or adversaries.
0:41:59 I think whether we win or not will mostly have to do with the decisions we make about our own technology ecosystem, not about, you know, what we do vis-a-vis them.
0:42:11 And so the president, in his July 23rd speech on AI policy, I think, mentioned a few of the key pillars of, you know, how we win this AI race.
0:42:13 And by the way, I’m not saying it ever ends.
0:42:17 It might be an infinite game, but we want to be in the lead at least.
0:42:26 And I do think that there could be a period of time where, like, you know, take the internet. I mean, the internet’s still going on.
0:42:26 It’s happened.
0:42:30 But we understand that kind of who the winners are is kind of baked now.
0:42:35 So there could be a period of time in which, you know, it’s kind of baked who the winners in AI are.
0:42:41 But in any event, you know, in terms of how we win this race, you know, I mentioned a few of the key pillars.
0:42:42 Number one is innovation.
0:42:46 You know, it’s very important to support the private sector because they’re the ones who do the innovation.
0:42:50 We’re not going to regulate our way to beating our adversary.
0:42:52 We just have to out-innovate them.
0:43:01 I mentioned, I think, right now the biggest obstacle is the frenzy of over-regulation happening at the state level.
0:43:05 I think we desperately need a single federal standard.
0:43:10 A patchwork of 50 different regulatory regimes is going to be incredibly burdensome to comply with.
0:43:16 I think even the people who support a lot of this regulation are now acknowledging that we’re going to need a federal standard.
0:43:24 The problem is that when they talk about it, what they really want is to federalize the most onerous version of all the state laws.
0:43:26 And that can’t be allowed either.
0:43:28 So, you know, there’s going to be a battle to come.
0:43:43 I think as the states become more and more unwieldy, you know, as it becomes more of a trap for startups that they now have to report into 50 different states at 50 different times, to 50 different agencies about 50 different things, people are going to realize this is crazy.
0:43:44 And they’re going to try to federalize it.
0:43:48 And then the question, I think, is whether we get preemption heavy or preemption light.
0:43:55 You know, do we get a—I think everyone’s going to ultimately be in favor of a single federal standard.
0:44:01 Because I think one of America’s greatest advantages is that we have a large national market, right?
0:44:02 Not 50 separate state markets.
0:44:09 It’s kind of like, you know, Europe before the EU: it wasn’t competitive at all on the Internet because it was 30 different regulatory regimes.
0:44:20 And so, you know, if you’re a European startup and even if you won your country, it didn’t get you very far because you still had to, like, you know, figure out how to compete in 30 other countries before you could even win Europe.
0:44:25 And then, meanwhile, your American competitor won the entire American market and is ready to scale up globally.
0:44:34 So the fact that we have a single national market is just fundamental to our competitiveness and is why, you know, winners in America then go on to kind of win the whole world.
0:44:38 So we have to preserve that.
0:44:40 And I think we will eventually get some federal preemption.
0:44:44 I think the question will just, again, be whether we preempt heavy or preempt light.
0:44:49 Second big area is infrastructure, you know, and energy.
0:44:53 You know, we want to help this amazing infrastructure boom that’s happening.
0:44:58 And the biggest, I think, limiting factor there is going to be around energy.
0:45:01 I think President Trump’s been incredibly farsighted in this.
0:45:03 I mean, he was talking about Drill Baby Drill many years ago.
0:45:06 He understood that energy is the basis for everything.
0:45:08 It’s definitely the basis for this AI boom.
0:45:21 And we want to basically get all of these unnecessary regulations, the permitting restrictions, a lot of the NIMBYism out of the way so that AI companies can build data centers and get power for them.
0:45:23 And we can talk about that more if you want.
0:45:30 But I think that that’s a second really huge part of what it’s going to take to win the AI race.
0:45:32 And then the third area is around exports.
0:45:36 And maybe this has been the most controversial one.
0:45:41 And it really speaks to the cultural divide between Silicon Valley and Washington.
0:45:50 So, I think all of us in Silicon Valley understand that the way that you win a technology race is by building the biggest ecosystem, right?
0:45:53 You get the most developers building on your platform.
0:45:55 You get the most apps in your app store.
0:45:56 Everyone just uses you.
0:46:03 I mean, you know, those are the companies that typically win, are the ones that get all the users, all the developers, and so on.
0:46:06 And so, we in Silicon Valley have a partnership mentality.
0:46:09 You know, we want to just publish the APIs and get everyone using them.
0:46:12 Washington has a different mentality, right?
0:46:13 It’s much more of a command-and-control mentality.
0:46:15 We want you to get approved.
0:46:17 You know, we kind of want to hoard this technology.
0:46:19 Only America should have it.
0:46:27 And this was really fundamental, I think, to the Biden diffusion rule, where the point of that rule is to stop diffusion, right?
0:46:29 Diffusion is a bad word.
0:46:33 But in Silicon Valley, we understand that diffusion is how you win.
0:46:37 I mean, I don’t think we ever called it diffusion before.
0:46:38 That was a new word for me.
0:46:39 We just called it usage.
0:46:40 Yeah.
0:46:43 But we understand that, like, getting the most users is how you win.
0:46:47 So, there’s, like, a fundamental culture clash going on right now.
0:46:57 And, you know, the way I kind of parse it is that what we decide to sell to China is always going to be complicated because, you know, they’re our competitor and our adversary.
0:47:00 And there’s the whole potential dual-use issue.
0:47:04 And so, the question of what you sell to China is nuanced.
0:47:10 But what we sell to the rest of the world, that should be an easy question, which is we should want to do business with the rest of the world.
0:47:13 We should want to have the largest ecosystem possible.
0:47:20 And every country we exclude from our technology alliance, we’re basically driving into the arms of China and it makes their ecosystem bigger.
0:47:33 And what we saw under the Biden years is that they were constantly pushing other countries into the arms of China, starting with the Gulf states in October of 2023.
0:47:43 Basically, the Gulf states, you know, I’m talking about countries like Saudi Arabia, UAE, longstanding U.S. allies, they weren’t allowed to buy chips from the U.S.
0:47:47 In other words, they weren’t allowed to set up data centers and participate in AI.
0:47:59 And, you know, here we are telling all these countries that, you know, AI is fundamental to the future, is going to be the basis of the economy, and yet we’re excluding you from participating in the American tech stack.
0:48:01 Well, you know, it’s obvious what they’re going to do.
0:48:04 You know, the only play we’re giving them is to go to China.
0:48:13 And so, you know, all of these rules basically just create pent-up demand for Chinese chips and models, and it creates a Huawei Belt and Road.
0:48:23 And we are hearing that Huawei is starting to proliferate or diffuse in the Middle East and in Southeast Asia.
0:48:26 And I just think it’s a really counterproductive strategy.
0:48:28 We’re completely shooting ourselves in the foot.
0:48:42 And, like, the greatest irony is that the people who have been pushing this strategy of driving all these countries into China’s arms have called themselves China hawks, you know, as if what they’re doing is hurting China.
0:48:44 No, it’s like, no, it’s like, it’s helping China.
0:48:46 I mean, it’s basically just handing them markets.
0:48:48 And our products are better.
0:48:56 But if you don’t give these countries a choice to buy the American tech stack, obviously they’re going to go with the Chinese tech stack.
0:49:23 And, you know, China is out there promoting, you know, DeepSeek models and Huawei chips, and they’re not, like, wringing their hands about whether, you know, exporting chips for a data center in the UAE is going to, like, create the Terminator, and all these, like, ridiculous narratives, reasons we’ve invented not to sell American technology to our friends.
0:49:31 So, you know, that has ended up being, I think, surprisingly, maybe the most controversial part of what we’ve advocated for.
0:49:34 But there you have it.
0:49:35 So, in any event, I’ll stop there.
0:49:39 Those are kind of some of the major pillars of what we’ve been advocating.
0:49:48 Should we go deeper on sort of the infrastructure and energy point in terms of what it’s really going to take to get enough capacity or what’s most important in that second bullet you were talking about?
0:49:54 Yeah, I mean, well, so, I mean, there are definitely people who are much more knowledgeable about energy than I am.
0:49:56 I mean, there are experts in the space.
0:50:06 But here’s what I’ve been able to kind of divine. So, first of all, the administration, President Trump, has signed multiple executive orders to allow for nuclear, to make permitting easier.
0:50:15 We’ve even freed up federal land for data centers to hopefully try and help get around some of these state and local restrictions.
0:50:25 And, obviously, the president has made it a lot easier to stand up new energy projects, power generation, all that kind of stuff.
0:50:33 I still think, though, that we have a growing NIMBY problem at the state and local level in the U.S.
0:50:35 that is becoming a little bit worrisome.
0:50:45 And if we don’t figure out a way to address it, then it could really slow down the build out of this infrastructure.
0:50:51 In terms of power, so, my understanding is that nuclear is going to take five or ten years.
0:50:55 It’s just not something that we’re going to be able to do in the next two or three years.
0:51:00 So, in the short term, it really means that gas is the way these data centers are going to get powered.
0:51:15 And the issue with gas is not a shortage of the fuel; America has plenty of natural gas, and it exists in enough red states that you could build data centers close to the source, which would be smart.
0:51:19 The issue is that there’s a shortage of these gas turbines.
0:51:23 You know, there’s only two or three companies that make these things.
0:51:25 And there’s a backlog of two or three years.
0:51:31 So, I think that’s probably the immediate problem there that needs to get solved.
0:51:36 However, I do think that in the next two or three years, we could get a lot more out of the grid.
0:51:51 So, I’ve had, you know, energy executives tell me that if we could just shed 40 hours a year of peak load from the grid to, like, backup generators, to diesel, things like that,
0:51:56 you could free up an additional 80 gigawatts of power, which is a lot.
0:52:04 Because I guess the way that it works is, the grid is only used at about 50%.
0:52:14 Only about 50% of the capacity is used throughout the year, because they have to build enough capacity for the peak days, like the hottest day in summer and the coldest day in winter; those are your peak days.
0:52:19 And they don’t want to commit all of that capacity,
0:52:25 because then you find out that you have a really cold day in winter and people can’t get enough heat for their homes.
0:52:31 So they can’t overcommit to, say, contracts for data centers, things like that.
0:52:44 But if you could shed that 40 hours a year of peak load to backup, then you’d be able to free up 80 gigawatts, which is a lot.
0:52:49 And that would definitely get us through the next, you know, two or three years until the gas turbine bottleneck’s been alleviated.
0:52:51 And then, eventually, you get to nuclear.
0:52:55 So, that would be very good.
0:53:02 I think the issue there is just there’s a whole bunch of insane regulations preventing, you know, load shedding.
0:53:04 So, like, for example, you can’t use diesel.
0:53:10 And Chris Wright, the Secretary of Energy, is very good on all this stuff.
0:53:14 And I think he’s working on unraveling all of this so we could actually do this.
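To make that arithmetic concrete, here is a minimal sketch of the load-shedding idea, with made-up numbers rather than real grid data: the grid is sized for its single worst hour, so if flexible loads switch to backup power during the top 40 hours, the binding constraint moves to the 41st-worst hour and the difference becomes firm capacity.

```python
# Toy version of the load-shedding arithmetic described above. All
# numbers are illustrative assumptions; real grid data would replace
# `load_gw`.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(8760)

# Fake hourly load in GW: a daily cycle plus rare extreme-weather spikes.
load_gw = 320 + 60 * np.sin(2 * np.pi * hours / 24) + rng.exponential(40, 8760)

capacity_gw = load_gw.max()  # the grid is built for the single worst hour
print(f"average utilization: {load_gw.mean() / capacity_gw:.0%}")

# If flexible loads (e.g., data centers on backup generators) drop off
# the grid for the 40 worst hours, the grid only has to cover the
# 41st-worst hour; the gap is capacity that can be sold firm.
cutoff = np.sort(load_gw)[-41]
print(f"headroom from shedding 40 peak hours: {capacity_gw - cutoff:.0f} GW")
```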
0:53:21 It’s funny, David, as you talk about this stuff, I can’t help it, but it’s a little bit like the principle is just do the opposite of the EU.
0:53:29 Like, basically, everything we’ve talked about so far is the opposite of the European approach.
0:53:31 Yeah.
0:53:35 Well, I mean, the Europeans, I mean, they have a really different mindset for all this stuff.
0:53:44 When they talk about AI leadership, what they mean is that they’re taking the lead in defining the regulations.
0:53:53 You know, it’s like, that’s what they’re proud of is that, like, they think that’s what their comparative advantage is, is that, you know, they get together in Brussels and figure out what all the rules should be.
0:53:55 And that’s what they call leadership.
0:54:06 The EU just announced, and I shouldn’t dunk on them too much, but they just announced a big new public-private tech growth fund to grow EU companies to scale.
0:54:10 And I was just like, well, it’s almost like a game show or something.
0:54:12 They do everything they can to strangle them in their crib.
0:54:20 And then if they make it through like a decade of abuse as small companies, then they’re going to give them money to grow.
0:54:22 Well, it’s kind of, yeah.
0:54:27 Well, it’s what, you know, Ronald Reagan had a line about this, which is, if it moves, tax it.
0:54:29 If it keeps moving, regulate it.
0:54:30 If it stops moving, subsidize it.
0:54:31 Yeah.
0:54:34 The Europeans are definitely at the subsidized stage.
0:54:35 Yeah.
0:54:49 And yeah, I shouldn’t dunk on them too much, but I’ve always been proud to be an American, and particularly now, because it really feels like we’re re-centering on core American values in a lot of the things we’re talking about, which is just really great.
0:54:49 Yeah.
0:54:55 I mean, again, our view is that, first of all, we have to win the AI race.
0:54:57 We want America to lead in this critical area.
0:55:01 It’s like fundamental for our economy and our national security.
0:55:03 How do you do that?
0:55:06 Well, our companies have to be successful because they’re the ones who do the innovation.
0:55:09 Again, you’re not going to regulate your way to winning the AI race.
0:55:15 I’m not saying we don’t need any regulations, but the point is, that’s not what’s going to determine whether we’re the winners or not.
0:55:23 David, you recently tweeted that climate doomerism perhaps is giving way to AI doomerism based on, you know, Bill Gates’ recent comments.
0:55:25 What do you mean by this?
0:55:29 Do you mean it’s going to be a major flank of, you know, the US left?
0:55:32 Or what do you mean by this comment?
0:55:46 Well, I think the left needs a central organizing catastrophe to justify their takeover of the economy and to regulate everything and especially to control the information space.
0:55:53 And I think you’re seeing that the allure of the whole climate change doomer narrative has kind of faded.
0:55:58 Maybe it’s the fact that they predicted 10 years ago that the whole world would be underwater in 10 years and that hasn’t happened.
0:56:03 So at a certain point you get discredited by your own catastrophic predictions.
0:56:06 I suspect that’s where we’ll be with AI doomerism in a few years.
0:56:10 But in the meantime, it’s a really good narrative to kind of take the place of the climate doomerism.
0:56:13 There’s actually a lot of similarities, I would say.
0:56:23 You know, there’s a lot of preexisting Hollywood storytelling and pop culture that supports this idea.
0:56:27 You know, you’ve got the Terminator movies and the Matrix and all this kind of stuff.
0:56:31 So people have been taught to be afraid of this.
0:56:36 And then there’s enough pseudoscience behind it.
0:56:49 You’ve got all these contrived studies, like the one where they claim that the AI researcher got blackmailed by his own AI model or whatever.
0:56:55 Look, it’s very easy to steer the model towards the answer that you want.
0:56:57 And a lot of these studies have been very contrived.
0:57:00 But there’s this patina of pseudoscience to it.
0:57:07 It’s certainly technical enough that the average person doesn’t feel comfortable saying that this doesn’t make any sense.
0:57:08 I mean, it’s more like you’re not an expert.
0:57:09 What do you know?
0:57:13 And even Republican politicians, I think, are kind of falling for this.
0:57:16 So, yeah, I mean, it’s a really desirable narrative.
0:57:23 And of course, you know, as AI touches more and more things, more and more parts of the economy, every business is going to use it to some degree.
0:57:29 If you can regulate AI, then that kind of gives you a lot of control over lots of other things.
0:57:32 And like I mentioned, AI is kind of eating the Internet.
0:57:33 It’s like the main way that you’re getting information.
0:57:47 So, again, if you can get your hooks into what the AI is showing people, now you can control what they see and hear and think, which dovetails with the left’s censorship agenda, which they’ve never given up on.
0:57:53 It dovetails with their agenda to brainwash kids, which is kind of the whole woke thing.
0:57:57 So, I mean, this is going to be very desirable for the left.
0:58:00 And this is why, I mean, look, they’re already doing this.
0:58:02 This is not like some prediction on my part.
0:58:10 Basically, after Sam Bankman-Fried did what he did with FTX and got sent to jail, he was like a big effective altruist.
0:58:14 And he had made pandemics, like, their big cause.
0:58:19 They needed a new cause, and they got behind this idea of X risk, which is existential risk.
0:58:23 The idea being, if there’s like a 1% chance of AI ending the world,
0:58:28 then we should drop everything and just focus on that, because you do the expected value calculation.
0:58:34 And so, if it ends humanity, then that’s the only thing you should focus on, even if it’s a very small percentage chance.
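As a toy calculation, that expected-value argument looks something like the sketch below; both inputs are the advocates’ assumptions, not measured quantities.

```python
# Toy version of the "x-risk" expected-value argument. Both numbers
# are assumptions made by the argument, not established quantities.
p_doom = 0.01            # assumed 1% chance of AI ending the world
loss_if_doom = 1e15      # an effectively unbounded disvalue, arbitrary units

print(p_doom * loss_if_doom)  # 1e13
# Because the loss term is allowed to be astronomically large, even a
# tiny probability makes this product dominate every other concern,
# which is how the argument reaches "drop everything and focus on this."
```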
0:58:37 But they really, like, reorganized behind this, all of them.
0:58:40 And, you know, they’ve got quite a few advocates.
0:58:50 And actually, it’s an amazing story about how much influence they were able to achieve, largely behind the scenes or in the shadows during the Biden years.
0:58:57 They basically convinced all of the major Biden staffers of this view that, like, imminent superintelligence is coming.
0:58:59 We should be really afraid of it.
0:59:01 We need to consolidate control over it.
0:59:05 There should only be, you know, ideally two or three companies that have it.
0:59:06 We don’t want anyone in the rest of the world to get it.
0:59:13 And then, you know, what they said is, you know, once we make sure that there’s only two or three American companies, we’ll solve the coordination problems.
0:59:16 That’s what they consider to be, you know, the free market.
0:59:19 We’ll solve those coordination problems between those companies.
0:59:25 And we’ll be able to control this whole thing and prevent the genie from escaping the bottle.
0:59:30 I think it was like this totally paranoid version of what would happen.
0:59:32 And it’s already in the process of being refuted.
0:59:40 But this vision is fundamentally what animated the Biden executive order on AI.
0:59:42 It’s what animated the Biden diffusion rule.
0:59:48 And Mark, I mean, you’ve talked about how you were in a meeting with Biden folks and they were going to basically ban open source.
0:59:51 And they were basically going to anoint two or three winners, and that was it.
0:59:54 Yeah, they told us that.
0:59:55 They told us that explicitly.
0:59:58 And yeah, and they told us exactly what you just said.
0:59:59 They told us they’re going to ban open source.
1:00:12 And when we challenged them on the ability to ban open source, because we’re talking about math, mathematical algorithms that are taught in textbooks and YouTube videos and universities, they said, well, during the Cold War, we banned
1:00:16 entire areas of physics and put them off limits, and we’ll do the same thing for math if we have to.
1:00:18 Yeah.
1:00:21 Yeah, that was the...
1:00:27 And you’ll be happy to know that the guy who actually said that is now an Anthropic employee.
1:00:30 No, that’s exactly right.
1:00:30 All those.
1:00:40 And I mean, literally the minute the Biden administration was over, all the top Biden AI employees went to go work at Anthropic, which tells you who they were working with during the Biden years.
1:00:41 Yeah.
1:00:45 But no, I mean, this was very much the narrative.
1:00:48 You sort of had this imminent superintelligence.
1:00:57 And then one of the refrains you heard was that AI is like nuclear weapons, and GPUs are like uranium or plutonium or something.
1:01:06 And therefore the proper way to regulate this is with something like an International Atomic Energy Agency.
1:01:14 And so, again, everything would be sort of centralized and controlled, and they would anoint two or three winners.
1:01:24 And now, I think this narrative really started to fall apart with the launch of DeepSeek, which happened in the first, I don’t know, couple of weeks of the Trump administration.
1:01:41 Because if you asked any of these people what they thought of China during this time when they were pushing all these regulations, and specifically, wait, if we shoot ourselves in the foot by over-regulating AI, won’t China just win the AI race?
1:01:46 If you were to ask them that, what they would have said, and did say, is that China is so far behind us, it doesn’t matter.
1:01:57 And furthermore, and this was said completely without evidence, that if we basically slow down to impose all these supposedly healthy regulations, well, China will just copy us and do the same thing.
1:02:00 I think it was an absurdly naive view.
1:02:03 I think that if we shoot ourselves in the foot, China will just be like, thank you very much.
1:02:06 We’ll just take leadership in this technology.
1:02:07 Why wouldn’t we?
1:02:09 But this is what they said.
1:02:19 And, you know, when the Biden executive order on AI was crafted, there was no discussion whatsoever of the competition with China.
1:02:30 It was just assumed that we were so far ahead that we could basically do anything to our companies and it wouldn’t really affect our competitiveness.
1:02:49 And I think that narrative really started to fall apart with DeepSeek at the model level. Then back in April, Huawei launched a technology called CloudMatrix, in which they compensated for the fact that their chips individually are not as good as NVIDIA’s chips by networking more of them together.
1:02:57 So they took 384 of them and used their prowess in networking to create this rack system, CloudMatrix.
1:03:11 And it was demonstrated that, yes, NVIDIA chips are better, they’re much more power efficient, but at the rack level, at the system level, Huawei could get the job done with these Ascend chips and CloudMatrix.
1:03:22 And so, again, I think that showed that we’re not the only game in town on chips, which means that if we don’t sell our chips to our friends and allies in the Middle East and other places, then Huawei certainly will.
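As a back-of-the-envelope illustration of that rack-level trade-off, here is a sketch with assumed per-chip figures; they are rough placeholders for the comparison, not vendor specifications.

```python
# Back-of-the-envelope version of the rack-level trade-off described
# above. Per-chip numbers are assumptions for illustration only.

def rack_totals(chips: int, tflops_per_chip: float, watts_per_chip: float):
    tflops = chips * tflops_per_chip          # total rack throughput
    kw = chips * watts_per_chip / 1000        # total rack power draw
    return tflops, kw, tflops / kw            # throughput per kW

# Hypothetical stronger chip in a 72-chip rack vs. a weaker chip
# scaled out to 384 units, CloudMatrix-style.
for name, spec in {
    "72 x strong chip (2500 TFLOPs, 1200 W each)": (72, 2500.0, 1200.0),
    "384 x weaker chip (780 TFLOPs, 560 W each)": (384, 780.0, 560.0),
}.items():
    tflops, kw, eff = rack_totals(*spec)
    print(f"{name}: {tflops:,.0f} TFLOPs, {kw:,.0f} kW, {eff:,.0f} TFLOPs/kW")
# The scaled-out rack exceeds total throughput here (~300k vs ~180k
# TFLOPs) while being much less power efficient, matching the point
# that networking more chips "gets the job done" at a power cost.
```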
1:03:31 So I think it’s just been kind of one revelation after another in which we’ve learned that a lot of their preconceived beliefs were wrong.
1:03:36 And we’ve talked about the fact that the markets ended up being much more decentralized than they ever could have predicted.
1:03:50 And I would also say one other thing, which is they also believed that there’d be imminent catastrophes, which haven’t happened. So this is kind of like the equivalent of the global warming thing where we’re all supposed to be underwater by now.
1:03:58 They were saying that models trained on, I think, 10^25 FLOPs or whatever were way too risky.
1:04:03 Well, every single model now at the frontier is trained at that level of compute.
1:04:10 And so they would have banned us from even being at the place we’re at today if we had listened to these people back in 2023.
1:04:12 So just a couple of years ago.
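For scale, here is a rough sanity check on that compute threshold using the standard 6·N·D approximation for training compute; the parameter counts and token counts are illustrative assumptions, not any lab’s actual numbers.

```python
# Rough sanity check on the compute threshold mentioned above, using
# the common rule of thumb: training FLOPs ~= 6 * parameters * tokens.
# Model sizes and token counts below are illustrative assumptions.
def train_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

runs = {
    "70B params on 15T tokens": train_flops(70e9, 15e12),
    "400B params on 15T tokens": train_flops(400e9, 15e12),
}
for name, flops in runs.items():
    print(f"{name}: {flops:.1e} FLOPs (>1e25? {flops > 1e25})")
# Output: 6.3e24 and 3.6e25. A frontier-scale run comfortably exceeds
# 1e25 FLOPs, consistent with the claim that today's frontier models
# all train above the level that was to be treated as too risky.
```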
1:04:17 So it’s really important to keep in mind that their predictions of imminent catastrophe have already been refuted.
1:04:26 And so things are moving in a direction that I think is very different than what they thought in, let’s call it, the first year after the launch of ChatGPT.
1:04:31 Great. So, David, just to come back real quick while we still have you, on crypto.
1:04:42 The administration, and I think the country, had a significant victory earlier this year with the president signing the stablecoin bill into law, which was the GENIUS Act.
1:04:48 And I’ll just tell you, what we see is that the positive consequences of that law have been even bigger than we thought.
1:04:52 And I would say that’s for the stablecoin industry itself.
1:04:59 You now see a lot of financial institutions of all kinds embracing stablecoins in a way that they weren’t before.
1:05:04 And, you know, the phenomenon is spreading, with America, by the way, being in the lead and doing very well there.
1:05:10 But also more broadly, just as a signal to the crypto industry that this really is a new day.
1:05:18 There really are going to be regulatory frameworks that make these things possible, that are responsible, but also make it possible for this industry to flourish.
1:05:29 And in the U.S., as you know, there is a second piece of legislation being constructed right now, which is the market structure bill, called the CLARITY Act, which is sort of phase two of the legislative agenda.
1:05:36 And I wondered if you could just tell us a little bit about your view of the importance of that bill, and how do you think that process is going?
1:05:39 I think it’s extremely important.
1:05:45 So, as you mentioned, we passed the GENIUS Act a few months ago, but that was just for stablecoins.
1:05:49 Stablecoins are about 6% of the total market cap in terms of tokens.
1:06:00 So the other 94% are all the other types of tokens, and the CLARITY Act would apply to all of that and provide the regulatory framework for all those other crypto projects and companies.
1:06:20 You know, currently we have a great SEC chairman, Paul Atkins, and if we could be sure that Paul Atkins, or a person like Paul Atkins, was always at the SEC forever, then we wouldn’t necessarily need legislation, because they’re already in the process of implementing much better rules and providing regulatory clarity.
1:06:21 But the truth is that we don’t know that for sure.
1:06:29 And if you’re a founder who’s trying to make a decision now about where you’re going to build your company, you want to have certainty for 10 years out, 20 years out.
1:06:32 Uh, you know, we want to encourage long-term projects.
1:06:43 And so, again, I think it’s very important to first provide the clarity and then to make sure there’s enough stability around the rules,
1:06:46 by canonizing those rules in legislation.
1:06:48 That’s the only way that you provide that long-term stability.
1:06:51 I think that we will get the clarity act done.
1:06:55 Like you mentioned, it passed the House with about 300 votes, including about 78 Democrats.
1:06:56 So it was substantially bipartisan.
1:07:00 I think it will ultimately, it’s now going through the Senate.
1:07:02 Um, I think it will ultimately get done.
1:07:07 We’re negotiating with a dozen or so Democrats.
1:07:08 We have to get to 60 votes.
1:07:12 That’s the hard part: under the filibuster, we’ve got to get to 60.
1:07:15 But we’re negotiating with about a dozen Democrats.
1:07:18 And I, I do think that we will ultimately get to that number.
1:07:26 And by the way, we ended up having 68 votes in the Senate for GENIUS, including 18 Democrats.
1:07:34 So I do think that even if we just get two-thirds of the number of Democrats that we got for GENIUS, then we’ll be fine on CLARITY.
1:07:41 But, you know, this will provide the regulatory framework, again, for all the other tokens besides stablecoins.
1:07:44 And I think it’s just a critical piece of legislation.
1:07:56 And yeah, this would ultimately, I think, complete the crypto agenda, where we’ve moved from Biden’s war on crypto to Trump’s crypto
1:07:56 capital of the planet.
1:08:05 And then, you know, I think the industry will have the stability it needs and can just focus on innovating, and there’ll be rule updates and things like that.
1:08:10 But, you know, fundamentally, the foundation for the industry will be in place.
1:08:14 On the GENIUS Act, you know, President Trump really made that bill possible.
1:08:19 And first of all, it was his election that completely shifted the conversation on crypto.
1:08:25 If a different result had been reached, we would still have, again, sort of a Gensler-type figure at the SEC.
1:08:28 The founders would still be getting prosecuted.
1:08:29 We wouldn’t know what the rules are.
1:08:30 Elizabeth Warren would be calling the shots.
1:08:33 So President Trump’s election made everything possible.
1:08:41 And it’s his commitment to the industry, and to keeping the promises he made during the election, that’s made all of this possible.
1:08:46 But also, I mean, he got directly involved in making sure the GENIUS Act passed.
1:08:48 The legislation was declared dead many times.
1:08:59 I saw it with my own eyes: he was able to persuade recalcitrant votes, twist arms, cajole, and charm.
1:09:02 And, you know, he ultimately got it done.
1:09:06 And I think that CLARITY will be a similar result.
1:09:11 People are always prematurely declaring these things to be dead or whatever.
1:09:14 Um, there are a lot of twists and turns in the legislative process.
1:09:16 It’s definitely true that you don’t want to see the sausage getting made.
1:09:19 But anyway, I think we’re on a good track right now.
1:09:20 Good.
1:09:20 Fantastic.
1:09:21 Great.
1:09:32 Pete Buttigieg went on All-In recently, and you guys talked about the left’s identity crisis; he’s hoping for a more moderate center-left to emerge.
1:09:34 At the same time, we see Mamdani in New York.
1:09:40 I’m curious what you’re seeing in terms of the future of the Democratic Party.
1:09:45 Is there a more moderate presence, or is it kind of this Mamdani-style, you know, woke populism?
1:09:54 I mean, it certainly seems to me that Mamdani and, I don’t know, the woke socialism seem to be the future of the party.
1:09:56 I mean, that’s where all the energy is in their base.
1:09:58 I mean, I don’t want that to be the case.
1:10:04 I’d rather have a rational Democrat Party, but that seems to be where their base is, where the energy is.
1:10:12 And you don’t really hear Democrats within the party trying to self-police and distance themselves from that.
1:10:16 I mean, all the major figures in the Democrat Party have endorsed Mamdani.
1:10:19 So, yeah, I mean, that’s where that party seems to be headed.
1:10:26 I think that partly it’s where their base is at.
1:10:40 And partly it might be a misread, kind of a partial reaction to Trump, where they feel like establishment politics has kind of failed.
1:10:44 And so they need a populism of the left to compete with a populism of the right.
1:10:49 And so I think that’s maybe part of the calculation for why they’re going in this direction.
1:10:52 But I don’t, you know, I fundamentally, I don’t think it works.
1:10:53 I don’t think socialism works.
1:10:58 I don’t think the defund-the-police, empty-all-the-jails policies work.
1:11:07 So, you know, I think we’re about to get another teaching moment in New York.
1:11:09 Unfortunately, it’s not going to be good for the city.
1:11:16 But, you know, we’ve seen this movie before. But yeah, it does appear
1:11:17 that’s where the Democrat Party is.
1:11:19 Um, I don’t completely get it.
1:11:25 I mean, other people have made this observation, but they do seem to be on the 20% side of every 80, 20 issue.
1:11:39 You know, opening the border, the soft-on-crime stuff, releasing all the repeat offenders,
1:11:47 and just this sort of anti-capitalist approach, which I think will be disastrous for the economy.
1:11:50 But this is kind of where the party’s at right now.
1:12:05 It is a little scary, because it means that when we lose elections, in the places where we do lose them, you could end up with something really horrible. We’re not just playing between the 40-yard lines anymore in American politics.
1:12:07 Uh, and that, that is a little bit scary.
1:12:08 Yeah.
1:12:13 And I do think that, you know, if it weren’t for Donald Trump, in a way we might already be there.
1:12:23 So we have to make sure that the Trump revolution continues.
1:12:36 Lastly, we just talked about New York. Recently, on an episode of All-In about San Francisco, you endorsed bringing in the National Guard, and, you know, Benioff had his comments.
1:12:38 He sort of, you know, went back and forth on those comments.
1:12:47 Speaking of teaching moments, I’m curious whether you see San Francisco as savable in some sense, and what needs to be true to get there, if so.
1:12:50 Well, Daniel Lurie is the best mayor we’ve had in decades.
1:12:57 So I think he’s doing a very good job, um, within the constraints that San Francisco presents.
1:13:02 So, unfortunately, we have a weak-mayor system in San Francisco.
1:13:13 I don’t mean him; I just mean the way it’s all set up. The Board of Supervisors has a ton of power, and over time they’ve been able to transfer power from the mayor to themselves.
1:13:16 Um, and then of course you’ve got all these left-wing judges.
1:13:21 I mean, it’s just amazing to me that there’s a case right now.
1:13:30 This is a case that galvanized me several years ago, the case of Troy McAlister, who was a repeat offender who killed two people on New Year’s Eve.
1:13:45 I think it was 2020. And he had been arrested four times in the year before he ended up killing these two people, and he had a very, very long criminal history.
1:13:51 He had committed armed robbery before, stolen many cars, and he should have been in jail.
1:13:58 He should not have been released, but he was basically released thanks to the zero-bail policies of Chesa Boudin, who was then the district attorney, and who got recalled.
1:14:00 There was a huge outcry.
1:14:09 I mean, even in San Francisco, for there to be a recall of a DA, you’ve got to be seriously left-wing to alienate San Francisco.
1:14:14 And Chesa Boudin managed to be so far out there that he alienated even San Francisco.
1:14:24 And yet, I don’t know why Troy McAlister isn’t sentenced already and in jail for 20 years plus, but his case is still pending through the courts, never ending.
1:14:28 And there’s a left-wing judge who’s considering just giving him diversion.
1:14:32 Basically means you just get released, maybe with an ankle bracelet or something.
1:14:33 Um, that’s insane.
1:14:36 So, I mean, that’s what we’re dealing with in San Francisco.
1:14:38 I mean, like crazy left-wing judges who want to release all the criminals.
1:14:45 And so I just wonder, like, is Daniel up against too many constraints?
1:14:51 And therefore, I know he doesn’t want the president to send in the National Guard, but maybe ultimately it would be helpful.
1:15:03 But in any event, I think the president has agreed to hold off on that. Daniel had a good conversation with the president and asked him to hold back,
1:15:09 and the president agreed and is giving him time to implement his solutions.
1:15:19 And look, if Daniel and his team can keep making progress and fix the problems without the National Guard having to come in, then so much the better.
1:15:24 We’ll just see. I know he wants to, and like I said, he’s the best mayor we’ve had in decades.
1:15:29 It’s just a question of whether he’ll be too constrained by the other powers that be in the city.
1:15:31 David, thank you so much for coming on the podcast.
1:15:32 Yeah, good to see you guys.
1:15:33 That was fantastic.
1:15:34 Thank you, David.
1:15:34 Yeah, David, it was great.
1:15:36 And thank you for the work.
1:15:45 We, as much as anybody, appreciate the work that you’ve done to fix the things of the past and to put us on a great road to the future.
1:15:46 Yep.
1:15:46 Well, thanks.
1:15:48 I appreciate what you guys have done as well.
1:15:51 So thank you for your support and everything you’re doing.
1:15:52 So I appreciate it.
1:15:53 Definitely.
1:15:58 Thanks for listening to this episode of the A16Z podcast.
1:16:05 If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family.
1:16:10 For more episodes, go to YouTube, Apple Podcasts, and Spotify.
1:16:16 Follow us on X at A16Z and subscribe to our substack at A16Z.substack.com.
1:16:19 Thanks again for listening, and I’ll see you in the next episode.
1:16:33 As a reminder, the content here is for informational purposes only, should not be taken as legal business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund.
1:16:39 Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast.
1:16:46 For more details, including a link to our investments, please see A16Z.com forward slash disclosures.

David Sacks, White House AI and Crypto Czar, joins Marc, Ben, and Erik to explore what’s really happening inside the Trump administration’s AI and crypto strategy. They expose the regulatory capture playbook being pushed by certain AI companies, explain why open source is America’s secret weapon, and detail the infrastructure crisis that could determine who wins the global AI race.

 

Stay Updated: 

If you enjoyed this episode, be sure to like, subscribe, and share with your friends!

Find a16z on X: https://x.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX

Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

