Ben Horowitz: Why Open Source AI Will Determine America’s Future

AI transcript
0:00:05 The biggest mistake people make on culture is they think of it as this very abstract thing.
0:00:10 And my favorite quote on this is from the samurai from Bushido, where they say,
0:00:13 look, culture is not a set of beliefs, it’s a set of actions.
0:00:17 So the way for the U.S. to compete is the way the U.S. always competes.
0:00:22 We’re an open society, which means everybody can contribute, everybody can work on things.
0:00:23 We’re not top down.
0:00:28 And the way to get everybody to work on things is to have the technology be open
0:00:30 and give everybody a shot at it.
0:00:31 And then that’s how we’re competitive.
0:00:38 I think when you have new technology, it’s easy for policymakers to make really obvious,
0:00:41 ridiculous mistakes that end up being super harmful.
0:00:46 Today on the podcast, we’re sharing a conversation from Columbia Business School
0:00:49 with A16Z co-founder Ben Horowitz.
0:00:54 Ben is a Columbia College alum from the class of '88 and joined Dean Costis Maglaras
0:00:58 for a discussion on AI, culture, and leadership in times of disruption.
0:01:02 They cover how open source AI and blockchain could shape the global race for technological
0:01:08 leadership and why culture, not just strategy, determines which companies thrive through disruption.
0:01:09 Let’s get into it.
0:01:25 What a wonderful, wonderful way to start the semester by inviting an incredible leader and an alum of the college, Ben Horowitz, to join us to talk about sort of a variety of things.
0:01:48 So I’m not going to spend time introducing Ben, but I’m going to make a small anecdote because Ben ran a company in the Bay Area in the late 90s to until the mid-2000s that I, without knowing Ben back then, visited with a bunch of MBA students, I think, in 99 or 2000 as part of our Silicon Valley trip back in January.
0:02:10 So he’s seen through the entire trajectory of both the internet era, Silicon Valley, and I guess around the late 2000s, you started Andresen Horowitz with your partner and has been sort of one of the leading venture capital firms.
0:02:20 So I want to start by talking about AI, we’re going to talk about venture capital, we’re going to talk about leadership and the types of teams and people that you look for.
0:02:36 But I was reading this morning about Anthropic closing their latest round at $183 billion valuation, which speaks a little bit about AI, speaks also a little bit about how venture capital has changed, because that’s a private company that is approaching $200 billion valuation.
0:02:40 Incredible growth, incredible growth, incredible change in capabilities.
0:02:45 Where do you think we are now in that AI cycle?
0:02:54 And you were a war veteran from the 2000s, so in some sense, maybe you can give us your insight about that and then launch from there.
0:03:02 Well, I think we’re early in the cycle in the sense that we just got the technology working like four years ago.
0:03:08 So if you think about technology cycles, they tend to run 25-year sort of arcs.
0:03:10 So we’re really, really early on.
0:03:19 I think there is a question now of how big is the next set of breakthroughs compared to the last set.
0:03:30 So if you look at, you could call like gradient descent, like a 10 out of 10, and then the transformer and reinforcement learning, maybe 8 out of 10 breakthroughs.
0:03:34 Is there another 10 out of 10 breakthrough or even an 8 out of 10 breakthrough on the horizon?
0:03:37 And we haven’t seen it yet, so we’ll see.
0:03:40 There are certainly companies kind of working on that.
0:03:51 And so the big thing is, is there another kind of big discontinuous change in, I’ll just call it probabilistic computing, since AI tends to freak people out?
0:03:55 Or are we just going to keep kind of building on the breakthroughs that we’ve had to date?
0:03:57 And that’s, I would say, an open question right now.
0:04:07 When you think about adoption and disruption in the economy, how far out do you think that is going to be?
0:04:11 And what sectors do you think may start getting affected?
0:04:14 Large sectors, big corporates.
0:04:19 Well, I think it’s kind of like both overrated and underrated in terms of the dislocation.
0:04:34 And so if you look at the long arc of automation, going back to the 1750s, when everybody worked in agriculture, like nobody from 1750 would think any of the jobs that we have now make any sense.
0:04:37 They’re all ridiculous, like completely frivolous ideas.
0:04:45 And so it’s not clear, like what the jobs will be kind of in the next 30, 40 years.
0:04:49 But like I said, the jobs that we have now were unimaginable.
0:04:57 Nobody would think somebody doing graphic design, or certainly being like a marketing executive, was an actual job that makes any sense at all.
0:04:59 So, you know, we’ll see on that.
0:05:00 And then the other thing is.
0:05:02 You’re speaking to an MBA crowd.
0:05:14 If you think about computers, so like deterministic computers, what we’ve had since the kind of 40s and 50s, obviously a lot of things have changed.
0:05:17 And like many, many, many jobs are gone because of it.
0:05:22 But it was much more gradual than I think people would have thought it would be when it happened.
0:05:28 And like some of the changes, like the whole private equity industry was created because of the spreadsheet.
0:05:37 Because it turned out that like a huge portion of every business was just like people manually calculating what you’d calculate in a spreadsheet and a model.
0:05:45 So basically private equity companies were like, oh, we use a spreadsheet, take that company over and then get all the money out and so forth.
0:05:46 And so that created that whole industry.
0:05:49 But nobody would have put that together in advance.
0:05:52 It’s just like weird side effects of the tech.
0:06:06 And I think what we’re seeing in AI is it’s kind of starting to automate the mundane and then move to kind of over time, you know, maybe it will eliminate that job.
0:06:09 But the job is kind of morphing as it goes.
0:06:10 So I’ll give you an example.
0:06:18 So my business partner, Mark, and I had dinner with a kind of fairly famous person in Hollywood who’s making a movie now.
0:06:19 And basically half the movie is AI.
0:06:24 But the way that’s working is they’re taking an open source model.
0:06:27 By the way, the open source video models are getting very, very good.
0:06:35 And normally in Hollywood, when you shoot dailies, you might shoot a scene like 10 or 20 times.
0:06:41 Now they’ll shoot it like three times and have the AI generate the other like 17 takes.
0:06:44 And it’s indistinguishable.
0:06:52 So it kind of really improves the economics of the current movie making industry, which have gotten extremely difficult with the way distribution has changed.
0:06:55 And it’s going to make it much easier for many more people to make movies.
0:07:01 But I think that the way Hollywood would view AI right now is that it’s just taking all the jobs, right?
0:07:04 Like it’s just going to write all the movies, make all the movies.
0:07:06 I think that’s not going to happen.
0:07:08 It’s just not going to be that way.
0:07:09 It will change.
0:07:15 I think there will be a new medium that’s different than movies, the way movies were different than plays using the technology.
0:07:17 So things are going to change.
0:07:24 I think it’s going to affect every single sector, but not in ways that you would easily anticipate.
0:07:32 By the way, every writer in Hollywood is already using AI to help them like write dialogue that they don’t feel like writing and all that kind of thing.
0:07:33 So that’s already going on.
0:07:36 But that’s not eliminating those positions.
0:07:40 It’s just kind of enabling them to kind of work faster and better.
0:07:42 You mentioned open source.
0:07:44 Where do you fall into that?
0:07:55 I mean, I think I know where you guys fall into the spectrum, but maybe you can tell us a little bit about your thinking about open source and perhaps also talk about US-China and the competition in AI in that context.
0:08:03 Yeah, so well, with open source, so in AI, there’s kind of open source, the algorithm, which is like not that big a deal.
0:08:08 But then open weights is kind of the bigger thing because then you’ve trained on the model and it’s encoded in the weights.
0:08:25 And in that encoding, there’s kind of the quality of the model, but also subtler things like the values of the model: the model’s interpretation of history, the model’s interpretation of culture, human rights, all those kinds of things are in the weights.
0:08:55 So the impact of open source: if you think about it, the control layer of every single device in the world is going to be AI, right, like you’re going to be able to talk to it, so what those weights are matters for the kind of global culture of the world, and how people think about everything from race issues to political issues to free speech to Tiananmen Square, what actually happened, that kind of thing,
0:08:57 is all encoded in the weights.
0:09:05 And so whoever has the dominant open source model has a big impact on the way global society ends up evolving.
0:09:11 Right now, so kind of a combination of things happened at the beginning of AI.
0:09:17 One, just the way the U.S. companies evolved in conjunction with the U.S. policy.
0:09:21 So the U.S. policy under the Biden administration was very anti-open source.
0:09:25 And so the U.S. companies ended up being all closed source.
0:09:41 And the dominant open source models are now from China, DeepSeek being the main one; I would say not only do U.S. companies use it, but basically everyone in academia uses DeepSeek and Chinese open source models, not U.S. open source models.
0:09:45 So we’ve certainly, I think, lost the lead on open source to China.
0:09:54 And, you know, OpenAI open-sourced their last model, but the problem with going from proprietary to open source is it doesn’t have, what do you call it, the vibes.
0:09:59 So open source is very vibe-oriented and the community and the way developers think and so forth.
0:10:03 So if something evolves in open source, it ends up being a little different than if it doesn’t.
0:10:04 So I think it’s really important.
0:11:15 I think that the reason the Biden administration didn’t want the products to be open source was, well, let me describe the rationale and then I’ll say why it was delusional.
0:10:18 The rationale was, okay, we have a lead in AI over China.
0:10:25 I don’t know, we had all these pseudo-smart people running around saying we have a two-year lead and a three-year lead.
0:10:28 Like, I don’t know how you would know that, but they were wrong, it turns out.
0:10:33 And that this was like the Manhattan Project and we had to keep the AI a super secret.
0:10:36 Now, it’s delusional on several fronts.
0:10:45 One, obviously, Chinese AI is really good and their open source models are actually ahead of ours, so we don’t have a lead.
0:10:53 But the kind of dumber thing about it was, like, if you go into Google or OpenAI or any of these places,
0:10:57 do you know how many Chinese nationals work for Google and OpenAI?
0:10:57 Like, a lot.
0:11:01 And you think the Chinese government doesn’t have access to any of them?
0:11:02 Come on.
0:11:03 And you think there’s security?
0:11:04 There’s no skiffs there.
0:11:06 All that stuff’s getting stolen anyway.
0:11:08 Let’s be serious.
0:11:13 There is no information that companies in the U.S. are really locking down.
0:11:16 So the way for the U.S. to compete is the way the U.S. always competes.
0:11:19 We’re an open society, which means everybody can contribute.
0:11:21 Everybody can work on things.
0:11:22 We’re not top-down.
0:11:28 And the way to get everybody to work on things is to have the technology be open and give everybody a shot at it.
0:11:32 And then that’s how we’re competitive, not by keeping everything a secret.
0:11:33 We’re actually the opposite of that.
0:11:35 We’re terrible at keeping secrets.
0:11:37 And so we have to go to our strengths.
0:11:39 And so that’s just a dumb mistake.
0:11:50 But I think when you have new technology, it’s easy for policymakers to make really obvious, ridiculous mistakes that end up being super harmful.
0:11:51 And so we have to be careful here.
0:11:56 So when thinking about AI and national security, are you concerned about that?
0:12:06 Well, I think there’s a real concern on AI and national security, but it’s not in terms of keeping the AI a secret because we can’t.
0:12:09 Look, if that was a viable strategy, then great.
0:12:11 But it’s not a viable strategy.
0:12:14 Like, we’d have to reshape the entire way society works.
0:12:26 And by the way, even on the Manhattan Project, like, the Russians got all the nuclear secrets, they got everything, including, like, the most secret part, which was the trigger mechanism for how to set off the bomb.
0:12:28 They got all of that.
0:12:36 And so even then, with no internet, with the whole thing locked down, with it in a secret space and all that kind of thing, we couldn’t keep it a secret.
0:12:41 So, like, in the age of the internet and, like, by the way, China’s really good at spying.
0:12:44 This is one of the reasons why there’s so much tension between the two countries.
0:12:48 It’s, like, almost like a national pride thing to be good at spying in China.
0:12:50 So they’re really good at it.
0:12:52 And, like, we’re really bad at defending against it.
0:12:54 So, like, that just is what it is.
0:12:59 Now, having said that, all of defense, like, war is going to be very, very AI-based.
0:13:02 We’ve already seen this in the Ukraine with the drones and so forth.
0:13:09 But, like, robot soldiers, autonomous submarines, autonomous drones, all that stuff is basically here.
0:13:12 And so the whole nature of warfare, I think, is changing.
0:13:15 And we have to take that very, very seriously.
0:13:19 But I think that means competing in AI.
0:13:26 And the best thing for the world is that not one country has the AI to rule them all.
0:13:29 That’s the worst scenario where, like, anybody is too powerful.
0:13:38 I think a balance of power and AI is good, which is why open source is good, which is why us developing the technology as fast as we can is important.
0:13:43 It’s why the private sector integrating with the government in the U.S. is important.
0:13:46 China is much better at that than we are.
0:13:47 So we have to get better.
0:13:51 But keeping things a secret, I don’t think, is going to work.
0:13:55 I mean, I actually don’t even think keeping the chips to ourselves is going to work.
0:14:03 Like, so far, we thought, okay, if we stop the export of NVIDIA chips to China, that will stop them from building powerful models.
0:14:03 It really hasn’t.
0:14:15 So, you know, like, a lot of these ideas just end up retarding the growth of U.S. technology and industry as opposed to doing anything for national security.
0:14:21 You mentioned the previous administration, and we talked about their attitude.
0:14:24 I want to ask you a question about regulation.
0:14:29 I’ve had so many conversations with European leaders about that.
0:14:30 Maybe you do as well.
0:14:31 All right.
0:14:32 I shouldn’t laugh.
0:14:33 Yeah.
0:14:42 And why don’t you share your thinking a little bit about the American situation and sort of the global situation?
0:14:43 Yeah.
0:14:44 So it’s funny.
0:14:55 Every panel I’ve been on or, like, kind of time I’ve been at a conference with, like, European leaders, they always say that whether they’re in the press or industry or the regulatory bodies, they say the same thing.
0:14:59 Well, Europe may not be the leaders in innovation, but we’re the leaders in regulation.
0:15:04 And I’m like, you realize you’re saying the same thing.
0:15:23 So Europe kind of got down this path, which is known as the precautionary principle in terms of regulation, which means you don’t just regulate, you know, things that are known to be harmful.
0:15:29 You try and anticipate, with the technology, anything that might go wrong.
0:15:35 And this is, I think, a very dangerous principle, because if you think about it, we would never have released the automobile.
0:15:38 We’d never released any technology.
0:15:41 I think they, you know, it started in the nuclear era.
0:15:49 And, you know, one could argue that we had the answer to the kind of climate issues in 1973.
0:15:56 And if we would have just built out nuclear power instead of burning oil and coal, we would have been in much better shape.
0:16:03 And if you look at the safety record of nuclear, it’s much better than oil, where people blow up on oil rigs all the time.
0:16:08 And I think more people are killed every year in the oil business than have been killed in the history of nuclear power.
0:16:14 So, you know, these regulatory things have impact.
0:16:22 In the case of AI, there is kind of several categories that people are talking about regulating.
0:16:34 So, there’s kind of the speech things, like, can you, and Europe is very big on this, can it say hateful things?
0:16:38 Can we, you know, can the AI say political views that we disagree with, this kind of thing?
0:16:42 So, very similar to social media and kind of that category of things.
0:16:45 And do we need to stop the AI from doing that?
0:16:56 And then there’s kind of another section, which is, okay, can it tell you instructions to make a bomb or a bioweapon or that kind of thing?
0:17:08 And then there’s, you know, another kind of regulatory category, which is, I think, the one that, you know, most people, like, use this argument to kind of get their way on the other things is,
0:17:14 Well, what if the AI becomes sentient, you know, and, like, turns into the Terminator?
0:17:17 We got to stop that now.
0:17:27 Or, like, kind of the related one, which is kind of a little more technologically believable, but not exactly, is takeoff.
0:17:28 Have you heard of this thing, takeoff?
0:17:42 So, takeoff is the idea that, okay, the AI learns how to improve itself, and then it improves itself so fast that it just goes crazy and becomes a super brain and decides to kill all the people to get itself more electricity and stuff, kind of like the Matrix.
0:17:49 Okay, so, let me see if I can deal with that.
0:17:56 And then there’s another one, which is around copyright, which is important, but probably not on everybody’s mind as much.
0:18:04 So, if you look at the technology, the way to think about it is there’s the foundation, the models themselves.
0:18:15 And it’s important, by the way, that, you know, everybody who works on this stuff calls it models and not, like, you know, AI intelligence and so forth.
0:18:23 And there’s a reason for that, because what it is is it’s a mathematical model that can predict things.
0:18:31 So, it’s a giant version of kind of the mathematical models that you all kind of study to do basic things.
0:18:46 So, if you want to calculate, you know, when Galileo dropped a cannonball off the Tower of Pisa, you know, you drop it off the first floor and the second floor, but then you could write, like, a math equation to figure out what happens when you drop it off, like, the 12th floor.
0:18:48 You know, how fast does it fall?
0:18:54 So, that’s a model with, you know, maybe, like, a couple of variables.
0:18:59 So, think then, what if you had a model with 200 billion variables?
0:19:01 That’s an AI model.
0:19:07 And then you can predict things like, okay, what word should I write next if I’m writing an essay on this?
0:19:08 Like, you can predict that.
0:19:10 And that’s what’s going on.
0:19:11 So, it’s math.
0:19:16 And inside, it’s doing, the model is just doing a lot of matrix multiplication.
0:19:19 You know, linear algebra, that kind of thing.
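(Editorial aside: to make the “it’s just matrix multiplication” point concrete, here is a minimal toy sketch of next-word prediction with a single weight matrix. It is an illustration only, not any production model’s code; real models stack many such layers, add attention and nonlinearities, and use hundreds of billions of parameters rather than the handful here.)

```python
import numpy as np

# Toy "language model": one weight matrix maps a context vector to scores
# over a tiny vocabulary. Real models chain many such matrix multiplications;
# the weights here are random placeholders rather than learned values.
vocab = ["the", "cat", "sat", "on", "mat"]
rng = np.random.default_rng(0)

d_model = 8                                    # size of the context vector
W = rng.normal(size=(d_model, len(vocab)))     # "the weights"
context = rng.normal(size=(1, d_model))        # embedding of the text so far

logits = context @ W                           # the matrix multiplication
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
print(vocab[int(probs.argmax())], probs.round(2))
```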
0:19:27 So, you can regulate the model or you can regulate the applications on the model.
0:19:36 So, I think when we’re talking about publishing how to make a bioweapon or how to make a bomb or that kind of thing, that’s already illegal.
0:19:40 And the AI shouldn’t get a pass on that because it’s AI.
0:19:48 So, if you build an application like ChatGPT that publishes the rules of making a bomb, like, you ought to go to jail.
0:19:51 Like, that should not be allowed.
0:19:52 And that’s not allowed.
0:19:54 And I think that falls under regular law.
0:19:58 Then the question is, okay, do you need to regulate the model itself?
0:20:06 And the challenge with regulating the model is that the regulations are basically all of the form:
0:20:08 you can do math, but not too much math.
0:20:12 Like, if you do too much math, we’re going to throw you in jail.
0:20:14 But if you do just this much math, it’s okay.
0:20:16 And, like, how much math is too much math?
0:20:29 And, look, the problem in that thinking is when you talk about sentient AI or takeoff, you’re talking about sort of thought experiments that nobody knows how to build.
0:20:36 And I think there’s very good arguments, you know, and we do know how to reason about these systems that, like, takeoff is not going to happen.
0:20:40 And that, like, we have no idea how to make takeoff happen.
0:20:51 And so, it’s kind of one of these things like, well, the laws of physics, I can do a thought experiment that says, you know, if you travel faster than the speed of light, you can go backwards in time.
0:21:04 So, do we now need to regulate time travel and outlaw whole branches of physics in order to stop people from traveling back in time and changing the future or changing the present and screwing everything up for us?
0:21:06 That’s probably too aggressive.
0:21:11 And, like, we’re really getting into that territory when we talk about sentient AI.
0:21:14 Like, we don’t even know what makes people sentient.
0:21:16 Like, we literally don’t.
0:21:18 You know who knows the most about consciousness?
0:21:23 Anesthesiologists, because they know how to turn it off.
0:21:26 But that’s, like, the extent of what we know about consciousness.
0:21:29 So, like, we definitely don’t know how to build it.
0:21:31 And we definitely haven’t built it to date.
0:21:34 Like, there’s no AI that’s conscious or has free will or any of these things.
0:21:42 And so, when you get into regulating those kinds of ideas, and I’m not saying that AI can’t be used to improve AI.
0:21:44 It absolutely can.
0:21:49 But computers have been improving computers for, like, since we started them.
0:21:56 But that’s different than takeoff because takeoff requires a verification step that nobody knows how to do.
0:22:03 And so, like, you get into, you know, you get into very, very theoretical cases.
0:22:09 And then you write a law that prevents you from competing with China at all.
0:22:10 And that gets very dangerous.
0:22:16 And so, I just say, like, we have to be really, really smart about how we think about regulation and how that goes.
0:22:18 Copyright is another one.
0:22:25 So, copyright, should you be allowed to have an AI, like, listen to all the music and then, like, reproduce Michael Jackson?
0:22:29 No, definitely that’s got to be illegal because that’s a clear violation of copyright.
0:22:39 But then, can you let it, like, read a bunch of stuff that’s copyrighted and create a statistical model to make the AI better but not be able to reproduce it?
0:22:47 Well, that gets very tricky if you don’t allow that because, by the way, that’s what people do, right?
0:22:51 Like, you read a lot of stuff and then you write something and it’s affected by all the stuff you read.
0:22:59 And, by the way, like, competitively with China, they’re absolutely able to do that.
0:23:05 And, you know, the amount of data you train on dramatically improves the quality of the model.
0:23:10 And so, you’re going to have worse models if you don’t allow that.
0:23:13 So, there’s, you know, that’s a trickier one.
0:23:22 But this is where you have to be very careful with regulation to not kill the competitiveness while not actually gaining any safety.
0:23:27 And so, that’s, you know, that’s a big debate right now and it’s something we’re working on a lot.
0:23:33 Let me ask you one question and then I want to move on to crypto and venture and leadership.
0:23:38 But, you mentioned machines, building machines.
0:23:42 And I think of a colleague of mine that is a roboticist.
0:23:43 Yeah.
0:23:51 And what you’re thinking about physical or embodied AI and are you guys invested in that?
0:23:57 Do you think that that’s something that’s going to be big over the next 10, 20, 30 years?
0:23:58 How do you feel about that?
0:23:59 Yeah, no, no.
0:24:03 I definitely think it’s going to be big and it’s going to be very important.
0:24:06 It’s probably going to be the biggest industry is probably going to be robotics.
0:24:13 It’s going to be super important.
0:24:14 I don’t think there’s any question.
0:24:19 I think it’s further away than anybody is saying.
0:24:27 So, if you think about, like, the full humanoid robot, well, just to give you kind of a timescale idea.
0:24:37 So, in 2006, I think, Sebastian Thrun won the DARPA Grand Challenge and had an autonomous car drive itself across the desert.
0:24:44 And now, in 2025, we’re just getting, like, the Waymo cars and things that you can put on the road.
0:24:49 So, 19 years to kind of solve that problem.
0:24:51 And why did it take so long?
0:25:00 And, by the way, the self-driving car problem is a much easier problem than the humanoid robot problem
0:25:07 because the data is primarily two-dimensional and then we had, like, all the map data already and so forth.
0:25:12 So, you know, it was, like, a lot easier to get there.
0:25:15 If you think about the robot data, it’s many more dimensions.
0:25:21 Like, you know, the difference between picking up a glass and picking up a shirt is very different or an egg.
0:25:23 So, there’s all these subtleties to it.
0:25:33 And then, with self-driving, if you look between 2012 and 2025, say, and ask what took so long,
0:25:40 it turns out that the universe is very fat-tailed and human behavior is very fat-tailed.
0:25:47 And so, like, in working with the Waymo team, the things that were extremely hard to deal with
0:25:54 were, like, somebody driving 75 in a 25 zone or, like, somebody just running out in the middle of the street for no reason.
0:25:55 Or that kind of thing.
0:26:01 It was very, very difficult to make the car safe around those kinds of use cases because they just weren’t in the data set.
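(Editorial aside: a minimal sketch of the fat-tail point, not Waymo’s actual methodology. In a thin-tailed world, extreme scenarios are so rare that a finite training set almost never contains them; in a heavy-tailed world they keep showing up, so the rare-but-severe cases dominate the safety problem.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000  # think of each draw as one observed driving scenario

thin = rng.standard_normal(n)        # thin-tailed world (Gaussian)
fat = rng.standard_t(df=2, size=n)   # heavy-tailed world (Student-t, 2 dof)

threshold = 6.0  # an extreme event, measured in standard-deviation units
print("extreme events, thin-tailed:", int((np.abs(thin) > threshold).sum()))  # typically 0
print("extreme events, fat-tailed: ", int((np.abs(fat) > threshold).sum()))   # tens of thousands
```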
0:26:08 And then, if you think about, like, robots, you know, we don’t have any data on that.
0:26:13 And you don’t get the data from video because you have to pick stuff up.
0:26:15 You have to do things and so forth.
0:26:20 And then, these humanoid robots are, like, they’re extremely heavy.
0:26:22 You know, the battery problem is hard.
0:26:31 And the models, the models that we have, so just feeding an LLM enough data until it can, like, drive a robot,
0:26:34 I can tell you, it hasn’t been working yet.
0:26:38 And so, then there’s the question, do you need another kind of model?
0:26:41 There are a lot of people working on so-called real-world models.
0:26:46 Fei-Fei Li’s got a new company doing that called World Labs and so forth.
0:26:51 But it’s going to take a while to get there.
0:26:56 And you can tell in the video models that they’re not suited for robots because you can’t, like, do things like move the camera angle
0:26:59 because it doesn’t understand what’s in the picture.
0:27:02 And that’s okay for a video.
0:27:03 It’s not okay for a robot.
0:27:09 So, there’s going to be a lot of things that we have to do before we get to robots.
0:27:16 But, you know, those things are, you know, there’s certainly a lot of effort going on to it.
0:27:26 And in terms of a U.S. competitive space, like, probably the most worrisome thing right now is the entire robot supply chain currently is in China.
0:27:32 So, like, every robot supply chain company is basically Chinese-based.
0:27:36 I think there’s one in Germany, but it was founded by Chinese nationals,
0:27:38 and I think it was bought by China.
0:27:42 You know, just from, like, a strategic, okay, do you get your supply chain cut off kind of thing.
0:27:46 That’s something that, you know, we probably have to work on.
0:27:53 And, you know, it’s not the most complicated thing to build, the supply chain,
0:27:59 but it’s, you know, something that if we don’t do, we’re going to be in the same situation that we’re in with rare earth minerals
0:28:02 and, you know, chips and these kinds of things.
0:28:06 Quick question about crypto before we talk about people.
0:28:07 All right.
0:28:17 Crypto is changing quite a bit and, you know, a lot of momentum in the last year or so.
0:28:23 How do you feel about crypto and blockchain applications?
0:28:31 And do you envision that over the next five years, we may start to see technology being applied in other areas,
0:28:34 apart from where it is right now?
0:28:40 Yeah, so, you know, crypto is a, you know, a super interesting kind of technology.
0:28:48 And probably if Satoshi Nakamoto wasn’t a pseudonymous person who nobody knows who he is,
0:28:54 he probably would have won the Nobel Prize for mathematics and economics on the Bitcoin paper.
0:28:57 So it’s a very interesting and powerful technology.
0:29:05 I think that the way to think about it in the context of AI is if you look at the evolution of computing,
0:29:08 it’s always been in kind of like two pillars.
0:29:12 One is computers and the other is networks.
0:29:18 So starting with like microwaves and integrated circuits and going to, you know, mainframes and SNA,
0:29:23 to PCs and LANs, to the internet and the smartphone,
0:29:30 they’re very different technology bases, but one without the other is never nearly as powerful.
0:29:34 And if you think about AI, what is the network that AI needs?
0:29:41 So first of all, in order for AI to be really valuable, it has to be an economic actor.
0:29:46 So AI agents have to be able to buy things, have to be able to get money, that kind of thing.
0:29:49 And if you’re an AI, you’re not allowed to have a credit card.
0:29:54 You have to be a person, you have to have a bank account, you have to have social security, all these kinds of things.
0:29:58 So credit cards don’t work as money for AIs.
0:30:01 So the logical thing, the internet native money is crypto.
0:30:03 It’s a bearer instrument.
0:30:04 You can use it and so forth.
0:30:14 And we’ve already seen like new AI banks that are crypto-based where AIs can kind of get KYC’d and that kind of stuff.
0:30:20 That’s called, sorry, know your customer, anti-money laundering laws, these kinds of things.
0:30:25 So crypto is kind of like the economic network for AI.
0:30:33 It’s also, you know, if you think about things like bots, how do you know something’s a human?
0:30:37 Crypto is the answer for like proving that you’re a human being.
0:30:41 Crypto turns out to be the answer for provenance.
0:30:44 So like, is this a deep fake?
0:30:46 Like, is this really me?
0:30:48 Or is this like a fake video of me?
0:30:53 How do I verify that it’s actually me?
0:30:59 And then if I verify that it’s actually me, where should that registry of truth live?
0:31:04 Should we trust the U.S. government on what’s true?
0:31:05 Should we trust Google on what’s true?
0:31:10 Or should we trust the game-theoretic mathematical properties of a blockchain on what’s true?
0:31:13 So it’s like a very kind of valuable piece of infrastructure.
0:31:19 And then, you know, finally, if you think about, you know, one of the things that AI is best at,
0:31:23 and then probably the biggest security risk that nobody talks about is just breaking into stuff.
0:31:25 It’s like really, really good.
0:31:29 And not just like, you know, breaking into things technologically,
0:31:32 but also like social engineering and that kind of stuff.
0:31:33 It’s amazing.
0:31:39 And so the current architectures of, you know, where your data is
0:31:41 and where your information is and where your money is,
0:31:45 is kind of not well-suited for an AI world.
0:31:50 They’re just giant honeypots of stuff to steal for somebody who uses the AI.
0:31:55 And the right architectural answer to that is a public key infrastructure
0:31:58 where, you know, you keep your data yourself,
0:32:01 and then you deliver a zero-knowledge proof.
0:32:05 Yes, I’m creditworthy, but you don’t have to see my bank account information
0:32:06 to know that I’m creditworthy.
0:32:07 I’m not going to give you that.
0:32:09 I’m just going to prove to you that I’m not.
0:32:11 And that’s a crypto solution.
0:32:16 So it ends up being like a very, very interesting technology in an AI world.
0:32:29 Two, three more questions, and then we’ll open it up a little bit to the audience.
0:32:33 We started by talking about Anthropic at $183 billion.
0:32:37 Open AI may be closing at half a trillion.
0:32:42 What’s happening with the venture capital industry?
0:32:48 And are traditional models changing for, you know, seed, you know, et cetera?
0:32:56 Or why have we seen that sort of change in the last, I would say,
0:32:59 we started seeing it with the Ubers and Airbnbs,
0:33:01 and now it has gone even further?
0:33:06 Yeah, so I think what’s happened is, and this is another regulatory thing,
0:33:09 you know, no good deed goes unpunished, I would say.
0:33:16 So if you go back to the 90s, it shows you how old I am.
0:33:21 If you go back to the 90s, in those days, like, companies went public.
0:33:26 Amazon went public, you know, I think with a $300 million valuation.
0:33:33 When we went public at Netscape, you know, the quarter prior to when we went public,
0:33:35 we had $10 million in revenue and so forth.
0:33:40 And then what happened was kind of a series of regulatory steps.
0:33:47 And they, you know, some of them are so obscure that you’d never know about them.
0:33:53 Things like order handling rules, decimalization, reg FD,
0:33:58 just like a series of regulatory things.
0:34:04 Sarbanes-Oxley, many of which came after, like, the great dot-com crash and telecom crash.
0:34:11 And the result of those things is that going public became very, very onerous
0:34:12 and very difficult.
0:34:17 So you couldn’t, you definitely couldn’t do it, you know, at a $300 million valuation
0:34:22 because, one, like, the cost of being public just in terms of, like, lawyers, accounting,
0:34:26 D&O insurance, and so forth, was so high, it would be a massive percentage of your revenue.
0:34:27 So that’s thing one.
0:34:37 Then secondly, because of the way particular things like Reg FD changed,
0:34:44 there’s this kind of asymmetric situation between the company and the short sellers.
0:34:50 So the short sellers became much, much more powerful because, you know,
0:34:54 they were able to do things to manipulate the stock where a company could no longer defend itself
0:34:56 in the way it used to be able to defend itself.
0:34:59 And so, you know, that made it more dangerous.
0:35:00 And then, of course, you get sued like crazy.
0:35:05 So all that happened and made kind of companies stay private longer.
0:35:09 And then the result of companies staying private longer was that the private market,
0:35:12 capital markets, massively developed.
0:35:17 So all of these huge money pools started putting money into the private markets.
0:35:19 And so what does that mean?
0:35:25 Well, it means that, okay, look, if OpenAI can raise $30 billion in the private markets,
0:35:27 what is the value of being public?
0:35:30 That you can get sued more?
0:35:33 That you have to do an earnings call every quarter, right?
0:35:38 Like these things, you know, the trade-off becomes a bad trade to go public.
0:35:40 And that’s kind of where we are today.
0:35:46 I think, look, for the good of the country, the best answer is we fix the public markets.
0:35:50 But in the meanwhile, what’s happened is the, you know, as a venture capital firm,
0:35:57 you kind of have to expand your capabilities all the way up into the very, very high end of the markets
0:36:03 and really kind of take over a lot of the role that investment banks have previously had.
0:36:07 And, you know, that’s just, you know, kind of been what’s happened.
0:36:10 We’ll see where it goes.
0:36:13 I think, you know, right now it’s on the train to continue.
0:36:17 I think the other underlying thing in your question is how in the hell is Anthropic worth so much money?
0:36:24 And, look, I think that the answer to that is, like, these products,
0:36:28 the biggest takeaway from the AI products is how well they work.
0:36:34 So, you know, OpenAI went to $10 billion in revenue, like, in four years,
0:36:36 which, like, we’ve never seen anything like that.
0:36:41 So, when you look at that, you say, well, why is that?
0:36:43 And it’s like, well, how well does ChatGPT work?
0:36:45 Like, it works awesome.
0:36:49 Like, way better than other technologies, products you bought in the past.
0:36:51 Like, the stuff works really well.
0:36:53 Cursor works unbelievably well.
0:37:00 And so, I think that because the products work so much better than anything that we’ve had in the past,
0:37:01 they grow much faster.
0:37:05 And as a result of them growing much faster, the valuations grow much faster.
0:37:16 But the numbers are there to justify the valuations in a way that, in the dot-com era, they weren’t.
0:37:18 So, it’s a different phenomenon.
0:37:22 Now, like, in AI land, if there’s another big breakthrough in AI, then, you know,
0:37:29 somebody could get dramatically better products, and then the valuations aren’t sustainable and so forth.
0:37:36 But that’s very theoretical compared to, I could go on for days about what exactly happened during the dot-com era,
0:37:38 but this isn’t the same.
0:37:41 You know, it may have issues, but they’re not the same issues.
0:37:47 So, there were at least two students that brought books of yours for you to sign when we stepped in.
0:37:51 So, you wrote the book, The Hard Thing About Hard Things.
0:37:53 Yeah, ain’t nothing easy.
0:37:54 Yeah.
0:38:03 And for the many MBA students in the audience, you know,
0:38:08 what’s one of the sort of counterintuitive hard things that you think about and people need to know about?
0:38:11 There are so many things.
0:38:23 Actually, you know, my friend Ali Ghodsi, who runs Databricks, brought this up, like, a couple of days ago,
0:38:31 he goes, Ben, like, you know, one of the best things you told me was, you know, I can’t develop my people,
0:38:34 which I thought was like, oh, wow, I said that.
0:38:43 But I actually had written a post on it, and it’s a kind of a CEO thing that’s not true for managers.
0:38:45 And let me explain to you what I mean by that.
0:38:54 So, you know, when I was, if you’re a manager and, like, you’re a product manager or, like, an engineering manager or this kind of thing,
0:38:58 you know exactly how to do the job that you hire people into.
0:39:01 And so you can develop them.
0:39:02 You can train them.
0:39:08 You can teach them to be a better engineer, a better engineering manager, or a better, you know, accountant or whatever it is.
0:39:16 But as CEO, you know, you’re hiring, like, a CFO, a head of HR, a head of marketing.
0:39:19 You probably don’t know how to do any of those jobs.
0:39:24 So, like, if they’re not doing a good job and you’re spending your time developing them,
0:39:26 you don’t know how to do that job.
0:39:27 Like, what are you doing?
0:39:33 And the bigger problem is you’re now distracted.
0:39:35 One, you’re not going to improve them because you don’t know what you’re doing.
0:39:41 And then secondly, you’re taking time away from what you need to be doing.
0:39:45 If you think about what the CEO needs to do, you have to set the direction for the company.
0:39:47 They’ve got to articulate that.
0:39:49 They’ve got to make sure the company is organized properly.
0:39:52 They’ve got to make sure the best people are in place.
0:39:56 They have to make decisions that only they can make.
0:39:59 And if they don’t make them, then the entire company slows down.
0:40:06 So, if you’re not doing that and trying to develop someone who you have no idea how to develop, that’s just a huge mistake.
0:40:10 And it was a very sad lesson for me.
0:40:15 In fact, I wrote a post on it called The Sad Truth About Developing Executives.
0:40:20 And I think the rap quote that I used was from Weezy.
0:40:26 And it was, the truth is hard to swallow and hard to say, too.
0:40:30 Now, I graduated from that bullshit, and I hate school.
0:40:32 And that’s how I feel about that lesson.
0:40:35 I just hate the fact that I learned that, but it’s very true.
0:40:46 In another book that you wrote, which is about what you do is who you are, you focus on culture.
0:40:52 This is something that we speak a lot about here as well.
0:40:57 But what’s, in some sense, some of the things that people need to be thinking about?
0:41:08 How they set culture, how do they influence culture within their organizations, the importance of that, and how you have actually put it to work in your own organization?
0:41:14 Yeah, so I think that the biggest mistake people make on culture is they think of it as this very abstract thing.
0:41:24 And my favorite quote on this is from the samurai from Bushido, where they say, like, a culture is not a set of beliefs, it’s a set of actions.
0:41:29 And when you think about it in the organizational context, that’s the way you have to think about it.
0:41:34 So, like, you know, people go, oh, well, our culture is integrity, or we have each other’s backs, or this.
0:41:37 And it’s like, right.
0:41:42 Like, everybody can interpret that however they want.
0:41:49 And, you know, so your culture is probably hypocrisy, if that’s how you define it, because nobody’s doing that.
0:42:04 You know, and by the way, like, the whole thing on these kinds of, you know, virtues, as I would just call them, is that you only actually break them under stress.
0:42:08 So it’s like, how many of you, like, think you’re honest?
0:42:10 Like, you’re an honest person.
0:42:14 Okay, now think about, how many people do you know who you would consider to be totally honest?
0:42:19 Like, I bet it’s a way lower percentage than the people who raise their hand.
0:42:20 And why is that?
0:42:26 It’s because honesty doesn’t, everybody’s honest until it’s going to cost you something.
0:42:27 Right?
0:42:30 Oh, are you going to be honest if it’s going to cost you your job?
0:42:32 Are you going to be honest if it costs you your marriage?
0:42:33 Are you going to be honest in that situation?
0:42:35 That’s a whole other thing, right?
0:42:40 And so, like, honesty, all the virtues are like that.
0:42:42 They’re only kind of tested under stress.
0:42:46 And so you can’t define, like, the ideal of something you want.
0:42:49 You have to define the exact behavior.
0:42:52 Like, how do you want people to show up every day?
0:42:55 Because culture is a daily thing.
0:42:56 It’s not a quarterly.
0:42:58 You don’t put in the annual review.
0:43:00 Like, do you follow the culture?
0:43:02 It’s like, well, yeah, sure.
0:43:06 I mean, like, who even knows how to evaluate it at that time?
0:43:08 So it’s what do you do, like, every day?
0:43:13 And so you want to think about what are the behaviors that indicate the thing that you want?
0:43:27 And so one example I’ll give you from the firm is, you know, one of the difficult things that we really wanted to do as a venture capital firm is, like, let’s be very, very respectful of the people building the companies,
0:43:34 the entrepreneurs, and never kind of make them feel small in any way.
0:43:38 And every venture capital firm would say, you want to do that.
0:43:42 But the problem with venture capital is, I have the money.
0:43:44 You have an idea.
0:43:45 You come to me to get the money.
0:43:47 I decide whether you get the money or not.
0:43:56 So, like, if that’s my daily thing, then I might feel like the big person and you might, you know, I might want to make you feel like the small person, you know.
0:43:58 Like, no, I don’t think that’s a good idea.
0:44:01 And so, like, how do you stop that?
0:44:05 So, you know, we put a thing, like, I can tell people not to do that.
0:44:09 But, like, there’s all this kind of other incentive that’s making them do that.
0:44:15 So what I said is, like, if you’re ever late to a meeting with an entrepreneur, one, it’s a $10 a minute fine.
0:44:16 I don’t care if you had to go to the bathroom.
0:44:18 I don’t care if you’re on an important phone call.
0:44:19 Like, you’re five minutes late.
0:44:20 You owe me $50 right now.
0:44:21 And you pay on the spot.
0:44:23 Why did I do that?
0:44:30 Well, because I want you to think that nothing is more important in your job or in your day than being on time for that meeting with that entrepreneur.
0:44:36 Because what they’re doing is extremely hard, and you have to respect that, and you have to respect it by showing up on time.
0:44:38 And I don’t give what your excuse is.
0:44:42 If you were getting married, you wouldn’t let having to go to the bathroom make you late to the altar.
0:44:43 So, like, I know you can do it.
0:44:45 So, like, don’t give me that.
0:44:49 And that programs people, right, because every day you’re meeting with entrepreneurs.
0:44:52 Like, you know, okay, this is what we’re about.
0:44:52 We’ve got to do that.
0:45:01 Similarly, you know, on that, I’m like, look, if somebody wants to do something larger than themselves and make the world a better place, we’re for that.
0:45:02 We’re dream builders.
0:45:03 We’re not dream killers.
0:45:10 So, if you get on X and say, oh, that’s a dumb idea that, you know, they’re selling dollars for 85 cents, you’re fired.
0:45:11 Like, that’s it.
0:45:11 Gone.
0:45:12 I don’t care.
0:45:14 Because we don’t do that.
0:45:24 And so, like, you put in rules that seem maybe absurd, but they set a cultural marker for, like, okay, this is who we are.
0:45:26 And if you want to come work here, you’ve got to be like that.
0:45:30 And so, that’s, you know, that’s a little kind of way you think about culture.
0:45:31 I wrote a whole book on it.
0:45:35 So, if you’re interested in this, there’s many other aspects.
0:45:43 But I think that the worst thing you can do is just go have an off-site and, like, yabba-dabba-doo about, like, the values that you all have and write up
0:45:49 a bunch of, you know, flowery language about how, you know, you’re like this.
0:45:49 Okay.
0:45:52 I promise we’ll end on time.
0:45:54 So, I think we’re going to end here.
0:45:57 Ben, thank you so much for coming in.
0:45:58 Thank you.
0:46:02 Thanks for listening to the A16Z podcast.
0:46:08 If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com slash A16Z.
0:46:10 We’ve got more great conversations coming your way.
0:46:12 See you next time.
0:46:29 This podcast has been produced by a third party and may include paid promotional advertisements, other company references, and individuals unaffiliated with A16Z.
0:46:37 Such advertisements, companies, and individuals are not endorsed by AH Capital Management, LLC, A16Z, or any of its affiliates.
0:46:43 Information is from sources deemed reliable on the date of publication, but A16Z does not guarantee its accuracy.

Ben Horowitz reveals why the US already lost the AI culture war to China—and it wasn’t the technology that failed. While Biden’s team played Manhattan Project with closed models, Chinese developers quietly captured the open-source heartbeat of global AI through DeepSeek, now running inside every major US company and university lab. The kicker: Google and OpenAI employ so many Chinese nationals that keeping secrets was always a delusion, but the policy locked American innovation behind walls while handing cultural dominance to Beijing’s weights—the encoded values that will shape how billions of devices interpret everything from Tiananmen Square to free speech.

 

Resources:

Follow Ben Horowitz on X: https://x.com/bhorowitz

Follow Costis Maglaras on X: https://x.com/Columbia_Biz

 

Stay Updated:

If you enjoyed this episode, be sure to like, subscribe, and share with your friends!

Find a16z on X: https://x.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX

Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
