DeepSeek: America’s Sputnik Moment for AI?

AI transcript
0:00:03 R1 comes out, and it looks pretty good.
0:00:06 That’s not the best layer to monetize it.
0:00:09 In fact, there might not be any money in that layer.
0:00:11 I have yet to see the GPT wrapper.
0:00:15 The Internet is such a great example because there’s no way this doesn’t play out like the Internet.
0:00:21 It’s actually a very big step when it comes to the proliferation of this model.
0:00:26 It’s a good reminder that there are always pockets of people innovating.
0:00:30 WorldCom and AT&T did not predict the Internet was going to come out of universities.
0:00:33 Two words have caught the Internet by storm.
0:00:35 Deep? Seek.
0:00:40 Specifically, a Chinese reasoning model that seems to rival others at the frontier.
0:00:42 But that’s not all.
0:00:47 Alongside their R1 model that dropped in late January came a fully open-source MIT license,
0:00:54 a paper outlining its methods that some claim may be 45 times more efficient than other methods,
0:01:00 an alleged $5.6 million cost, the release of reasoning traces, a follow-on image model,
0:01:05 and the fact that all of this was released by a hedge fund in China.
0:01:09 Since then, there have been so many claims, and claims about those claims,
0:01:13 that many are already referring to this as a Sputnik moment.
0:01:21 But if you think about it, the reason that Sputnik, the first satellite launched into low Earth orbit by the Soviet Union in 1957,
0:01:29 the reason that Sputnik still matters in 2025 is because America took all the actions that it did in ’58, ’59, ’60,
0:01:35 a moon landing speech in ’62, all the way up to 1969, when we reached the moon.
0:01:39 Those are the actions that made Sputnik Sputnik.
0:01:41 A wake-up call was responded to.
0:01:46 So, now that we’re here, how should we, whether you’re a casual listener, a founder,
0:01:53 a researcher at a top AI lab, or a policymaker, not just react to this message, but act?
0:01:59 Joining us to discuss this and tease out the signal from the noise are A16Z General Partner
0:02:05 and pioneer of software-defined networking, Martin Casado, plus Steven Sinofsky,
0:02:12 longtime Microsoft exec, including being the president of the Windows division between 2006 and 2012.
0:02:17 Steven, by the way, has also been a board partner at A16Z for over a decade
0:02:20 and shares his learnings online at Hardcore Software,
0:02:26 where he recently wrote a viral article called “DeepSeek Has Been Inevitable and Here’s Why.”
0:02:29 Of course, we’ll link to that in the show notes.
0:02:33 Both Martin and Steven have been on the front lines of prior computing cycles,
0:02:36 from the switching wars to the fiber build-out,
0:02:41 and have even witnessed the trajectory of companies like Cisco, AOL, AT&T,
0:02:43 even WorldCom.
0:02:45 So what really drove this DeepSeek frenzy?
0:02:48 And more importantly, what should we take away?
0:02:52 Have bigger and better frontier models been optimizing for the wrong thing?
0:02:54 And where does value in the stack accrue?
0:02:59 Today, we address those questions through the lens of Internet history.
0:03:02 I hope you enjoy.
0:03:06 As a reminder, the content here is for informational purposes only.
0:03:09 Should not be taken as legal, business, tax, or investment advice,
0:03:12 or be used to evaluate any investment or security,
0:03:16 and is not directed at any investors or potential investors in any A16Z fund.
0:03:20 Please note that A16Z and its affiliates may also maintain investments
0:03:22 in the companies discussed in this podcast.
0:03:25 For more details, including a link to our investments,
0:03:28 please see a16z.com/disclosures.
0:03:35 It’s been a busy few weeks.
0:03:38 I don’t know about you guys, my Twitter feed, podcast,
0:03:40 everything DeepSeek everywhere.
0:03:45 Maybe unsurprisingly, but what’s your TLDR in terms of what came out
0:03:49 and maybe also your take on why it blew up in the way it did,
0:03:53 because we’ve seen lots of releases in the last, let’s say, two years since ChatGPT.
0:03:57 The quick overview, of course, is out of essentially nowhere,
0:04:02 a small hedge fund, quasi-computer science research organization in China,
0:04:05 releases a whole model.
0:04:08 Now, those in the know know it didn’t just appear.
0:04:11 There’s a year and a half or so of buildup.
0:04:12 And they’re really good.
0:04:15 And nobody really noticed.
0:04:20 But it appeared to take the whole rest of the world by surprise.
0:04:23 And I think there were two big things about it
0:04:25 that really caught everybody’s attention.
0:04:29 One was how did they go from nothing to this thing?
0:04:33 And it seems to be roughly comparable in capabilities
0:04:36 with everybody else.
0:04:40 And this number got thrown around that it only cost $5 million.
0:04:42 The number is $5.6 million.
0:04:45 The number is irrelevant because it turns out they wrote a paper
0:04:50 and they said, hey, we innovated in this particular set of things on training,
0:04:54 which even here, it was like, oh, well, that was pretty clever.
0:04:57 And then because of the weirdness that we don’t need to get into
0:05:01 of the financial public markets and how this whole thing happened on a Friday,
0:05:05 the whole weekend was like everybody whipping themselves into a frenzy
0:05:11 so they could wake up Monday morning and trade away a trillion dollars of market cap,
0:05:15 which seems to be a complete overreaction and craziness.
0:05:17 But that’s not what we’re here to talk about.
0:05:19 To your point, there’s a lot of moving parts here and there’s a lot to consider.
0:05:21 It’s actually a fairly complicated situation.
0:05:25 So there has been this view that the traditional one-shot LLMs
0:05:29 were starting to maybe asymptote, like GPT-4, there hadn’t been a big advancement.
0:05:31 But then there’s just going to be this new breath of life
0:05:35 and OpenAI released a reasoning model, which is o1,
0:05:37 and everybody’s very excited about that.
0:05:41 And so in this grand tapestry we’re considering,
0:05:43 you have all this excitement about o1
0:05:46 and of how that’s going to drive compute costs and NVIDIA,
0:05:49 and then R1 comes out and it looks pretty good.
0:05:51 And then all of a sudden they’re saying, well, if you can do it just as cheap,
0:05:54 is this going to actually drive the next wave and so forth.
0:05:58 So there’s a lot of build-up to o1, which lent itself to the R1 hype.
0:06:01 And then I think to your point, people didn’t know really what to think about it.
0:06:04 And I agree with you, it was a total market overcorrection.
0:06:07 By the way, it’s also worth pointing out that in addition to people saying,
0:06:11 “Wow, this is a great model,” there’s a lot of theories and rumor around,
0:06:15 “Oh, well, maybe this is the CCP doing a psyop.
0:06:17 Maybe it costs a lot more.
0:06:18 Maybe this is very intentional.
0:06:19 It was right by Chinese New Year.
0:06:20 There’s just a ton of rumors.
0:06:23 Maybe we’ll do our best to dissect everything going on.”
0:06:24 Yeah, maybe let’s just do that.
0:06:26 Because to both of your points, there was a lot here, right?
0:06:28 There was the performance element.
0:06:29 There was these quotes around costs.
0:06:30 There’s the China element.
0:06:31 There’s the virality.
0:06:33 It hit number one in the App Store.
0:06:34 There’s also shipping speed.
0:06:37 I think Martin shared that they released an image model shortly after
0:06:39 and then it was released on a Friday.
0:06:41 So there’s this huge mixture of people reacting,
0:06:43 some people who know what they’re talking about
0:06:45 and some people who don’t, quite frankly.
0:06:48 And so we’re like 10 days or so out from this release,
0:06:51 which by the way, as both of you said, that was the R1 release.
0:06:55 There was the V3 release two months ago, which was the base model.
0:06:57 So now that we’re a little bit further out,
0:06:59 what’s the signal from the noise?
0:07:02 So maybe I’ll give you the lens that Chinese researchers are smart.
0:07:07 There’s one lens, the lens that I hold, which is China has great researchers.
0:07:10 DeepSeek has actually released a number of SOTA models, including V3,
0:07:13 which is actually probably a more impressive feat.
0:07:15 It’s almost like their GPT-4.
0:07:20 And oh, by the way, to create one of these chain of thought models,
0:07:23 these reasoning models, you need to have a model like that,
0:07:25 which they had done and we had known about.
0:07:30 All of the contributions that they’ve done have been in the public literature somewhere,
0:07:32 just nobody had really aggregated.
0:07:35 So there’s a thought that I hold, which is this is a very smart team
0:07:38 that has been executing very, very well for a long time in AI.
0:07:39 They are some of the top researchers.
0:07:42 The fact that they spent $6 million just doing the chain of thought
0:07:45 is actually not out of whack with what Anthropic has now said
0:07:47 they’ve spent, and OpenAI has said that they spent.
0:07:51 And so this is a meaningful contribution from a good team in China.
0:07:53 And so it means something and we should respond to it.
0:07:55 So some of the outcry is warranted.
0:07:56 I do think that we respond to it,
0:07:59 but I don’t think for the reasons a lot of people are saying.
0:08:00 I completely agree with that.
0:08:04 And in fact, you also saw the people outside of that team in China
0:08:08 sort of piling on to try to make it more intergalactic than it was.
0:08:11 I mean, my favorite, old friend of mine, Kai-Fu Lee,
0:08:15 comes out on X and says something about this is why I said two years ago,
0:08:18 Chinese engineers are better than American engineers.
0:08:23 But the truth is to your point about reaching some asymptotic level of progress.
0:08:28 Yeah, like the previous base models, like the GPT lineage,
0:08:30 seem to have asymptoted around GPT-4.
0:08:31 Right.
0:08:34 But what’s super interesting about that is that asymptote was true.
0:08:37 If you looked at it through the lens of the function
0:08:40 that everybody was optimizing for, which is to my view,
0:08:43 this kind of crazy hyperscaler view of the world,
0:08:47 which is we need more compute and more data, more compute, more data.
0:08:48 And we’re just on that loop.
0:08:50 And a lot of people from the outside were like,
0:08:52 well, you are going to run out of data.
0:08:55 And I would just as, you know, a micro computer person was like,
0:08:58 well, at some point you’re going to end up breaking the problem up
0:09:01 to the seven billion endpoints of the world,
0:09:04 which will have vastly more compute than you can ever squeeze into
0:09:07 one giant nuclear power data center.
0:09:12 And so a lot of what they did was sort of a step function change,
0:09:15 just sort of the improvement, just a change in the trajectory.
0:09:16 Yes.
0:09:19 And that to me is the part where the hyperscalers needed to take
0:09:22 a deep breath and say, okay, why did we get to where we were?
0:09:27 Well, because you were Google and Meta and OpenAI funded by Microsoft,
0:09:30 which all had like billions and billions of dollars.
0:09:35 So you obviously saw the problem through the lens of capital and data.
0:09:37 And of course you had English language data,
0:09:39 which there’s more of than anybody else.
0:09:40 So you could keep going.
0:09:43 The way I thought of it is when Microsoft was small,
0:09:45 we used to just decide, is it a small problem,
0:09:47 a medium problem or a large problem?
0:09:51 And I remember at one point we started joking that we lost the ability
0:09:54 to understand small and medium problems and solutions.
0:09:58 And we only had like large, which was just trivial,
0:10:01 and then huge and like ginormous.
0:10:04 And our default was ginormous because we thought,
0:10:07 well, we could do it and no one else could.
0:10:09 And that’s a strategic advantage.
0:10:13 I feel like that’s where the AI community in the West, if you will,
0:10:15 got just a little carried away.
0:10:18 And it was just like every startup that has too much money,
0:10:20 the snacks get a little too good.
0:10:23 So I’ve heard two theories of why they were able to do this.
0:10:25 One of them is this constraint one that you’ve said,
0:10:26 I think which is actually very true,
0:10:29 which is we’ve just been using this blunt instrument of compute
0:10:30 and blunt instrument of all data.
0:10:33 And we just haven’t thought about a lot of engineering under constraints.
0:10:36 The second theory I heard, I don’t know if it’s true,
0:10:40 but it’s tantalizing, which is the reason V3 is so good
0:10:42 is actually because it has access to the Chinese internet
0:10:45 as well as the public internet, which is actually an isolated thing.
0:10:48 We don’t really have access to the internal Chinese internet.
0:10:51 And we certainly don’t train from it as far as I know, which they do.
0:10:53 So it could be the case both things are true.
0:10:55 They could have had a data advantage.
0:10:57 They definitely have the engineering constraint.
0:11:01 Even on the data, their starting point is the Chinese internet, per se,
0:11:03 that has much more structure to it.
0:11:05 It’s a much better training set.
0:11:06 That’s a great point.
0:11:11 And inasmuch as human-annotated data is important here for chain of thought,
0:11:14 you do want experts saying, here’s how I would reason about a problem.
0:11:16 I mean, this is what this whole chain of thought is.
0:11:17 It’s basically what are the reasoning steps.
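[Editor's note: as a toy illustration of what "reasoning steps" means here, a chain-of-thought training example pairs a question with annotator-written intermediate steps, not just the final answer. This is a hypothetical sketch for illustration only, not DeepSeek's or anyone's actual data format.]

```python
# Toy chain-of-thought example: the training target includes the
# intermediate reasoning steps, not just the final answer.
# (Hypothetical schema, for illustration only.)

question = "A train travels 120 km in 2 hours. How far does it go in 5 hours?"

# The expert annotation: how a person would reason through the problem.
chain_of_thought = [
    "Speed = distance / time = 120 km / 2 h = 60 km/h.",
    "Distance in 5 hours = 60 km/h * 5 h = 300 km.",
]

answer = "300 km"

# The model is trained to emit the steps before the answer.
training_example = {
    "prompt": question,
    "completion": "\n".join(chain_of_thought) + f"\nAnswer: {answer}",
}

print(training_example["completion"])
```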
0:11:21 If you want to look at a place to arbitrage really smart educated people
0:11:24 and relatively low cost, it’s hard to beat China globally.
0:11:27 And so they definitely have access to a bunch of data annotated by potentially highly educated
0:11:29 people, which is very relevant here.
0:11:32 And so I happen to believe that this did not come out of nowhere.
0:11:33 It’s not a psyop.
0:11:35 This is a great team taking advantage of what it has.
0:11:39 But there are still things that are very significant about it that are worth talking about.
0:11:41 For example, the license is very significant.
0:11:46 The fact that they decided to release the reasoning steps is very significant.
0:11:49 Those are two things that you’re not seeing headlines about, right?
0:11:52 You’re seeing headlines about all the other things that we just talked about.
0:11:54 You said the reasoning traces, those were released,
0:11:57 which with the comparable o1 were previously not.
0:11:58 Right.
0:11:59 And then the open source license.
0:12:04 So there’s two things that are pretty remarkable about DeepSeek R1
0:12:06 that have implications on adoption.
0:12:10 We haven’t seen a license this permissive recently for a SOTA model,
0:12:14 which basically MIT license, which is like one page, you can do anything, right?
0:12:16 It’s like free as in free beer, for real.
0:12:17 Yeah, for real, for real.
0:12:20 I think at a16z we have one of the largest portfolios of AI companies,
0:12:23 both at the model layer and at the app layer.
0:12:27 And I will say any company at the app layer is using many models.
0:12:29 I have yet to see the GPT wrapper.
0:12:30 They’re all using a lot of models.
0:12:33 They do use open source models and licenses really matter.
0:12:36 And so this is definitely going to result in a lot of proliferation.
0:12:41 The second thing is, so a reasoning model actually thinks through the steps of the problem,
0:12:45 and it uses that chain of reasoning or chain of thought to come up with deeper answers.
0:12:50 And when OpenAI released 01, they did not release that chain of thought.
0:12:54 Now, we don’t know why they didn’t do it, but it just turns out that that chain of thought,
0:13:00 if you have access, it allows you to train smaller models very quickly and very cheaply.
0:13:01 And that’s called distilling.
0:13:05 And so it turns out that you can get very, very high quality smaller models
0:13:07 by distilling these public models.
0:13:12 And the implications are both that this is just more useful for somebody using R1,
0:13:16 but also you get a lot more models that can run on a lot smaller devices.
0:13:17 So you just get more proliferation that way.
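[Editor's note: the distillation step described here can be sketched concretely. Below is a minimal, hypothetical sketch assuming the teacher's released traces arrive as prompt/reasoning/answer records; the field names and helper are invented for illustration, and a real pipeline would feed these pairs into an actual fine-tuning framework rather than just building them.]

```python
# Minimal sketch of distilling from released reasoning traces:
# turn the teacher's (prompt, reasoning, answer) records into
# supervised fine-tuning pairs for a smaller student model.
# All field names here are hypothetical, for illustration only.

teacher_outputs = [
    {
        "prompt": "What is 17 * 24?",
        "reasoning": "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.",
        "answer": "408",
    },
    {
        "prompt": "Is 97 prime?",
        "reasoning": "97 is not divisible by 2, 3, 5, or 7, and 11^2 > 97.",
        "answer": "Yes",
    },
]

def to_sft_pair(example):
    """Turn one teacher trace into a (prompt, target) fine-tuning pair.
    The student learns to reproduce the reasoning, then the answer."""
    target = example["reasoning"] + "\nAnswer: " + example["answer"]
    return example["prompt"], target

sft_dataset = [to_sft_pair(e) for e in teacher_outputs]

for prompt, target in sft_dataset:
    print(prompt, "->", target.splitlines()[-1])
```

The point being made in the episode is that because R1 ships these traces, anyone can build such a dataset cheaply, which is exactly what o1 withheld.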
0:13:23 So it’s actually a very big step when it comes to the proliferation of this model.
0:13:24 Absolutely.
0:13:28 And I think that there’s this tendency to peg yourself at, oh, it should just be open,
0:13:32 but without really defining it, which I think is important in this case.
0:13:36 And I think because of where they came from and that they don’t have a business model,
0:13:39 that was part of what was unique about this was it was a hedge fund,
0:13:42 like almost a side project, but not really a side project.
0:13:45 It has this effect that like, well, we’re just going to give the whole thing away.
0:13:49 And the rest of the companies are still trying to figure out their revenue models,
0:13:51 which I would argue was probably premature.
0:13:56 And it starts to look a little to me like, hey, let’s charge for a web server.
0:14:00 And it’s like the business of serving HTTP, not a great business.
0:14:05 And I think everybody just got focused on the first breakthrough, which was the LLM,
0:14:09 which if you look back at the internet, what exactly happened was everybody got very focused
0:14:13 on monetizing the first part of the internet, which was HTML and HTTP.
0:14:17 And then along came, I don’t know, Microsoft and a bunch of other companies to say,
0:14:20 that’s not the best layer to monetize it.
0:14:23 In fact, there might not be any money in that layer.
0:14:27 And the real money is going to be in shopping and in plane tickets and in television.
0:14:33 And even other companies, AT&T got wound up trying to monetize even lower layer.
0:14:37 But that’s not how you’re going to get to 7 billion endpoints.
0:14:42 And I think that the licensing model really matters because what’s going to happen is
0:14:45 that there’s going to end up being some level of standardization.
0:14:48 Now, I don’t know where in the stack or in what level,
0:14:51 but there is going to be some level of standardization.
0:14:56 And the licensing model for the different layers is going to start to matter a lot.
0:15:03 Anyone who was around during the internet remembers the battles over the different GNU licenses, v2, v3,
0:15:05 the openness of the license.
0:15:10 You’d be doing a dissertation, and it turns out even with your dissertation,
0:15:13 which part of it and how you released it was a huge issue
0:15:16 because it could make or break a whole approach.
0:15:19 And I think that the US industry lost sight of that importance
0:15:24 because they got so used to this model of like open just means we’re a business
0:15:28 and we pick and choose what we throw out there as evidence that we’re an open company.
0:15:34 And I think that view isn’t aligned with how technology has been shown to evolve
0:15:37 in an era where there’s no cost for distribution.
0:15:41 Before, when there was a cost for distribution, it turns out the free model was irrelevant
0:15:44 because you still couldn’t figure out how to get it to anybody.
0:15:48 I do want to take the other side of this because I actually tend to agree with you.
0:15:52 And so what you just said is, A, it could be the case that the model’s the wrong place to focus
0:15:54 and everybody thinks there’s a lot of value in there.
0:15:58 And so they’re playing all these cute games with openness as opposed to distribution.
0:16:00 And that could very well be true.
0:16:03 But there’s another view, which is actually the models really are pretty valuable.
0:16:06 And in particular, the model itself isn’t an app,
0:16:08 but it could be the case that if you’re building an app,
0:16:10 you need to vertically integrate into the model.
0:16:12 It could be the case.
0:16:15 And therefore, like if I’m building the next version of ChatGPT
0:16:17 or we just had today deep research launch,
0:16:21 it could be that the apps actually require you to own the model.
0:16:25 And in that case, DeepSeek is less relevant because they’re not building apps.
0:16:29 And then this means that the impact to the OpenAIs or Anthropics is not as great.
0:16:32 And so I do think that there’s this fork that we don’t know the answer.
0:16:35 Fork number one is maybe the models do get commoditized.
0:16:38 You need to focus at the app layer and then the license doesn’t matter.
0:16:41 Or the models really matter up the stack,
0:16:45 in which case the whole DeepSeek phenomenon really isn’t as impactful
0:16:47 an event as people are making it.
0:16:51 So I’m going to build on that just because I want to say you’re right both times.
0:16:53 No, no. And the variable is time.
0:16:57 And the internet is such a great example because there’s no way this doesn’t play out like the internet.
0:16:59 Like it just has to.
0:17:04 And what we saw was for a while, building one app seemed like a crazy thing
0:17:07 because you had to own Windows and you had to own Office.
0:17:12 But then a new app came along that didn’t own any of those and it was search.
0:17:18 And so that’s why I think a lot of people also because of age and what they live through
0:17:23 immediately jump to, oh, these LLMs are going to replace search.
0:17:26 But it turns out that’s actually going to be really, really hard
0:17:31 because there’s a lot of things that search does that the models are bad at, really bad.
0:17:36 And so what’s going to happen is a new app is going to emerge.
0:17:39 Then when the new app emerges, that’s going to get vertically integrated.
0:17:42 And the research app is a super good example of that.
0:17:45 And then all of a sudden other apps are going to spring up.
0:17:49 Oh, there’s Google Maps and there’s search and then there’s Chrome.
0:17:53 And then it goes back and eats the things that it couldn’t do before.
0:17:56 And I really feel like that’s the trajectory we’re on now.
0:17:59 It’s still a matter of where and what integrates.
0:18:02 But the thing is, is that the apps that ended up mattering on the internet
0:18:05 literally didn’t exist before the internet.
0:18:07 And I think that’s what people are losing sight of.
0:18:08 Same with mobile.
0:18:09 Same with mobile.
0:18:11 They’re all new. Everybody forgets.
0:18:13 There were no social apps.
0:18:14 Okay, fine, I get it.
0:18:15 There was GeoCities and a bunch of other stuff.
0:18:20 But people get so caught up on new thing, it’s going to replace something.
0:18:22 Zero sum thinking is so dangerous.
0:18:25 Zero sum, and you can think of everything as this spectrum.
0:18:30 And when something new comes along, the whole spectrum gets divided up differently,
0:18:33 which is what Google said when they bought Writely.
0:18:34 What are people going to do with the internet?
0:18:35 They’re going to type stuff.
0:18:36 And what are they going to type?
0:18:39 They’re going to type it, but they’re going to type it with other people.
0:18:40 Okay, so this is great.
0:18:42 So we’re actually seeing this happening now, which was,
0:18:45 someone will come up with a model that does something like in a consumer space.
0:18:46 Let’s say like text to image.
0:18:51 And then it turns out that over time people are like, oh, it’s kind of like Canva.
0:18:52 Exactly.
0:18:54 It’s like slowly do the AI native version.
0:18:56 Just like the cloud native version of Word.
0:18:59 The AI native version of these kind of existing apps.
0:19:03 The reason it’s important is because it looks like Canva, or it looked like Word,
0:19:05 or it looked like PowerPoint, or it looked like Excel.
0:19:08 But what’s important is that they’re actually different.
0:19:10 Nothing is going to ever be PowerPoint again.
0:19:11 Why?
0:19:15 Because PowerPoint, the whole reason for existing was to be able to render something
0:19:17 that couldn’t ever be rendered before.
0:19:22 And so all of the whole product, it’s 3,000 different formatting commands.
0:19:24 Like literally, that’s not a number I made up.
0:19:29 3,000 ways to kern and nudge and squiggle and color and stuff.
0:19:32 And actually it turns out you don’t need to do any of that in AI.
0:19:34 So the whole product isn’t going to have any of those things.
0:19:35 Yeah, exactly.
0:19:39 And then it turns out all those things make it really hard to make it multi-user.
0:19:44 And so then when Google comes along and starts to bundle up their competitor
0:19:47 that’s going to replace it, they’re focused on sharing.
0:19:48 So, Steven, let me ask you this.
0:19:50 You said something really interesting.
0:19:51 I’m good.
0:19:52 I did.
0:19:54 Which is this has to pan out like the internet.
0:19:58 And you guys have used examples of different companies, the mobile wave, cloud era,
0:20:00 those are things we can learn from.
0:20:03 But I just want to probe you, is there something different here?
0:20:04 To bring it back to DeepSeek.
0:20:07 This is very important to realize the capabilities of China.
0:20:08 It’s a very credible player.
0:20:12 But I don’t think that R1 itself as a standalone is going to have that deep of an impact.
0:20:16 But on the internet, so there’s actually these parallels.
0:20:21 When it comes to capital build out that you see in the AI, which is it takes a lot of investment.
0:20:24 And there’s a special parallel that Mark Andreessen actually reminded me of it,
0:20:28 which people don’t tend to see as well, which is in the early days of the internet,
0:20:35 like the mid to late 90s, a lot of investors, a lot of big money, I think banks or sovereigns,
0:20:39 they wanted exposure to the internet, but they had no idea how to invest in software companies.
0:20:41 Like what are these new software companies?
0:20:42 Who are these people?
0:20:43 Like they’re all private companies.
0:20:44 So what did all of them do?
0:20:46 They all invested in fiber infrastructure.
0:20:48 So we’re starting to see this thing again.
0:20:51 We see a lot of banks and big investors.
0:20:54 Listen, we want to build up data centers because they don’t know how to invest in startups.
0:20:56 Like we know how to invest in startups, right?
0:20:59 So on one hand, you can be like, oh, we’re going to see all of this kind of capital expenditure
0:21:03 and all this capital expenditure is going to go into physical infrastructure.
0:21:07 And therefore we’re going to have another fiber glut equivalent, but a data center glut.
0:21:12 So the counter to that point where I think is different is at the time of the fiber build out,
0:21:18 you’ve had one company which happened to be cooking its numbers where it had a ton of debt to build all of this out.
0:21:23 When the price of fiber dropped, that company went out of business and that caused a huge issue.
0:21:26 You have a much better foundation for the AI wave.
0:21:29 The primary investors are the big three cloud companies.
0:21:31 They’ve got hundreds of billions of dollars on the balance sheet.
0:21:33 Even if all of this goes away, they’ll be fine.
0:21:35 Nvidia can take a price dip.
0:21:36 Nvidia will be fine.
0:21:41 So I don’t think we’re heading to the same type of glut and crash that people have,
0:21:46 which is very appealing to draw parallels to the internet for that I don’t think is there.
0:21:48 Oh, I am completely with you on that.
0:21:53 That part of it is going to look like the amount that Google invested in the early 2000s
0:21:55 or the amount that Facebook invested five years later.
0:22:00 Or people forget that Microsoft poured, I don’t know, 30, 40 billion dollars into Bing.
0:22:03 And it’s still number three or whatever, but it still doesn’t matter.
0:22:05 Yeah, I would bet, I don’t know this as a fact.
0:22:08 I’ll bet Meta’s spending more money on VR than it is on AI right now.
0:22:09 Yeah.
0:22:10 Just goes to show you.
0:22:11 And maybe Apple too.
0:22:12 Oh, Apple.
0:22:17 Well, also because Apple, whatever is bigger than gargantuan is how much they’re spending.
0:22:21 And so it really isn’t about the investing profile.
0:22:25 And I think that is a super important point that you made to really just hammer home.
0:22:28 There’s a certainty that nobody is going to come out of this unscathed,
0:22:31 but the scathing is not going to be at all what anybody thinks.
0:22:33 And then not like what it was before.
0:22:35 Like, welcome, I believe you had 40 billion dollars in debt, right?
0:22:37 I mean, it was just one of these things where structurally it was broken.
0:22:42 Oh, and there were companies that we’ve all forgotten about that went bankrupt over that era.
0:22:45 Actually, there was one in Seattle whose name I’m forgetting, but that was like 20 billion.
0:22:46 Just went poof. Gone.
0:22:49 To your point, these companies have had so much cash on their balance sheet.
0:22:52 They’ve been waiting for a moment to invest in the next generation.
0:22:56 Which also contributes to their willingness to scale up as much as they did.
0:22:57 So let’s talk about that.
0:23:00 In your article, you talk about the difference between scale up and scale out.
0:23:04 And the natural tendency in these early parts of the wave to scale up,
0:23:09 when really there tends to be a shift towards software basically going to zero costs.
0:23:11 So, Steven, what do you mean by that?
0:23:13 And are we at that change in trajectory?
0:23:16 Now we’ll just switch to make sure we’re really talking about the technology now.
0:23:17 Not the finances.
0:23:20 But when you’re big, you want to double down on being big.
0:23:27 And so you start building bigger and bigger and bigger computers that don’t distribute the computation elsewhere.
0:23:32 So if you’re IBM, you just say the next mainframe is another mainframe that’s even bigger.
0:23:35 If you’re Sun Microsystems, you just keep building bigger and bigger workstations.
0:23:38 Then if you’re digital equipment, bigger and bigger mini computers.
0:23:43 And by the way, all along, you’re just doing more MIPS in the acronym sense
0:23:46 than the previous maker for less money.
0:23:51 And then the microcomputer comes along and not only did they do like fewer MIPS,
0:23:54 but they cost nothing and they were going to be gazillions of them.
0:24:00 And so you went from an era when IBM would lease 100 or 500 new mainframes in a year
0:24:06 and Sun might sell 500,000 workstations to like, oh, let’s sell 10 million computers in a quarter.
0:24:11 And I think that scale out where there’s less computing but in many more endpoints
0:24:18 is a deep architectural win as well because it gives more people more control over what happens.
0:24:19 It reduces the cost.
0:24:25 So today, the most expensive MIPS you can get are in like a nuclear powered data center
0:24:27 with like liquid cooling and blah, blah, blah.
0:24:32 Whereas the MIPS on my phone are free and readily available for use.
0:24:37 And I think that to me has been a blind spot with the model developers now.
0:24:38 They all do it.
0:24:41 I mean, I run Llama on my Mac and the first time you do it, your mind is blown.
0:24:45 And then you start to go, well, now that’s just how it should happen.
0:24:48 And then you look at Apple and their strategy, which the execution hasn’t been great,
0:24:53 but the idea that all these things will just surfaces features popping up all over my phone
0:24:54 and they’re not going to cost anything.
0:24:56 My data is not going to go anywhere.
0:24:59 That’s got to be the way that this evolves.
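[Editor's note: running a model on your own machine, as described here, is now roughly a one-request affair. As a hedged sketch: local runners like Ollama expose an HTTP endpoint (port 11434 by default) that accepts a JSON body. The payload below only constructs that request; the model name is an assumption about what you'd have pulled locally, and nothing is actually sent.]

```python
import json

# Build a request body for a local, Ollama-style inference server.
# The model name is an assumption for illustration; nothing is sent
# here -- we only construct and print the payload.
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

payload = {
    "model": "llama3",          # whatever model you've pulled locally
    "prompt": "Summarize the switching wars in one sentence.",
    "stream": False,            # ask for a single JSON response
}

body = json.dumps(payload)

# To actually send it (requires a running local server):
#   req = urllib.request.Request(OLLAMA_URL, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
print(body)
```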
0:25:05 Now, will there be some set of features that are only hyperscale cloud inference?
0:25:06 Oh, yeah.
0:25:12 Just like most data operations happen in the cloud now, but most databases are still on my device.
0:25:16 So I’m smiling because this is the story from like a microcomputer guy.
0:25:17 I’ll tell the story from an internet guy.
0:25:20 There’s the perfect parallel, which is, do you remember the switching wars?
0:25:21 Oh, yeah.
0:25:22 Absolutely.
0:25:23 Yeah.
0:25:26 So for the longest time, you had the telephone networks and they were perfect.
0:25:28 They would converge in milliseconds.
0:25:30 They would never drop anything.
0:25:31 You got guaranteed quality of service.
0:25:33 And five nines.
0:25:34 That’s five nines of reliability.
0:25:35 And then here comes the internet.
0:25:38 It had none of these things. Convergence was minutes.
0:25:40 Like, it dropped packets all the time.
0:25:42 You couldn’t enforce quality of service.
0:25:46 And there was these crazy wars at the time where like, why are you doing this internet stuff?
0:25:47 It’s silly.
0:25:48 We know how to do networking.
0:25:52 But what the switching people, the telephone people didn’t get was what happens when you
0:25:55 actually have a best effort delivery and then how it enabled the endpoints.
0:25:58 They needed the value to be in the network and they couldn’t think that way.
0:26:00 And that really brought the internet.
0:26:02 And I think the exact same thing is playing out.
0:26:06 I actually see it a lot of the times like people, they look at these models, oh, they hallucinate.
0:26:08 Or, oh, they’re not correct at these things.
0:26:12 But they enable an entirely new set of stuff like creativity and coding.
0:26:15 And it’s an entirely white space and it’s going to grow very quickly.
0:26:21 And to assume that somehow they don’t fit the old model is irrelevant to where it’s going to go.
0:26:26 What I do is I just s/QoS/hallucinate/.
0:26:27 Yeah, exactly.
0:26:31 Because like to now explain what happened was I was going to all these meetings in the 90s
0:26:37 with all these pocket protector AT&T people who would just show up and they would yell at Bill Gates like,
0:26:43 “QoS, QoS!” And we all had to go look up what QoS was because not only were we not using TCP/IP,
0:26:47 but the network we were using never worked because it was like a PC-based network.
0:26:48 And the IBM people…
0:26:49 Like the NetBEUI stuff?
0:26:50 Yeah, it was NetBEUI.
0:26:53 I am talking to a networking genius.
0:26:55 Issues like the ping of death.
0:26:58 But it was just hilarious because they’re telling me about QoS.
0:26:59 I didn’t know what it was.
0:27:05 I walked them over to my office and this was like in the winter of 1994.
0:27:11 And I’m like, “Oh, look, here is a video of the Lillehammer Winter Olympics playing on my Mac.”
0:27:12 Yeah, awesome.
0:27:16 And it was like, literally it was a postage stamp, the size of an iPhone icon.
0:27:19 And they were like, “Well, that’s 15 frames a second.”
0:27:21 I’m like, “I know, it’s usually like five.”
0:27:23 And like, “Where’s the audio?”
0:27:27 I said, “Well, if I want the audio, I just call up this phone number on your system.”
0:27:28 And then they just laughed at me.
0:27:34 And so here we are, of course, all using Netflix on every device all over the world.
0:27:42 And I think that they couldn’t understand these paradigms, where the liabilities either don’t matter or just become features.
0:27:45 And of course, that’s what gave birth to Cisco.
0:27:49 And they just went, “Well, this is how we’ve been doing it and it all works.
0:27:52 It only works in our crazy weird universities and in the Defense Department.”
0:27:54 And now that’s all we use.
0:28:00 And I want to tie this back to DeepSeek because the reason we’re getting so excited about this is because we’ve seen things like DeepSeek come out before.
0:28:01 And it’s not zero sum.
0:28:03 It doesn’t replace the old thing, right?
0:28:05 It is a component of the new thing.
0:28:07 And the new thing, we still haven’t even envisioned yet, right?
0:28:09 It’s like the internet is just coming right now.
0:28:11 And our excitement is for the new thing to come.
0:28:13 And so when I saw DeepSeq, I’m like, “Amazing.
0:28:16 This is another step to basically AGI in your pocket.
0:28:17 These can run on small models.
0:28:19 It shows that we’re going forward.”
0:28:22 My reaction was not, “Oh, shit, I need to like short NVIDIA or whatever.”
0:28:24 I think that’s actually the wrong answer.
0:28:28 Yeah, I mean, I read the “Let’s Short NVIDIA” blog post that flew around that whole weekend.
0:28:30 And I was like, “Are you crazy?”
0:28:33 I’m like, “A, Jensen is a genius.
0:28:35 B, their company is filled with geniuses.”
0:28:37 And what about the TAM? It just expanded, don’t you think?
0:28:39 Yeah, no, it’s exactly…
0:28:41 And so it is super exciting.
0:28:43 This is the scale-out step just happened.
0:28:46 And so now you could see everybody doubling down.
0:28:50 And to your point that you made earlier that I think is super insightful and really important
0:28:52 is this enabling of specialized models.
0:28:55 Because that’s what’s going to end up being on your phone.
0:28:58 And that’s what’s going to enable the app layer to really exist.
0:29:02 To me, this is all the equivalent of the browser getting JavaScript.
0:29:04 Because once the browser got JavaScript,
0:29:08 then all of a sudden you could do anything you needed
0:29:12 without going to some standards body or building your own browser.
0:29:15 And I think that’s where we are right now.
0:29:18 One follow-up there: if you think about how this has progressed to date,
0:29:21 I feel like the benchmarks have always been like,
0:29:24 which model has the most parameters, how’s it doing on this coding test,
0:29:28 and that isn’t necessarily representative of things like: what device can this fit on?
0:29:29 How much does it cost?
0:29:33 Do we expect then a different set of benchmarks
0:29:35 or things that we’re judging these models by,
0:29:37 or should we just be looking at the app layer?
0:29:40 Does there need to be some sort of shift that kind of moves this away
0:29:42 from bigger, better as you’re saying scale-up
0:29:44 and something that represents scale-out?
0:29:47 Of course, I thought all those benchmarks were just silly to begin with.
0:29:48 To me, they all seemed like…
0:29:50 Remember the benchmark we used to do with browsers
0:29:53 was like how fast it could finish rendering a whole picture.
0:29:57 And so Marc Andreessen invented the image tag in the browser.
0:30:00 The neat thing that they did in the implementation was progressively render it.
0:30:05 And then what that did is empower stopwatches all over the world of magazines
0:30:08 to write who finishes rendering a picture faster.
0:30:11 And of course, here we stand today, like that’s a thing you can measure even,
0:30:13 that’s a time and it doesn’t matter.
0:30:15 And so I think those will all go away
0:30:19 and we’re just very quickly going to get to what does it actually do.
0:30:22 I do think that the measure that’s going to start to really matter
0:30:25 will depend on the application that people are going after.
0:30:28 Take this research stuff that just appeared like this week.
0:30:30 Well, it turns out when you’re doing research,
0:30:33 the metric that matters is truth.
0:30:37 And all of a sudden you’re giving footnote links and you’re giving sources
0:30:39 because what’s really happening under the covers,
0:30:43 it’s a little bit less of generative and a little bit more of IR.
0:30:46 And all of a sudden vector databases and looking things up
0:30:48 and reproducing them matter.
0:30:51 And so now we’re probably along the lines of ImageNet
0:30:55 and they’re going to start to generate thousands and thousands of routine tests
0:30:58 that are like, is this true?
0:30:59 This is totally an aside.
0:31:02 But you reminded me of a kind of a weird historical errata,
0:31:04 which is the fact that Andreessen made the image tag.
0:30:07 So in a way, he’s also the grandfather to some AI,
0:30:10 because CLIP, which is an AI model,
0:30:12 will basically take an image and describe it.
0:30:15 The way it does that is by using the metadata in the image tags.
0:30:17 So he created the metadata to do this.
0:31:20 I will say back on the topic of the images,
0:31:22 here’s one thing I’ve noticed working with these companies
0:31:25 where these models are actually pretty magic by themselves.
0:31:28 If you have a big model, you just expose it, people use them,
0:31:30 which is very different than computers.
0:31:32 You just put the model out there.
0:31:35 The thing is all the other models catch up very quickly
0:31:38 because they distill so well, so it’s not defensible in a way.
0:31:41 And so the companies that are defensible that I’ve seen
0:31:43 is they’ll put out a model that’s very compelling.
0:31:46 And then once the users are engaged in the model,
0:31:49 they find ways to build an app around that actually is retentive.
0:31:51 They’ll start converging on PowerPoint.
0:31:53 It’s more stateful and requires configuration.
0:31:56 So that tends to be very defensible.
0:31:58 And then the applications that use models,
0:32:01 they use lots of models and they do fine-tune these models a whole bunch.
0:32:03 The last two years have been the story of the large model.
0:32:05 It really has been, and they’ve been magic.
0:32:07 People use them and people really like them.
0:32:09 And the first time you’re in chat, you’re like, this is amazing.
0:32:13 And now I think we’re in the era of workflow around models,
0:32:15 which are stateful complex systems, right?
0:32:18 And also many models.
0:32:20 Many models is a great point to build on that.
0:32:22 This is what happened with user interface.
0:32:25 The whole notion of user interface that IBM put forward
0:32:29 was just derived exactly from their green screens in their 3270s.
0:32:31 And they made a shelf of rules on how…
0:32:33 For the characters.
0:32:35 Exactly how the UI should be.
0:32:37 And this is the F10 button and this is the whatever.
0:32:40 And then it turns out that people were building all sorts of UI frameworks.
0:32:42 It actually looks exactly like the browser today,
0:32:44 where there’s a zillion frameworks on the endpoint.
0:32:46 You pick and choose. You do what you want to do.
0:32:49 You can invent a new calendar dropdown if you want
0:32:51 or not waste your time. It’s really up to you.
0:32:54 And I do think that aspect of creativity
0:32:57 is extremely important to applications.
0:32:59 And then for apps to be differentiable
0:33:02 and also, to use an MBA term, have a moat,
0:33:05 apps are going to also embrace the enterprise.
0:33:08 And for better or worse, one of the lessons that we keep learning
0:33:11 is if you want to get adoption in the enterprise,
0:33:14 you’re going to have to do a bunch of work to turn off parts of your app
0:33:17 or to filter parts of your app or to disable it or whatever it is.
0:33:20 And I think the smartest entrepreneurs are going to recognize
0:33:24 the need for sign-on, single sign-on at the beginning.
0:33:27 RBAC and SSO are like that every time.
0:33:30 Every single time, because it turns out that’s also a great way to price.
0:33:32 It’s not super hard.
0:33:36 And I think so much dumb stuff has been done about AI
0:33:38 and alignment and censorship
0:33:41 and whose point of view is it and all this other stuff
0:33:44 that there’s now a whole industry
0:33:47 that just wants to show up and tell you all the things
0:33:49 that they don’t want out of AI.
0:33:52 And the smartest entrepreneurs are going to actually get ahead of that
0:33:54 and they’ll be there to sell
0:33:57 because it turns out that is actually enormously sticky in the enterprise.
0:34:01 And I think that we’re going to see the smart productivity tools
0:34:03 embrace that immediately.
0:34:05 And it could be even at the most granular level
0:34:07 of turn it off for these users or whatever.
0:34:10 Well, we had Scott Belsky at Speedrun recently
0:34:13 and he talked about Adobe and someone said,
0:34:15 “Well, you have all these licensed images for Firefly.
0:34:17 Do consumers really care about that?”
0:34:19 And he was like, “Honestly, not really.
0:34:21 But you know who does care? It’s the enterprise.”
0:34:23 So to your point, those are two different modalities
0:34:25 and founders are going to have to figure that out.
0:34:28 But I do want to touch on, you know, a lot of people are talking about DeepSeek
0:34:30 as this Sputnik moment.
0:34:34 And that can be viewed in the lens of geopolitics, U.S., China.
0:34:36 But also, if you think about Sputnik,
0:34:38 that wouldn’t have been a moment
0:34:40 if Kennedy didn’t do his moon landing speech,
0:34:42 if we didn’t actually get there.
0:34:44 So in other words, if changes weren’t made.
0:34:46 And so let’s say you’re in a boardroom.
0:34:47 You’re an advisor.
0:34:48 I don’t want to talk to the boardroom.
0:34:49 I want to talk to the U.S. government, right?
0:34:52 And so like for me, actually, the biggest a-ha of DeepSeek
0:34:54 is nothing we’ve talked about right now.
0:34:58 The biggest a-ha of DeepSeek is how blind our policies have been around AI.
0:35:00 They’ve been so wrong-headed.
0:35:04 So our previous policies around AI have been
0:35:07 we can’t open source because it’ll enable China.
0:35:12 We’ve got to limit our big labs.
0:35:14 We’ve got to put all of this regulation on top of it.
0:35:16 And the reason is for safety and all this other stuff.
0:35:17 Export controls.
0:35:20 All the export controls so we can’t enable other countries.
0:35:21 Export controls on chips.
0:35:24 We’ve talked about putting export controls on software.
0:35:25 Weight limits, all of this other stuff.
0:35:26 Like that was our entire policy.
0:35:29 And for me, the biggest, biggest, biggest takeaway,
0:35:32 from the whole DeepSeek thing is that that’s the wrong way to do policy.
0:35:34 China has got a lot of very smart people.
0:35:35 They’re incredibly capable.
0:35:36 They’re great researchers.
0:35:38 They can build stuff as well as we can.
0:35:39 And they can open source it.
0:35:40 We did not enable them.
0:35:42 They did this even with export controls on chips.
0:35:45 So basically all of our activity has been for naught.
0:35:48 And what we should be doing is funding and investing in our research labs.
0:35:50 And we should be going as fast as we can.
0:35:54 And it really is the AI race just like we went through the space race.
0:35:55 And we need to win.
0:35:57 And we have everything that we need to win.
0:36:00 The only thing in our way is our own regulatory environment.
0:36:02 Just to build on that, the lesson is not Sputnik.
0:36:04 The lesson is the internet.
0:36:06 What we learned from the internet is this:
0:36:09 Al Gore famously claimed to have invented the internet,
0:36:11 but what he really did was invent the regulation
0:36:13 that allowed the internet to flourish.
0:36:15 And they could have looked at the internet and said,
0:36:17 oh my God, this is a Sputnik moment.
0:36:20 And then try to turn it into what AT&T and WorldCom wanted.
0:36:22 And they were there lobbying trying to make that happen.
0:36:25 And frankly, AOL wanted it to happen that way too.
0:36:30 And so they ignored that and they went with what made the internet strong to begin with.
0:36:37 And so what gave us this DeepSeek moment was the strength of the worldwide technology community.
0:36:41 And so as much as people want to own it and be the singular provider,
0:36:42 it’s not going to work.
0:36:45 The biggest difference, not to overanalyze the analogy:
0:36:49 I think it’s a Sputnik moment in the sense that it’s a wake up call for half the world.
0:36:51 It isn’t a geopolitical wake up call.
0:36:52 It’s not about war.
0:36:55 It’s literally just about technology diffusion.
0:36:59 And we’ve had so many misfires since then.
0:37:04 The whole encryption war where we tried to put export controls on encryption and all this.
0:37:09 And people thought we were being silly as an industry when many of us would champion this,
0:37:10 but you can’t.
0:37:11 It’s like outlawing math.
0:37:13 It turns out it is outlawing math.
0:37:15 And the fact that it used those chips.
0:37:20 Well, the world’s economy as we’ve seen is very, very hard to put export controls on things.
0:37:22 Remember when we were going to export control PlayStations?
0:37:23 Oh, yeah.
0:37:30 No, Xbox. Like, the government came to us. Or, like, actually 2048-bit encryption in email.
0:37:31 Yes.
0:37:33 Because people came to, well, we can’t have bad actors.
0:37:35 That’s their favorite phrase.
0:37:37 Bad actors encrypting their email.
0:37:41 I’m like, well, they’re just going to encrypt the attachment themselves.
0:37:43 And then there’s nothing we can do about that.
0:37:44 For sure.
0:37:47 But in this case, we’ve actually put export controls on GPUs before.
0:37:48 I mean, like a perfect analog.
0:37:51 We were like, oh, listen, you can do weapon simulation on these things.
0:37:53 The PlayStation was the first to actually use the SGI.
0:37:54 Right, right, right.
0:37:56 If you remember that, we’re going to export control that.
0:37:58 We can’t let that into Saddam Hussein’s hands.
0:38:01 The whole thing, total failure because it just turns out global markets are global markets.
0:38:05 And we’re much, much better in investing, which at the time we did, in our own infrastructure,
0:38:06 we did a great job of that.
0:38:09 And I think there was great analogy with the internet and with Al Gore.
0:38:11 We should be doing exactly that again.
0:38:15 And some politician needs to stand up and be the Al Gore of this moment.
0:38:17 I think that we will get that.
0:38:19 So I do think that there is now a wake-up call.
0:38:25 I think that the futility of the past four or five years of this kind of stuff is now very, very clear.
0:38:28 And I mean that even more broadly than you were saying.
0:38:36 Like, I mean, like the people who wanted to control this technology at this very granular level and all these think tanks and institutes that were all aligned.
0:38:43 I mean, the number of books written, the number of academic departments started, the number of assaults on technology companies to align.
0:38:47 I mean, whole meetings in Switzerland about aligning, you know, with the world leaders.
0:38:50 That’s just not how anything evolves.
0:39:02 And the biggest lesson for computing, starting in 1981 with the IBM PC or, frankly, 1977 with the Apple II, has been the creativity at the edge, just enabling that.
0:39:09 And I think the problem that the regulators had was they had never faced regulating a connected world before.
0:39:14 And I think the other lesson from DeepSeek is just, okay, the world is already connected.
0:39:17 The world is already native in all of this stuff.
0:39:24 So now the amount of actual calendar time it takes for something to diffuse, technically, is zero.
0:39:30 I mean, DeepSeek, I think the number I saw this morning, is at like 35% of the DAUs of OpenAI.
0:39:35 And that’s a giant spike because just all the same people are just trying it out because there’s no friction.
0:39:37 It takes no time.
0:39:43 And so it’s so unbelievably exciting to be part of what’s going on right now.
0:39:46 And we just don’t need to throw water on it and be party poopers.
0:39:52 So one thing I will say, it’s like, I personally don’t think this is a crisis moment for OpenAI or Anthropic.
0:39:54 I think like apps are hard to build.
0:39:57 I think that like right now, the apps that they put out are very complex.
0:40:00 They actually know their users, they have very specific use cases.
0:40:05 And so, I mean, I think for them, it’s a bit of a wake up call that they can’t slouch and they got to move very quickly.
0:40:07 But I’m still very, very bullish on our labs.
0:40:09 I think they can stay ahead too.
0:40:15 So again, there’s this view of DeepSeek as a crisis moment for NVIDIA, a crisis moment for OpenAI and Anthropic.
0:40:16 I don’t buy any of that.
0:40:20 I think it’s more of like a wake up call for the regulatory environment.
0:40:23 And then listen, we should all acknowledge that listen, there’s going to be global competition.
0:40:24 We need to stay ahead.
0:40:32 I would also say that what we should see now, the right reaction from all of these frontier folks, is that they should all just start building apps.
0:40:38 Because the best feedback loop to build a great platform for other people to use is to be building apps.
0:40:42 And there’s this whole concentrated conversation over competing with your partners or whatever.
0:40:46 Our industry is co-opetition through and through. It’s Andy Grove’s lesson.
0:40:51 So just everybody should be prepared for these big players to compete with you.
0:40:55 But history has shown that’s no surefire success.
0:40:56 The TAM increases 10x.
0:40:59 There’s just a lot of room for a lot of folks.
0:41:04 Yeah, I mean, Microsoft spent 10 plus years like a distant number three in the applications business.
0:41:09 And it was a platform shift that all the other players ignored that caused it to win.
0:41:12 And so I think that the TAM is going to be 100x.
0:41:14 It’s going to be every endpoint.
0:41:16 The revenue is going to come from the apps side of it.
0:41:19 And then there’ll be a developer side of it.
0:41:24 It’ll just be a different pricing model for different sets of scenarios, but it’s going to be there.
0:41:26 So everything is rising right now.
0:41:33 Since it is this positive-sum, growing world, do you have any thoughts, just real quick, on the fact that this came from an algorithmic hedge fund?
0:41:34 A quant.
0:41:36 Is that any different to your expectation?
0:41:39 Or does that actually signal that more can participate?
0:41:43 It’s a good reminder that there are always pockets of people innovating.
0:41:47 WorldCom and AT&T did not predict the internet was going to come out of universities.
0:41:52 They did not think that a physics lab in Switzerland was going to invent the protocols that became foundational.
0:41:53 That’s so true.
0:42:01 And they certainly didn’t expect a failed corporate lab to develop the TCP/IP that became the standard.
0:42:03 I mean, it wasn’t like the IBM lab.
0:42:09 It was like literally a lab that they’d all but shut down because it failed, just down the street at PARC.
0:42:14 And so you remember like SRI was like all these places that you don’t even think about.
0:42:15 Right.
0:42:19 And so most of this isn’t going to be even in any history that’s written in five years.
0:42:21 And I think that that is the excitement.
0:42:25 All right.
0:42:26 That is all for today.
0:42:29 If you did make it this far, first of all, thank you.
0:42:33 We put a lot of thought into each of these episodes, whether it’s guests, the calendar touchers,
0:42:37 the cycles with our amazing editor Tommy until the music is just right.
0:42:43 So if you like what we put together, consider dropping us a line at ratethispodcast.com/a16z.
0:42:46 And let us know what your favorite episode is.
0:42:47 It’ll make my day.
0:42:49 And I’m sure Tommy’s too.
0:42:51 We’ll catch you on the flip side.
0:42:55 [MUSIC PLAYING]

Two words have caught the Internet by storm. DeepSeek. 

The Chinese reasoning model R1 is rivaling others at the frontier with an open-source MIT license, methods that some claim may be 45x more efficient, an alleged $5.6M cost, the release of reasoning traces, a follow-on image model, and the fact that all of this was released by a hedge fund in China.

Many are already referring to this as a Sputnik moment. If that’s true, how should we, whether founder, researcher, or policymaker, not just react, but act? Joining us to tease out the signal from the noise are a16z General Partner Martin Casado and a16z Board Partner Steven Sinofsky. Both Martin and Steven have been on the frontlines of prior computing cycles, from the switching wars to the fiber buildout, and have witnessed the trajectories of companies from Cisco to AOL to AT&T, even WorldCom.

So what really drove this DeepSeek frenzy and more importantly what should we take away? Today, we answer that question through the lens of Internet history.

 


Stay Updated: 

Let us know what you think: https://ratethispodcast.com/a16z

Find a16z on Twitter: https://twitter.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Subscribe on your favorite podcast app: https://a16z.simplecast.com/

Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
