AI transcript
0:00:03 of the content on average, you know, as a whole on average is average.
0:00:09 The test for whether your idea is good is how much can you charge for it?
0:00:12 Can you charge the value?
0:00:14 Or are you just charging the amount of work it’s going to take the customer to put their
0:00:18 own wrapper on top of OpenAI?
0:00:22 The paradox here would be the cost of developing any given piece of software falls, but the
0:00:27 reaction to that is a massive surge of demand for software capabilities.
0:00:31 And I think this is one of the things that’s always been underestimated about humans is
0:00:37 our ability to come up with new things we need.
0:00:41 There’s no large marketplace for data.
0:00:43 In fact, what there are is very small markets for data.
0:00:47 In this wave of AI, big tech has a big compute and data advantage.
0:00:52 But is that advantage big enough to drown out all the other startups trying to rise up?
0:00:57 Well, in this episode, A16Z co-founders Marc Andreessen and Ben Horowitz, who both, by the
0:01:02 way, had a front row seat to several prior tech waves, tackled the state of AI.
0:01:07 So what are the characteristics that’ll define successful AI companies?
0:01:12 And is proprietary data the new oil, or how much is it really worth?
0:01:17 How good are these models realistically going to get?
0:01:19 And what would it take to get 100 times better?
0:01:23 Mark and Ben discuss all this and more, including whether the venture capital model needs a
0:01:27 refresh to match the rate of change happening all around it.
0:01:31 And of course, if you want to hear more from Ben and Mark, make sure to subscribe to the
0:01:35 Ben and Mark podcast.
0:01:37 All right, let’s get started.
0:01:41 It is kind of the darkest side of capitalism when a company is so greedy, they’re willing
0:01:46 to destroy the country and maybe the world to, like, just get a little extra profit.
0:01:50 When they do it, like, the really kind of nasty thing is they claim, oh, it’s for safety.
0:01:55 You know, we’ve created an alien that we can’t control, but we’re not going to stop
0:01:59 working on it.
0:02:00 We’re going to keep building it as fast as we can, and we’re going to buy every freaking
0:02:03 GPU on the planet.
0:02:04 But we need the government to come in and stop it from being open.
0:02:08 This is literally the current position of Google and Microsoft right now.
0:02:13 It’s crazy.
0:02:16 The content here is for informational purposes only, should not be taken as legal, business,
0:02:20 tax or investment advice or be used to evaluate any investment or security and is not directed
0:02:26 at any investor or potential investors in any A16Z fund.
0:02:30 Please note that A16Z and its affiliates may maintain investments in the companies discussed
0:02:35 in this podcast.
0:02:36 For more details, including a link to our investments, please see A16Z.com/disclosures.
0:02:41 Hey, folks, welcome back.
0:02:44 We have an exciting show today.
0:02:45 We are going to be discussing the very hot topic of AI.
0:02:49 We are going to focus on the state of AI as it exists right now in April of 2024, and
0:02:53 we are focusing specifically on the intersection of AI and company building.
0:02:57 Hopefully, this will be relevant to anybody working on a startup or anybody at a larger
0:03:01 company.
0:03:02 We have as usual solicited questions on X, formerly known as Twitter, and the questions
0:03:05 have been fantastic.
0:03:06 We have a full lineup of listener questions, and we will dive right in.
0:03:11 First question, so three questions on the same topic.
0:03:14 Michael asks, “In anticipation of upcoming AI capabilities, what should founders be
0:03:18 focusing on building right now?”
0:03:20 Gwen asks, “How can small AI startups compete with established players with massive compute
0:03:25 and data scale advantages?”
0:03:27 Alistair McLea asks, “For startups building on top of OpenAI, etc., what are the key
0:03:32 characteristics of those companies that will benefit from future exponential improvements
0:03:36 in the base models versus those that will get killed by them?”
0:03:39 Let me start with one point, Ben, and then we’ll jump right to you.
0:03:42 Sam Altman recently gave an interview, I think maybe with Lex Fridman or on one of the podcasts,
0:03:45 and he said something I thought was actually quite helpful.
0:03:48 Let’s see, Ben, if you agree with that.
0:03:49 He said something along the lines of, “You want to assume that the big foundation models
0:03:54 coming out of the big AI companies are going to get a lot better, so you want to assume
0:03:57 they’re going to get like a hundred times better.
0:04:00 As a startup founder, you want to then think, “Okay, if the current foundation models get
0:04:03 a hundred times better, is my reaction, oh, that’s great for me and for my startup, because
0:04:08 I’m much better off as a result, or is your reaction the opposite, as in, oh, shit, I’m
0:04:12 in real trouble.”
0:04:13 Let me just stop right there, Ben, and see what you think of that as general advice.
0:04:16 Well, I think generally that’s right, but there’s some nuances to it, right?
0:04:22 So I think that from Sam’s perspective, he was probably discouraging people from building
0:04:28 foundation models, which I don’t know that I would entirely agree with, in that
0:04:34 a lot of the startups building foundation models are doing very well, and there’s many
0:04:38 reasons for that.
0:04:39 One is there are architectural differences, which lead to how smart the model is, there’s
0:04:43 how fast the model is, there’s how good the model is in a domain.
0:04:47 And that goes for not just text models, but image models as well; there are different
0:04:53 domains, different kinds of images that respond to prompts differently.
0:04:58 If you ask Midjourney and Ideogram the same question, they react very differently depending
0:05:03 on the use cases that they’re tuned for.
0:05:07 And then there’s this whole field of distillation where Sam can go build the biggest, smartest
0:05:13 model in the world, and then you can walk up as a startup and kind of do a distilled
0:05:18 version of it and get a model very, very smart at a lot less cost.
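To make the distillation idea concrete, here is a minimal sketch of the standard soft-label recipe in PyTorch. The shapes, temperature, and model stand-ins are illustrative assumptions, not details from the episode.

```python
# Minimal knowledge-distillation sketch: push a small "student" model's
# output distribution toward a large frozen "teacher" model's.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then measure the
    # KL divergence from teacher to student (Hinton-style recipe).
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 8 examples over a 50k-token vocabulary.
teacher_logits = torch.randn(8, 50_000)                      # frozen teacher outputs
student_logits = torch.randn(8, 50_000, requires_grad=True)  # trainable student outputs
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()                                              # gradients flow to the student
```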
0:05:22 So there are things that, yes, the big company models are going to get way better, kind of
0:05:29 way better at what they are.
0:05:31 So you need to deal with that.
0:05:34 So if you’re trying to go head to head full frontal assault, you probably have a real
0:05:37 problem just because they have so much money.
0:05:41 But if you’re doing something that’s different enough or a different domain and so forth,
0:05:49 for example, at Databricks, they’ve got a foundation model, but they’re using it in
0:05:54 a very specific way in conjunction with their kind of leading data platform.
0:06:00 So, okay, now if you’re an enterprise and you need a model that knows all the nuances
0:06:07 of how your enterprise data model works and what things mean and needs access control
0:06:14 and what needs to use your specific data and domain knowledge and so forth, then it doesn’t
0:06:19 really hurt them if Sam’s model gets way better.
0:06:21 Similarly, ElevenLabs with their voice model has kind of embedded itself everywhere.
0:06:28 Everybody uses it as part of kind of the AI stack.
0:06:31 And so it’s got kind of a developer hook into it.
0:06:35 And then they’re going very, very fast at what they do and really being very focused
0:06:40 in their area.
0:06:41 So there are things that I would say are extremely promising that are kind of ostensibly, but
0:06:47 not really, competing with OpenAI or Google or Microsoft.
0:06:52 So I think it sounds a little more coarse-grained than I would interpret it if I was building
0:06:57 a startup.
0:06:58 Right.
0:06:59 Let’s dig into this a little bit more.
0:07:00 So let’s start with the question of do we think the big models, the god models are going
0:07:03 to get 100 times better?
0:07:04 I kind of think so and then I’m not sure.
0:07:07 So if you think about the language models, let’s do those because those are probably
0:07:11 the ones that people are most familiar with.
0:07:13 I think if you look at the very top models, you know, Claude and OpenAI and Mistral and
0:07:19 Llama, the only people who I feel like really can tell the difference as users amongst those
0:07:26 models are the people who study them, you know, like they’re getting pretty close.
0:07:31 So you would expect, if we were talking 100x better, that one of them might be separating
0:07:36 from the others a lot more. But the improvement, so 100x better in what way?
0:07:42 Like for the normal person using it in a normal way, like asking it questions and finding
0:07:47 out stuff.
0:07:48 Well, let’s say some combination of just like breadth of knowledge and capability.
0:07:52 Yeah.
0:07:53 Like I think in some of them it may be, yeah.
0:07:55 Right.
0:07:56 But then also just combined with like sophistication of the answers, you know, sophistication of
0:07:59 the output, the quality of the output, sophistication of the output, you know, lack of hallucination,
0:08:03 factual grounding.
0:08:04 Well, that I think is for sure going to get 100x better.
0:08:08 Like that.
0:08:09 Yeah.
0:08:10 I mean, they’re on a path for that.
0:08:11 The things that argue against that, right, the alignment problem where, okay, yeah, they’re
0:08:18 getting smarter, but they’re not allowed to say what they know.
0:08:21 And then that alignment also kind of makes them dumber in other ways.
0:08:25 And so you do have that thing.
0:08:27 The other kind of question that’s come up lately, which is kind of do we need a breakthrough
0:08:32 to go from what we have now, which I would categorize as artificial human intelligence
0:08:40 as opposed to artificial general intelligence, meaning it’s kind of the artificial version
0:08:45 of us.
0:08:46 We’ve structured the world in a certain way using our language and our ideas and our
0:08:50 stuff.
0:08:52 And it’s learned that very well, amazing.
0:08:55 And it can do kind of a lot of the stuff that we can do, but are we then the asymptote?
0:09:02 Or do you need a breakthrough to get to some kind of higher intelligence, more general intelligence.
0:09:08 And I think if we’re the asymptote, then in some ways it won’t get 100x better because
0:09:15 it’s already like pretty good relative to us.
0:09:18 But yeah, like it’ll know more things, it’ll hallucinate less on all those dimensions,
0:09:22 it’ll be 100x better.
0:09:24 There’s this graph floating around.
0:09:26 I forget exactly what the axes are, but it basically shows the improvement across the
0:09:29 different models.
0:09:30 To your point, it shows an asymptote against the current tests that people are using that’s
0:09:33 sort of like at or slightly above human levels, which is what you would think if you’re being
0:09:37 trained on entirely human data.
0:09:39 Now, the counter argument on that is are the tests just too simple, right?
0:09:42 It’s a little bit like the question people have run into with the SAT, which is if you have a lot
0:09:45 of people getting 800s on both math and verbal on the SAT, is the scale too constrained, do
0:09:49 you need a test that can actually test for Einstein?
0:09:52 Right.
0:09:53 It’s memorized the tests that we have and it’s great.
0:09:57 You can imagine an SAT that really can detect gradations of people who have ultra-high IQs,
0:10:02 who are ultra-good at math or something.
0:10:03 You can imagine tests for AI, you can imagine tests that test for reasoning above human
0:10:07 levels, one assumes.
0:10:08 Yeah, well, maybe the AI needs to write the test.
0:10:11 Yeah, and then there’s a related question that comes up a lot, it’s an argument we’ve
0:10:15 been having internally, where I’ll also start to take some sort of more provocative
0:10:18 and probably more bullish, or as you would put it, sort of science-fictiony predictions
0:10:21 on some of this stuff.
0:10:22 There’s this question that comes up, which is, okay, you take an LLM, you train it on
0:10:25 the internet.
0:10:26 What is the internet data?
0:10:27 What is the internet data corpus?
0:10:28 It’s an average of everything, right?
0:10:29 It’s a representation of sort of human activity.
0:10:32 A representation of human activity is going to have, because of the sort of distribution
0:10:34 of intelligence in the population, most of it somewhere in the middle.
0:10:37 And so the data set on average sort of represents the average human.
0:10:40 You’re teaching it to be very average, yeah.
0:10:42 Yeah, you’re teaching it to be very average.
0:10:43 It’s just because most of the content created on the internet is created by average people.
0:10:46 And so kind of the content on average as a whole on average is average.
0:10:51 And so therefore, the answer is our average, right?
0:10:53 You’re going to get back an answer that sort of represents the kind of thing that an average,
0:10:56 100-IQ person would say. You know, kind of by definition, the average human is 100 IQ; IQ is indexed
0:10:59 to 100.
0:11:00 That’s the center of the bell curve.
0:11:01 And so by definition, you’re kind of getting back the average.
0:11:03 I actually argue like that may be the case for the default prompt today.
0:11:06 Like you just asked the thing, does the earth revolve around the sun or something?
0:11:09 You get like the average answer to that and maybe that’s fine.
0:11:12 This gets to the point as well.
0:11:13 Okay, the average data might be of an average person, but the data set also contains all
0:11:17 of the things written and thought by all the really smart people.
0:11:20 All that stuff is in there, right?
0:11:21 And all the current people who are like that, their stuff is in there.
0:11:24 And so then it’s sort of like a prompting question, which is like, how do you prompt
0:11:26 it in order to basically navigate to a different part of
0:11:29 what they call the latent space, to a different part of the data set that
0:11:33 is like the super-genius part.
0:11:35 And you know, the way these things work is if you craft the prompt in a different way,
0:11:37 it actually leads it down a different path inside the data set, gives you a different
0:11:40 kind of answer.
0:11:41 And here’s another example of this.
0:11:42 If you ask it, write code to do X, write code to sort a list or, you know, whatever, render
0:11:46 an image, it will give you average code to do that.
0:11:48 If you say write me secure code to do that, it will actually write better code with fewer
0:11:53 security holes, which is very interesting, right?
0:11:55 Because it’s accessing a different part of the training data, which is secure code.
0:11:58 Right.
0:11:59 And if you ask, you know, write this image generation thing the way John Carmack would
0:12:01 write it, you get a much better result because it’s tapping into the part of the latent space
0:12:04 represented by John Carmack’s code, and he’s the best graphics programmer in the world.
0:12:08 And so you can imagine prompting crafts in many different domains such that you’re kind
0:12:11 of unlocking the latent super genius, even if that’s not the default answer.
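As a sketch of what that prompt crafting looks like in practice: `query_model` below is a hypothetical stand-in for whatever completion API you use. The point is that the same task phrased three ways lands in three different regions of the training distribution.

```python
# Same task, three prompts aimed at different parts of the latent space.
# `query_model` is a hypothetical placeholder, not a real provider API.
def query_model(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model provider of choice")

task = "parse untrusted JSON input from a web form"

prompts = {
    "default": f"Write code to {task}.",
    "secure": (f"Write secure Python code to {task}. "
               "Validate inputs and handle malformed data explicitly."),
    "expert": (f"Write Python code to {task} the way a world-class "
               "systems programmer would, with careful error handling."),
}

# Example usage (commented out since query_model is a stub):
# for name, prompt in prompts.items():
#     print(name, "->", query_model(prompt))
```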
0:12:15 Yeah.
0:12:16 I think that’s correct.
0:12:18 I think there’s still a potential limit to its smartness in that.
0:12:24 So we had this conversation in the firm the other day where you have, there’s the world,
0:12:28 which is very complex.
0:12:30 And intelligence kind of is, you know, how well can you understand, describe, represent
0:12:35 the world?
0:12:36 But our current iteration of artificial intelligence consists of human structuring the world and
0:12:45 then feeding that structure that we’ve come up with into the AI.
0:12:50 And so the AI kind of is good at predicting how humans have structured the world as opposed
0:12:56 to how the world actually is, which is, you know, something more probably complicated,
0:13:01 maybe the irreducible or what have you.
0:13:05 So do we just get to a limit where like, it can be really smart, but its limit is going
0:13:10 to be the smartest humans as opposed to smarter than the smartest humans.
0:13:14 And then kind of related, is it going to be able to figure out brand new things, you know,
0:13:21 new laws of physics and so forth?
0:13:22 Now, of course, there are like one in three billion humans that can do that or whatever.
0:13:28 That’s a very rare kind of intelligence.
0:13:30 So the AIs are still extremely useful, but they play a different role if they’re
0:13:37 kind of artificial humans than if they’re like artificial, you know, super-duper mega-humans.
0:13:45 So let me make the sort of extreme bull case for the 100x, because okay, so the cynic would
0:13:50 say that Sam Altman would be saying they’re going to get 100 times better precisely if
0:13:53 they’re not going to.
0:13:55 Yeah, yeah, yeah, yeah, yeah.
0:13:57 Right?
0:13:58 Because he’d be saying that basically in order to scare people into not competing.
0:14:01 Well, I think that whether or not they are going to get 100 times better, Sam would
0:14:06 be very likely to say that.
0:14:08 For those of you who don’t know him, he’s a very smart guy, but for sure, he’s a competitive
0:14:13 genius.
0:14:14 There’s no question about that.
0:14:15 So you have to take that into account.
0:14:17 Right.
0:14:18 So if they weren’t going to get a lot better, he would say that.
0:14:20 But of course, if they were going to get a lot better to your point, he would also say
0:14:22 that.
0:14:23 Yes.
0:14:24 Why not, right?
0:14:25 And so let me make the bull case that they are going to get 100 times better or maybe
0:14:28 even, you know, on an upward curve for a long time.
0:14:31 And there’s like enormous controversy, I think, on every one of the things I’m about to say,
0:14:34 but you can find very smart people in the space who believe basically everything I’m
0:14:38 about to say.
0:14:39 So one is there is generalized learning happening inside the neural networks.
0:14:42 And we know that because we now have introspection techniques where you can actually go inside
0:14:46 and look inside the neural networks to look at the neural circuitry that is being evolved
0:14:49 as part of the training process.
0:14:50 And you know, these things are evolving, you know, general computation functions.
0:14:54 There was a case recently where somebody trained one of these on a chess database and, you
0:14:57 know, just by training on lots of chess games, it actually induced a world model of a chess
0:15:00 board, you know, inside the neural network and, you know, it was able to make original
0:15:04 moves.
0:15:05 And so the neural network training process does seem to work.
0:15:06 And then specifically not only that, but, you know, Meta and others recently have been
0:15:10 talking about how so-called overtraining actually works, which is basically continuing to train
0:15:15 the same model against the same data for longer, you know, putting more and more compute cycles
0:15:18 against it.
0:15:19 You know, I’ve talked to some very smart people in the fields, including there, who basically
0:15:22 think that actually that works quite well.
0:15:24 The diminishing returns people were worried about from more training don’t seem to be kicking in.
0:15:26 And they proved it in the new Llama release, right?
0:15:29 That’s a primary technique they use.
0:15:31 Yeah, exactly.
0:15:32 Like one guy in the space basically told me, basically, he’s like, yeah, we don’t necessarily
0:15:35 need more data at this point to make these things better.
0:15:37 We maybe just need more compute cycles.
0:15:39 We just train it a hundred times more and it may just get actually a lot better.
0:15:41 So on data labeling, it turns out that supervised learning ends up being a huge boost
0:15:47 to these things.
0:15:48 Yeah.
0:15:49 So we’ve got that.
0:15:50 We’ve got all of the kind of, you know, let’s say rumors and reports of various kinds of self-improvement
0:15:54 loops, you know, that kind of underway.
0:15:56 And most of the sort of super advanced practitioners in the field think that there’s now some form
0:15:59 of self-improvement loop that works, which basically is, you basically get an AI to do
0:16:03 what’s called chain of thoughts.
0:16:04 You get it to basically go step by step to solve a problem.
0:16:06 You get it to the point where it knows how to do that.
0:16:08 And then you basically retrain the AI on the answers.
0:16:10 And so you’re kind of basically doing a sort of a forklift upgrade across cycles of the
0:16:14 reasoning capability.
0:16:15 And so a lot of the experts think that sort of thing is starting to work now.
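A toy sketch of that loop, in the spirit of what is sometimes called STaR-style bootstrapping. Every function here is a hypothetical placeholder; only the shape of the cycle is the point.

```python
# Chain-of-thought self-improvement loop: sample step-by-step solutions,
# keep the ones that verify, and retrain the model on its own best work.
def generate_cot(model, problem):
    """Hypothetical: ask the model for a step-by-step solution."""
    ...

def finetune(model, examples):
    """Hypothetical: retrain on (problem, verified_solution) pairs."""
    ...

def self_improve(model, problems, check_answer, rounds=3):
    for _ in range(rounds):
        verified = []
        for problem in problems:
            solution = generate_cot(model, problem)
            if solution is not None and check_answer(problem, solution):
                verified.append((problem, solution))  # keep only what checks out
        model = finetune(model, verified)             # fold reasoning back into weights
    return model
```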
0:16:18 And then there’s still a raging debate about synthetic data, but there’s quite a few people
0:16:21 who are actually quite bullish on that.
0:16:23 Yeah.
0:16:24 And then there’s even this trade-off.
0:16:25 There’s this kind of dynamic where like LLMs might be okay at writing code, but they might
0:16:29 be really good at validating code.
0:16:30 You know, they might actually be better at validating code than they are at writing it.
0:16:33 That would be a big help.
0:16:34 Yeah.
0:16:35 Well, but that also means like AIs are maybe able to self-improve.
0:16:36 They can validate their own code.
0:16:37 Yeah.
0:16:38 Yeah.
0:16:39 They can validate their own code.
0:16:40 And this anthropomorphic bias is very deceptive with these things, because
0:16:43 you think of the model as an it, and so it’s like, how could you have an it that’s better
0:16:46 at validating code than writing code? But it’s not an it.
0:16:48 What it is is it’s this giant latent space, it’s this giant neural network.
0:16:51 And the theory would be there are totally different parts of the neural network for
0:16:54 writing code and validating code.
0:16:56 And there’s no consistency requirement whatsoever that the network be equally good at both of
0:16:59 those things.
0:17:00 And so if it’s better at one of those things, right, so then the thing that it’s good at
0:17:04 might be able to make the thing that it’s bad at better and better.
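A minimal sketch of that asymmetry put to work: use the (stronger) validation ability as a gate on the (weaker) generation ability. All three functions are hypothetical placeholders.

```python
# Generate-then-validate loop: the validator filters the writer's drafts.
def write_code(model, spec):
    """Hypothetical: ask the model to draft an implementation of `spec`."""
    ...

def validate_code(model, spec, code) -> bool:
    """Hypothetical: ask the model (or run tests) to judge the draft."""
    ...

def generate_until_valid(model, spec, max_tries=5):
    for _ in range(max_tries):
        draft = write_code(model, spec)
        if validate_code(model, spec, draft):
            return draft   # validation acts as the quality gate
    return None            # surface failure instead of shipping a bad draft
```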
0:17:06 Right, right. Sure, sure.
0:17:14 Sort of a self-improvement thing.
0:17:15 And so then on top of that, there’s all the other things coming, right?
0:17:16 Which is all these practical things, which is there’s an enormous chip constraint
0:17:17 right now.
0:17:18 And the AI that anybody uses today has its capabilities basically gated by the availability
0:17:22 of chips, but like that will resolve over time.
0:17:24 You know, there’s also your point on like data labeling, there is a lot of data in these
0:17:27 things now, but there is a lot more data out in the world.
0:17:30 And there’s, you know, at least in theory, some of the leading AI companies are actually
0:17:32 paying to generate new data.
0:17:34 And by the way, even like the open source data sets are getting much better.
0:17:36 And so there’s a lot of like data improvements that are coming.
0:17:39 And then, you know, there’s just the amount of money pouring into the space to be able
0:17:41 to underwrite all this.
0:17:42 And then by the way, there’s also just the systems engineering work that’s happening,
0:17:45 right?
0:17:46 Which is a lot of the current systems.
0:17:47 You know, they were basically built by scientists.
0:17:48 And now really world-class engineers are showing up and tuning them up and getting
0:17:51 them to work better.
0:17:52 And you know, maybe that’s not a...
0:17:55 Which makes training, by the way, way more efficient as well, not just inference, but
0:18:00 also training.
0:18:01 Yeah.
0:18:02 Exactly.
0:18:03 And then even, you know, another improvement area is basically Microsoft released their
0:18:05 Phi small language model yesterday.
0:18:06 And apparently it’s competitive.
0:18:08 It’s a very small model, competitive with much larger models.
0:18:10 And the big thing they say that they did was they basically optimized the training set.
0:18:14 So they basically de-duplicated the training set.
0:18:16 They took out all the copies and they really optimized on a small amount of training data,
0:18:19 on a small amount of high quality training data, as opposed to the larger amounts of
0:18:22 low quality data that most people train on.
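A minimal sketch of the deduplication step, assuming exact-match dedup via content hashing; real pipelines also do near-duplicate detection (e.g. MinHash), which this omits.

```python
# Exact-match deduplication of a training corpus via content hashing.
import hashlib

def dedupe(documents):
    seen, kept = set(), []
    for doc in documents:
        # Normalize whitespace so trivial variants hash identically.
        digest = hashlib.sha256(" ".join(doc.split()).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(doc)
    return kept

corpus = ["the cat sat", "the  cat sat", "a different document"]
print(dedupe(corpus))   # -> ['the cat sat', 'a different document']
```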
0:18:23 You add all these up and you’ve got eight or ten different combinations of sort of practical
0:18:27 and theoretical improvement vectors that are all in play.
0:18:31 And it’s hard for me to imagine that some combination of those doesn’t lead to like
0:18:33 really dramatic improvement from here.
0:18:35 I definitely agree.
0:18:36 I think that’s for sure going to happen, right?
0:18:38 Like if you were still back to Sam’s proposition, I think if you were a startup and you were
0:18:43 like, okay, in two years I can get as good as GPT-4, you shouldn’t do that.
0:18:48 Right.
0:18:49 That would be a bad mistake.
0:18:51 Right.
0:18:52 Right.
0:18:53 Well, this also goes to, you know, a lot of entrepreneurs are afraid of, well, I’ll give
0:18:55 you an example.
0:18:56 So a lot of entrepreneurs, here’s this thing they’re trying to figure out, which is, okay,
0:18:58 I really think, I know how to build a SaaS app that harnesses an LLM to do really good
0:19:02 marketing collateral.
0:19:03 Let’s just make it very similar.
0:19:04 A very similar thing.
0:19:05 Yeah.
0:19:06 And so I build a whole system for that.
0:19:07 Will it just turn out to be that the big models in six months will be even better in
0:19:11 making marketing collateral just from a simple prompt, such that my apparently sophisticated
0:19:16 system is just irrelevant because the big model just does it?
0:19:18 Yeah.
0:19:19 Yeah.
0:19:20 Let’s talk about that.
0:19:21 Like apps, you know, another way you can think about it is that the criticism of a lot
0:19:24 of current AI app companies is their quote unquote, you know, GPT wrappers, they’re sort
0:19:28 of thin layers of wrapper around the core model, which means the core model could commoditize
0:19:31 them or displace them.
0:19:32 But the counterargument, of course, is it’s a little bit like calling all, you know, old
0:19:36 software apps, you know, database wrappers, you know, wrappers around a database.
0:19:40 It turns out like actually wrappers around a database is like most modern software and
0:19:43 a lot of that actually turned out to be really valuable.
0:19:45 It turns out there’s a lot of things to build around the core engine.
0:19:47 So yeah.
0:19:48 So Ben, how do we think about that when we run into companies thinking about building
0:19:50 apps?
0:19:51 Yeah.
0:19:52 You know, it’s a very tricky question because there’s also this correctness gap, right?
0:19:56 So you know, why do we have co-pilots?
0:20:00 Where are the pilots?
0:20:01 Right?
0:20:02 Where are the AI?
0:20:03 There’s no AI pilots.
0:20:04 They’re only AI co-pilots.
0:20:05 There’s a human in the loop on absolutely everything.
0:20:09 And that really kind of comes down to this, you know, you can’t trust the AI to be correct
0:20:16 in drawing a picture or writing a program or, you know, even like writing a court brief
0:20:24 without making up citations, you know, all these things kind of require a human and
0:20:30 it kind of turns out to be like fairly dangerous not to have one.
0:20:33 And then I think that so what’s happening a lot with the application layer is people
0:20:37 saying, well, to make it really useful, I need to turn this co-pilot into a pilot.
0:20:43 And can I do that?
0:20:44 And so that’s an interesting and hard problem.
0:20:48 And then there’s a question of, is that better done at the model level or at some layer on
0:20:53 top that, you know, kind of teases the correct answer out of the model, you know, by doing
0:20:59 things like using code validation or what have you?
0:21:01 Or is that just something that the models will be able to do?
0:21:04 I think that’s one open question.
0:21:06 And then, you know, as you get into kind of domains and, you know, process-oriented
0:21:11 things, I think there’s a different dimension than what the models are good at, which is
0:21:16 what is the process flow. So on the database kind
0:21:23 of analogy, there is like the part of the task in a law firm that’s writing the brief,
0:21:31 but there are 50 other tasks and things that have to be integrated into the way a company
0:21:38 works, like the process flow, the orchestration of it.
0:21:42 And maybe there are, you know, a lot of these things, like if you’re doing video production,
0:21:46 there’s many tools or music, even, right, like, okay, who’s going to write the lyrics,
0:21:51 which AI will write the lyrics and which AI will figure out the music, and then like,
0:21:56 how does that all come together and how do we integrate it and so forth.
0:22:00 And those things tend to, you know, just require a real understanding of the end customer and
0:22:08 so forth in a way, and that’s typically been how like applications have been different
0:22:13 than platforms in the past is like, there’s real knowledge about how the customer using
0:22:19 it wants to function that doesn’t have anything to do with the kind of intelligence, or is just
0:22:26 different than what the platform is designed to do.
0:22:30 And to get that out of the platform for a kind of company or a person turns out to be
0:22:35 really, really hard.
0:22:36 And so those things, I think, are likely to work, you know, especially if the process
0:22:41 is very complex.
0:22:42 And it’s something that’s funny as a firm, you know, we’re a little more hardcore technology
0:22:47 oriented, and we’ve always struggled with those, you know, in terms of, oh, this is
0:22:52 like some process application for plumbers to figure this out, and we’re like, well,
0:22:59 where’s the technology?
0:23:01 But you know, a lot of it is how do you encode, you know, some level of domain expertise and
0:23:07 kind of how things work in the actual world back into the software.
0:23:13 I’ve often told founders that you can think about this in terms of price,
0:23:16 you can kind of work backwards from pricing a little bit, which is to say sort of business
0:23:19 value and what you can charge for, which is, you know, the natural thing for any technologists
0:23:23 to do is to kind of say, I have this new technological capability, and I’m going to sell it to people
0:23:26 and like, what am I going to charge for it is going to be somewhere between, you know,
0:23:29 my cost of providing it and then, you know, whatever markup I think I can justify, you
0:23:33 know, and if I have a monopoly providing it, maybe the markup is infinite.
0:23:36 But, you know, it’s kind of this technology forward, you know, kind of supplier supply
0:23:40 forward, you know, pricing model, there’s a completely different pricing model for kind
0:23:44 of business value backwards, and sort of, you know, so-called value pricing, value-based
0:23:49 pricing.
0:23:50 And that’s, you know, to your point, that’s basically a pricing model that says, okay,
0:23:53 what’s the business value to the customer of the thing?
0:23:56 And if the business value is, you know, a million dollars, then can I charge 10% of
0:24:01 that and get $100,000, right, or whatever?
0:24:04 And then, you know, why does it cost $100,000 as compared to $5,000? Well, because
0:24:09 to the customer, it’s worth a million dollars, and so they’ll pay 10% for it.
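The arithmetic is simple enough to write down. The $1M of value and the 10% capture rate are the numbers used in the conversation; the cost-plus markup is an invented illustration.

```python
# Cost-plus pricing anchors to your costs; value-based pricing anchors to
# the customer's business value. The markup figure is illustrative.
def cost_plus_price(unit_cost, markup=0.5):
    return unit_cost * (1 + markup)

def value_based_price(customer_value, capture_rate=0.10):
    return customer_value * capture_rate

print(cost_plus_price(5_000))         # 7500.0   -- supplier-forward pricing
print(value_based_price(1_000_000))   # 100000.0 -- 10% of $1M in value
```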
0:24:12 Yeah, actually, so a great example of that, like, we’ve got a company in our portfolio,
0:24:19 Crest AI that does things like debt collection, okay, so if I can collect way more debt with
0:24:28 way fewer people with my, you know, it’s a co-pilot type solution, then what’s that
0:24:36 worth?
0:24:37 Well, it’s worth a heck of a lot more than just buying an OpenAI license because an
0:24:43 OpenAI license is not going to easily collect debts, or kind of enable your debt collectors
0:24:50 to be massively more efficient, or that kind of thing, so it’s bridging that gap between
0:24:56 the value.
0:24:57 And I think you had a really important point, the test for whether your idea is good is how
0:25:00 much can you charge for it?
0:25:02 Can you charge the value?
0:25:04 Or are you just charging the amount of work it’s going to take the customer to put their
0:25:10 own wrapper on top of OpenAI, like, that’s the real test of, like, how deep and
0:25:17 how important is what you’ve done?
0:25:19 Yeah, and so to your point on, like, the kinds of businesses that technology investors have
0:25:24 had a hard time with, you know, kind of thinking about, you know, maybe accurately is sort of,
0:25:28 it’s the company that is, it’s a vendor that has built something where it is a specific
0:25:32 solution to a business problem, where it turns out the business problem is very valuable
0:25:36 to the customer.
0:25:37 And so therefore they will pay a percentage of the value provided back in the form of
0:25:43 the price of the software, and it actually turns out you can have businesses that are
0:25:47 not very technologically differentiated that are actually extremely lucrative.
0:25:52 And then because that business is so lucrative, they can actually afford to go think very
0:25:56 deeply about how technology integrates into the business, what else they can do.
0:26:00 You know, this is like the story of a Salesforce.com, for example, right?
0:26:03 And by the way, there’s kind of a, a chance, a theory that the models are all getting really
0:26:09 good.
0:26:10 There are open source models, there are, like, that are awesome, you know, Lama, Mistral,
0:26:16 like these are great models.
0:26:18 And so the actual layer where the value is going to crew is going to be, like, tools,
0:26:24 orchestration, that kind of thing, because you can just plug in whatever the best model
0:26:28 is at the time, whereas the models are going to be competing, you know, in a death battle
0:26:33 with each other and, you know, be commoditized down to the, you know, the cheapest one wins
0:26:39 and that kind of thing.
0:26:40 So, you know, you could argue that the best thing to do is to kind of connect
0:26:47 the power to the people.
0:26:49 Right.
0:26:50 Right.
0:26:51 So that actually takes us to the next question, and this is a two-in-one question.
0:26:54 So Michael asks, and these are, and I’ll say these are diametrically opposed, which
0:26:57 is why I paired them.
0:26:58 So Michael asks, why are VCs making huge investments in generative AI startups when it’s clear
0:27:03 these startups won’t be profitable anytime soon, which was a loaded, loaded question,
0:27:07 but we’ll take it.
0:27:08 And then Kaiser asks, if AI deflates the cost of building a startup, how will the structure
0:27:12 of tech investment change?
0:27:14 And of course, Ben, this goes to exactly what you just said.
0:27:16 So it’s basically the questions are diametrically opposed, because if you squint out of your
0:27:20 left eye, right, what you see is basically the amount of money being invested in the
0:27:23 foundation model companies kind of going up to the right at a furious pace, you know,
0:27:26 these companies are raising hundreds of millions, billions, tens of billions of dollars.
0:27:29 And it’s just like, oh my God, look at these sort of capital, you know, sort of, I don’t
0:27:33 know, infernos, you know, that hopefully will result in value at the end of the process.
0:27:37 But my God, look at how much money is being invested in these things.
0:27:39 If you squint through your right eye, you know, you think, wow, that now all of a sudden it’s
0:27:43 like much easier to build software.
0:27:45 It’s much easier to have a software company.
0:27:46 It’s much easier to like have a small number of programmers writing complex software because
0:27:49 they’ve got all these AI co-pilots and all these automated software development capabilities
0:27:53 that are coming online.
0:27:54 Yeah.
0:27:55 So on the other side, the cost of building an AI like application startup might, you
0:27:59 know, crash.
0:28:00 And it might just be that like the, you know, the Salesforce, the AI Salesforce.com might
0:28:04 cost, you know, a tenth or a hundredth or a thousandth of the amount of money that it
0:28:07 took to build the, you know, the old database driven Salesforce.com.
0:28:10 And so yeah, so what do we think of the dichotomy, which is you can actually
0:28:14 look out of either eye and see costs either going to the moon for startup funding
0:28:19 or costs actually going to zero.
0:28:21 Yeah.
0:28:22 Well, like so it is interesting.
0:28:24 I mean, we actually have companies in both camps, right?
0:28:27 Like I think probably the companies that have gotten to profitability the fastest, maybe
0:28:33 in the history of the firm have been AI companies, you know, AI companies in the portfolio
0:28:37 where the revenue grows so fast that it actually kind of runs out ahead of the cost.
0:28:44 And then there are like, you know, people who are in the foundation model race who are
0:28:49 raising hundreds of millions, you know, even billions of dollars to kind of keep pace and
0:28:54 so forth.
0:28:55 They also are kind of generating revenue at a fast rate.
0:29:00 The headcount in all of them is small.
0:29:02 So I would say, you know, where AI money goes, and even, you know, like if you look at Open
0:29:09 AI, which is the big spender in the startup world, and which, you know, we are also investors in,
0:29:16 you know, headcount-wise, they’re pretty small against their revenue.
0:29:20 Like it is not a big company headcount.
0:29:22 Like if you look at the revenue level and how fast they’ve gotten there, it’s pretty
0:29:27 small.
0:29:28 Now, the total expenses are ginormous, but they’re going into the model creation.
0:29:33 So it’s an interesting thing.
0:29:35 I mean, I’m not entirely sure how to think about it, but I think like if you’re not building
0:29:40 a foundation model, it will make you more efficient and probably gets profitability
0:29:45 quicker.
0:29:46 Right.
0:29:47 So the counter, and this is a very bullish counterargument, but the counter
0:29:51 argument to that would be basically that falling costs for like building new software
0:29:55 companies are a mirage.
0:29:57 And the reason for that is this thing in economics called the Jevons paradox, which I’m going
0:30:01 to read from Wikipedia.
0:30:02 So the Jevons paradox occurs when technological progress increases the efficiency with which
0:30:07 a resource is used, right, reducing the amount of that resource necessary for any one use.
0:30:12 But the falling cost induces increases in demand, right, elasticity, enough that the
0:30:17 resource use overall is increased rather than reduced.
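In numbers, the paradox looks like this; the figures below are made up purely for illustration.

```python
# Jevons paradox, numerically: per-unit cost falls 10x, but elastic demand
# rises 30x, so total resource use (here, software spend) goes up.
cost_before, apps_before = 1_000_000, 10
cost_after, apps_after = 100_000, 300

print(cost_before * apps_before)   # 10000000 -- total spend before
print(cost_after * apps_after)     # 30000000 -- total spend after: 3x higher
```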
0:30:20 Yeah.
0:30:21 That’s certainly possible.
0:30:23 Right.
0:30:24 And so you see versions of this; for example, you build a new freeway
0:30:27 and it actually makes traffic jams worse, right?
0:30:30 Because basically what happens is, oh, it’s great.
0:30:31 Now there’s more roads.
0:30:32 Now we can have more people live here.
0:30:34 We can have more people that, you know, we can make these companies bigger and now there’s
0:30:36 more traffic than ever.
0:30:37 And now the traffic is even worse.
0:30:39 Or you saw the classic examples during the Industrial Revolution, coal consumption, as
0:30:43 the price of coal dropped, people used so much more coal that the overall consumption
0:30:48 actually increased.
0:30:49 And people are getting a lot more power, but the result was the use of a lot more coal
0:30:54 That’s the paradox.
0:30:54 And so the paradox here would be, yes, the cost of developing any given piece of software
0:30:59 falls, but the reaction to that is a massive surge of demand for software capabilities.
0:31:05 And so the result of that actually is, although it looks like for starting software companies
0:31:08 the price is going to fall,
0:31:09 actually what’s going to happen is it’s going to rise, for the high-quality reason that you’re
0:31:12 going to be able to do so much more, right, with software.
0:31:16 The products are going to be so much better and the roadmap is going to be so amazing
0:31:19 of the things you can do.
0:31:20 And the customers are going to be so happy with it that they’re going to want more and
0:31:22 more and more.
0:31:24 So the result of it, and by the way, another example of the Jevons paradox playing out in
0:31:27 a related industry is Hollywood, you know, CGI in theory should have reduced the
0:31:31 price of making movies.
0:31:33 In reality, it’s increased it because audience expectations went up.
0:31:36 And now you go to a Hollywood movie and it’s wall-to-wall CGI.
0:31:39 And so, you know, movies are more expensive to make than ever.
0:31:41 And so, you know, the result in Hollywood is at least much more,
0:31:45 let’s say visually elaborate, you know, movies, whether they’re better or not is another question,
0:31:48 but like much more visually elaborate, compelling, kind of visually stunning movies through CGI.
0:31:52 The version here would be much better software, like radically better software to the end user,
0:31:57 which causes end users to want a lot more software, which causes actually the price of development
0:32:01 to rise.
0:32:02 You know, if you just think about like a simple case like travel, like, okay, booking a trip
0:32:07 through Expedia is like complicated, you’re likely to get it wrong, you’re clicking on
0:32:12 menus and this and that and the other and like, you know, in AI version of that would
0:32:17 be like, you know, send me to Paris, put me in a hotel I love at the best price, you know,
0:32:22 send me on the best possible kind of airline, an airline ticket and then, you know, like
0:32:29 make it like really special for me and like maybe you need a human to go, okay, like we’re
0:32:35 going to, you know, or maybe the AI goes further and says, okay, well, we know
0:32:40 the person loves chocolate and we’re going to like, you know, FedEx in the best chocolate
0:32:45 in the world from Switzerland into this hotel in Paris and this and that and the other.
0:32:50 And so like the quality, you can, the quality could get to levels that we can’t even imagine
0:32:56 today just because, you know, the software tools aren’t, aren’t what they’re going to
0:33:00 be.
0:33:01 So that’s right.
0:33:02 Yeah, I kind of buy that actually.
0:33:04 I think I buy the argument. Or how about, yeah, how about I’m going
0:33:10 to land in whatever Boston at six o’clock, I want to have dinner at seven with a table
0:33:13 full of like super interesting people.
0:33:15 Yeah, right, right, like, yeah,
0:33:21 no, no travel agent would do that for you today, nor would you want them to.
0:33:24 No.
0:33:25 No.
0:33:26 Right.
0:33:27 Well, and then you think about it, it’s got to be integrated into my personal AI and
0:33:33 like, and this is, you know, there’s just like unlimited kind of ideas that you can
0:33:37 do.
0:33:38 And I think this is one of the kind of things that’s always been underestimated about humans
0:33:43 is like our ability to come up with new things we need.
0:33:48 Like that has been unlimited.
0:33:50 And there’s a very kind of famous case where John Maynard Keynes, the kind of prominent
0:33:56 economist in the kind of first half of last century, had this thing that he predicted,
0:34:01 which is like, nobody, because of automation, nobody would ever work a 40 hour work week,
0:34:08 you know, like good, because once their needs were met, needs being like shelter and food.
0:34:15 And you know, I don’t even know if transportation was in there.
0:34:17 Like that was it.
0:34:18 It was over.
0:34:19 You would never work past the need for shelter and food.
0:34:23 Like why would you?
0:34:24 Like there’s no reason to, but of course needs expanded.
0:34:27 So then everybody needed a refrigerator, everybody needed not just one car, but a car for everybody
0:34:32 in the family.
0:34:33 Everybody needed a television set, everybody needed like glorious vacations, everybody,
0:34:38 you know.
0:34:39 So what are we going to need next?
0:34:41 I’m quite sure that I can’t imagine it, but like somebody’s going to imagine it and it’s
0:34:46 quickly going to become a need.
0:34:48 Yeah, that’s right.
0:34:49 By the way, as Keynes famously said, his essay I think was Economic Possibilities for Our Grandchildren,
0:34:55 which was basically that.
0:34:56 Yeah.
0:34:57 You just articulated it.
0:34:58 So Karl Marx said another version of that, I just pulled up the quote.
0:35:00 So that, you know, when the Marxist utopia of socialism is achieved, society regulates
0:35:06 the general production.
0:35:07 That makes it possible for me to do blah, blah, blah, to hunt in the morning, fish in
0:35:12 the afternoon, rear cattle in the evening, criticize after dinner.
0:35:18 What a glorious life.
0:35:20 What a glorious life.
0:35:21 Like if I could just list four things that I do not want to do, it’s hunt, fish, rear
0:35:26 cattle and criticize.
0:35:27 Yeah.
0:35:28 Yeah.
0:35:29 Right.
0:35:30 And by the way, it says a lot about Marx that those were his four things.
0:35:32 Well, the criticizing being his favorite thing, I think it’s basically communism in
0:35:37 a nutshell.
0:35:38 Yeah.
0:35:39 Exactly.
0:35:40 I don’t want to get too political, but yes, yes, 100%.
0:35:43 And so yeah, what Keynes and
0:35:46 Marx have in common is just this incredibly constricted view of what people
0:35:50 want to do.
0:35:51 And then correspondingly, you know, the other thing is just, like, you know, people
0:35:53 want to have a mission.
0:35:55 I mean, probably some people just want to fish and hunt, but you know, a lot of, a lot
0:35:58 of people want to have a mission.
0:35:59 They want to have a cause.
0:36:00 They want to have a purpose.
0:36:01 They want to be useful.
0:36:02 They want to be productive.
0:36:03 It’s actually a good thing in life.
0:36:04 It turns out.
0:36:05 It turns out.
0:36:06 Yeah.
0:36:07 In a startling turn of events.
0:36:09 Okay.
0:36:10 So yeah.
0:36:11 So yeah, I think that I’ve long felt, you know, a little bit of the software eats the
0:36:13 world thing a decade ago.
0:36:15 I’ve always thought that basically demand for software is sort
0:36:18 of perfectly elastic, possibly to infinity.
0:36:20 And the theory there basically is if you just continuously bring down the cost of software,
0:36:24 you know, which has been happening over time, then demand
0:36:26 basically perfectly correlates upward.
0:36:29 And the reason is because, you know, kind of as we’ve been discussing,
0:36:32 there’s always something else to do in software.
0:36:35 There’s always something else to automate.
0:36:36 There’s always something else to optimize.
0:36:37 There’s always something else to improve.
0:36:40 There’s always something to make better.
0:36:41 And, you know, in the moment with the constraints that you have today, you may not, you know,
0:36:44 think of what that is, but the minute you don’t have those constraints, you’ll imagine
0:36:47 what it is.
0:36:48 I’ll just give you an example.
0:36:49 I mean, I’ll give you an example of playing out with AI right now, right?
0:36:51 So there have been, and we have, you know, we have companies that do this.
0:36:54 You know, there have been, you know, there have been companies that have made AI, you
0:36:56 know, that have made software systems for doing security cameras forever, right?
0:37:00 And it’s like, for a long time, it was like a big deal to have software that would do
0:37:03 like, you know, have different security camera feeds and store them on a DVR and be able
0:37:06 to replay them and have an interface that lets you do that.
0:37:09 Well, it’s like, you know, AI security cameras, all of a sudden can have like, actual like,
0:37:13 semantic knowledge of what’s happening in the environment.
0:37:14 And so they can say, you know, hey, that’s Ben, and then they can say, oh, hey, you know,
0:37:18 that’s Ben, but he’s carrying a gun.
0:37:19 Yeah.
0:37:20 Right.
0:37:21 Right.
0:37:22 And by the way, that’s Ben and he’s carrying a gun, but that’s because like he hunts on,
0:37:24 you know, on Thursdays and Fridays, as compared to that’s Mary and she never carries a gun
0:37:28 and like, you know, like something is wrong and she’s really mad, right?
0:37:32 She’s got a, yeah, really steamed expression on her face and we should probably be worried
0:37:35 about it, right?
0:37:36 So there’s like an entirely new set of capabilities you can do just as one example for security
0:37:40 systems that were never possible pre AI and the security system that actually has a semantic
0:37:44 understanding of the world is obviously much more sophisticated than the one that doesn’t
0:37:48 and might actually be more expensive to make, right?
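A sketch of what that semantic layer might look like, with `describe_frame` as a hypothetical stand-in for a vision-language model call; the rules are drawn from the example just discussed.

```python
# Rules on top of a semantic scene description: the model extracts who is
# in frame and what they're carrying; context decides whether to alert.
def describe_frame(frame) -> dict:
    """Hypothetical VLM call, returning e.g. {'person': 'Ben', 'carrying': 'gun'}."""
    ...

KNOWN_HUNTERS = {"Ben"}          # context a DVR-era system never had
HUNTING_DAYS = {"Thu", "Fri"}

def assess(frame, weekday):
    event = describe_frame(frame)
    if event.get("carrying") == "gun":
        if event.get("person") in KNOWN_HUNTERS and weekday in HUNTING_DAYS:
            return "expected"    # Ben hunts on Thursdays and Fridays
        return "alert"           # anyone else armed: escalate
    return "normal"
```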
0:37:50 Right.
0:37:51 Well, and just imagine healthcare, right?
0:37:53 Like you could wake up every morning and have a complete diagnostic, you know, like
0:38:01 how am I doing today?
0:38:02 Like what are all my levels of everything?
0:38:04 And, you know, how should I interpret them, you know, better than, you know, this is one
0:38:09 thing where AI is really good is, you know, medical diagnosis because it’s a super high
0:38:14 dimensional problem.
0:38:16 But if you can get access to, you know, your continuous glucose reading, you know, maybe
0:38:21 sequester blood now and again, this and that and the other, yeah, you’ve got an incredible
0:38:26 kind of view of things and who doesn’t want to be healthier, you know, like now we have
0:38:32 a scale.
0:38:33 That’s basically what we do, you know, maybe check your heart rate or something, but like
0:38:39 pretty primitive stuff compared to where we could go.
0:38:41 Yeah, that’s right.
0:38:42 Okay, good.
0:38:43 All right.
0:38:44 So let’s go to the next topic.
0:38:45 So on the topic of data, so a major Tom asks, as these AI models allow for us to copy existing
0:38:50 app functionality at minimal cost, proprietary data seems to be the most important moat.
0:38:55 How do you think that will affect proprietary data value?
0:38:58 What other moats do you think companies can focus on building in this new environment?
0:39:01 And then Jeff Weishaupt asks, how should companies protect sensitive data, trade secrets, proprietary
0:39:06 data, individual privacy, and the brave new world of AI?
0:39:09 So let me start with a provocative statement, Betsy, if you agree with it, which is, you
0:39:15 know, you sort of hear a lot, this sort of statement or cliche is like data is the new
0:39:18 oil.
0:39:19 And so it’s like, okay, data is the key input to training AI, making all this stuff work.
0:39:23 And so, you know, therefore, you know, data is basically the new resource.
0:39:26 It’s the limiting resource.
0:39:27 It’s the super valuable thing.
0:39:29 And so, you know, whoever has the best data is going to win, and you see that directly
0:39:32 in how you train AI’s.
0:39:33 And then, you know, you also have like a lot of companies, of course, that are now trying
0:39:36 to figure out what to do with AI.
0:39:38 And a very common thing you’ll hear from companies is, well, we have proprietary data, right?
0:39:42 So I’m, you know, I’m a hospital chain or I’m, you know, whatever, any kind of business,
0:39:46 insurance company or whatever.
0:39:47 And I’ve got all this proprietary data that I can apply, you know, that I’ll be able to,
0:39:50 you know, build things with my proprietary data with AI that won’t just, you know, be
0:39:54 something that anybody will be able to have.
0:39:56 Let me argue that basically, let’s see, let me argue in like almost every case like that,
0:40:01 it’s not true.
0:40:02 It’s basically what the Internet kids would call cope.
0:40:04 It’s simply not true.
0:40:05 And the reason it’s just not true is because the amount of data available on the Internet
0:40:10 and just generally in the environment is just a million times greater.
0:40:16 And so, while it may not, you know, while it may not be true that I have your specific
0:40:19 medical information, I have so much medical information off the Internet for so many people
0:40:24 in so many different scenarios that it just swamps the value of quote, your data, you
0:40:30 know, just, it’s just, it’s just like overwhelming.
0:40:31 And so your, your, your proprietary data as, you know, company acts will be a little bit
0:40:35 useful on the margin, but it’s not actually going to move the needle.
0:40:37 And it’s not really going to be a barrier to entry in most cases.
0:40:40 And then let me cite as proof for my belief that this is mostly cope:
0:40:45 there has never been, nor is there now, any sort of rich
0:40:49 or sophisticated marketplace for data, market for data. There’s no
0:40:54 large marketplace for data.
0:40:56 And in fact, what there are is very small markets for data.
0:40:59 So there are these businesses called data brokers that will sell you, you know, large
0:41:01 numbers of like, you know, information about users on the Internet or something.
0:41:05 And they’re just small businesses, like they’re just not large, it just turns out like information
0:41:09 on lots of people is just not very valuable.
0:41:11 And so if the data actually had value, you know, it would have a market price and you
0:41:15 would see it transacting and you actually very specifically don’t see that, which is
0:41:19 sort of a, you know, yeah, sort of quantitative proof that the data actually is not nearly
0:41:23 as valuable as people think it is.
0:41:25 Where I agree, so I agree that the data, like, just “here’s a bunch of data and I can sell
0:41:34 it without doing anything to the data,” is like massively overrated, like I definitely agree
0:41:42 with that.
0:41:43 And like maybe I can imagine some exceptions, like some, you know, special population genomic
0:41:49 databases or something that are, that were very hard to acquire, that are useful in some
0:41:53 way that’s, you know, that’s not just like living on the Internet or something like that.
0:41:57 I could imagine where that’s super highly structured, very general purpose and not widely available.
0:42:04 But for most data in companies is not like that.
0:42:07 And that it tends to not, it’s either widely available or not general purpose.
0:42:12 It’s kind of specific.
0:42:14 Having said that, right, like companies have made great use of data, for example, a company
0:42:20 that you’re familiar with, Meta, uses its data to kind of great ends itself, feeding
0:42:26 it into its own AI systems, optimizing its products in incredible ways.
0:42:31 And I think that, you know, us, Andreessen Horowitz, actually, you know, so we just raised
0:42:35 $7.2 billion and it’s not a huge deal.
0:42:40 But we took our data and we put it into an AI system, and our LPs were able to query it. There’s a
0:42:47 million questions investors have about everything we’ve done, our track record, every company
0:42:53 we’ve invested in, and so forth.
0:42:55 And for any of those questions, they could just ask the AI, they could be wake up at
0:42:58 three o’clock in the morning, go, “Do I really want to trust these guys?”
0:43:02 And go in and ask the AI a question and boom, they’d get an answer back instantly.
0:43:05 They wouldn’t have to wait for us and so forth.
0:43:07 So we really kind of improved our investor relations product tremendously through use
0:43:12 of our data.
0:43:14 And I think that almost every company can improve its competitiveness through use of its own
0:43:21 data.
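The pattern Ben is describing is usually built as retrieval-augmented generation. A minimal sketch, with `embed` and `ask_llm` as hypothetical placeholders for any provider's APIs:

```python
# Retrieval-augmented QA over your own documents: embed, retrieve, answer.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical: map text to a vector."""
    ...

def ask_llm(prompt: str) -> str:
    """Hypothetical: get a completion from a language model."""
    ...

def answer(question: str, docs: list[str]) -> str:
    q = embed(question)
    # Rank documents by dot-product similarity to the question, keep the top few.
    ranked = sorted(docs, key=lambda d: -float(np.dot(embed(d), q)))
    context = "\n\n".join(ranked[:3])
    return ask_llm(f"Answer using only this context:\n{context}\n\nQ: {question}")
```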
0:43:22 But the idea that it’s collected some data that it can go like sell or that is oil or
0:43:30 what have you, that’s, yeah, that’s probably not true.
0:43:36 I would say, and it's kind of interesting, because a lot of the data that you would think would be the most valuable would be your own code base, right?
0:43:46 Your software that you've written, so much of that lives in GitHub.
0:43:49 But nobody is actually doing that. I don't know of any company, and we work with, you know, whatever, a thousand software companies, do we know any that's building their own programming model on their own code?
0:44:02 And would that even be a good idea?
0:44:05 Probably not, just because there's so much code out there that the systems have been trained on.
0:44:10 So that's not so much of an advantage.
0:44:14 So I think it’s a very specific kind of data that would have value.
0:44:17 Well, let's make it actionable then.
0:44:19 If I'm running a big company, like an insurance company or a bank or a hospital chain, or a consumer packaged goods company, Pepsi or something, how should I validate that I actually have a valuable proprietary data asset that I should really be focusing on using?
0:44:36 Versus, in the alternative, maybe I should take all the effort I would spend trying to optimize use of that data and spend it entirely trying to build things using internet data instead.
0:44:47 Yeah, so I think, I mean, look, if you're in the insurance business, then all your actuarial data is interesting, and I don't know that anybody publishes their actuarial data.
0:45:03 And so I'm not sure how you would train the model on stuff off of the internet.
0:45:08 Yes.
0:45:09 That’s good.
0:45:10 Let me, can I challenge that one?
0:45:11 So that would be good. That'd be a good test case.
0:45:14 So I'm an insurance company.
0:45:15 I've got records on 10 million people, and, you know, the actuarial tables and when they get sick and when they die.
0:45:18 Okay.
0:45:19 That’s great.
0:45:20 But there's lots and lots of general actuarial data on the internet for large-scale populations, because governments collect the data, and they process it, and they publish reports.
0:45:29 And there's lots of academic studies.
0:45:32 And so is your large data set giving you any additional actuarial information that the much larger data set on the internet isn't already providing you?
0:45:41 Like, are your insurance clients actually actuarially any different than just everybody?
0:45:47 I think so.
0:45:48 Because on intake, you know, when you get insurance, they give you a blood test.
0:45:55 They've got all these things: we know if you're a smoker and so forth.
0:45:58 And in the general data set, yeah, you know who dies, but you don't know what the fuck they did coming in.
0:46:05 And so what you really are looking for is, okay, for this profile of person with these kinds of lab results, how long do they live?
0:46:13 And that's where the value is.
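A toy version of the point Ben is making: the value is in linking intake measurements to outcomes, which a population-level life table cannot do. Every feature, label, and number below is fabricated for illustration; a real actuarial model would be far more involved.

```python
# Fabricated example: predict death-within-horizon from intake data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per applicant: [age, smoker (0/1), systolic BP, cholesterol]
X = np.array([
    [45, 0, 120, 180],
    [45, 1, 150, 240],
    [60, 0, 130, 200],
    [60, 1, 160, 260],
    [30, 0, 110, 170],
    [30, 1, 140, 230],
])
# Outcome: 1 = died within the policy horizon (made-up labels).
y = np.array([0, 1, 0, 1, 0, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[50, 1, 155, 250]])
print("population base rate:", y.mean())  # roughly what public tables give you
print("intake-conditioned risk:", model.predict_proba(applicant)[0, 1])
```

The base rate is roughly what public mortality data gives you; the conditional estimate is what the insurer's proprietary intake data buys.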
0:46:15 And I think, interestingly, you know, I was thinking about a company like Coinbase, where, right, they have incredibly valuable assets in terms of money.
0:46:27 They have to stop people from breaking in.
0:46:29 They've done a massive amount of work on that.
0:46:32 They've seen all kinds of break-in types.
0:46:34 I'm sure they have tons of data on that.
0:46:36 It's probably weirdly specific to people trying to break into crypto exchanges.
0:46:42 And so, you know, I think it could be very useful for them.
0:46:45 I don't think they could sell it to anybody, but I think every company's got data that, fed into an intelligent system, would help their business.
0:46:57 And I think almost nobody has data that they could just go sell.
0:47:02 And then there's this kind of in-between question, which is: what data would you want to let Microsoft or Google or OpenAI or anybody get their grubby little fingers on?
0:47:13 And that I'm not sure about.
0:47:19 That, I think, is the question that enterprises are wrestling with. It's not so much "should we go sell our data," but "should we train our own model just so we can maximize the value?"
0:47:32 Or should we feed it into the big model?
0:47:35 And if we feed it into the big model, do all of our competitors now have the thing that we just did?
0:47:40 Or could we trust the big company to not do that to us? And I kind of think the answer on trusting the big company not to F with your data is: I wouldn't do that.
0:47:55 If your competitiveness depends on that, you probably shouldn't do that.
0:47:58 Well, there are at least reports that certain big companies are using all kinds of data that they shouldn't be using to train their models already.
0:48:04 So.
0:48:05 Yep.
0:48:06 I think those reports are very likely true.
0:48:10 Right.
0:48:11 Or they'd have open data, right?
0:48:12 Like, we've talked about this before, but the same companies that are saying they're not stealing all the data from people, or taking it in an unauthorized way, refuse to, say, open their data.
0:48:26 Like, why not tell us where your data came from?
0:48:28 And in fact, they're trying to shut down all openness: no open source, no open weights, no open data, no open nothing, and they go to the government to try to get that done.
0:48:36 You know, if you're not a thief, then why are you doing that?
0:48:39 Right.
0:48:42 What are you hiding?
0:48:43 By the way, there's other twists and turns here.
0:48:44 The insurance example, I kind of deliberately loaded it, because you may know it's actually illegal to use genetic data for insurance purposes, right?
0:48:51 So there's this thing called GINA, the Genetic Information Nondiscrimination Act of 2008.
0:48:58 And it basically bans health insurers in the U.S. from actually using genetic data for the purpose of doing, you know, actuarial assessment. Which, by the way, now that genomics is getting really good,
0:49:08 that data probably actually is among the most accurate data you could have if you were trying to predict when people are going to get sick and die.
0:49:15 And they're literally not allowed to use it.
0:49:18 Yeah, it is.
0:49:20 I think this is an interesting, weird misapplication of good intentions in a policy way that's probably going to kill more people than ever get saved by every kind of health, FDA, et cetera, policy that we have.
0:49:37 Which is: in a world of AI, having access to data on all humans, why they get sick, what their genetics were, et cetera, is, you know, you're talking about data being the new oil. That is the new oil; that's the healthcare oil.
0:49:55 If you could match those up, then we'd never not know why we're sick. You could make everybody much healthier, all these kinds of things.
0:50:08 But to stop the insurance company from overcharging people who are more likely to die, we've locked up all this data.
0:50:15 A better idea would be to just go, okay, we subsidize healthcare massively for individuals anyway, so for the people who are likely to be expensive, just differentially subsidize.
0:50:38 And then you solve the problem and you don't lock up all the data.
0:50:41 But yeah, it's typical of politics and policy. I mean, most of them are like that, I think.
0:50:47 Yeah.
0:50:48 Well, there's these interesting questions with insurance. One of the questions people have asked about insurance is: if you had perfectly predictive information on individual outcomes, does the whole concept of insurance actually still work?
0:50:57 Right, because the whole theory of insurance is risk pooling. It's precisely the fact that you don't know what's going to happen in the specific case that means you build these statistical models, and then you risk pool, and then you have variable payouts depending on exactly what happens.
0:51:11 But if you literally knew what was going to happen in every case, because, for example, you have all this predictive genomic data, then all of a sudden it wouldn't make sense to risk pool, because you'd just say, well, no, this person's going to cost X, that person's going to cost Y; there's no…
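Written out, the standard back-of-envelope version of that argument looks like this (a sketch, assuming n members with independent, identically distributed annual costs):

```latex
% Everyone pays roughly the expected cost plus a loading:
\[ P \;=\; \mathbb{E}[C] \,+\, \text{loading} \]
% Pooling works because the average cost per member concentrates:
\[ \operatorname{Var}\!\Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} C_i\Big)
   \;=\; \frac{\operatorname{Var}(C)}{n} \;\longrightarrow\; 0
   \quad \text{as } n \to \infty \]
% With a perfect predictor, each cost c_i is known in advance, so the
% actuarially fair premium degenerates to P_i = c_i: nothing left to pool.
```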
0:51:24 Health insurance already doesn't make sense in that way, right? Like, the idea of insurance, it started with crop insurance: okay, my crop fails, and so we all put money in a pool in case my crop fails, so that we can cover it.
0:51:43 It's designed to risk pool for a catastrophic, unlikely incident.
0:51:49 But everybody's got to go to the doctor all the fucking time.
0:51:53 And some people get sicker than others, and that kind of thing.
0:51:56 But the way our health insurance works is all medical gets paid for through this insurance system, which is this layer of loss and bureaucracy and giant companies and all this stuff. When, like, if we're going to pay for people's healthcare, just pay for people's healthcare.
0:52:15 Like, what are we doing, right? And if you want to disincent people from going for nonsense reasons, just up the copay. It's like, what are we doing?
0:52:27 Well, then from a justice standpoint, from a fairness standpoint: would it make sense for me to pay more for your healthcare if I knew that you were going to be more expensive than me?
0:52:38 If everybody knows what future healthcare cost is per person, has a very good predictive model for it, societal willingness to all pool in the way that we do today might really diminish.
0:52:47 Yeah, yeah.
0:52:48 Well, and then, like, there's things that are genetic, and maybe we give everybody a pass on that; you can't control your genetics.
0:52:58 But then there's things you do behaviorally that dramatically increase your chance of getting sick.
0:53:03 And so maybe we incentivize people to stay healthy instead of just paying for them not to die.
0:53:12 There's a lot of systemic fixes we could do to the healthcare system.
0:53:17 It couldn’t be designed in a more ridiculous way, I think.
0:53:20 Well, it could be designed in a more ridiculous way.
0:53:22 It’s actually more ridiculous in some other countries, but it’s pretty crazy here.
0:53:27 Nathan Odie asks: what are the strongest common themes between the current state of AI and web 1.0?
0:53:33 And so let me start there.
0:53:34 Let me give you a theory, Ben, and see what you think.
0:53:36 So, you know, Ben, you were with me at Netscape, and we get this question a lot because of our role early on with the internet.
0:53:44 You know, the internet boom was a major, major event in technology, and it's still within a lot of people's memories.
0:53:49 And so people like to reason from analogy.
0:53:53 So it's like, okay, the AI boom must be like the internet boom.
0:53:55 Starting an AI company must be like starting an internet company.
0:53:58 And so, you know, what is this like?
0:54:00 And we actually got a bunch of analogy questions like that.
0:54:04 And Ben, you and I were there for the internet boom.
0:54:07 So we lived through the boom and the bust, and the boom and the bust.
0:54:10 So I actually think that the analogy works in certain ways, but it doesn't really work for the most part.
0:54:16 And the reason is because the internet was a network, whereas AI is a computer.
0:54:24 Yep.
0:54:25 Okay.
0:54:26 Yeah.
0:54:27 So, just so people understand what we're saying:
0:54:29 You know, like the PC boom, or even, I would say, the microprocessor. My best analogy is to the microprocessor, or even to the original computers, back to the mainframe era.
0:54:40 And the reason is because, look, what the internet did, obviously, was that it was a network, but the network connected together many existing computers.
0:54:47 And then, of course, people built many other new kinds of computers to connect to the internet.
0:54:50 But fundamentally, the internet was a network, and that's important because most of the industry dynamics, competitive dynamics, startup dynamics around the internet had to do with either building networks or building applications that run on top of networks.
0:55:04 And the internet generation of startups was very consumed by network effects, you know, all these positive feedback loops that you get when you connect a lot of people together.
0:55:13 And things like the so-called Metcalfe's Law, which is sort of that the value of a network expands as you add more people to it.
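For reference, Metcalfe's Law is usually stated as value growing with the number of possible pairwise connections:

```latex
\[ V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^2}{2} \]
```

So doubling the user base roughly quadruples the theoretical value, which is why the fights over users that come up next were so vicious.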
0:55:21 And then there were all these fights, all the social networks or whatever, fighting to try to get network effects and trying to steal each other's users because of the network effects.
0:55:29 And so it was dominated by network effects, which is what you expect from a network business.
0:55:34 AI, like, there are some network effects in AI that we can talk about, but it's more like a microprocessor.
0:55:40 It's more like a chip.
0:55:41 It's more like a computer, in that it's a system where data comes in, data gets processed, data comes out, things happen.
0:55:50 That’s a computer.
0:55:51 It’s an information processing system.
0:55:52 It’s a computer.
0:55:53 It’s a new kind of computer.
0:55:54 We like to say that computers up until now have been what are called von Neumann machines, which is to say they're deterministic computers: they're hyper-literal and they do exactly the same thing every time.
0:56:05 And if they make a mistake, it's the programmer's fault. But they're very limited in their ability to interact with people and understand the world.
0:56:11 We think of AI and large language models as a new kind of computer, a probabilistic computer, a neural network-based computer that, by the way, is not very accurate and doesn't give you the same result every time.
0:56:22 And in fact, it might actually argue with you and tell you that it doesn't want to answer your question.
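A toy illustration of that contrast; the "model" here is just weighted sampling standing in for LLM token decoding, an assumption for demonstration rather than how any particular model works:

```python
# Deterministic vs. probabilistic computing, in miniature.
import random

def von_neumann(x: int) -> int:
    """Classic computing: hyper-literal, identical output on every run."""
    return x * 2

def llm_stand_in(prompt: str, temperature: float = 1.0) -> str:
    """Sample a response from a made-up distribution over answers."""
    options = {
        "sure, here's an answer": 0.7,
        "it depends": 0.2,
        "I'd rather not answer that": 0.1,  # it may even decline, as Marc says
    }
    # Higher temperature flattens the distribution (more random);
    # lower temperature sharpens it toward the most likely answer.
    weights = [p ** (1.0 / temperature) for p in options.values()]
    return random.choices(list(options), weights=weights)[0]

print([von_neumann(21) for _ in range(3)])        # [42, 42, 42] every time
print([llm_stand_in("hello") for _ in range(3)])  # varies from run to run
```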
0:56:26 Yeah, which is very different in nature than the old computers.
0:56:31 And it makes composability, the ability to build big things out of little things, more complex.
0:56:40 Right.
0:56:41 But the capabilities are new and different and valuable and important, because they can understand language and images and all these things that you see when you use them.
0:56:49 All the problems we could never solve with deterministic computers, we can now go after, right?
0:56:55 Right.
0:56:56 Yeah, exactly.
0:56:57 And so I think, Ben, the analogy, and the lessons learned, are much more likely to be drawn from the early days of the computer industry, or from the early days of the microprocessor, than from the early days of the internet.
0:57:05 Does that sound right?
0:57:06 I think so.
0:57:07 Yeah.
0:57:08 I definitely think so.
0:57:09 And that doesn't mean there's no boom and bust and all that, because that's just the nature of technology: people get too excited and then they get too depressed.
0:57:18 So there'll be some of that, I'm sure.
0:57:19 There'll be overbuild-outs, potentially, eventually, of chips and power and that kind of thing.
0:57:25 You know, we start with the shortage. But I agree.
0:57:27 I think networks are fundamentally different from computers in how they evolve, and so the adoption curve and all those kinds of things will be different.
0:57:39 Yeah.
0:57:40 So then, and this kind of goes to how I think the industry is going to unfold. This is my best theory for what happens from here on this giant question of: is the industry going to be a few God models, or a very large number of models of different sizes and so forth?
0:57:54 So, famously, the original computers, like the original IBM mainframes, the big computers, were very, very large and expensive, and there were only a few of them.
0:58:05 And the prevailing view, actually, for a long time was that's all there would ever be.
0:58:09 And there was this famous statement by Thomas Watson Sr., who was the creator of IBM, which was the dominant company for the first, you know, 50 years of the computer industry.
0:58:17 And he said, and I believe this is actually true, he said: "I don't know that the world will ever need more than five computers."
0:58:24 And I think the reason for that was literally: the government's going to have two, and then there's like three big insurance companies.
0:58:30 And then that's it.
0:58:31 Yeah.
0:58:32 Who else would need to do all that math?
0:58:34 Exactly.
0:58:35 Yeah.
0:58:36 Who else needs to keep track of huge amounts of numbers?
0:58:38 Who else needs that level of calculation capability?
0:58:41 It's just not a relevant concept.
0:58:44 And by the way, they were big and expensive.
0:58:46 And so who else can afford them, right?
0:58:48 And who else can afford all the headcount required to manage them and maintain them?
0:58:51 I mean, these things were so big that you would have an entire building built around a computer, right?
0:58:57 And they'd famously have all these guys in white lab coats, literally taking care of the computer, because everything had to be kept super clean or the computer would stop working.
0:59:05 And so, you know, today we have the idea of an AI God model, which is like a big foundation model. Back then, we had the idea of a God mainframe: there would just be a few of these things.
0:59:14 And by the way, if you watch old science fiction, it almost always has this sort of conceit.
0:59:19 It's like, okay, there's a big supercomputer, and it's either doing the right thing or doing the wrong thing.
0:59:23 And if it's doing the wrong thing, that's often the plot of the science fiction movie: you have to go in and try to figure out how to fix it or defeat it.
0:59:30 And so it's this idea of a single top-down thing.
0:59:33 Of course, that held for a long time. Like, that held for the first few decades.
0:59:37 And then computers started to get smaller.
0:59:40 So then you had so-called minicomputers as the next phase.
0:59:42 And so that was a computer that didn't cost $50 million. Instead, it cost, you know, $500,000. But even still, $500,000 is a lot of money.
0:59:50 People aren't putting minicomputers in their homes.
0:59:52 And so it's like midsize companies can buy minicomputers, but certainly people can't.
0:59:56 And then, of course, with the PC, they shrunk down to like $2,500.
0:59:59 And then with the smartphone, they shrunk down to $500.
1:00:01 And then, you know, sitting here today, obviously you have computers of every shape, size, description,
1:00:06 all the way down to, you know, computers that cost a penny.
1:00:08 You know, you’ve got a computer in your thermostat that, you know, basically controls the temperature
1:00:12 in the room and it, you know, probably cost a penny.
1:00:13 And it’s probably some embedded arm chip with firmware on it.
1:00:16 And there’s, you know, many billions of those all around the world.
1:00:18 You buy a new car today, it has something on the order of 200 computers in it, and maybe more at this point.
1:00:25 And so sitting here today, you just basically assume that everything has a chip in it.
1:00:30 You assume that everything, by the way, draws electricity or has a battery
1:00:33 because it needs to power the chip.
1:00:35 And then increasingly, you assume that everything’s on the internet
1:00:37 because basically all computers are assumed to be on the internet or they will be.
1:00:41 And so as a consequence, what you have is the computer industry today is this massive pyramid.
1:00:46 And you still have a small number of like these supercomputer clusters
1:00:49 or these giant mainframes that are like the God model, you know, the God mainframes.
1:00:53 And then you’ve got, you know, a larger number of mini computers.
1:00:56 You’ve got a larger number of PCs.
1:00:57 You’ve got a much larger number of smartphones.
1:00:58 And then you’ve got a giant number of embedded systems.
1:01:01 And it turns out like the computer industry is all of those things.
1:01:03 And what size of computer you want is based on what exactly you are trying to do, and who you are, and what you need.
1:01:11 And so if that analogy holds, it basically means we are actually going to have AI models of every conceivable shape, size, description, capability, right?
1:01:20 Trained on lots of different kinds of data, running at very different kinds of scale, with very different privacy policies, different security policies.
1:01:28 You're just going to have enormous variability and variety.
1:01:32 And it’s going to be an entire ecosystem and not just a couple of companies.
1:01:35 Yeah, let me see what you think of that.
1:01:37 Well, I think that’s right.
1:01:38 And I also think the other thing that's interesting about this era of computing, if you look at prior eras of computing from the mainframe to the smartphone, is that a huge source of lock-in was basically the difficulty of using them.
1:01:53 So, you know, nobody ever got fired for buying IBM, because you had people trained on them, people knew how to use the operating system.
1:02:03 It was just kind of a safe choice, due to the massive complexity of dealing with a computer.
1:02:12 And then even with the smartphone: why is the Apple smartphone so dominant, what makes it so powerful?
1:02:23 Well, because switching off of it is so expensive and complicated and so forth.
1:02:27 It’s an interesting question with AI because AI is the easiest computer to use by far.
1:02:32 It speaks English.
1:02:33 It’s like talking to a person.
1:02:35 And so like, what is the lock in there?
1:02:38 And so are you completely free to use the size, price, choice, speed that you need
1:02:45 for your particular task, or are you locked into the God model?
1:02:49 And, you know, I think it's still a bit of an open question, but it's pretty interesting.
1:02:56 And that thing could be very different than in prior generations.
1:03:01 Yeah, yeah, that makes sense.
1:03:03 And then, just to complete the question: Ben, what would you say are lessons learned from the internet era that we lived through that people should think about?
1:03:11 I think a big one is probably just the boom-bust nature of it.
1:03:19 The demand, the interest in the internet, the recognition of what it could be, was so high that money just poured in in buckets. And the underlying thing in the internet age was the telecom infrastructure: fiber and so forth got just unlimited funding, unlimited fiber was built out, and then eventually we had a fiber glut and all the telecom companies went bankrupt. And that was great fun.
1:03:49 But, you know, we ended up in a good place. And I think something like that is probably pretty likely to happen in AI, where, like, every company is going to get funded.
1:03:59 We don't need that many AI companies.
1:04:01 So a lot of them are going to bust.
1:04:02 There's going to be huge investor losses.
1:04:06 There will be an overbuild-out of chips for sure at some point.
1:04:11 And then we're going to have too many chips, and yeah, some chip companies will go bankrupt for sure.
1:04:17 And I think probably the same thing with data centers and so forth: we'll be behind, behind, behind, and then we'll overbuild at some point.
1:04:26 So that will all be very interesting.
1:04:29 And that's kind of every new technology.
1:04:34 Carlota Perez has done amazing work on this: it is just the nature of a new technology that you underbuild, then you overbuild, and there's a hype cycle that funds the build-out, and a lot of money is lost.
1:04:50 But we get the infrastructure, and that's awesome, because that's when it really gets adopted and changes the world.
1:04:55 I want to say, you know, with the internet, the other big thing is the internet went through a couple of phases, right?
1:05:04 It went through a very open phase, which was unbelievably great.
1:05:08 It was probably one of the greatest boons to the economy.
1:05:11 It certainly created tremendous growth and power in America, both economic power and soft cultural power and these kinds of things.
1:05:21 And then it became closed with the next-generation architecture, with discovery on the internet being owned entirely by Google, and other things being owned by other companies.
1:05:36 And AI, I think, could go either way.
1:05:37 It could be very open: open source, open weights, anybody can build it, we have a plethora of this technology, and we use all of American innovation to compete.
1:05:55 Or, with kind of misguided regulation, we'll cut it all off, we'll force it into the hands of the companies that kind of own the internet today, and we'll put ourselves at a huge disadvantage, I think, competitively, against China in particular, but against everybody in the world.
1:06:15 And so I think that's something that we're definitely involved with trying to make sure doesn't happen, but it's a real possibility right now.
1:06:24 Yeah.
1:06:25 There's sort of an irony, which is that networks used to be all proprietary, and then they opened up.
1:06:30 Yeah, yeah, yeah.
1:06:31 LAN Manager, AppleTalk, NetBEUI, NetBIOS.
1:06:34 Yeah, exactly.
1:06:35 And so those were all the early proprietary networks from individual specific vendors, and then the internet appeared, and TCP/IP, and everything opened up.
1:06:41 AI is trying to go the other, I mean, the big companies are trying to take AI the other way.
1:06:45 It started out open, basically just like the research.
1:06:48 Everything was open source in AI, yeah.
1:06:50 Right.
1:06:52 And now they're trying to lock it down.
1:06:53 So it's a fairly nefarious turn of events.
1:06:56 Yeah.
1:06:57 Yeah.
1:06:58 Very nefarious.
1:06:59 You know, it's remarkable to me.
1:07:01 I mean, it is kind of the darkest side of capitalism, when a company is so greedy they're willing to destroy the country and maybe the world to just get a little extra profit.
1:07:12 And when they do it, the really nasty thing is they claim, oh, it's for safety.
1:07:17 You know: we've created an alien that we can't control, but we're not going to stop working on it.
1:07:23 We're going to keep building it as fast as we can, and we're going to buy every freaking GPU on the planet, but we need the government to come in and stop it from being open.
1:07:32 This is literally the current position of Google and Microsoft right now.
1:07:37 It's crazy.
1:07:38 And we're not going to secure it, so we're going to make sure that, like, Chinese spies can just steal our chip plans and take them out of the country, and we won't even realize it for six months.
1:07:46 Yeah.
1:07:47 It has nothing to do with security.
1:07:48 It only has to do with monopoly.
1:07:49 Yes.
1:07:50 You know, going back to your point about speculation: there's this critique that we hear a lot, right?
1:07:56 Which is like: okay, you idiots, you idiot entrepreneurs, you idiot investors.
1:08:00 It's like, there's a speculative bubble with every new technology; when are you people going to learn to not do that?
1:08:06 Yeah.
1:08:07 There's an old joke that relates to this, which is: the four most dangerous words in investing are "this time is different."
1:08:13 The twelve most dangerous words in investing are "the four most dangerous words in investing are this time is different," right?
1:08:18 Like, so does history repeat? Does it not repeat?
1:08:21 My sense of it, and you referenced Carlota Perez's book, which I agree is good, although I don't think it works as well anymore, we can talk about that some time, but it's at least a good background piece on this.
1:08:32 It's just incontrovertibly true that basically every significant technology advance in history was greeted by some kind of financial bubble, basically since financial markets have existed.
1:08:40 And by the way, this includes everything from radio and television to the railroads, lots and lots of prior technologies.
1:08:47 By the way, there was actually an electronics boom-bust in the '60s, the so-called "-tronics" boom; every company had "-tronics" in its name.
1:08:55 And there was a laser boom-bust cycle.
1:09:00 There were all these boom-bust cycles.
1:09:00 And so basically, any new technology that's what economists call a general purpose technology, which is to say something that can be used in lots of different ways, inspires sort of a speculative mania.
1:09:10 And look, the critique is like: okay, why do you need to have a speculative mania? Why do you need to have a cycle? Because some people invest in these things and lose a lot of money, and then there's this bust cycle that causes everybody to get depressed, and maybe it delays the rollout.
1:09:23 And it's like, two things.
1:09:25 Number one is, well, you just don't know. If it's a general purpose technology, like AI is, and it's potentially useful in many ways, nobody actually knows upfront what the successful use cases are going to be, or what the successful companies are going to be. You actually have to learn by doing.
1:09:38 There are going to be misses.
1:09:39 That's venture capital.
1:09:40 Yeah.
1:09:41 We, we…
1:09:42 Yeah, exactly.
1:09:44 So yeah, the true venture capital model kind of wires this in, right?
1:09:46 Yeah.
1:09:47 In core venture capital, the kind that we do, we basically assume that half the companies fail, half the projects fail.
1:09:52 And, you know, if any of us, if we or any of our…
1:09:55 Totally, completely, like lose money.
1:09:57 Exactly.
1:09:59 And of course, if we or any of our competitors could figure out how to do the 50% that work without doing the 50% that don't work, we would do that.
1:10:06 But, you know, here we sit 60 years into the field, and nobody's figured that out.
1:10:10 So there is that unpredictability to it.
1:10:13 And then the other interesting way to think about this is: okay, what would it mean to have a society in which a new technology did not inspire speculation?
1:10:20 It would mean having a society that is just inherently super pessimistic about both the prospects of the new technology, and also the prospects of entrepreneurship and people inventing new things and doing new things.
1:10:31 And of course, there are many societies like that on planet Earth. They just fundamentally don't have the spirit of invention and adventure that a place like Silicon Valley does. And are they better off or worse off?
1:10:44 Generally speaking, they're worse off.
1:10:46 They're just less future-oriented, less focused on building things, less focused on figuring out how to get growth.
1:10:53 And so, at least my sense is, there's a comes-with-the-territory thing here.
1:10:57 We would all prefer to avoid the downside of a speculative boom-bust cycle, but it seems to come with the territory every single time.
1:11:03 And at least no society I'm aware of has ever figured out how to capture the good without also having the bad.
1:11:09 Yeah.
1:11:10 And like, why would you?
1:11:11 I mean, the whole Western United States was built off the gold rush, and every treatment of the gold rush in popular culture focuses on the people who didn't make any money. But there were people who made a lot of money, you know, and found gold.
1:11:28 And in the internet bubble, which was completely ridiculed: if you go back and watch any movie between, like, 2001 and 2004, they're all about how only morons did dot-com, and this and that and the other, and there are all these funny documentaries and so forth.
1:11:52 But that's when Amazon got started, that's when eBay got started, that's when Google got started. These companies that worked started in the bubble, in the time of this great speculation. There was gold in those companies.
1:12:10 And if you got any one of those, you funded, you know, probably the next set of companies, which included things like Facebook and Snap and all these things.
1:12:20 And so, yeah, I mean, that's just the nature of it.
1:12:24 I mean, that's what makes it exciting.
1:12:26 And, you know, it's an amazing kind of thing. Look, the transfer of money from people who have excess money to people who are trying to do new things and make the world a better place is the greatest thing in the world.
1:12:43 And if some of the people with excess money lose some of that excess money in trying to make the world a better place, why are you mad about that?
1:12:53 That's the thing I can never understand: why would you be mad at young, ambitious people trying to improve the world getting funded, with some of that being misguided?
1:13:06 Like, why is that bad?
1:13:07 Right, right.
1:13:09 As compared to, especially, everything else in the world and all the people who are not trying to do that.
1:13:14 So you'd rather we just buy, like, lots of mansions and boats and jets?
1:13:20 Like, what are you talking about?
1:13:23 Right, exactly.
1:13:24 Or we're donating money to ruinous…
1:13:25 Yeah, ruinous causes.
1:13:27 Such as ones that are on the news right now.
1:13:31 Okay.
1:13:32 So, all right.
1:13:33 We're at an hour twenty.
1:13:34 We made it all the way through four questions.
1:13:35 We’re doing good.
1:13:36 We’re doing great.
1:13:37 So let’s call it here.
1:13:38 Thank you, everybody, for joining us.
1:13:39 And I believe we should do a part two of this, if not parts three through six, because we
1:13:42 have a lot more questions to go.
1:13:43 But thanks, everybody, for joining us today.
1:13:45 All right.
1:13:46 Thank you.
1:13:46 Bye.
1:13:47 Bye.
1:13:49 (upbeat music)
In this latest episode on the State of AI, Ben and Marc discuss how small AI startups can compete with Big Tech’s massive compute and data scale advantages, reveal why data is overrated as a sellable asset, and unpack all the ways the AI boom compares to the internet boom.
Subscribe to the Ben & Marc podcast: https://link.chtbl.com/benandmarc
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://twitter.com/stephsmithio
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.