AI transcript
0:00:11 Grok, Claude, Gemini, Mistral, DeepSeek. I bet most people wouldn’t be able to tell which is which.
0:00:16 Benedict Evans is a technology analyst known for his insightful takes on platform shifts in the
0:00:23 tech industry. He sees AI differently than others. He’s spent decades spotting patterns others miss
0:00:29 and dives into how people really use AI. Why is it that somebody looks at this and gets it and goes
0:00:35 back every week, but only every week? The very high level threat to Google is that you have this
0:00:39 moment of discontinuity in which everybody resets their priors and reconsiders their defaults.
0:00:43 And so it’s no longer just the default that you go and use Google. There’s this sort of question for
0:00:49 Apple around: does this actually change the experience of what a smartphone is, what the
0:00:54 ecosystem is? Does it end up kind of getting Microsofted in the sense that…
0:01:07 I want to start with your most controversial take on AI.
0:01:13 It’s funny. I suppose my take on AI, my controversial take on AI, rather like my controversial take on
0:01:21 crypto, is being a centrist, in that it seems to me very clear this is like the biggest thing since the
0:01:28 iPhone. But I also think it’s only the biggest thing since the iPhone. And there’s a bunch of people who
0:01:35 think, no, it’s much more than that. At a minimum, it’s more like computing. And then you’ve got people
0:01:39 going around saying, no, this is more like, you know, the electricity or the industrial revolution or,
0:01:45 you know, transhumanists or something. My sort of base case is to say, this is kind of another platform
0:01:50 shift and all the new stuff will be built around this for the next 10 or 15 years. And then there’ll be
0:01:56 something else. And so the impact on employment will be kind of like the impact on employment from the
0:02:01 other platform shifts and the impact on the economy and productivity and intellectual property. And
0:02:06 there’ll be, there’ll be a whole bunch of different weird new questions, just like there were a bunch of
0:02:11 different weird new questions before. And then in 10 years time, it’ll just be software.
0:02:17 Put this in historical context for us with other platform shifts. Everybody’s saying this time is
0:02:21 different, which everybody does at each platform shift, I would imagine. What’s the same?
0:02:28 Well, there’s a famous book about financial bubbles called This Time is
0:02:33 Different, because people always say this time is different. And it always is like the dotcom bubble was
0:02:39 different to like the late 80s. And the Japanese financial bubble was different to, you know, pick any other
0:02:45 bubble you want. They’re always different. But that doesn’t mean they’re not a bubble. And the same
0:02:53 thing here: I have a diagram I use a lot from 1995. This research firm made a diagram of something
0:02:59 they called cyberspace. Because it wasn’t clear it was just going to be the internet. It was clear that
0:03:05 everyone was going to have some kind of computer thing connected to some kind of network. But remember
0:03:10 the phrase information superhighway? Yeah. Which sort of conveys that it would be centralized and
0:03:14 controlled by cable companies and phone companies and media companies, which is sort of how everything
0:03:18 had always previously worked. It wasn’t clear, no, it was going to be the internet. It wasn’t clear the
0:03:21 internet was going to be kind of radically decentralized and permissionless and anyone could do what they
0:03:25 wanted. It wasn’t clear the internet was going to be the web. And only the web because there were all
0:03:30 these other things going on. If you look at Mary Meeker’s first big public internet report from 1995,
0:03:35 she has a separate forecast for web users and email users. And she thought email users would
0:03:40 be way bigger. It wasn’t clear like that was all one thing. And then it wasn’t clear that it was about
0:03:45 the browser. It wasn’t clear that the browser wasn’t where the value capture was because Microsoft
0:03:49 crowbarred its way into dominance in browsers, but that turned out not to matter. And then all the value
0:03:55 was in search, advertising, and social, which were five years later and 10 years later. And so, like,
0:03:59 you can be very, very clear that this is the thing and then still be completely unclear how it’s going to
0:04:03 work. The same thing with mobile internet. It’s funny, mobile internet now, it’s kind of like
0:04:08 saying black and white television versus color television: desktop internet, mobile internet;
0:04:13 black and white TV, color TV. No one really says mobile internet anymore. It’s like talking about
0:04:18 e-commerce. You’re starting to have people talk about physical retail versus retail. But it wasn’t
0:04:23 clear, you know, I was a telecoms analyst in 2000 and it was very clear mobile internet was going to be a
0:04:29 thing. It was not clear that they would be basically small PCs. Like, that was the fundamental shift
0:04:33 of the iPhone: it’s a small Mac. It’s not a phone with better UI. It’s a small Mac.
0:04:40 And it wasn’t clear that the telcos would get no value. It wasn’t clear Microsoft and Nokia would get
0:04:46 no value. It wasn’t clear it would take 10 years before it took off. And it wasn’t clear it would
0:04:50 replace the PC as the center of the tech industry. I mean, everyone was talking about, well, what’s a
0:04:55 mobile use case? What would you do? You’ll do some things on your mobile phone, but what? But obviously
0:04:58 your PC will be how you use the internet. And of course, that’s not how it worked.
0:05:04 And so we kind of forget, because now we don’t see it, because it just kind of became part of the
0:05:08 air we breathe, how weird and strange and different all these things are. There’s something I love
0:05:16 talking about, which is the rise of automatic elevators. So until the fifties, elevators were
0:05:20 manually operated; they were basically vertical streetcars. They were trams, they were tube trains.
0:05:25 And you have a driver who has a lever with an accelerator and a brake. If you’ve been into a New York co-op,
0:05:30 you may have seen one of these, but they call it an attended elevator. There’s a lever. You push it
0:05:34 that way to go down, middle to stop, that way to go up. And then in the fifties, Otis creates the
0:05:39 Autotronic, I think it’s called the Autotronic elevator, which had electronic politeness, which
0:05:43 basically meant the infrared thing that stops the door closing. But if you get into an elevator now,
0:05:48 you don’t say, oh, I’m going to use an automatic elevator with electronic politeness. It’s just a lift.
0:05:53 We kind of forget how weird and different all the other things were. And yes, this is new and weird
0:05:58 and different in a bunch of kind of strange, confusing, confounding ways we can probably talk
0:06:02 about. But we sort of forget that other things were weird and strange and different too.
0:06:09 Is this the first major platform shift where the incumbents have an advantage because they have the
0:06:09 data?
0:06:17 I’m pretty sure people thought Microsoft had an advantage on the internet and Google and Meta had
0:06:24 an advantage on mobile and everyone thought IBM was going to win PCs. Once IBM made a PC, that was it.
0:06:30 It’s all over now. And we kind of forget that there were PCs before and then IBM made one and that kind
0:06:32 of became the standard, but then IBM lost it.
0:06:38 So what happens with the incumbents? Do they grab on to using the technology instead of adopting it
0:06:43 because adopting it would mean killing the golden goose? Like what happens in a platform shift with
0:06:43 incumbents?
0:06:48 The master of my college at Cambridge said that history teaches us nothing except that something
0:06:55 will happen. And, you know, there’s always the example and the counterexample. So, with any new
0:07:02 platform shift… and the term platform shift itself is, you know, a useful term, but you have to be
0:07:07 careful not to be trapped by your terminology and get into these sorts of arguments about, well, is it a platform shift
0:07:14 or is it not a platform shift? And how do you define a platform? Shut up. Like, you know, the thing is,
0:07:19 with any of these sorts of fundamental technology changes, the incumbents always try and make it a feature and they try and
0:07:25 absorb it. And the same thing outside of technology, existing companies try and absorb it and they use it to
0:07:31 automate the stuff they’re already doing. And then over time you get new stuff, you unbundle both the
0:07:36 incumbents in tech and you unbundle existing companies, because of something that’s possible because of this new
0:07:41 technology. So you can always kind of jump into the new thing. And sometimes the new thing kind of really is just a
0:07:49 feature. And sometimes it’s, no, it’s a fundamental change in how everything works. And sometimes that’s sort of
0:07:54 contingent. You know, there’s this whole sort of parlor game, like a drinking game, that historians play
0:07:58 about kind of historical inevitability, you know, well, what would have happened if that battle had
0:08:04 been lost or if that politician had been assassinated or not assassinated? And it depends. Sometimes the
0:08:09 answer is, well, then everything would have been completely different. And sometimes the
0:08:14 answer is, well, no. Then, you know, what if Napoleon had won at Waterloo? Well, then he’d have lost
0:08:17 another battle six months later; nothing would have changed. The whole environment had changed.
0:08:22 What if, you know, the revolution hadn’t happened in spring of 1917? Then it would have happened in
0:08:27 the summer or the autumn. Sometimes it’s like really clear. I mean, I always think the Kodak
0:08:30 example here is kind of interesting.
0:08:31 Tell me about it.
0:08:35 Because, you know, like, it’s like the cliche that people say, oh, Kodak had digital cameras and they
0:08:39 didn’t get it or they ignored it or they didn’t want to do it because it would destroy their business. But
0:08:43 then you go and look at it and that’s like, well, that was 1975. And the thing they had was the size
0:08:48 of a refrigerator. You know, that was not a consumer product. And it took until the late 90s before the
0:08:53 technology was actually viable as a consumer product. So of course, like they didn’t do it in the 70s
0:08:59 because you couldn’t. What actually happens is once it starts happening, Kodak go all in on digital
0:09:04 cameras. At one point, they were the best selling digital camera vendor in the USA. And if you look at their
0:09:08 annual reports at the time, they think this is going to be great because they’re going to sell way more photo
0:09:13 printers. So they’re selling these inkjet photo printers. People are going to produce way more photos. So they’re
0:09:16 going to take way more, they’re going to print them all. Two things screw Kodak. One of them is
0:09:22 smartphones. And you could argue that what actually screws Kodak is not the camera, it’s the social
0:09:28 media, and it’s not printing anymore. Because that’s what killed it. That’s one side. The other side
0:09:35 of it is that film was this high margin product where they had a bunch of unique intellectual
0:09:41 property. And digital cameras are a low margin commodity where they were competing with the
0:09:46 entire consumer electronics industry with no differentiation. And so, the point being,
0:09:51 even if you go all in on that market, it’s still a crappy market where you’ve got
0:09:56 no differentiation. So you can kind of, you know, put all of these things on the
0:10:00 table and shuffle them around and say, well, in hindsight, obviously BlackBerry was screwed.
0:10:04 And in hindsight, obviously Google is going to be able to make the jump. And in hindsight,
0:10:10 in hindsight, and yeah, maybe. Is there a parallel between the second point you made about Kodak and
0:10:16 Google today where, you know, they have a high-margin search business and a low-margin AI business?
0:10:23 So I’d be nervous about claiming to know what the margins are in AI, because,
0:10:27 you know, depending on who you ask, the price to get a given result has probably come down by two
0:10:33 orders of magnitude. But that was the state of the art two years ago. And now there’s a
0:10:38 new thing, which is more expensive. And so there’s an awful lot of shifting planes,
0:10:42 you know, there’s a lot of algebra, and all the variables in the
0:10:46 algebra are changing at the moment. So it’s kind of hard to know quite what that is. I think
0:10:58 the obvious Google threat right now is that Google shows you a bunch of links
0:11:05 and results and ideas, and that could now be solved in a different way. Well, let me kind of go back a
0:11:12 second. The very high level threat to Google is that you have this moment of discontinuity in which
0:11:18 everybody resets their priors and reconsiders their defaults. And so it’s no longer just the default
0:11:23 that you go and use Google. And for this search or that search, like maybe Bing is 10% better on that
0:11:28 search. In fact, as we saw from the Google antitrust trial last year, actually,
0:11:32 Google is still the best search engine by quite a long margin relative to the other traditional
0:11:39 search engines. But what we have now is like a reset of the playing field. And Google has a whole
0:11:45 bunch of advantages as to why they might win in that playing field. But there’s a reset both of
0:11:50 what the product is and how you sell it and your org structure around selling it. And do you
0:11:53 have the right politics and the right org structure to build that and the right incentives and internal
0:11:56 conflicts? And then the consumer behavior kind of gets reset as well.
0:12:02 My understanding of AI is that so much is data-driven. Like you have proprietary data sources, you have
0:12:07 better data, so you can train better models. That gives you one of the key inputs.
0:12:12 So I think it’s actually the opposite, which is that everyone’s kind of using the same data,
0:12:20 which is, you need such an enormous amount of generalized text that the amount that Google
0:12:26 has or that Meta has is not actually enough to be a kind of fundamental difference
0:12:27 in what you can train with.
0:12:34 So you don’t think, like, YouTube as a repository is an advantage for, like, a significant…
0:12:36 So it depends.
0:12:37 Push back, yeah.
0:12:42 So it depends. So the models that we’re training now, we’re training on text.
0:12:48 So that’s not really being trained on YouTube. We saw this lawsuit around book copyright with Meta
0:12:53 that they downloaded a torrent of pirated books. Because guess what? They don’t have
0:12:59 enough text of their own and it’s not the right kind of text. They don’t have lots of prose. They’ve got
0:13:04 lots of short snippets of text. So I think the generality of LLMs is you just
0:13:08 need such an enormous amount of data that everyone kind of needs all the text that there
0:13:12 is. And all the text that there is, is kind of equally available to anyone.
0:13:16 So we’ve kind of like the data is a level playing field effectively.
0:13:22 Yes, because you need so much more. And it’s also not necessarily the kind of data that you
0:13:27 have. So obviously Google has, you know, an enormous repository of scraped data because
0:13:31 they read the web all the time. But anyone else with a billion dollars can go out and do
0:13:31 that.
0:13:33 Right.
0:13:35 Or you can go and download the Common Crawl from AWS.
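(As an aside, this is literally true: the Common Crawl corpus is a public dataset on S3. A minimal sketch of pulling it down, an illustration rather than anything shown in the episode, assuming the boto3 library is installed and using an example crawl ID that may not be the latest:)

```python
# Anonymous access to the public Common Crawl bucket; no AWS account needed.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# Each prefix under crawl-data/ is one crawl of the web.
resp = s3.list_objects_v2(Bucket="commoncrawl", Prefix="crawl-data/", Delimiter="/")
for p in resp.get("CommonPrefixes", []):
    print(p["Prefix"])

# Fetch the index of WARC archive paths for one crawl (example crawl ID).
s3.download_file("commoncrawl", "crawl-data/CC-MAIN-2024-10/warc.paths.gz", "warc.paths.gz")
```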
0:13:41 How far away do you think we are from autonomous sort of AI making AI better?
0:13:49 So no human intervention, but AI sort of going out in the real world, getting feedback, adapting
0:13:51 itself and making itself better.
0:13:56 There is this sort of parody people have of, like, the foom: suddenly, magically, this
0:13:58 thing grows and becomes amazing and learns everything.
0:14:05 I don’t think we’re at that stage now. I don’t think anyone really knows when it would happen.
0:14:07 So it’s very sort of impressionistic.
0:14:14 I think another answer might be you kind of have to be very careful looking at headlines
0:14:18 and thinking like, what exactly is that telling me?
0:14:26 So Anthropic has done a bunch of things where they say like, the AI was threatening to blackmail
0:14:26 me.
0:14:28 Yeah, I saw that.
0:14:33 And you read the story and you think, okay, you asked what’s basically a story generating
0:14:41 machine, please tell me a story of what you would do in this situation, where most people
0:14:47 would probably say X, and the machine says, probably X. And you say, my God, it said it would do X.
0:14:51 It’s like, well, yeah, it would blackmail.
0:14:52 It’s based on human behavior.
0:14:55 It would blackmail you. Okay, how would it do that?
0:14:55 Yeah.
0:14:59 What do you mean it would blackmail you? It’s kind of like the reductio ad absurdum of this
0:15:05 is, you write, this is a point somebody else made, you write murder is good on a piece
0:15:10 of paper and you put it in a photocopier and you press go and you say, my God, the machine
0:15:15 says murder is good. Well, no, you told the machine to say that. And that’s what these Anthropic
0:15:20 studies are. They’re basically, you tell the machine to say a thing and then it says it.
0:15:23 Like, well, you haven’t proved anything.
0:15:29 You know, people talk a lot about product market fit, sales tactics or pricing strategy.
0:15:34 The truth is success in selling often comes down to something much simpler, the system behind
0:15:40 the sale. That’s why I use and love Shop Pay, because nobody does selling better than Shopify.
0:15:47 They built the number one checkout on the planet. And with Shop Pay, businesses see up to 50% higher
0:15:53 conversions. That’s not a rounding error. That’s a game changer. Attention is scarce. Shopify helps you
0:15:58 capture it and convert it. If you’re building a serious business, your commerce platform needs to
0:16:05 meet your customers wherever they are: on your site, in store, in their feed, or right inside their inbox.
0:16:11 The less they think, the more they buy. Businesses that sell more sell on Shopify. If you’re serious
0:16:16 about selling, the tech behind the scenes matters as much as the product. Upgrade your business and get
0:16:23 the same checkout that I use. Sign up for your $1 per month trial period at shopify.com slash knowledge
0:16:30 project. All lowercase. Go to shopify.com slash knowledge project. Upgrade your selling today.
0:16:33 Shopify.com slash knowledge project.
0:16:39 Do you ever struggle to stay focused? There’s a reason I reach for my remarkable paper pro when I need
0:16:44 to think clearly. If you’re looking for something that can help you really hone in on your work without
0:16:49 all the distractions. Remarkable, the paper tablet might just be what you’re looking for.
0:16:55 Remarkable just released their third generation paper tablet, Remarkable Paper Pro. It’s thin,
0:17:01 minimalist, and feels just like writing on paper, but comes with powerful digital features such as
0:17:07 handwriting conversion, built-in reading light, productivity templates, and more. It’s not just another
0:17:15 device. It’s a subtractive tool. No notifications, no inbox, no apps, just you, your ideas, and a blank
0:17:20 page that feels like paper, but smarter. Whether you’re in a meeting or deep in a creative session,
0:17:25 this device is built to keep you in the zone. In a world built to steal your focus, this tablet gives
0:17:31 it back. It’s become the place I do my deepest work, and it travels light, slim enough for any bag,
0:17:37 powerful enough for any boardroom. Not sure if it’s for you? No worries. You can try Remarkable
0:17:42 Paper Pro for up to 100 days with a satisfaction guarantee. If it’s not the game changer you were
0:17:48 hoping for, you’ll get your money back. Get your paper tablet at Remarkable.com today.
0:17:51 Where do you stand on regulation of AI?
0:17:58 So I think regulation of AI is sort of the wrong level of abstraction. Talking about regulating AI
0:18:03 as AI is the wrong level of abstraction. It’s like saying we’re going to regulate databases,
0:18:09 regulate spreadsheets, or regulate cars. Well, we do, but not like that. When you regulate stuff,
0:18:14 there are trade-offs. You learn about this in your first year in economics class, like regulation has
0:18:21 costs and consequences, and it’s not necessarily, you know, there’s always a trade-off. And often you’re
0:18:26 making product decisions or engineering decisions that do actually have trade-offs. There’s like a
0:18:29 three-way trade-off of, like, what’s good for the product, what’s good for the company,
0:18:33 what’s good for the consumer, and what’s good for competition.
0:18:39 I think the regulatory stuff is interesting in the framework of multiple countries sort of competing
0:18:45 for superintelligence. How would you advise a country to prepare for AI? If I’m the president
0:18:52 of the United States and I call you and I say, Benedict, you have five minutes, what do I need to
0:18:55 prepare for? What can I do to put our country in the best position possible?
0:19:01 Well, what’s your objective? Is your objective to have a nice press release?
0:19:03 No, it’s to dominate AI.
0:19:08 A long time ago, I used to get these questions about like, how can we replicate Silicon Valley?
0:19:14 And I always feel like the answers to those questions as much as possible is like, you can’t,
0:19:18 I mean, occasionally there are things you can do, like you can create funding structures,
0:19:22 you can make it easy, you can, you know, you can try and jumpstart startup ecosystems,
0:19:26 you can try and jumpstart funding availability. But most of the answer is things like getting
0:19:32 out of the way. I think that the idea of trying to create national champions is very hard. Now,
0:19:37 that almost kind of becomes an economist’s question rather than a technology question. How do you create
0:19:41 national champions? Where does that work? Where does that not work? I’m sure there’s a bunch of
0:19:46 books and papers about, you know, where does industrial policy work? Where does it not work?
0:19:52 From a technology analyst perspective, I think of this in terms of, A, what are you doing that would
0:19:56 make this harder? And B, think of this as just more startups.
0:20:02 What are the things that we’re doing that make it harder to develop that ecosystem without picking
0:20:05 a winner? It’s not about picking a company and backing them.
0:20:12 Well, if you do like this ridiculous law that California had a year or two ago, if you treat
0:20:18 this as like nuclear weapons, and you say this is incredibly dangerous, and we need to have it under
0:20:21 extremely tight control so that nobody does anything bad with it.
0:20:23 Which is basically the EU approach.
0:20:30 To go back to your economics class, policies have trade-offs. To govern is to choose. You’re
0:20:34 making a choice when you do that, and what you’re choosing has costs. Personally, like most
0:20:38 people in tech, I think the idea that this is all going to kind of produce bioweapons and take over
0:20:42 the world and kill us all is just idiotic. Like, I think it’s just a bunch of kind of childish
0:20:46 logical fallacies within that. But you have to be conscious of what you’re choosing. You know,
0:20:53 the Biden approach to generative AI very explicitly was to say, this is sort of social
0:20:59 media 2.0. Like, social media 1.0 was terrible and destructive and bad. And I think there’s a,
0:21:04 I don’t agree with that. I think there’s a huge dose of moral panic within that. But be that as it may,
0:21:09 if you make a decision that says we are deliberately and explicitly going to make it really hard to build
0:21:14 models, and really hard to start a company that builds models, and really hard to do anything with
0:21:19 any of this stuff, then guess what? It’s kind of like, you know, the mayoral election in New York
0:21:24 today. Like, if you make it really hard and expensive to build houses, houses will be more
0:21:30 expensive. You’ve made that choice. If you do that, you cannot then complain that houses are more
0:21:35 expensive. You can choose that, but you can’t complain. Why do you think as a society, we don’t
0:21:42 understand that? Part of this is that, like, in most non-emotive fields, we kind of do. You understand,
0:21:47 people understand that, you know, more employment regulation tends to produce
0:21:51 slower growth but more protection for employees, and you’re choosing a trade-off. You know, people
0:21:55 kind of, I think everybody on both sides of that equation understands that, that that’s a trade-off,
0:21:59 and you’re choosing one versus the other. The point is, you can have a fully functioning free market,
0:22:06 and you can regulate some of the negative externalities of free markets; anybody in any part of the
0:22:09 economic spectrum understands there are negative externalities to free markets. You can also have,
0:22:13 like, a government-provided alternative. You can have the government do the fire department.
0:22:20 Where you have some of the kind of most obvious gaps between the U.S. and Europe, it seems to me
0:22:26 sometimes, are in places where you kind of have neither. So, the U.S. neither has a government-controlled
0:22:32 healthcare system, nor a free market healthcare system. Do you see what I mean? You have neither
0:22:36 government-controlled housing, which you have in, like, weird places like Singapore,
0:22:41 nor a free market in housing. So, you kind of break the free market. So, you stop the price
0:22:45 signaling. This is the great insight of Hayek: pricing is a signal. Pricing is an
0:22:50 information system. It’s telling people what’s wanted. It’s not just a signal of worth. It’s a
0:22:55 signal of demand. There’s a fascinating book I read a while ago called Red Plenty, which is about
0:23:02 Soviet central planning in the 60s, 70s, 80s. And it’s about sort of what happens when you have central
0:23:07 planning that just cannot cope with the level of complexity of a sophisticated economy in the 60s
0:23:13 and 70s, as opposed to let’s make grain and tractors and locomotives in the 20s and 30s and steel, which
0:23:17 really kind of works. But once you actually have a sophisticated industrial economy, central planning
0:23:22 can’t handle the complexity. And so, you try and create incentives and structures around that while
0:23:27 not having pricing. And that just doesn’t work. I suppose there’s a sort of a generalized point,
0:23:32 which is like a market economy is a system. And if you pull a lever here, something will move
0:23:37 there. And you can’t just pull a lever here and say, well, I don’t want that to move because it’s a
0:23:43 democracy. It will move anyway. And so, you have to understand how the system works and understand what
0:23:47 consequences you want from that and what your parameters are within this.
0:23:52 One of the things that I admire about you is that you’re sort of known for spotting patterns.
0:23:58 I have a theory on how to learn pattern matching. And I’d love to hear your pushback on this.
0:24:02 My theory on how we learn is, I call it the learning loop. We have an experience.
0:24:07 We reflect on that experience and we create a compression. And that compression becomes our
0:24:13 takeaway. So, we can watch a movie, read a book, and you come away with a compression of it. But you can
0:24:19 work backwards from that compression to the experience. But what we consume most of the time is other
0:24:24 people’s compressions. So, like when people read your newsletter, they’re consuming a compression
0:24:30 of the work that you’ve done, but not the actual raw work. So, in a way, it’s an illusion of knowledge
0:24:33 if you haven’t done the work in that area.
0:24:41 It’s funny. I have a draft thinking about like what LLMs do to web search and publishing and
0:24:48 discovery in e-commerce and, like, a big fog of hand-wavy fuzziness, all of that stuff. And I was sort of
0:24:52 thinking about this and there’s a book written by a French academic sort of 20 years ago or something
0:25:02 called How to Talk About Books You Haven’t Read, which sounds very kind of snide. But kind of his
0:25:06 point is that like there’s the book you read when you were 17 and you really didn’t get it.
0:25:11 And if you read it now, you’d get it. And there’s the book that like, he’s got this kind of list of
0:25:14 like, there’s the books that everybody else has read, so you say you’ve read them. There’s the
0:25:17 books that like you’ve read three other books by that writer. So, you don’t really need to read this
0:25:21 one too. You get it. Like, do you need, you know, do you need to read another Malcolm Gladwell book?
0:25:25 You’ve kind of got the Malcolm Gladwell experience. So, there’s this sort of generalized sense of
0:25:32 pattern and accumulation of what you’ve seen, what you’ve half seen, what you half remember.
0:25:40 There’s also, I think, you know, what your viewers, listeners might notice is I kind of have two modes,
0:25:45 two or three modes. I have a mode that’s sort of discursive and slightly rambling and
0:25:48 free associating and I’ll kind of spiral off in different directions and hopefully come back to
0:25:56 the point. And then there’s a mode where I want to try and pin the thing down and break it apart and
0:26:01 say, what are the two, three, four things that are happening here? Which is what you see in the slides
0:26:08 is, no, like, what is it? It’s this and then this and then that. And that capturing, that’s a way of
0:26:14 trying to understand what this is. I try and work out what I think about this, how I understand it,
0:26:19 how you can break it apart by kind of pinning it down. The thing about data is, and the thing about
0:26:26 the slides and the analysis is, like, I’m always asking, who cares? And I’m always asking,
0:26:32 yes, but what actually matters here? Why are you showing me this slide? Why am I showing you this
0:26:36 chart? And so you kind of have to ask, like, well, what are the actual questions?
0:26:40 What are the questions we’re not asking on AI that we should be asking?
0:26:43 I mean, we’re asking, okay, well, there are some people who are saying, oh, all the value
0:26:48 capture is going to be in the models. There’s this kind of funny split between people who are just
0:26:52 talking about the models getting better and everybody else who’s saying, well, all the value is going to
0:26:58 be in the application layer. And, you know, what are the companies? And let’s fund Cursor and let’s
0:27:02 fund all of this stuff. And why isn’t there a consumer breakout yet? And other people are saying,
0:27:05 what do you mean there isn’t a consumer breakout? Everyone’s using ChatGPT. To which the answer is, well,
0:27:10 not exactly, which is my data point. Some people are using ChatGPT. Most people look at it and don’t
0:27:15 get it still, which is fascinating. There’s this kind of core where’s the value capture question.
0:27:19 Then there’s like a bunch of questions we could have asked two years ago where we don’t have an
0:27:24 answer. Will the error rate ever be controllable or manageable? Will you ever get to a model that knows
0:27:29 when it’s wrong? Which to me seems like, given the statistical system, seems like a contradiction in terms.
0:27:33 But maybe. You could make a list of, like, a dozen questions we could have asked
0:27:39 in early ’23. We don’t really have answers to any of those. I mean, there were some people who were asking,
0:27:43 are these things commodities? Will China catch up? To which the answer even then was obviously yes, of course,
0:27:49 which is what happened, which DeepSeek kind of demonstrated. But we don’t have that many new
0:27:55 questions since then. The thing that I puzzle about right now is, first of all, there’s this whole
0:28:02 nexus, as I said, of like, what is SEO for LLMs? You know, we have infinite product, infinite retail,
0:28:06 infinite media. How will you choose what to buy? What happens if I go to an LLM and say, what mattress
0:28:13 should I buy? What life insurance should I get? How does that work? That poses dozens of questions we
0:28:20 don’t know yet. Then there’s a question around like, the differentiation in the LLMs as product.
0:28:26 Like, it seems to me right now, you could do a double-blind test of the same prompt given to
0:28:35 Grok, Claude, Gemini, Mistral, DeepSeek. I bet most people wouldn’t be able to tell
0:28:40 which is which.
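(A concrete way to picture that test, offered as a sketch rather than anything run on the show: collect each model’s answer to the same prompt, strip the branding, shuffle, and see whether a reader can match answers to models. The placeholder strings are assumptions to be filled in by hand, and strictly speaking this is single-blind, since whoever pastes in the answers knows their sources.)

```python
import random

# Same prompt given to each model; paste the real answers over the placeholders.
responses = {
    "Grok": "...Grok's answer here...",
    "Claude": "...Claude's answer here...",
    "Gemini": "...Gemini's answer here...",
    "Mistral": "...Mistral's answer here...",
    "DeepSeek": "...DeepSeek's answer here...",
}

order = list(responses)
random.shuffle(order)  # hide which answer came from which model

correct = 0
for i, model in enumerate(order, start=1):
    print(f"\n--- Answer {i} ---\n{responses[model]}")
    guess = input(f"Which model wrote this? Options: {sorted(responses)}: ")
    correct += guess.strip().lower() == model.lower()

# Pure guessing averages one correct answer out of five.
print(f"\nYou identified {correct} of {len(order)} correctly.")
```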
0:28:46 That question of, like, is there product differentiation? Can there be product differentiation around the LLM as a consumer product? Because right now the models are commodities, but
0:28:52 ChatGPT has way, way, way more usage. So ChatGPT is at the top of the app store rankings. Gemini
0:28:58 bubbles between like 50 and 100. None of the others were in the top 100. Same in Google Trends, same in
0:29:02 the usage numbers, same in the revenue. There’s revenue for corporate APIs. Corporate is a whole other
0:29:07 story. But as a consumer thing, it’s like ChatGPT is now like the brand. It’s the default. It’s the
0:29:12 Google. You use it because you’ve heard of it. And none of the others have broken out. Is that where
0:29:17 we are now? But then if you look at the products, the products, not the models, are all the
0:29:21 same. Whatever the underlying model is, the product’s all the same. It’s really hard to tell the
0:29:23 difference except like they’ve got different color schemes and different icons.
0:29:24 Different branding.
0:29:28 Different branding. But the products are all the same. And this reminded me of looking at
0:29:33 browsers. In that browsers are all the same. The rendering engine underneath might be
0:29:37 different, just as the LLM might be different. But you’ve got an app, an input box, an output
0:29:42 box. And the output box renders what the rendering engine gives you. And the only innovation in
0:29:46 browsers in the last 25 years is basically tabs and merging search into the address bar.
0:29:47 Yeah.
0:29:51 And there are, like, new browser projects. There are people trying to do it now, but it hasn’t worked.
0:29:54 It hasn’t got traction. And is that sort of how LLMs will work in that it’s about the
0:29:59 distribution and the brand is not actually about the product or the model? Or is it maybe more
0:30:06 like social, in that, yeah, photo sharing is a commodity, but there’s a big difference
0:30:11 between Instagram and Flickr and all the other people that tried to do photo sharing. And so
0:30:12 you have to really…
0:30:16 That would almost be an argument that it’s sort of winner take all, right? It’s very hard
0:30:22 for, like use Claude as an example. It’d be very hard for Claude to compete if they don’t
0:30:27 have enough usage to continuously make the investment.
0:30:33 Well, this is a slightly different thing. There doesn’t
0:30:38 appear to be a sort of self-reinforcing cycle in which more people use it because more people
0:30:42 use it. The product gets better because more people use it. So more people use it, which
0:30:46 is what you have with operating systems because you have more apps, therefore more users, therefore
0:30:50 more apps. Or what you have with Google search, where Google has all the feedback from how people
0:30:54 use it that makes the search engine better. You have a network effect in social media that
0:30:56 you’re there because you’re there, because your friends are there, because you’re there.
0:31:02 There’s no apparent equivalent in LLMs right now. There’s no reason why the LLMs get better
0:31:08 because more people use them. Now, that may come. You have OpenAI and people have been
0:31:13 doing memory where it remembers what else you’ve asked, but that seems more like a switching
0:31:17 cost than a network effect. And also it might be easier for you to just ask it what it knows
0:31:23 about you and then tell Claude, or vice versa. So it’s not clear. But we are at that
0:31:26 sort of stage where you’re looking at the browser and saying, is there a way that you can create
0:31:30 stickiness here or that you can create a network effect on the browser? Or is it just that the browser
0:31:37 itself is a commodity? Now, capital is not a winner-takes-all effect in the conventional sense. At any
0:31:41 rate, it’s a different kind of winner-takes-all effect. I mean, I wouldn’t conventionally think of
0:31:45 capital as a network effect. Or rather, it’s not something that’s
0:31:52 inherent in the product. It’s something else. It may be that, yes, ChatGPT, OpenAI, has more money
0:31:53 so they can make their model better.
0:31:58 There’s like six rabbit holes I want to go down before we move on to something new. If OpenAI can,
0:32:03 I liked your point about sort of at the point where AI gets better because people are using it,
0:32:05 then there’s a huge advantage to being OpenAI.
0:32:08 We don’t have visibility on what that would be yet.
0:32:12 At that point, though, whoever’s in the lead would sort of…
0:32:17 If it did, then you could get kind of a runaway. But we should kind of go back and think about
0:32:24 MySpace. Because, you know, in the early phases of these
0:32:29 things, and you see the same thing with the early PC industry, you’ve got a dozen of them. And there’s
0:32:34 often an early leader that falls away later. And so MySpace was the early leader that fell away
0:32:39 later. Then you get a late stage where the S-curve is kind of flattened out, where all the network
0:32:43 effects have kind of solidified and the product quality has solidified. It was very easy actually
0:32:48 to get people to switch back and forth between MySpace and Facebook and Bebo and Friends Reunited
0:32:54 and Orkut and all these other things in the early days. Then you kind of get this
0:33:00 separation out. But then, of course, Instagram comes along. And then TikTok
0:33:04 comes along. Right. And so as soon as you have something that’s a different proposition, that
0:33:10 turned out to be extremely easy to pull that away. You know, Google lost to YouTube, they had to buy
0:33:15 YouTube, Facebook lost to Instagram and WhatsApp, and they had to buy them both. So those are quite
0:33:21 fragile and quite narrow winner-takes-all effects, or at least they appear to be. We don’t know what
0:33:25 that would be or what the modalities would look like. Modalities, sorry, that’s a great meaningless
0:33:29 word. It’s like saying societal. We don’t know what that would look like. And therefore we can’t,
0:33:33 we don’t know how rigid it would be or how it would work because we don’t have it yet.
0:33:37 As I’m sure you know, they’re not retraining the models all the time
0:33:42 with the data. So you don’t have that kind of runaway effect, as in a continuous flow where more
0:33:47 queries produce, you know, better results. So it’s kind of tricky to do that yet.
0:33:52 I want to come back to something you said. You said some people look at ChatGPT and don’t get it.
0:33:56 Yeah, I think this is really important. There’s a whole bunch of survey data on how people,
0:34:00 how many people are using this stuff. You’ve got the numbers from OpenAI,
0:34:05 who say, well, we’ve got this many weekly active users. Funny thing about social is,
0:34:08 when social happened, people would talk about registered users. You know, in the early days
0:34:13 of the internet, people would talk about hits. Yeah. And then we realized that if your web page
0:34:17 has seven items in the menu bar, that’s seven GIFs. So that’s seven hits. So hits was meaningless
0:34:20 and you had to switch to page impressions and then it’s registered users. And then it’s monthly
0:34:23 active users. And on social people said, well, hang on, if you’re using Instagram once a month,
0:34:27 you’re not using it. It’s daily active users or nothing. And weekly active users, we don’t like
0:34:32 either. Now OpenAI is doing weekly active users. And Sam Altman was a social media startup founder.
0:34:36 He knows this. It’s a bullshit number. You look at survey data, and I did this slide in the last
0:34:40 presentation I did of like five different surveys from the US from late last year, earlier this year.
0:34:47 And it’s all roughly the same. It’s like something around 10%, give or take three or four percent of
0:34:52 people, depending on the survey, they’re using this, say they’re using this every day. Another
0:34:59 sort of 15 to 20% of people say they’re using it every week. So say you’ve got
0:35:06 10% of people using it every day and 15 or 20% of people using it every week. Another 20 or 30%
0:35:11 of people who say I use it every month or two. And another 20 or 30% of people who said, yeah,
0:35:16 I had a look, I didn’t get it. And then you have this survey where people say 70% of people are using
0:35:20 AI. And I’m like, wait, what do you mean? There’s a whole other rabbit hole, which is, you know,
0:35:25 people say, well, did you use Snapchat’s face filters? Oh, so you’re using AI. What do we mean by AI?
0:35:29 So let’s be specific. Let’s talk about, are you using a consumer-facing LLM chatbot?
0:35:33 Like, are you going to ChatGPT or Claude and asking questions? That’s the number we want to look at.
0:35:40 Most people don’t think about their inbox as a system, but I do. Email used to be the thing that
0:35:44 helped me run my business. Lately, it felt like the thing getting in the way of it. I’d spend too
0:35:49 much time weeding through low priority messages, trying not to miss the one or two that actually
0:35:55 mattered. And it was draining my focus. Then I started using Notion Mail and everything changed.
0:36:01 Notion Mail is the inbox that thinks like you. It’s automated, personalized, and flexible to finally
0:36:06 work the way that you work. With AI that learns what matters to you, it can organize your inbox,
0:36:12 label messages, draft replies, and even schedule meetings. No manual sorting required. I’ve created
0:36:18 custom views that split my inbox by urgency and topic so I can focus without distraction. And I use
0:36:24 snippets to fire off my most common emails, follow-ups, intros, and scheduling without rewriting
0:36:29 anything. The best part is it works seamlessly with my Notion workspace and is powered by Notion,
0:36:35 the tool trusted by over half of Fortune 500 companies. Notion is known for powerful connectivity,
0:36:41 intuitive functionality, and the ability to supercharge productivity. Get Notion Mail for
0:36:46 free right now at notion.com slash knowledge project and try the inbox that thinks like you.
0:36:51 That’s all lowercase letters, notion.com slash knowledge project to get Notion Mail for free right
0:36:57 now. When you use our link, you’re supporting our show too. Notion.com slash knowledge project.
0:37:09 And to me, there’s a bunch of matrices you could make. So some of this is, it’s early. There’s a counterpoint
0:37:13 here, which is, people do the chart and they say, oh my God, it’s so fast. It’s faster than
0:37:17 smartphones. Yes, because you didn’t need to buy a thousand dollar smartphone. Right. It’s faster than
0:37:22 PCs. Yes, because you know what PCs cost in the 80s adjusted for inflation? It’s like five grand.
0:37:27 It’s free for a lot of them. It’s free. It’s a website. You just go there. Of course,
0:37:31 it’s got faster adoption. And there’s way more people online as well. So even the absolute numbers
0:37:35 are faster than they were for Facebook 20 years ago, 15 years ago, because there’s way more people
0:37:40 online now. Yes. So that’s again an example of my unfair but relevant comparison. You’re sort of
0:37:44 standing on the shoulders of giants. So of course you can get to way more people quicker. But do you have
0:37:51 to keep asking, well, what? Yes, but why do so many more people look at this and not get it? Or even
0:37:57 worse, the not getting it, I can kind of see because people look at everything and don’t get it. Why is
0:38:05 it that somebody looks at this and gets it and goes back every week, but only every week? Right. Why is it
0:38:11 they can only think of something to do with this once a week? I worry about those people. I mean, I’m just
0:38:18 thinking if these numbers are accurate, the 10%, the 15%, you know, 90% of the people that I spend the
0:38:23 most time with are within that 10%. Well, I’m not. Interesting. Tell me more about that. Well,
0:38:29 here, actually, I’ll preface this conversation with my kids don’t use Google anymore. They use it to
0:38:37 find phone numbers or local businesses or places or distances. Everything else, they basically have
0:38:43 defaulted to ChatGPT now. Again, all AI conversations seem to be analogies. So I’ll…
0:38:49 100%. And it’s like, it’s like nuclear weapons. It’s like, no, it’s not. The comparison, I think,
0:38:53 that is interesting here, not perfect but interesting, is to look at early spreadsheets:
0:38:58 software spreadsheets versus spreadsheets on paper. Dan Bricklin creates, and I can’t remember the
0:39:03 other guy’s name, who created VisiCalc in the late 70s. And I think to get an Apple II to run it with
0:39:08 a screen and everything cost like 15 grand adjusted for inflation. And you show this to
0:39:13 an accountant and it’s like, you can change the interest rate here and all the other numbers
0:39:20 change. And we see that now and we’re like, yes, fine. But in 1978, that was a week of work. Almost literally.
0:39:21 That was like amazing.
0:39:27 Yeah. He would do a week of work in half an hour or less. And he has all these stories about
0:39:32 accountants who would, you know, they would be given a one month project and they’d get it done
0:39:35 in a week and then they’d like go and play golf for three weeks because they, partly because they
0:39:38 could probably, they didn’t actually want to tell the client I needed it in a week because the client
0:39:43 would think they hadn’t done it properly. So I look at ChatGPT and I think, right, I don’t write code.
0:39:48 I have zero use for something that will write code for me. I don’t really do brainstorming.
0:39:57 I don’t do summarization of things. I don’t do the things where it’s sort of out of the box,
0:40:03 easy and obvious. And then there’s a sort of mental load of, okay, I’ve got to kind of try
0:40:09 and think of, what things am I doing that it could do for me? And most people don’t think like
0:40:13 that. So there’s a sort of, as I said a moment ago, there’s like a matrix. There’s a matrix of, like,
0:40:20 who has the kinds of tasks that it’s good at, obviously? Who has the kinds of tasks
0:40:26 that it’s good at, not obviously? Who is good at thinking about new tools for the things that
0:40:32 they’re doing? Who isn’t? I’m kind of blown away that you don’t reflexively use AI.
0:40:39 If you were using Salesforce and you had a button that said, draft me an email to reply to this
0:40:45 client, then that gets massive adoption. Well, that’s a feature that we talked about earlier.
0:40:51 But that’s a different thing. Yeah. Is it, is it the chatbot as product where you get this blank
0:40:56 screen and you kind of look at it and you scratch your head and you have to think, well, what is it
0:41:01 that I would do with this? And then you have to form new habits around it. Or is it that it’s
0:41:06 wrapped in product and UI where somebody else has said, it would be really useful for this,
0:41:09 wouldn’t it? And then you look at it and go, oh yeah, that would, I could do that.
0:41:13 Do you think it’s better with qualitative or quantitative analysis?
0:41:22 I think it is presently, and I’m going to give a binary statement: I think today it has zero value for
0:41:28 quantitative analysis. Oh, interesting. Because if, well, let me, let me qualify that. Do the numbers
0:41:33 need to be right or roughly right? Because what all of these things do is they give you something
0:41:39 that’s roughly right. And roughly is a spectrum, but it’s always… You don’t want pi to be 3.1.
0:41:42 Depends how big the circle you’re measuring is.
0:41:49 You know, this is the line about pi: we can calculate it, and however many digits we have
0:41:54 is, like, enough to calculate, you know, the diameter of the universe or something, but people are still adding
0:41:57 more digits.
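(He’s right that a few dozen digits are plenty. A quick worked check, my own arithmetic rather than the episode’s, using the commonly quoted figures of roughly 8.8e26 metres for the observable universe’s diameter and roughly 1e-10 metres for a hydrogen atom, and assuming the mpmath library:)

```python
from mpmath import mp, mpf, pi

mp.dps = 50                      # work with 50 decimal digits of precision
pi_full = +pi                    # pi evaluated at that precision
pi_40 = mpf(str(pi_full)[:42])   # "3." plus the first 40 decimal places

diameter = mpf("8.8e26")         # observable universe, in metres
error = diameter * abs(pi_full - pi_40)
print(error)  # on the order of 1e-14 m, far smaller than a hydrogen atom
```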
0:42:01 So, you know, there’s a little bit of Zeno’s paradox in here: you get infinitely close, and at a certain point it doesn’t matter. And this, at a high level, is some of
0:42:06 the AGI argument that if the thing gets infinitely close to reasoning, without ever actually reasoning,
0:42:10 does it matter? Like at a certain point, the thing, if the thing is always right, if the thing is only
0:42:15 wrong once in a billion years, does it matter that it’s not, that it’s not always right? The problem
0:42:19 today is, it’s not wrong once in a billion years. It’s wrong a dozen times a page.
0:42:21 You don’t want to spit that out and give it to somebody.
0:42:22 And I don’t know.
0:42:23 Yeah, yeah.
0:42:28 So, I had a very early example of this. And I was going to speak at an event at the beginning of 2023.
0:42:32 And the conference people had asked me for a long biography of myself. And I didn’t have,
0:42:37 I don’t have, still don’t have one. And so they’d made one, and they’d made it with ChatGPT and not
0:42:39 told me. And they just sent it to me to check. And I looked at it and I said, what the fuck is
0:42:40 this bullshit?
0:42:44 That’s 2023. That’s, like, generations ago.
0:42:48 That’s not relevant to the point I’m going to make. The point I’m going to make is,
0:42:53 A, it was always the right kind of biography. It was the right kind of degree, the right kind of
0:42:56 university, the right kind of experience, the right kind of jobs. It just wasn’t actually the right things.
0:43:04 But B, I could take that and fix it. So for them, it was useless. For me, it was
0:43:08 very useful. I could spend 30 seconds fixing it instead of spending an hour scratching my head,
0:43:14 which is why I say right or wrong depends, which is a very kind of French philosopher kind of
0:43:19 question. Does the answer have errors? It kind of depends
0:43:27 why you wanted it. Okay. Well, I don’t have use cases where I want something that’s roughly right.
0:43:33 I don’t have use cases where I want a list of 10 ideas or I want it to brainstorm or I want it to
0:43:38 draft me an email or I want it to write code or I want it to generate some images. You know,
0:43:44 my friend who works at a consultancy and they want pencil sketches of concepts and now they can
0:43:49 just use Midjourney to make those. That’s great. Does that sketch, like, does that person at the back
0:43:54 have three legs? Not anymore. No. And if they did, it wouldn’t matter. You could Photoshop that out.
0:44:01 But I don’t do that. I don’t create images. So I don’t have a good mapping of the stuff this is good
0:44:08 for early against the stuff that I do. And the stuff that it, that it maybe would be useful for
0:44:15 is the stuff where it’s actually not yet very good. And the things where you would mitigate that by
0:44:19 saying, well, I would fix it. I don’t do those things.
0:44:24 Okay. So this is a good thing because I wanted to come back to something you said. You said you
0:44:32 think by writing and in a world where you’re taking something generated by AI and editing it,
0:44:35 that’s different than writing. Talk to me about thinking by writing.
0:44:42 My ChatGPT use case, which is more a mental model than a practical thing, is: I
0:44:46 write something. And what I always would ask in the past, and this is kind of your
0:44:51 point about pattern recognition, is, I look at something and say, am I adding value here? Am I
0:44:55 saying something useful? Am I saying something different? Am I asking the key question? Am I
0:45:00 pushing further? Am I asking the next question rather than just answering the obvious
0:45:05 questions? Now I can just say, is this what ChatGPT would have said? And if the answer is,
0:45:09 this is what ChatGPT would have said, then I don’t publish it. Not because people can get it
0:45:15 from ChatGPT, but because anyone would have said that. That’s a perfect analysis in the sense that
0:45:24 it raises the baseline of what qualifies as insight. The difference is the slope of the insights. And
0:45:31 so you wouldn’t say it if ChatGPT is going to say it. And push back on this by all means, but the ChatGPT
0:45:34 level of insight, to use an example, it could be Claude, it could be
0:45:43 Grok, it could be any of them, is increasing at a faster pace than most people’s. And eventually those slopes
0:45:47 intersect, and it’s probably intersected already with, you know, maybe up to
0:45:54 intern level. And next year it might be master’s level, or it might even far surpass that. And if it
0:45:59 hasn’t in some domains, in terms of math, um, the year after it might. And so maybe it’s like five years
0:46:06 before it passes Benedict. Um, and maybe it’s four years before it passes somebody else. And maybe
0:46:12 it’s like it passed me a long time ago. So I think there’s two or three
0:46:19 directions we could take that. One of them, and it’s an interesting theoretical, philosophical
0:46:27 question, is originality. Which is to say, AlphaGo could do original moves because it could do all
0:46:31 the moves, and do moves that no one had done before, not knowing what people had done before,
0:46:34 but it had an external scoring system. It knew that that move was good.
0:46:36 Mm-hmm. Because it had feedback.
0:46:41 Yeah, it had a feedback loop, because every move has a score. It can evaluate the score of every move.
0:46:47 The classic, um, parable of the monkeys and typewriters, or, you know, Borges’s infinite library, is there’s no
0:46:48 feedback loop. Yeah.
0:46:52 Yes, Borges’s infinite library contains new masterpieces generated at random, or the monkeys
0:46:55 with typewriters would generate new masterpieces, but there’s no feedback loop. So there’s no way of knowing.
0:46:59 You’ll see this with music now. You can generate new music. It could generate new stuff that you
0:47:06 wouldn’t know about. For an LLM, variance is bad. Originality is a lower score. So what’s
0:47:13 the feedback loop for original but good? Now, it might be that that’s the same sort of false
0:47:19 question as saying, is it really reasoning, or is it just right 99.9,
0:47:23 you know, nine-nine-nine followed by many more nines, percent of the time? Does it actually understand, or is it just always right without
0:47:27 understanding? Does it actually know that’s original and different, or does it not? And
0:47:31 that’s kind of puzzling. I don’t think we know the answer to that. And it may be the wrong
0:47:37 question, but it’s kind of a puzzle as to how far these things can make things
0:47:44 that are both different from the training data and good? And is knowing that this is different
0:47:50 but good, is it really different or is it just matching the pattern on a longer frequency?
0:47:55 Do you see what I mean? And how much could you actually have predicted that given enough data
0:48:00 that it’s not actually outside the pattern? It just kind of looks like it is if you’re zoomed in more.
0:48:04 And if you zoom out more, then it is matching the pattern. It's like, you know,
0:48:08 thinking about music: how would you know that people would like punk?
0:48:14 You can very easily imagine a generative AI system that can make you more
0:48:18 stuff that sounds like Yes, or more stuff that sounds like Pink Floyd. It might not sound like
0:48:22 good Pink Floyd to Floyd fans, but you could imagine it would make more stuff that sounds
0:48:26 like the Grateful Dead. You know what Grateful Dead fans say when they run out of drugs?
0:48:28 This music’s terrible.
0:48:37 I'm being unkind, but you can imagine the challenge is knowing that now people
0:48:44 are really fed up with 70s prog rock and would really like something else, and that something else
0:48:49 would be punk, and that would work. It's knowing that people in the 40s were really fed up with the war
0:48:54 and would want luxury, and that Christian Dior's New Look would work and would express that.
0:48:59 Could an LLM do that thing? How much variance do you need? I don’t know. It’s an interesting,
0:49:05 like, thought experiment to ask that question. There’s a completely different place to take this,
0:49:12 which is to say this is an appeal for boutiques and in-person events and the unique and the curated
0:49:16 and the individual. There's a shop I always used to talk about. I'm not sure if it's actually
0:49:19 still there. There's a shop in Tokyo, in Ginza, that sells just
0:49:24 one book, and they change what it is once a month. It may have closed 10 years ago. I've been talking
0:49:27 about it for 20 years. But the point is you don't go into the shop and have to work out which
0:49:32 book to buy, but you have to know the shop exists. Or you can be Amazon and you've got 500 million
0:49:37 SKUs and they've got everything. Actually, there's some stuff they don't have,
0:49:41 because those brands want to be individual. They don't have LVMH. But for the sake of argument, Amazon
0:49:45 has everything, but you can't go to Amazon and say, what's a good book? Or, you know, what's a good
0:49:47 lamp? They have all the lamps. You can't just go to it and say, what lamp should I buy?
0:49:54 All of retailing and merchandising and advertising is about where are you on that spectrum and what
0:50:00 else do you do? Do you spend the money on rent or advertising or shipping or what? And how
0:50:07 does that work? And there's more and more of a polarization between, well, if I know I
0:50:12 want the thing, I can get it within 12 hours. But how do I know I want the thing? And as I
0:50:19 alluded to earlier, paradoxically, an LLM could suggest the unique individual thing to you.
0:50:25 Would the LLM also create the unique individual thing? That's a harder, and a second,
0:50:31 different question. It's a question further down the pipe. But the more that the LLM can do what
0:50:37 everybody would probably do or say what everyone would probably say, then the more you push to other
0:50:41 places. That makes a lot of sense. I mean, there’s always going to be a market for insight,
0:50:46 whether it comes from LLMs or people, you have to be providing insight.
0:50:51 You know, our world as quote-unquote content creators is a very wide spectrum of people who
0:50:56 do very different stuff. There's people doing AI slop and there's people
0:51:01 doing, what is it called, the passive income thing. But there's people
0:51:04 who do very different kinds of content coming from different places for different reasons. You know,
0:51:08 Scott Galloway does very different stuff to me. Mary Meeker does very different stuff
0:51:13 to me. You do very different stuff to me. Part of that is about who you are and
0:51:18 your story and the authenticity of it. And some of it is about no one cares who you are, but you're
0:51:22 saying interesting stuff. And some of it's a recommendation algorithm or something else.
0:51:27 There's a book by Zola about the creation of department stores called Au Bonheur des Dames,
0:51:32 which means the happiness of women. And it's basically about a 19th-century Jeff Bezos calling
0:51:37 a department store into existence out of thin air through force of will. And like he invents
0:51:44 fixed prices so that you can have discounts and loss leaders and mail order and advertising.
0:51:49 And, you know, he puts the slow moving expensive stuff at the top of the store and he puts food
0:51:54 and makeup on the bottom of the store. There’s nothing new under the sun. And meanwhile, the shopkeepers
0:51:57 on the other side of the street are saying, like, have you seen what that maniac is doing now?
0:52:02 He’s selling hats and gloves in the same shop. He’s got no morals. He’ll be selling fish next.
0:52:05 And of course, there's a whole plot point about
0:52:09 loss leaders. So again, you can step back and think, well,
0:52:13 people have freaked out about industrialized, mass-produced product before. People have freaked
0:52:17 out about there being too much content. There's a line that Erasmus was the last person to have
0:52:23 read every book. There’s too much AI content slop on the internet now. Like, yeah, how many books
0:52:26 do you think were being published in 1980? Do you think everyone was reading all the books
0:52:27 then?
0:52:32 Yeah, same thing. Just different scales, I guess. What advice would you give students today?
0:52:36 Well, when I was a student, we were all supposed to be learning Japanese. I think that was just
0:52:41 the tail end of that. You know, I was sort of lucky to have a sort of very expensive and old
0:52:47 fashioned and handcrafted education that was all about learning how to learn and learning how
0:52:56 to think. I think there are skills that people used to sneer at that probably shouldn’t have
0:53:00 been sneered at and certainly shouldn't be now. I mean, I'm old enough to remember when people
0:53:05 would just sort of smugly say, well, I'm not computer literate. It was like
0:53:10 being a car mechanic or something. I don't know how to do that. That's not my problem. That's
0:53:14 somebody else's problem. And now, partly because of mobile, I don't
0:53:20 think anyone thinks like that anymore. Should you learn to code? No, I think you should find
0:53:23 out if you want to learn how to code. I think this is like saying, should you learn an instrument
0:53:29 or should you, you know, go take theatre classes? That may or may not be what you should be doing.
0:53:33 Of course, what does 'learn to code' mean in 10 years' time? That's a different question. But
0:53:37 I don't think you should presume you will or won't be a software engineer.
0:53:42 I think you should presume that you will need to be curious and that you'll have many careers,
0:53:48 many jobs and different kinds of jobs. I think you should be focusing on learning how to think.
0:53:51 But I don’t know, I think you should be presuming that everything will change.
0:53:56 Everybody says something like learning how to think. I feel like you would have a really good answer:
0:54:00 what does that mean? Break that down for me, because it probably means different things to different
0:54:06 people. So every now and then I'm slightly perplexed to get an email asking for career advice, because I
0:54:11 think if you looked at my LinkedIn, it's sort of: company shut down, company shut down, company shut
0:54:17 down, lasted a year there, that didn't work. Coming from the UK and seeing the US system, I never
0:54:24 really liked the US idea that if you want a good job, you should be doing maths and business and
0:54:31 engineering. Now, that may be how people hire students here. But I never liked the idea that
0:54:36 studying philosophy or studying history or studying literature is useless because you're just
0:54:41 learning about history. That's not what I learned. Yes, you know, I write lots of analogies about
0:54:46 history, none of which are actually things I studied at university. What I learned studying history at
0:54:53 Cambridge was how to ask what the next question is, how to break this apart, how to read 100 books or 50
0:54:58 books in a week and find the bits that you need, how to synthesize lots of information, how to ask,
0:55:03 well, what does that actually mean, as opposed to what it looks like it means? Do you believe this?
0:55:08 Is this credible or should you just jettison that idea? How do you put this together and think about
0:55:14 how you explain something? And that’s what my friends who studied English did, or my friends who studied
0:55:19 philosophy did, or my friends who studied engineering did. That was what you were being taught how to do.
0:55:26 You weren't being taught to be an English scholar or to be a historian. And I'd hesitate to think
0:55:33 that, you know, you can only build a company, or only work for Goldman's or McKinsey or a big
0:55:40 law firm, if you had a particular kind of education, a particular kind of degree. I think you should be
0:55:48 looking for what’s going to challenge you and push you and give you the ability to learn and think in
0:55:54 different ways. But again, this is me, you know, what are the skills that you have? How does your brain
0:55:58 work? How do you think about things? And it took me, God, 20 years to work out what I was good at.
0:56:04 Although I’m not sure that you can know that as a student. So you have to try and find what you’re
0:56:10 good at, as well as, you know, learning to think. Maybe learning to think is what I do. Maybe that’s
0:56:13 not what you should be doing. You should be asking: what is it that you should be learning to do? What
0:56:16 are the things that you're good at? Try all the different things. I don't know. It sounds
0:56:21 like a university commencement speech now. I don't fucking know. But you don't know what you're
0:56:26 going to be good at. So you kind of want to try and like create options for yourself.
0:56:36 What did you learn about investing working at a16z? So there's a bunch of maxims or sayings.
0:56:40 I wouldn't want to dignify them as theses or anything else, but there's a whole
0:56:45 bunch of maxims and sayings in venture, which, you know, we could have a podcast talking about,
0:56:49 but there are better people to give you a podcast talking about the mechanics of venture. But
0:56:54 you come to understand what startups are and how they work and how the machine works:
0:56:58 startups are an industry, and Silicon Valley is like a machine for creating startups.
0:57:03 And still too many people kind of look and say, well, that was a dumb idea. And it’s like, well,
0:57:08 that’s the wrong question. The question is, if you look at a startup and you think, could it work?
0:57:12 And if it did work, what would it be? And could those people make it work? And then you understand
0:57:16 more, like the mechanics of, well, how does social media work? And how do people build companies? And
0:57:21 what is it like to create a startup, which is a whole other conversation. I think something else
0:57:27 that I learned was, um, calibration. This is sort of, again, another metaphor I always think of,
0:57:32 which is that if you go to like a really great art gallery, like you go to the MoMA or, you know,
0:57:39 the Met or the Louvre or something, everything there is a masterpiece. If you go to a smaller,
0:57:44 weirder art gallery, like a gallery in London called the Wallace Collection, or, I was in
0:57:50 Rome a couple of weeks ago and I went to one of the sort of old aristocratic palaces. And this palace
0:57:57 is like 10 or 15 rooms of pictures. And they've got a quite good Tintoretto and a
0:58:05 maybe-Titian and a Raphael. You see it glowing across the room. And you're like, oh, that's why he's
0:58:11 Raphael. And it's the same when you see lots and lots of startups: oh, that's why this is great.
0:58:17 Oh, no, that's why this is bollocks. You get 10 minutes in and, God, I've got another
0:58:21 45 minutes where I've got to pretend to be interested and polite so the founder has a good experience.
0:58:31 It's seeing that contrast and texture, seeing what good looks like, seeing what worked,
0:58:36 what didn't work, what people tend to say, how things tend to work. Pattern recognition as much as
0:58:41 anything else. You also get all sorts of other kinds of cultural context
0:58:45 around, you know, Silicon Valley, which can be very high school. You know, I used to say it's an
0:58:49 industry town. And I always used to say it was like being in a college town where there's one subject.
0:58:54 So everybody you meet is doing the same thing. And so in some ways that’s very powerful.
0:58:58 You know, you want to do a PhD, everyone around you is doing a
0:59:01 PhD. Here's the world expert on the subject. Of course you are. It's like being a middle-class kid.
0:59:03 Of course you’re going to university. What do you mean you’re not going to, everyone’s going to
0:59:06 university. Of course you’re going to do great work. Of course you’re going to start a company
0:59:11 and you're surrounded by the people who've done it. You want to get a CTO who's done it five
0:59:16 times. You want to get a head of growth who's done it five times. They're all there. The other side of
0:59:20 that is you’ll never meet anybody who isn’t working on exactly the same stuff and isn’t interested in
0:59:24 what you’re working on. So you have no external context. You have no external perspective.
0:59:28 The nearest theater is in LA or, you know, Chicago. And I think, like, is there even theater in LA?
0:59:31 You want to go to an art gallery, you've got to go to LA.
0:59:35 Who's the best positioned right now? So from the outside looking in, you know,
0:59:42 Zuck is going all in on AI. Elon seems to be going all in on AI. Which of the
0:59:48 leading companies, do you think, hey, why are they all of a sudden shifting? And really,
0:59:53 you know, they were dabbling in it before, but now they're really committing, you know,
0:59:58 tens and hundreds of billions of dollars. And then who's the best positioned in this
1:00:03 space? If you had to pick one and invest your entire net worth in it, who would it be?
1:00:05 Well, that's several different questions.
1:00:06 Let’s pick them apart.
1:00:11 Like, this was a technology that had been kind of floating around before ChatGPT. GPT-3 was out and
1:00:15 everyone kind of saw that it didn't work very well. And then ChatGPT was, no, actually it
1:00:20 works well enough. And there's been this explosion of interest since then. And so I think
1:00:28 Google, Microsoft, AWS and Meta spent about $220 billion of capex last year,
1:00:32 and will probably spend something over $300 billion this year. And so that's basically more than doubled,
1:00:36 almost tripled, I think, from a couple of years ago. So there's an enormous surge of capex investment
1:00:41 in this. And we've got these stories about, like, well, Meta bought half, 49%, of Scale AI for $15
1:00:48 billion, and apparently looked at both of the other recent OpenAI spin-outs, Safe Superintelligence
1:00:52 and, what's the other one, Thinking Machines, which are both basically pre-product, pre-revenue
1:00:58 labs with somebody from OpenAI at multiple-tens-of-billions valuations. And apparently
1:01:03 Sam Altman complained that Mark is offering people $100 million to join. So Mark's in beast
1:01:08 mode. Microsoft is in this kind of weird situation in that its own models aren't actually very
1:01:13 good, but it's got this very weird relationship with OpenAI. OpenAI, Sam Altman
1:01:17 is, I was going to say a polarizing figure, but actually opinions about him tend to be fairly
1:01:22 unanimous and tend to be fairly negative. Like, everybody who's ever worked with him quit.
1:01:29 OpenAI itself still kind of sets the agenda, but much less so than it did two years ago.
1:01:34 And I wouldn't want to do a detailed calling of the scores on whose models are good and
1:01:38 whose labs are good. But, you know, objectively, Google is clearly firing on all cylinders
1:01:44 now and is making great models. Llama seems to have been an embarrassment,
1:01:49 and so Meta is kind of scrambling to catch up. Apple is in a slightly different position in that
1:01:54 they are always sort of taking the position that they don’t want to be first. They want to do it
1:01:59 right. And that they don’t need to be doing whatever the latest consumer internet thing is like. They
1:02:04 don’t have a YouTube. I think Craig Federighi said in an interview after WWDC, like, we don’t have a
1:02:07 YouTube. He didn't call it YouTube. We don't have a YouTube. We don't have a car-sharing service.
1:02:12 We don't do grocery delivery. We also don't have a chatbot. Okay, but that wasn't quite the question.
1:02:18 The question for Apple is how much does integrating an LLM into the operating system change the
1:02:24 experience of what it is, potentially shifting the competitive balance with Pixel, which at the
1:02:27 moment basically only gets bought by people who work for Google and people who work for the
1:02:32 tech press, and literally no one else buys Pixels. Well, because Google doesn't want to compete with
1:02:36 Samsung. We could go down a whole smartphone industry rabbit hole. There’s this sort of
1:02:41 question for Apple around, does this actually change the experience of what a smartphone is,
1:02:48 what the ecosystem is? Does it end up kind of getting Microsofted in the sense that you’re going
1:02:53 to still, for the time being, you’re still going to buy a smartphone. It’s not at all apparent
1:02:56 there’s going to be another device. And if there is, it’s a long way away and it might be an Apple
1:03:00 device as well. But you’re still going to buy a smartphone. You’re still going to buy the nice one
1:03:05 with a good battery and the fast chip to run the models and the good screen and the best camera,
1:03:08 which will still be an iPhone because Apple still has the best chip team and a whole bunch of other
1:03:13 hardware advantages. But everything you do on it will be from someone else. And it won’t be someone else in
1:03:17 the sense that it’s an app from the app store. It will be someone else in the sense that it’s a model
1:03:22 running in the cloud, which is what happened to Microsoft in the 2000s, which was everyone had to
1:03:26 get on the internet. To get on the internet, you needed a computer. You weren’t going to buy a Linux
1:03:30 computer. You probably wouldn’t buy a Mac either. So everyone bought a Windows PC, but they were using
1:03:38 it to do web stuff, not Microsoft stuff. So Microsoft kind of lost that. And so that would
1:03:42 be the concern for Apple: you'll still buy your new iPhone, and you'll buy the iPhone Air this
1:03:47 autumn because it will be thinner and lighter and it will be a lovely phone, and you'll use it to do
1:03:56 ChatGPT. But the counter-argument would be to say, yeah, you'll use it to do ChatGPT and DoorDash and
1:04:03 Uber and Instagram and that cool new game and the other cool new game that's enabled by LLMs, and to do
1:04:08 TikTok, and to do, and, and, and. And it will be kind of the same, except they'll…
1:04:12 So, you see what I mean, there's this sort of slight fuzziness around what the bear
1:04:19 case for Apple actually is. Does it sort of end up like Microsoft did? How bad is that exactly? What
1:04:20 does that mean? Yeah.
1:04:27 If we all end up wearing something like this, lit up by AI, then that's a bigger shift, but it's
1:04:34 very unclear how close that really is. Just for the optics. There's another axis here, which is
1:04:39 what happens to Google search? Where does that money go? How do you map the search activity that
1:04:46 goes to an LLM, and how do you map that against where the revenue is? And also, from the other side,
1:04:52 how do you map that against where the publishers are? How do you think about whether you just shift your
1:04:57 habit and you're actually using ChatGPT as Google? It's basically doing what Google does, but you've
1:05:02 shifted the brand and you're going to that search box instead of the other search box. I don't think you
1:05:09 can count them out from absorbing that. So who would you pick, out of the public companies, if
1:05:14 you had to put your money in one of the top, like, seven? Well, then there's a valuation question. And
1:05:19 I, well, a long time ago, I was a public markets analyst and I was bad at being a public markets
1:05:22 analyst, equities analyst for a bunch of reasons. One of which was I was never interested in
1:05:26 share prices. Well, we've given the standard disclaimer. But say you're forced to: what would you pick?
1:05:36 It’s hard to see iPhone sales slipping from what we see now. Even half the cool, sexy Google stuff is in the
1:05:43 Google app on the iPhone. I think with Meta and Google, there is this sort of big question around where the ad
1:05:50 revenue goes and how much the ad revenue gets pulled away to different places. I think Instagram is probably
1:05:56 in a very good place in terms of changing what advertising looks like and how that works. I mean,
1:06:03 I had a slide in my presentation, which was, what Meta and Amazon want to do is to make LLMs a
1:06:08 commodity that's sold at cost. Yeah. Now, this is why Meta made it open source: because they want to
1:06:11 make it a commodity that's sold at cost. And they differentiate on top with Meta stuff, with
1:06:15 Facebook, social, Instagram stuff. And they want the model itself to be just infrastructure. Amazon would
1:06:19 also like it to be commodity infrastructure that's sold at cost, because that's what Amazon does: they sell
1:06:23 commodity infrastructure at cost and they do it better than anybody else. And they make a lot of money from
1:06:27 doing that. Go to Amazon's financials and basically all the money comes from AWS and the ads. People
1:06:32 who complain about AWS don't always realize what the ads make. Amazon did $50, $60 billion
1:06:39 of ad revenue last year. So Amazon seems to be fine, but there's a bunch of stuff to navigate around
1:06:44 how this changes how people buy stuff on Amazon. Who does that leave? Microsoft. There's this line
1:06:49 from Bismarck that the great man is somebody who hears God's footsteps through history and grabs onto his
1:06:54 coat as he walks past. And Satya, first of all, tried to grab onto VR and AR
1:07:01 with HoloLens, and we don't talk about that anymore. Now it's AI, and their own models are not
1:07:06 really ranking. I mean, they hired Mustafa, but they're still struggling. They've got this weird,
1:07:12 contentious relationship with Sam Altman and OpenAI, and it's basically not their models. On the other hand,
1:07:18 like they’re going to sell an awful lot of Azure to run all this stuff, which again is this tension.
1:07:24 Is it that everybody just uses ChatGPT to do the thing? Or is it that someone is going to
1:07:31 come to you with a great accounting product to run Farnam Street, and it runs on Azure and it uses
1:07:37 some LLM? Who cares which one it is? It's just better. You know, you go to your bank and it
1:07:42 does the cool stuff, and you go to that and it does the cool stuff. You know, my use case for an LLM is, do my
1:07:49 fucking invoicing for me. It's not even that; it's, work out why exactly it is that that client's ERP
1:07:55 doesn't like my bank account, and not have me spend the next three months bouncing emails back and forth
1:08:02 with somebody in India trying to get this done. That would be a great use case, but LLMs
1:08:07 can't do that yet. If they could, that would be great, but we're not there yet. So Microsoft and Google
1:08:12 are in this sort of position of being the incumbent. How can I put this? I'll give you
1:08:15 something more systematic; again, I'm sort of thinking my way through to the answer to your question.
1:08:21 For Google and Microsoft, they have an incumbent business that is potentially disrupted pretty
1:08:25 profoundly by this, but they also have a cloud business that sells all the new stuff for this.
1:08:30 Amazon has an incumbent business that doesn’t get disrupted by this, at least much less,
1:08:33 obviously, and a cloud business that will be very happy selling all of this stuff.
1:08:39 Meta doesn’t have a cloud business selling this stuff and has a bunch of new ways to make money
1:08:42 from all of this new stuff, except they’ve got to have some better models.
1:08:50 Apple, is this a competitive threat to the iOS ecosystem? A lot of stuff would have to happen
1:08:56 first. And they’d have to drop a lot more balls before that was to happen. And meanwhile,
1:09:00 they’re still going to sell you the nicest glowing rectangle to do all of this stuff.
1:09:05 And then we’re talking about glasses and VR, which is a whole other two-hour conversation about
1:09:10 when does that happen. There's still people poking around in crypto. Like, Web3. Web3, remember? Maybe
1:09:14 that. Someone said that people still working on crypto are like those Japanese soldiers on islands
1:09:18 in the Pacific who don't know the war's over. But there's still people working on crypto. So
1:09:22 that’s like another disruptive thing coming down the pipe. Who are the other incumbents?
1:09:32 Netflix is a TV company. Then there's the car company, Elon Musk. The whole Tesla conversation
1:09:37 fascinates me because Tesla bulls think it's a software company and Tesla bears think it's a car
1:09:42 company. And at the moment, it's a car company. I mean, yes, they launched autonomous driving.
1:09:48 But what did they launch? They launched half a dozen existing-model cars with test drivers.
1:09:54 Tesla's doing geofenced driving that everyone else was doing 10 years ago. Is that going to scale?
1:09:59 Are they finally going to get the flywheel of having all the camera data, meaning it will work with just
1:10:05 cameras? We've been… I don't know. There's a conversation you could have had 10 years ago.
1:10:09 In fact, I wrote stuff 10 years ago. Are there winner-takes-all effects in autonomous cars?
1:10:13 Will Tesla get it working with cameras before everybody else gets it working with LiDAR? Well,
1:10:18 Waymo’s got it working with 50 grand of LiDAR or whatever that stack costs. It’s tens of thousands
1:10:22 of dollars of extra stuff on the car. So they’ve got it working with all of that stuff. Tesla does
1:10:29 not have it working with cameras. Will Tesla get it working with cameras before Waymo can get rid of
1:10:37 the LiDAR? We could have had that conversation. I literally was on podcasts seven, eight years ago
1:10:43 having those conversations. We don’t know the answer. Maybe. We don’t know. The interesting Tesla
1:10:47 point is people always looked at it and said, it’s the iPhone of cars. No, it’s not. What’s happening
1:10:53 is that cars are becoming Android with no iPhone. And in that metaphor,
1:10:58 Tesla is just another Android phone maker. And they're competing with the whole Chinese industrial
1:11:02 policy to make more. And there's just going to be a flood outside the US; in the US they're protected by tariffs.
1:11:07 Everywhere else, it’s very clear what’s happening. It’s just a flood of EVs that are just as good as
1:11:13 Tesla’s. We always end on the same question, which is what is success for you? We live in the luckiest
1:11:18 time. You know, we are not worried about rockets landing on our heads. We’re not worried about our
1:11:22 children dying from diseases. We're not worried that the bank might be closed tomorrow and all of
1:11:28 your money’s gone. We’re doing something interesting that we enjoy and that pays the rent that we want to
1:11:34 be able to pay. I get paid to fly around the world and show slides for money. So I think I'm doing okay. I could
1:11:44 always be doing more. But I’m always looking for the next question. Like, I’m always trying to be curious.
1:11:47 This was a great conversation. Thanks for taking the time today.
1:11:48 Great. Thank you.
1:11:53 Thanks for listening and learning with us. Be sure to sign up for my free weekly newsletter at
1:11:59 fs.blog slash newsletter. The Farnam Street website is also where you can get more info on our membership
1:12:06 program, which includes access to episode transcripts, my repository, ad-free episodes, and more. Follow
1:12:12 myself and Farnam Street on X, Instagram, and LinkedIn to stay in the loop. If you like what we're doing
1:12:16 here, leaving a rating and review would mean the world. And if you really like us, sharing with a friend
1:12:19 is the best way to grow this community. Until next time.
Benedict Evans has been calling tech shifts for decades. Now he says forget the hype: AI isn’t the new electricity. It’s the biggest change since the iPhone, and that’s plenty big enough.
We talk about why everyone gets platform shifts wrong, where Google’s actually vulnerable, and what real people do with AI when nobody’s watching.
Evans sees patterns others don’t. This conversation will change how you think about what’s actually happening versus what everyone says is happening.
—–
Approximate Timestamps:
(00:00) Introduction
(01:04) What’s your Most Controversial Take On AI?
(05:11) Platform Shifts – The Rise Of Automatic Elevators
(10:07) Profit Margins In AI
(26:37) What Are The Questions We Aren’t Asking About AI
(39:41) What Benedict Uses AI For
(44:21) Thinking By Writing
(47:35) Can AI Make Something Original?
(52:31) Advice for Students In The Age Of AI?
(59:32) Who Will Win The AI Race?
(1:11:09) What Is Success For You?
—–
Thanks to our sponsors for this episode:
SHOPIFY: Sign up for your one-dollar-per-month trial period at www.shopify.com/knowledgeproject
REMARKABLE: Get your paper tablet at reMarkable.com today
NOTION MAIL: Get Notion Mail for free right now at notion.com/knowledgeproject
—–
Upgrade: Get hand-edited transcripts and an ad-free experience, along with my thoughts and reflections at the end of every conversation. Learn more @ fs.blog/membership
——
Newsletter: The Brain Food newsletter delivers actionable insights and thoughtful ideas every Sunday. It takes 5 minutes to read, and it’s completely free. Learn more and sign up at fs.blog/newsletter
——
Follow Shane Parrish
Insta: @farnamstreet | LinkedIn
Learn more about your ad choices. Visit megaphone.fm/adchoices