AI transcript
0:00:02 You know what’s better than the one big thing?
0:00:03 Two big things.
0:00:04 Exactly.
0:00:09 The new iPhone 17 Pro on TELUS’ five-year rate plan price lock.
0:00:11 Yep, it’s the most powerful iPhone ever,
0:00:15 plus more peace of mind with your bill over five years.
0:00:16 This is big.
0:00:21 Get the new iPhone 17 Pro at telus.com slash iPhone 17 Pro
0:00:22 on select plans.
0:00:23 Conditions and exclusions apply.
0:00:27 What’s up, everybody?
0:00:30 Hey, it’s Cam Heyward, your Steelers captain and host of Not Just Football.
0:00:35 On this week’s episode, we break down everything that went down against the Bengals.
0:00:37 The good, the bad, and we got to move forward.
0:00:40 Then we’re shifting our gears to focus on Green Bay.
0:00:44 Whether you bleed black and gold or you’re just a football fan who loves the game,
0:00:46 this is a conversation you need to hear.
0:00:50 Catch Not Just Football with Cam Heyward on YouTube, Spotify, Apple Podcasts,
0:00:51 or wherever you get your podcasts.
0:00:52 Let’s go.
0:00:57 Giving up is unforgivable.
0:01:01 So whether we’re reading more serious books about democracy in our book clubs
0:01:04 or talking about it at family gatherings,
0:01:08 I think Thanksgiving is going to be lit for a bunch of families this year.
0:01:10 That’s something that we can all do.
0:01:14 I’m Preet Bharara, and this week, former U.S. attorney and author,
0:01:18 Joyce Vance, joins me to discuss her manual for protecting our democracy
0:01:21 and the rule of law from President Trump’s overreach.
0:01:23 The episode is out now.
0:01:26 Search and follow Stay Tuned with Preet wherever you get your podcasts.
0:01:30 Today’s number: 43.
0:01:34 That’s the percentage decrease in peanut allergies over the last decade.
0:01:35 It’s a true story.
0:01:38 I’m pretty sure I had a nut allergy when I was a kid.
0:01:41 My parents thought I was trying to avoid church.
0:01:44 Listen to me.
0:01:45 Markets are bigger than us.
0:01:48 What you have here is a structural change in the wealth distribution.
0:01:49 Cash is trash.
0:01:51 Stocks look pretty attractive.
0:01:52 Something’s going to break.
0:01:52 Forget about it.
0:01:54 Oh, my God.
0:01:55 That’s pretty bad, right?
0:01:59 I guess you saw that one coming a mile away, in all transparency.
0:02:01 Brings up an interesting question.
0:02:02 Scott, do you have any allergies?
0:02:03 No.
0:02:04 I literally am.
0:02:06 As a matter of fact, I can’t.
0:02:07 I’m so sick of.
0:02:11 Waiters are, I think, legally mandated in the U.K.
0:02:14 I don’t know if they are in the U.S. to ask you, does anyone have any allergies?
0:02:14 Is that right?
0:02:16 Yeah, they have to ask you, do you have any allergies?
0:02:19 And I’m like, bad service?
0:02:23 I’ve had it with the service in London.
0:02:24 You are so spoiled in New York.
0:02:26 The service is so good.
0:02:26 That’s true.
0:02:28 What are some of your horror stories?
0:02:28 Nothing horror.
0:02:30 I mean, you know, I don’t.
0:02:33 It takes someone three minutes to get me my second Maker’s and ginger.
0:02:34 Those are my horror stories.
0:02:38 My worst days are better than most people’s best days, but it’s not.
0:02:44 I go to this place called, since Chiltern burned down, I got to go to this place called
0:02:48 Kensington Roof Gardens, which has way too many fucking dudes and it’s way too crowded.
0:02:49 Beautiful place, though.
0:02:51 I was there over the summer.
0:02:52 Unbelievable.
0:02:52 Yeah.
0:02:56 It’s great for the 11 days a year you can actually go into the roof garden, but otherwise we’re
0:02:59 all crammed inside wishing the Chiltern was still open.
0:03:03 And I’m not exaggerating.
0:03:07 There’s 700 people lining up at the bar, 695 of them men.
0:03:13 And you’d think, and there’s like two bartenders sitting there like slicing limes very methodically
0:03:14 and elegantly.
0:03:18 And I’m like, Jesus Christ, dude, just start like literally spraying beers at us and we’ll
0:03:19 open our mouths.
0:03:23 That’s the thing that freaked me out about the members club.
0:03:25 You’re a member of Chez Margaux, right? Because now that we’re paying it.
0:03:26 I’m not a member of that club.
0:03:26 You’re not?
0:03:27 I keep seeing you there.
0:03:29 Yeah, I get invited by members.
0:03:31 Well, anyways, the thing about it is it freaked me out.
0:03:34 I’m like, it’s so young here.
0:03:39 And then I realized it’s not, the reason I think it’s so young is, you know what it has?
0:03:43 Most of these clubs have guys in their 40s and women in their 20s and 30s.
0:03:47 And what Chez Margaux has is it has a bunch of dudes in their 20s, which makes it feel like
0:03:48 high school.
0:03:53 I got to agree with you.
0:03:57 Surely, surely, surely that’s a, you’re not very happy about that.
0:03:57 I’m fine with that.
0:03:58 I like it.
0:03:58 They all come up to me.
0:03:59 I love your content on men.
0:04:00 Hey, what’s that like?
0:04:05 And I literally, you, were you student body president or the mascot or something?
0:04:07 Everybody knows you.
0:04:11 And it totally bums me out because I like to get a little fucked up and talk to everyone
0:04:16 at the bar, throw an unlit cigarette in my mouth, put on sunglasses, and just be a cliche
0:04:16 of me.
0:04:21 And then when people come up and say they know you, I feel as if I have to act semi-respectable.
0:04:23 I’m like, oh, hey, hi.
0:04:23 Yeah.
0:04:23 Oh, yeah.
0:04:24 No, I don’t know what to do.
0:04:26 What do you think is going to be the next interest rate cut?
0:04:28 God.
0:04:30 God.
0:04:31 They’re all so fucked.
0:04:31 This is the trouble.
0:04:32 You shouldn’t have been a professor.
0:04:37 You clearly, you should have been like an actor or like a pop star or something.
0:04:38 That’s what you really want.
0:04:39 A hundred percent.
0:04:42 I’m just not that talented, but I agree with you.
0:04:43 No, I think you are.
0:04:45 You’re just, you’re talented in the thing.
0:04:47 You’ve got a good brain.
0:04:49 You’ve got a good economic brain.
0:04:52 And really what you want to be doing is like, you’ve got the Bezos thing.
0:04:55 You want to be, you want to be like Justin Bieber or something.
0:04:58 You had me till the Justin Bieber part.
0:05:02 I would have gone for Ryan Reynolds or George Clooney.
0:05:04 He seems smart and very politically oriented.
0:05:10 But the thing, the thing I hate about your friends, that’s a good way to start a sentence,
0:05:15 is they’re so, they’re so annoyingly earnest.
0:04:19 Professor Galloway, it’s so nice to meet you.
0:05:21 I know Ed, he’s such a good person, isn’t he?
0:05:23 Fuck off.
0:05:29 Anyways, they’re so, anyway, it’s, but it’s, it’s great to meet your friends out.
0:05:31 I’ve got to learn about who these friends are.
0:05:31 I haven’t been there.
0:05:32 You’re Princeton buddies.
0:05:36 What is it, like a reading club or a drinking club?
0:05:40 Like you guys, you guys have Pimms cups or something and play cricket?
0:05:40 Eating, yeah.
0:05:41 Is that what it is?
0:05:42 Eating club, that’s right.
0:05:44 So instead of fraternities, they have eating clubs?
0:05:45 They actually have both.
0:05:45 Oh, really?
0:05:47 But yeah, that is what it’s called.
0:05:48 It’s, it’s called an eating club.
0:05:51 But you were in an eating club, and you could get into any of them because of that faux British
0:05:53 accent, which by the way, everyone is figuring out is total bullshit.
0:05:56 You’re from Nashville.
0:05:59 All right, and enough of this.
0:06:00 Get to the headlines.
0:06:01 One of our douchiest.
0:05:06 Okay, here is our conversation with Daron Acemoglu, Nobel Prize winning economist, New York Times
0:06:08 bestselling author and professor.
0:06:10 This is not an easy segue.
0:06:13 This is not an easy segue.
0:06:15 Oh, God.
0:06:20 Nobel Prize winning economist, New York Times bestselling author and professor of economics
0:06:21 at MIT.
0:06:23 Professor Acemoglu, very good to have you on the show.
0:06:24 Thanks, Ed.
0:06:26 Great being with you and Scott.
0:06:33 So we want to start with AI for obvious reasons.
0:06:36 That’s all we can really talk about at the moment.
0:06:41 We’ve been seeing a lot of bullish sentiment in AI.
0:06:44 We’ve also been seeing a lot of circular investments happening.
0:06:46 A lot of people saying that it’s a bubble.
0:06:53 Last year, you were asked to answer on a scale of one to 10, how much of an impact AI is going
0:06:54 to have on the world.
0:06:58 And your answer at the time was negative six.
0:07:05 So we want to unpack this, starting with that hot take.
0:07:08 What are your views on AI at this point?
0:07:12 And do you still have a negative six rating on AI right now?
0:07:15 Let’s break that into three pieces.
0:07:21 One is, is AI a transformative technology with great capabilities?
0:07:22 Yes.
0:07:24 That’s why it’s not minus one or plus one.
0:07:26 It’s minus six.
0:07:33 Second, how quickly will this technology reach fruition?
0:07:38 And there, I think the answer is, it depends on how quickly it’s pushed.
0:07:44 So I think for a positive development path, I think we need a more deliberative approach.
0:07:54 We are rushing into AI in a way that I think makes applications using AI less likely to develop
0:07:57 because we are just doing it too quickly.
0:08:03 And we also don’t have a roadmap of what it is that we really want from AI, while we are,
0:08:08 everybody recognizes that this is a technology that’s going to have tremendous number of side
0:08:10 effects, foreseen and unforeseen consequences.
0:08:18 So all of these things, plus, very importantly for me, the focus on automation and AGI, while
0:08:23 there are better things to do with AI, I think tip the scales towards negative.
0:08:28 So when it’s minus six, minus five, minus seven, take your pick, but I’m very worried
0:08:33 about the direction of AI, where it’s much more concentrated, who uses, who controls information
0:08:34 and what we do with it.
0:08:35 Yeah.
0:08:39 Could you break down what that negative impact would actually look like?
0:08:43 There’s the, there’s the aspect of the concentration of power, and I’m sure that could have many
0:08:44 implications.
0:08:45 We’re also worried about it.
0:08:53 There is the implication of automation and the idea that this would replace labor, it would
0:08:54 replace people’s jobs.
0:09:01 Say more about how it goes from negative one to negative six in your view.
0:09:03 What is that destructive impact that you’re so worried about?
0:09:13 In the production domain, if we use AI, mostly for automation, I think not only would we be
0:09:22 missing some of the really transformative uses of it, but we would create much smaller productivity
0:09:24 gains than expected.
0:09:34 And we would also create various social outcomes that are negative related to job loss for certain
0:09:41 groups, lack of employment opportunities for certain groups, wage stagnation or declines like
0:09:44 we’ve experienced in the 1990s.
0:09:49 All of these are on the negative part related to the production process.
0:09:59 But I’m also very worried about the fact that AI is first and foremost a communication technology.
0:10:05 And as a communication technology, it changes political and social dynamics.
0:10:15 And when it centralizes information in the hands of a few companies, it can have a variety of
0:10:25 very negative effects on democracy, on dissent, on diversity, a variation in opinion, all sorts
0:10:27 of things that we are really not prepared for.
0:10:34 What would you say to someone who would say that you are being a Luddite or, I mean, we’ve
0:10:42 seen transformative technologies in the past, whether it be oil or the electric grid or the
0:10:48 internet, and it seems as though when these transformative technologies come along, there is a lot of
0:10:52 concern about what it will do to our economy and how it will negatively impact our economy.
0:11:00 But many people would say, well, eventually it works out and we shouldn’t be so worried about
0:11:01 technology.
0:11:03 What do you say to those people?
0:11:11 Well, there are really two theories that people could have about long-run effects of technologies
0:11:12 in general.
0:11:19 The first one is that in the long run, things will work out by themselves.
0:11:20 Let it rip.
0:11:28 And just the dynamics of things, for example, in the labor market or via democratic processes,
0:11:34 sometimes semi-democratic processes, is that we’ll just find the right way of dealing with
0:11:35 things.
0:11:44 The second one is that, no, it’s a deliberative set of choices that we have to make in order
0:11:50 to make sure that the long-run effects are better than the short-run ones.
0:11:56 And I think if you look at several transformative technologies, indeed, they did have fairly
0:11:58 negative short-run effects.
0:12:02 The beginning of the British Industrial Revolution is associated, depending on how you measure it,
0:12:10 70, 80, 90 years of real wage declines or stagnation and huge increase in inequality.
0:12:16 The transition out of agriculture, similarly, at first created quite a lot of social and
0:12:17 economic hardship.
0:12:21 But in both cases, later adaptation worked out much better.
0:12:26 But I would say, even though decisions were made without a roadmap, there were specific decisions
0:12:32 about how to use technology, how to change the organization of production, and also political
0:12:33 decisions that were quite important.
0:12:35 And so it’s not an automatic process.
0:12:38 So I definitely, I’m not an AI pessimist.
0:12:43 And so I don’t know what your definition of Luddite is, but I’m not an AI pessimist.
0:12:49 But I do not believe that we’re going to get the best out of AI or even the second best out
0:12:52 of AI if we just say, oh, let’s not worry about all the disruptions.
0:12:54 Somehow things are going to work out.
0:13:05 So it feels as if a lot of the concern or discussion around AI has moved towards who can secure reliable,
0:13:08 large amounts of electrons or energy.
0:13:11 One, is this an attempt?
0:13:16 It sounds like, do you think this is an attempt to paint a future where the demand is going to
0:13:17 be unlimited and a bit of a head fake?
0:13:23 Or do you see the same sort of power constraints that these guys are seeing?
0:13:25 And what do you see as kind of the downstream impact of that?
0:13:36 I do not believe that power, GPU capacity is the main limiting factor for the kind of AI that
0:13:38 I have in mind.
0:13:47 Because if we’re going to make AI really serve our needs, I think it needs to have much more
0:13:52 human complementary domain-specific expertise.
0:14:01 It has to be able to be an aid to electricians, to accountants, to journalists, to academics.
0:14:10 And for that, high-quality domain-specific information, data, is going to be the real scarce resource.
0:14:18 Whereas right now, the energy demand is mostly from foundation models that are very impressive
0:14:25 in some ways, but also very, very expensive to run, and have not reliably reached that context-specific
0:14:28 domain-relevant expertise.
0:14:33 So I think we have to find a way of getting the best out of foundation models, but combine
0:14:35 them with domain-specific models.
0:14:40 What I’ve also seen is that a lot of the different LLMs are hitting sort of a technical
0:14:45 parity by most kind of, I don’t know, hardcore metrics.
0:14:46 Do you think there’s a scenario?
0:14:52 We had Robert Armstrong from the FT on several times, and he said that there’s certain technologies
0:14:59 where no one company or small set of companies is able to ring-fence stakeholder, and therefore
0:15:00 shareholder value.
0:15:06 That the airlines, PCs, vaccines were huge innovations, but you didn’t have a small number
0:15:09 of companies garnering trillions of dollars in market cap.
0:15:14 Do you think this might qualify as one of those industries where, even if it ends up being having
0:15:19 a huge impact on society, that we’re overestimating the ability of a small number of companies to
0:15:21 capture a ton of shareholder value?
0:15:26 I would be very worried about an industry that is so concentrated.
0:15:34 On the other hand, it’s a cutthroat industry, and we don’t understand what sort of industrial
0:15:36 organization of AI is going to emerge.
0:15:46 So it is almost certain that whoever is doing the foundation models advances is not going to
0:15:49 be the same one, same company that also does all of the applications.
0:15:51 So you’re going to form an AI stack.
0:16:00 So once you form that AI stack, where are the risks going to reside and which part, which
0:16:03 layer of the stack is going to get most of the returns?
0:16:04 I think that remains to be seen.
0:16:11 It’s going to depend on where the real bottleneck is in terms of doing useful things and whether
0:16:18 the foundation models are close substitutes for each other when it comes to serving as the
0:16:20 first layer of that stack.
0:16:22 I think those are really interesting questions.
0:16:31 And it is made more interesting by the fact that industries that look very competitive at some
0:16:38 point later on can be very non-competitive because early competition is about being the one that
0:16:39 controls things later on.
0:16:44 That’s why it’s so vicious because everybody thinks that they’re going to get the prize and
0:16:46 the prize is dominating the industry.
0:16:51 So I don’t know whether the competition that you’re seeing right now is going to repeat
0:16:57 itself in 10 years time or whether we’re going to go to a winner takes all sort of structure
0:16:59 at the foundation layer.
0:17:06 In your book, Power and Progress, you basically take us through history and the history of
0:17:07 technological progress.
0:17:15 And you make the point that technology has not necessarily been as beneficial as we think
0:19:21 of it because of how it has been distributed throughout societies, which seems extremely relevant
0:17:25 to what we’re likely about to see with AI.
0:17:27 Take us through that history.
0:17:30 What is your reasoning for that?
0:17:31 How does that play out?
0:17:39 Yeah, I think there are essentially several reasons why the full potential of a suite of
0:17:43 technologies may not be realized or may not be realized quickly.
0:17:44 One is monopoly.
0:17:55 If one or a few companies dominate everything and they use that in order to extract all the
0:18:02 rents, but also slow down innovation, that’s one recipe for many good things not happening.
0:18:09 So today, I think the digital world or communication world would be very different if AT&T had remained
0:18:12 like the sole monopoly.
0:18:16 So the breakup there probably opened the field for more shakeup.
0:18:27 The second is, you know, whether that technology is working with labor or is sort of just replacing
0:18:33 labor and worse, like during many parts of human history is becoming a tool for repressing labor.
0:18:36 Those don’t work out great.
0:18:39 I mean, slavery was not a very efficient system.
0:18:45 It wasn’t just bad for the coerced people, but it wasn’t actually generating economic dynamism.
0:18:53 At the time of the Civil War, the U.S. South was falling further and further behind while the number of patents,
0:18:59 innovations, industrial production, and all sorts of other things were advancing rapidly in other parts of the United States.
0:19:06 So I think all of these things we have to sort of take into account, is AI going to be a monitoring technology
0:19:14 where workers become more and more powerless because there’s so much data being collected about them?
0:19:17 That’s another concern that we don’t often talk about.
0:19:23 So there are many issues here that are intersecting, and we see parallels for each one of them in history.
0:19:25 None of those are perfect parallels.
0:19:31 We’ve never been confronted with a technology that’s so widespread in its potential applications.
0:19:35 But we’ve had other technologies that are quite transformative as well.
0:19:39 So how do we harness AI in a good way?
0:19:48 I mean, what does regulation look like in your view, such that it is a net benefit to society versus a negative six?
0:19:54 First of all, I think we need to be thinking about what it is that we want from AI.
0:20:04 Of course, not everybody’s going to agree on that, but at least that conversation needs to be had in a more open, constructive way.
0:20:09 And second, we also need to be clear about what we mean by regulation.
0:20:14 I think what most people mean by regulation is a reactive kind of regulation.
0:20:22 Something happens, AI companies do something, and then we are worried about certain additional harms, and then we regulate that.
0:20:26 I think instead, what we want is something more proactive.
0:20:38 Let’s think about where it is that AI can do most good and ask ourselves whether it is going in that direction and what are the impediments for it not to go in that direction,
0:20:41 and see whether we can do certain things to facilitate that.
0:20:49 So, if we did not do those facilitatory things, we would not have the internet because the government support there was quite important.
0:20:58 We would not have renewable technologies that are now, at least in certain applications, cost competitive with fossil fuels.
0:21:12 So, it’s not like we have a sort of law that says the market or, in particular, a few companies that are steering technology are necessarily going to choose the right paradigm.
0:21:24 I think the market is excellent in doing certain things, but market participants are also locked in a particular type of business model often.
0:21:33 They are going after a particular kind of prize, and it is possible to step back and say, well, is there another prize that we should be focusing on?
0:21:47 That’s what I’m arguing that human complementary AI, where we try to augment human capabilities, expand human capabilities, could have real benefit, and that’s not the direction in which we’re going.
0:21:57 We’ll be right back after the break, and if you’re enjoying the show so far, be sure to give Prof G Markets a follow wherever you get your podcasts.
0:22:09 This episode is brought to you by Peloton, a new era of fitness is here.
0:22:14 Introducing the new Peloton Cross-Training Tread Plus, powered by Peloton IQ.
0:22:20 Built for breakthroughs, with personalized workout plans, real-time insights, and endless ways to move.
0:22:25 Lift with confidence, while Peloton IQ counts reps, corrects form, and tracks your progress.
0:22:29 Let yourself run, lift, flow, and go.
0:22:33 Explore the new Peloton Cross-Training Tread Plus at OnePeloton.ca.
0:22:38 Support for the show comes from Workday, the to-do list of a small business leader.
0:22:41 Close the books, get your people paid, and bring on new hires.
0:22:46 Look, running a small or mid-sized business can be exciting, but it can also be chaotic.
0:22:48 That’s where Workday comes in.
0:22:51 Workday Go makes simplifying your business a whole lot simpler.
0:22:56 Imagine this, the important aspects of your company, HR and finance, all on one AI platform.
0:22:59 No more juggling multiple systems, no more worrying about growing too fast.
0:23:04 Just the full power of Workday helping small to mid-sized businesses like yours run more smoothly.
0:23:06 And Workday Go activates quickly.
0:23:09 You can be up and running in 30 to 60 business days.
0:23:11 So, simplify your business.
0:23:12 Go for growth.
0:23:13 Go with Workday Go.
0:23:16 Visit Workday.com slash go to learn more.
0:23:21 Hey, it’s me.
0:23:23 Hey, it’s me.
0:23:24 Hey, it’s me.
0:23:25 Um, you.
0:23:26 From the future.
0:23:29 Uh, big thanks for getting the HPV vaccine.
0:23:34 I mean, with that one move, you help protect us against several cancers later in life.
0:23:35 So, thank you.
0:23:37 Or, thank us?
0:23:40 I’ll just text you myself.
0:23:45 The HPV vaccine is safe, effective, and free for eligible youth.
0:23:48 Learn more at healthlinkbc.ca slash HPV.
0:23:50 A message from the government of BC.
0:23:58 We’re back with Prof G Markets.
0:23:58 Do you think the U.S. is going to be able to maintain – I mean, other than DeepSeek, it’s just very difficult to think of another AI player of almost any importance globally outside of the U.S.
0:24:14 Do you think that the U.S. is going to be able to maintain that type of lead in the AI ecosystem?
0:24:17 China has an engineering advantage.
0:24:20 They have a huge number of engineers.
0:24:34 They’re generally well-selected, meaning that more talented, quantitatively sort of skilled people enter into engineering because it’s a very prestigious thing, and they have exams that are relatively unbiased.
0:24:37 And engineers are highly regarded.
0:24:42 So, when it comes to pure engineering things, I think China could have an advantage.
0:24:46 On the other hand, the top-down system is hugely inefficient.
0:24:50 There are so many places where inefficiencies build up.
0:24:51 People are afraid of taking initiative.
0:24:56 There is no decentralized sort of process.
0:24:58 There, the U.S. has an advantage.
0:25:01 How that will shake out at the end would really depend.
0:25:03 DeepSeek is an engineering marvel.
0:25:05 They didn’t come up with any of the new methods.
0:25:10 The new methods that DeepSeek was using, some of them were invented by Google and OpenAI.
0:25:15 Many of them were invented 20 years before by machine learning scientists.
0:25:19 So, they just took them, but they combined them quite well.
0:25:21 How many more times can they do that?
0:25:22 That’s going to be one of the questions.
0:25:25 And then, of course, it’s not just U.S. and China.
0:25:28 Can other countries catch up?
0:25:30 Europe is behind, clearly.
0:25:35 But I don’t think there is a law that Europe has to be behind.
0:25:40 There are many talented AI scientists in Europe that just happen to be all working in Silicon Valley.
0:25:45 I think this is a good segue into your 2012 book, Why Nations Fail.
0:25:50 We’re discussing how America could fall behind in the AI race.
0:26:00 And it’s interesting, you’re kind of highlighting that on the one hand, there are some benefits to this top-down communist structure where you can set the agenda and the tone for the nation.
0:26:04 The tone being, we love engineers and we love people who build AI.
0:26:10 On the other hand, it can be a problem when you don’t have the competitive forces of capitalism at work.
0:26:18 And it seems as though this is going to be sort of the defining question of our time, which one works.
0:26:18 100%.
0:26:21 I think that’s very well put.
0:26:26 Which brings me to the question, why do nations fail?
0:26:31 I mean, you’ve done Nobel Prize-winning research on this, the role of institutions.
0:26:35 Why do nations fail?
0:26:36 It’s mostly institutions.
0:26:43 There are other factors, but many of these other factors, such as civil wars, are institutional as well.
0:27:04 And the role of institutions, both formal rules, but also informal arrangements and norms, they become much more important when we’re dealing with sectors that are forward-looking, innovative, require small players to scale up.
0:27:09 All of those are things that the U.S. was doing quite well.
0:27:19 You know, the United States had an ecosystem of startups that, you know, were extremely confident.
0:27:21 You know, people, when they opened, I mean, I knew many of them.
0:27:22 I know many of them.
0:27:31 Very few people think, well, if I launch a startup and if I’m successful, will I be shut down by courts?
0:27:33 Will I be crushed by my competitors?
0:27:38 Will I be able to get any contracts when my competitors are favored by the government?
0:27:39 Nobody thought about that.
0:27:50 Nobody thought about that, partly because I think people were on the optimistic side, but largely because U.S. institutions had a pretty good track record of not doing that.
0:27:53 I think we’re no longer sure.
0:27:56 There are favored companies and not favored companies.
0:28:00 Courts are much less impartial.
0:28:04 Scaling up may become much harder when there is more uncertainty.
0:28:13 So, I think there are a bunch of issues where the institutional advantage that the U.S. economy had is more of a question mark today.
0:28:19 And the problem is that when you mess up certain things, you pay the price right away.
0:28:27 If you mess up institutions, especially as they pertain to innovations, you don’t pay the price because the impact is not going to be felt for another five, 10 years.
0:28:46 So, if there are some fundamental negative effects from Trump’s attack on independent judiciary, the costs of that will be realized not in 2027 or 2028, but probably in the 2030s.
0:28:51 What is, like, the counterfactual to an institution-led society?
0:29:01 Like, when we talk about, it’s the societies that have had strong institutions, those are the ones that have worked out, and that’s what your research has explained to us.
0:29:04 What is the alternative?
0:29:09 What does a society that is not led by institutions actually look like?
0:29:10 What enters the void?
0:29:16 How does that lead to a less prosperous path?
0:29:24 Well, I mean, I would think that every society has forms of institutions, but let’s change your question to state-led versus not state-led.
0:29:25 Okay.
0:29:33 So, Soviet Union was state-led, but too much state, not enough market, not enough decentralization, and that was horrible.
0:29:43 But, you know, Somalia, where clans ruled for several decades after the collapse of the state, is the other opposite.
0:29:46 There are no state institutions that were functional.
0:29:48 There’s no third-party enforcement.
0:29:54 There’s anarchic ways in which even small problems would escalate.
0:29:57 Neither of these two are great.
0:30:05 Probably Somalia is worse than the Soviet Union for economic activity, although probably the Soviet Union makes up for it by killing people more effectively.
0:30:16 So, I think that happy medium where there is enough decentralization of especially economic activity, but also other things like communication, dissent, etc.
0:30:26 But there are state institutions that can be leveraged for doing good things, such as supporting innovation, providing public services, defense.
0:30:34 I think that happy medium is hard to maintain, but many societies did maintain it for, you know, several decades.
0:30:45 Yeah, if I could sort of summarize what my takeaway from your research to be, it’s something along the lines of this question of why are institutions even good for us?
0:30:57 It’s something along the lines of, they prevent extreme concentration of power into the hands of someone who may not know what they’re doing.
0:31:10 That, to me, is the defining difference as to why there are societies that work out, and there are societies that don’t, as proven with the Soviet Union.
0:31:18 You could argue maybe even today with Russia, where the institutions have been kind of taken over by Putin, and any other failed society.
0:31:20 Is that right?
0:31:33 Absolutely, and if you look at the data, you see that, for example, economic performance under dictatorships is not just worse than democracies, but it’s also more variable.
0:31:43 And that reflects exactly what you’re articulating, which is sometimes you’re going to end up with a complete idiot as your dictator, and that’s a real disaster.
0:31:51 But even very smart people can be very dangerous because their incentives are not aligned with the rest of society.
0:31:59 So it wasn’t because he was an idiot that Stalin did a huge amount of damage.
0:32:10 He was definitely no genius, but he did know some of the things he was doing in terms of killing people and creating an enormously powerful secret service, secret police.
0:32:20 So somebody who has the wrong incentives and the wrong motivations, even when they are talented, could do a lot of damage.
0:32:33 So a democratic system, via checks and balances, civil society mobilization, ability to change and kick out politicians when they don’t do what they’re supposed to do,
0:32:38 I think creates a lot of pathways for not falling victim to that.
0:32:44 Given all of that, what do you make of what’s happening in America today?
0:32:48 What do you make of Trump’s administration thus far?
0:32:53 What do you think, what kinds of impacts do you think it will have on the economy?
0:32:59 I think institutions are really the secret sauce for the United States.
0:33:10 The U.S. is one of the most innovative economies in the world, and that comes because people are fairly confident that they can do new things and succeed.
0:33:18 The American advantage in finance is also about institutions.
0:33:26 During the global financial crisis, which was initiated, largely speaking, in the United States, what did foreign investors do?
0:33:28 They put more of their money in the U.S.
0:33:42 Because during a crisis, they believe U.S. assets, equities, corporate debt, government debt, are just much more reliable, much more liquid than the alternative.
0:33:43 That’s also institutional.
0:33:45 You don’t want to be subject to Chinese courts.
0:34:02 So all of those require a degree of independence in the judiciary and predictability and impartiality in the broader institutional rules.
0:34:24 Trump’s agenda, which is not unique to Trump, but is an extreme version, is to build a much more executive presidency, meaning the president has far greater power than what has been the norm, and the other branches of government and the agencies are not as powerful.
0:34:39 I think that comes with a serious risk that those institutional balances are going to be disrupted, and we’re already seeing that in terms of, you know, corruption, in terms of people in the administration enriching themselves.
0:34:47 But more importantly, I think a lot of uncertainty, for example, in the area of tariffs, you know, what’s going to happen next month to tariffs?
0:34:50 That’s the kind of uncertainty that strong institutions avoid.
0:34:57 If that happens, that secret sauce that has been so valuable to the American economy will start disappearing.
0:35:08 We’ve been surprised at how well the economy, or at least the markets have done, because we see the same issues you do, a lack of rule of law, lack of competition, regulatory capture.
0:35:17 But meanwhile, despite some warning signs, we’ve been shocked at how well the American economy continues to grind on.
0:35:29 A, are you surprised, and B, do you think that there’s just a lag, or that we’re not seeing kind of the real issues here?
0:35:35 Where is the state of the economy right now relative to what your perceptions would have been, given some of the concerns you’ve raised?
0:35:36 All of the above.
0:35:37 All of the above.
0:35:42 So, I think some of it is that there are lags.
0:35:51 Some of it is that AI optimism is masking the problems.
0:35:58 Part of the reason why the economy and the stock market are booming is because there’s a huge amount of AI investment.
0:36:14 Part of it is that, you know, tax cuts would, if you did nothing else, nothing else changed, and you just did a tax cut that favored capital, that would lead to stock market valuations increasing.
0:36:18 And then the stock market is, of course, the incumbents.
0:36:32 So, if there are changes in the economy, such that startups start having a harder time, that may not be great for the economy, but it’s not going to be as bad for the incumbents.
0:36:33 They will be protected from the startups.
0:36:43 So, there are a number of layers here that I think are intersecting, but some of it, I think, is just that with tariffs, for example, we haven’t seen their full effects on prices.
0:36:49 We haven’t seen their full effects on supply chains because, you know, everything is changing so rapidly.
0:36:50 I’m surprised.
0:36:58 What I’m surprised by, I’ll tell you, Scott, is that people haven’t been spooked as much by uncertainty.
0:37:06 So, the belief among economists, macroeconomists, was that uncertainty spooks investment.
0:37:08 That hasn’t happened.
0:37:18 We’ve had a lot of uncertainty, and that hasn’t really translated into people saying, well, let me hold back on my investments because I don’t know what the future is going to look like.
0:37:21 So, that may be because of AI, maybe because of other things.
0:37:22 I don’t know.
0:37:28 When you look at America right now, what are your top concerns?
0:37:41 I mean, you mentioned that institutions are the secret sauce of America, and it appears that institutions are under attack in some form or another, whether that’s the BLS or the Federal Reserve.
0:37:47 What are your major concerns for America right now?
0:37:55 There are several layers of institutions that worry me, starting from the top, not in terms of, you know, importance.
0:37:59 I think I would have to think a little bit harder to give you importance weights.
0:38:10 But first of all, our ability to control corruption, self-enrichment, enrichment of friends and family, that has become much weaker.
0:38:22 Independence of bedrock institutions, the judicial branch, that’s much weaker.
0:38:32 Like FBI, like it or not, and there were many things not to like about it, but it had an ethos of independence and not being political.
0:38:34 That’s gone.
0:38:49 There is a network of information-provision institutions, from the Government Accountability Office, the Office of Management and Budget, BLS, BEA, the Census Bureau.
0:38:51 Those are being weakened.
0:38:55 So our ability to track the economy is going to be much weaker.
0:39:13 And then, you know, very fundamentally, because politics is becoming more conflictual and polarized, there are concerns that we’re going to be much less successful in the future in keeping politicians accountable.
0:39:17 We’ll be right back.
0:39:22 And for even more markets content, sign up for our newsletter at profgmarkets.com slash subscribe.
0:39:35 We know you love the thought of a vacation to Europe.
0:39:46 But this time, why not look a little further to Dubai, a city that everyone talks about and has absolutely everything you could want from a vacation destination.
0:39:56 From world-class hotels, record-breaking skyscrapers, and epic desert adventures, to museums that showcase the future, not just the past.
0:39:59 Choose from 14 flights per week between Canada and Dubai.
0:40:01 Book on Emirates.ca today.
0:40:07 20th Century Studios presents Springsteen, Deliver Me From Nowhere.
0:40:14 Witness a true story of risking it all.
0:40:17 These new songs, they’re the only thing making sense to me right now.
0:40:19 To fight for what you believe in.
0:40:20 This is not going to be good for Bruce.
0:40:22 I don’t need to be perfect.
0:40:23 I just want it to feel right.
0:40:26 Springsteen, Deliver Me From Nowhere.
0:40:27 Only in theaters Friday.
0:40:36 Did you lock the front door?
0:40:37 Check.
0:40:38 Closed the garage door?
0:40:38 Yep.
0:40:42 Installed window sensors, smoke sensors, and HD cameras with night vision?
0:40:42 No.
0:40:49 And you set up credit card transaction alerts, a secure VPN for a private connection, and continuous monitoring for our personal info on the dark web?
0:40:51 Uh, I’m looking into it?
0:40:53 Stress less about security.
0:40:57 Choose security solutions from Telus for peace of mind at home and online.
0:41:01 Visit telus.com slash total security to learn more.
0:41:02 Conditions apply.
0:41:11 We’re back with Prof G Markets.
0:41:16 When we talk, it seems like these discussions are always about the U.S. versus China.
0:41:22 I’m living in London right now, and you’re originally from Turkey.
0:41:30 Do you see any other nations or academic institutions that you’re very hopeful about?
0:41:37 If you were to make a bet on an economy right now, other than the U.S. or China, which economies would you make a bet on?
0:41:44 Well, I think the problem is, Scott, that in many areas, especially in AI, scale matters.
0:42:04 You know, Swiss academic institutions are doing great relative to their scale, but Switzerland as a country is never going to be a rival by itself to the United States or to China, just in terms of resources, both human and financial.
0:42:15 So, you really need the European academic system to come together, and it has some great strengths, and it has many weaknesses.
0:42:18 It is not integrated.
0:42:20 The American system was much, much more integrated.
0:42:32 You know, institutions in California and Cambridge, you know, thousands of miles apart, they’re still much more integrated than, you know, those in different parts of Europe were.
0:42:40 And creating a hub like Silicon Valley or other places, at some point it was Boston, that’s very, very important.
0:42:45 And you need that kind of concentration of energy.
0:42:49 I don’t think that it’s impossible for Europe to achieve that.
0:42:54 And there are several people who are thinking about that.
0:43:09 For example, Mario Draghi, of European Central Bank fame, recently came out with a report asking for much more investment in AI and digital technologies to sort of kickstart something at the European level.
0:43:11 You know, will it succeed?
0:43:19 And when it’s government-directed, there are many risks, especially when you have the national bureaucracies overlaid with European Commission bureaucracy.
0:43:24 So, I wouldn’t bet that it’s going to work out anytime soon, but I wouldn’t want to bet against it either.
0:43:37 But also, you know, I think the world would be much healthier if we had a true multipolar world in terms of research, in terms of new ideas: not just U.S. versus China, but U.S., China, Europe, and also something from the developing countries.
0:43:48 There’s tremendous energy in India and in other parts of the world about new tech, new ideas, new entrepreneurship, new risk-taking.
0:43:56 I think the world would be just much richer if those countries also had their intellectual fingerprints on these advances.
0:44:19 We think a lot about the well-being of young men on this program, and something that, I won’t speak for Ed, has me suitably freaked out is the collision between AI and synthetic relationships, which could potentially further sequester and isolate young men from relationships with their parents and diminish their desire to develop romantic relationships and establish friendships.
0:44:22 Are we catastrophizing here?
0:44:30 I see this as a disaster with no guardrails, and I can’t tell if it’s just that as I get older, Professor, I’m getting more angry and depressed, or if I see something here.
0:44:34 What are your thoughts about character AI, synthetic relationships?
0:44:36 I’m very worried.
0:44:38 But not just for men.
0:44:42 I mean, look, for social media, actually, the effect has been larger on women.
0:44:53 I think many of these technologies that completely transform social relations have so many unforeseen consequences.
0:45:06 You know, despite many mistakes and misdeeds that, you know, Facebook, Meta, et cetera, Instagram did, they didn’t set out to create a mental health crisis.
0:45:08 That was a side effect.
0:45:16 Now, with some of these things like character AI, they’re actually intending to create, you know, completely artificial bubbles.
0:45:20 So, yes, I would definitely be worried about that.
0:45:32 But the only small silver lining I have is that these things are not going to happen overnight.
0:45:43 So, these models are still going to be very clunky, so we still have some time, and that’s why it’s so important to ask the questions, what is it that we want from AI?
0:45:48 Do we want, you know, character AI and all of these synthetic relations and all of this isolation?
0:45:52 Do we want, you know, this sort of fast automation?
0:45:56 Or is there something we can do with this very promising technology?
0:46:02 What field or what specific application do you feel holds the most promise for AI?
0:46:08 I think by its nature, AI could be revolutionary for many fields.
0:46:14 Certainly, the process of science is already benefiting.
0:46:15 We can do many more things.
0:46:26 I think a lot of occupations that involve interaction with the real world in a problem-solving manner can really benefit from AI.
0:46:35 So, that’s like electricians, plumbers, blue-collar workers, because they’re engaged in a series of problem-solving tasks.
0:46:41 And AI-based, context-specific, reliable information could be a game-changer there.
0:46:47 A lot of other non-scientific creative occupations could get a boost as well.
0:46:50 So, there are actually a lot of different things that we could do.
0:46:56 Healthcare and education are becoming a bigger and bigger share of national income everywhere.
0:47:00 And those are the two sectors where we’ve had very little productivity gains.
0:47:04 So, macroeconomic productivity gains are held back because of these services.
0:47:12 If we can do anything with AI to kick-start faster productivity growth in these sectors, that would be a game-changer.
0:47:16 You’ve been described as the Wilt Chamberlain of economics.
0:47:24 You know, you are sort of a powerhouse academic, powerhouse economist.
0:47:26 You’ve written books.
0:47:27 You’ve taught at MIT.
0:47:28 You’ve won a Nobel Prize.
0:47:30 The list goes on.
0:47:37 On the subject of institutions, I’d love to get your views on the future of academic institutions right now.
0:47:41 They’ve also been under attack in a variety of ways.
0:47:44 How do you feel about academia as a field?
0:47:46 Do you feel optimistic, pessimistic?
0:47:49 What does the future of academics actually look like?
0:47:59 An important part of the institutional fabric of this society is also to provide foundational inputs into innovation via the academic educational process.
0:48:01 And that’s also in danger.
0:48:07 Like, look, I think there were many things that were wrong with academia before Trump.
0:48:18 But the current attack on the autonomy of universities cannot be justified by those problems.
0:48:29 And I think it risks being more encompassing than those problems, whether you’re going to call them DEI or whatever they are, ever could have been.
0:48:42 Because now funding and government control are all levers that a centralized authority can exercise over universities and academics.
0:48:54 And whenever that has happened in other countries, for example, in my country of birth in Turkey or in Hungary, you see the costs have been quite extreme.
0:48:56 Yeah, how does that play out?
0:48:59 How do those costs materialize?
0:49:06 Well, first of all, I think very important projects that are forward-looking don’t get funded.
0:49:12 You know, today it’s very difficult, for example, to get funding for mRNA vaccines.
0:48:24 And those are going to potentially play an important role in fighting cancer and other diseases for which we could, you know, develop new vaccine-based approaches.
0:49:27 And that’s just the tip of the iceberg.
0:49:39 But even more importantly, I think when the autonomy of universities starts being threatened, people become more timid in their risk-taking.
0:49:46 And that risk-taking there involves going against established ways or things that powerful actors believe.
0:49:54 So, I mentioned earlier on, in China, the top-down structure is really very costly.
0:49:57 I think one of the costs you can see very clearly is in academia.
0:50:08 In Chinese academia, despite the fact that you have very talented people selected into it and a high degree of engineering or other expertise, it’s much more political.
0:50:18 So, nobody wants to rock the boat, which means people are not really exploring more controversial topics, whether it is in social sciences or in physical sciences.
0:50:22 And that environment, once it sets in, is very difficult to reverse.
0:50:32 One of the great things about American academia was that sort of very risk-taking attitude, exploring things that are at the edges.
0:50:34 You know, that could become much harder.
0:50:47 Just observationally or anecdotally, from my perspective, I feel like academia and pursuing a career in academia has very recently gotten kind of a bad rap.
0:50:59 It’s almost as if academia, the expert class, this stuffy expert class, is either captured by wokeness or, you know, trapped in the ivory tower or whatever it is.
0:51:10 And I see this among my friends, many of whom were going to pursue careers in academia and then decided, no, this world rewards entrepreneurs.
0:51:14 This world rewards people who start companies and work for big tech.
0:51:17 I’m wondering if you’ve seen that yourself.
0:51:22 Have you seen that there is sort of a lesser interest in pursuing these kinds of careers?
0:51:23 I have, absolutely.
0:51:29 And I think some of it is real and some of it is a perception.
0:51:35 So, indeed, I think there is the right kind of technocracy and the wrong kind of technocracy.
0:51:46 As the world becomes more complex and there are more areas in which expertise, deep knowledge matters, you need technocracy.
0:51:54 But you need technocracy to be accountable and to be sort of in conversation with the rest of the community.
0:52:05 So, in some sense, yes, the wrong kind of technocracy has emerged to some degree in many countries, where, you know, experts, whether they are in universities or the bureaucracy, sometimes think they know best.
0:52:17 But I think some of what you’re describing is also an exaggerated feeling, because, in fact, some of the things that those experts were saying are true.
0:52:25 For example, on vaccines or climate change, it’s just that they did not communicate it very well and the whole thing became unnecessarily politicized.
0:52:27 And it’s not just in academia.
0:52:31 Look, for example, mRNA vaccines were not spearheaded by academia.
0:52:39 They were spearheaded by startups and companies that thought they could really break the mold by doing something very different.
0:52:43 So, it’s that whole ecosystem that’s being threatened right now.
0:52:56 It almost feels as if our culture is just so dead set on finding out who are the experts, who are the people who are telling us how it is, and can we find a way to discredit them and tell them that they’re wrong?
0:53:02 And in a lot of ways, it seems ridiculous to say, oh, it’s the university professors that are telling us.
0:53:06 It’s like, actually, these are not the people who are in control right now.
0:53:08 You’re being told a whole set of them.
0:53:08 They’ve never been.
0:53:09 They’ve never been.
0:53:16 I mean, they may be accused of being arrogant, that’s for sure, but that arrogance was never matched with real power.
0:53:17 Exactly.
0:53:21 Yeah, I wish more people could understand that.
0:53:39 My final question: do you have any advice for young academics, for people who are interested in pursuing a career in academia, who are maybe asking the questions that we just discussed? As someone who’s had so much success in the field, what would your advice be?
0:53:51 You know, I get very frustrated when, you know, people say, oh, you know, you should work on this topic because this topic has, you know, potential and there might be demand here.
0:53:57 No, I think the real secret sauce in academia is you should work on whatever you’re passionate about.
0:54:05 And the best academic research comes when it’s really sort of your own passions that you are following.
0:54:16 And that’s even more important when academia is under attack because some of the great research can be done even when budgets are slashed because you’re just committed to it.
0:54:19 And it’s not about the biggest lab.
0:54:26 It’s not about the best measurement instruments, but it’s about doggedly pursuing something that you feel is right and you can prove it.
0:54:33 Daron Acemoglu is an Institute Professor at MIT and co-director of MIT’s Shaping the Future of Work Initiative.
0:54:43 He is also the author of six books, including the New York Times bestsellers Why Nations Fail and Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.
0:54:52 He was awarded the Nobel Prize in Economic Sciences in 2024 for his studies of how institutions are formed and affect prosperity.
0:54:55 Professor Acemoglu, we really appreciate your time.
0:54:59 And I think you’re the first Nobel Prize-winning guest we’ve had on the podcast.
0:55:00 So it’s a win for us, too.
0:55:02 My pleasure, Ed.
0:55:03 Thank you very much, Scott.
0:55:07 It’s great, and I’m very happy to have been able to join you guys.
0:55:08 It’s our pleasure.
0:55:09 Congrats again on your good work.
0:55:31 Thank you for listening to Prof G Markets from Prof G Media.
0:55:36 If you liked what you heard, give us a follow and join us for a fresh take on markets on Monday.
0:55:38 Thanks for listening to Prof G Media.
0:01:48 What you have here is a structural change in the wealth distribution.
0:01:49 Cash is trash.
0:01:51 Stocks look pretty attractive.
0:01:52 Something’s going to break.
0:01:52 Forget about it.
0:01:54 Oh, my God.
0:01:55 That’s pretty bad, right?
0:01:59 I guess you saw that coming a mile away, in all transparency.
0:02:01 Brings up an interesting question.
0:02:02 Scott, do you have any allergies?
0:02:03 No.
0:02:04 I literally am.
0:02:06 As a matter of fact, I can’t.
0:02:07 I’m so sick of.
0:02:11 Waiters are, I think, legally mandated in the U.K.
0:02:14 I don’t know if they are in the U.S. to ask you, does anyone have any allergies?
0:02:14 Is that right?
0:02:16 Yeah, they have to ask you, do you have any allergies?
0:02:19 And I’m like, bad service?
0:02:23 I’ve had it with the service in London.
0:02:24 You are so spoiled in New York.
0:02:26 The service is so good.
0:02:26 That’s true.
0:02:28 What are some of your horror stories?
0:02:28 Nothing horror.
0:02:30 I mean, you know, I don’t.
0:02:33 It takes somewhere around three minutes to get me my second Makers and ginger.
0:02:34 Those are my horror stories.
0:02:38 My worst days are better than most people’s best days, but it’s not.
0:02:44 I go to this place called, since Chiltern burned down, I got to go to this place called
0:02:48 Kensington Roof Gardens, which has way too many fucking dudes and it’s way too crowded.
0:02:49 Beautiful place, though.
0:02:51 I was there over the summer.
0:02:52 Unbelievable.
0:02:52 Yeah.
0:02:56 It’s great for the 11 days a year you can actually go into the roof garden, but otherwise we’re
0:02:59 all crammed inside wishing the Chiltern was still open.
0:03:03 And I’m not exaggerating.
0:03:07 There’s 700 people lining up at the bar, 695 of them men.
0:03:13 And you’d think, and there’s like two bartenders sitting there like slicing limes very methodically
0:03:14 and elegantly.
0:03:18 And I’m like, Jesus Christ, dude, just start like literally spraying beers at us and we’ll
0:03:19 open our mouths.
0:03:23 That’s the thing that freaked me out about the members club.
0:03:25 You’re a member of Chez Margaux, because now that we’re paying it.
0:03:26 I’m not a member of that club.
0:03:26 You’re not?
0:03:27 I keep seeing you there.
0:03:29 Yeah, I get invited by members.
0:03:31 Well, anyways, the thing about it is it freaked me out.
0:03:34 I’m like, it’s so young here.
0:03:39 And then I realized it’s not, the reason I think it’s so young is, you know what it has?
0:03:43 Most of these clubs have guys in their 40s and women in their 20s and 30s.
0:03:47 And what Chez Margaux has is a bunch of dudes in their 20s, which makes it feel like
0:03:48 high school.
0:03:53 I got to agree with you.
0:03:57 Surely, surely, surely that’s a, you’re not very happy about that.
0:03:57 I’m fine with that.
0:03:58 I like it.
0:03:58 They all come up to me.
0:03:59 I love your content on men.
0:04:00 Hey, what’s that like?
0:04:05 And I literally, you, were you student body president or the mascot or something?
0:04:07 Everybody knows you.
0:04:11 And it totally bums me out because I like to get a little fucked up and talk to everyone
0:04:16 at the bar, throw an unlit cigarette in my mouth, put on sunglasses, and just be a cliche
0:04:16 of me.
0:04:21 And then when people come up and say they know you, I feel as if I have to act semi-respectable.
0:04:23 I’m like, oh, hey, hi.
0:04:23 Yeah.
0:04:23 Oh, yeah.
0:04:24 No, I don’t know what to do.
0:04:26 What do you think is going to be the next interest rate cut?
0:04:28 God.
0:04:30 God.
0:04:31 They’re all so fucked.
0:04:31 This is the trouble.
0:04:32 You shouldn’t have been a professor.
0:04:37 You clearly, you should have been like an actor or like a pop star or something.
0:04:38 That’s what you really want.
0:04:39 A hundred percent.
0:04:42 I’m just not that talented, but I agree with you.
0:04:43 No, I think you are.
0:04:45 You’re just, you’re talented in the thing.
0:04:47 You’ve got a good brain.
0:04:49 You’ve got a good economic brain.
0:04:52 And really what you want to be doing is like, you’ve got the Bezos thing.
0:04:55 You want to be, you want to be like Justin Bieber or something.
0:04:58 You had me till the Justin Bieber part.
0:05:02 I would have gone for Ryan Reynolds or George Clooney.
0:05:04 He seems smart and very politically oriented.
0:05:10 But the thing, the thing I hate about your friends, that’s a good way to start a sentence,
0:05:15 is they’re so, they’re so annoyingly earnest.
0:05:19 Professor Calloway, it’s so nice to meet you.
0:05:21 I know Ed, he’s such a good person, isn’t he?
0:05:23 Fuck off.
0:05:29 Anyways, they’re so, anyway, it’s, but it’s, it’s great to meet your friends out.
0:05:31 I’ve got to learn about who these friends are.
0:05:31 I haven’t been there.
0:05:32 You’re Princeton buddies.
0:05:36 What is it, like a reading club or a drinking club?
0:05:40 Like you guys, you guys have Pimms cups or something and play cricket?
0:05:40 Eating, yeah.
0:05:41 Is that what it is?
0:05:42 Eating club, that’s right.
0:05:44 So instead of fraternities, they have eating clubs?
0:05:45 They actually have both.
0:05:45 Oh, really?
0:05:47 But yeah, that is what it’s called.
0:05:48 It’s, it’s called an eating club.
0:05:51 But you were in an eating club and you would get into anyone because of that faux British
0:05:53 accent, which by the way, everyone is figuring out is total bullshit.
0:05:56 You’re from Nashville.
0:05:59 All right, and enough of this.
0:06:00 Get to the headlines.
0:06:01 One of our douchiest.
0:06:06 Okay, here is our conversation with Daron Acemoglu, Nobel Prize-winning economist, New York Times
0:06:08 bestselling author and professor.
0:06:10 This is not an easy segue.
0:06:13 This is not an easy segue.
0:06:15 Oh, God.
0:06:20 Nobel Prize winning economist, New York Times bestselling author and professor of economics
0:06:21 at MIT.
0:06:23 Professor Acemoglu, very good to have you on the show.
0:06:24 Thanks, Ed.
0:06:26 Great being with you and Scott.
0:06:33 So we want to start with AI for obvious reasons.
0:06:36 That’s all we can really talk about at the moment.
0:06:41 We’ve been seeing a lot of bullish sentiment in AI.
0:06:44 We’ve also been seeing a lot of circular investments happening.
0:06:46 A lot of people saying that it’s a bubble.
0:06:53 Last year, you were asked to answer on a scale of one to 10, how much of an impact AI is going
0:06:54 to have on the world.
0:06:58 And your answer at the time was negative six.
0:07:05 So we want to unpack this, starting with that hot take.
0:07:08 What are your views on AI at this point?
0:07:12 And do you still have a negative six rating on AI right now?
0:07:15 Let’s break that into three pieces.
0:07:21 One is, is AI a transformative technology with great capabilities?
0:07:22 Yes.
0:07:24 That’s why it’s not minus one or plus one.
0:07:26 It’s minus six.
0:07:33 Second, how quickly will this technology reach fruition?
0:07:38 And there, I think the answer is, it depends on how quickly it’s pushed.
0:07:44 So I think for a positive development path, I think we need a more deliberative approach.
0:07:54 We are rushing into AI in a way that I think makes applications using AI less likely to develop
0:07:57 because we are just doing it too quickly.
0:08:03 And we also don’t have a roadmap of what it is that we really want from AI, while
0:08:08 everybody recognizes that this is a technology that’s going to have a tremendous number of side
0:08:10 effects, foreseen and unforeseen consequences.
0:08:18 So all of these things, plus, very importantly for me, the focus on automation and AGI, while
0:08:23 there are better things to do with AI, I think tip the scales towards negative.
0:08:28 So when it’s minus six, minus five, minus seven, take your pick, but I’m very worried
0:08:33 about the direction of AI, where it’s much more concentrated, who uses, who controls information
0:08:34 and what we do with it.
0:08:35 Yeah.
0:08:39 Could you break down what that negative impact would actually look like?
0:08:43 There’s the aspect of the concentration of power, and I’m sure that could have many
0:08:44 implications.
0:08:45 We’re also worried about it.
0:08:53 There is the implication of automation and the idea that this would replace labor, it would
0:08:54 replace people’s jobs.
0:09:01 Say more about how it goes from negative one to negative six in your view.
0:09:03 What is that destructive impact that you’re so worried about?
0:09:13 In the production domain, if we use AI, mostly for automation, I think not only would we be
0:09:22 missing some of the really transformative uses of it, but we would create much smaller productivity
0:09:24 gains than expected.
0:09:34 And we would also create various social outcomes that are negative related to job loss for certain
0:09:41 groups, lack of employment opportunities for certain groups, wage stagnation or declines like
0:09:44 we’ve experienced in the 1990s.
0:09:49 All of these are on the negative part related to the production process.
0:09:59 But I’m also very worried about the fact that AI is first and foremost a communication technology.
0:10:05 And as a communication technology, it changes political and social dynamics.
0:10:15 And when it centralizes information in the hands of a few companies, it can have a variety of
0:10:25 very negative effects on democracy, on dissent, on diversity and variation in opinion, all sorts
0:10:27 of things that we are really not prepared for.
0:10:34 What would you say to someone who would say that you are being a Luddite or, I mean, we’ve
0:10:42 seen transformative technologies in the past, whether it be oil or the electric grid or the
0:10:48 internet, and it seems as though when these transformative technologies come along, there is a lot of
0:10:52 concern about what it will do to our economy and how it will negatively impact our economy.
0:11:00 But many people would say, well, eventually it works out and we shouldn’t be so worried about
0:11:01 technology.
0:11:03 What do you say to those people?
0:11:11 Well, there are really two theories that people could have about long-run effects of technologies
0:11:12 in general.
0:11:19 The first one is that in the long run, things will work out by themselves.
0:11:20 Let it rip.
0:11:28 And through the dynamics of things, for example, in the labor market or via democratic processes,
0:11:34 sometimes semi-democratic processes, we’ll just find the right way of dealing with
0:11:35 things.
0:11:44 The second one is that, no, it’s a deliberative set of choices that we have to make in order
0:11:50 to make sure that the long-run effects are better than the short-run ones.
0:11:56 And I think if you look at several transformative technologies, indeed, they did have fairly
0:11:58 negative short-run effects.
0:12:02 The beginning of the British Industrial Revolution is associated, depending on how you measure it,
0:12:10 70, 80, 90 years of real wage declines or stagnation and huge increase in inequality.
0:12:16 The transition out of agriculture, similarly, at first created quite a lot of social and
0:12:17 economic hardship.
0:12:21 But in both cases, later adaptation worked out much better.
0:12:26 But I would say, even though decisions were made without a roadmap, there were specific decisions
0:12:32 about how to use technology, how to change the organization of production, and also political
0:12:33 decisions that were quite important.
0:12:35 And so it’s not an automatic process.
0:12:38 So I definitely, I’m not an AI pessimist.
0:12:43 And so I don’t know what your definition of Luddite is, but I’m not an AI pessimist.
0:12:49 But I do not believe that we’re going to get the best out of AI or even the second best out
0:12:52 of AI if we just say, oh, let’s not worry about all the disruptions.
0:12:54 Somehow things are going to work out.
0:13:05 So it feels as if a lot of the concern or discussion around AI has moved towards who can secure reliable,
0:13:08 large amounts of electrons or energy.
0:13:16 Do you think this is an attempt to paint a future where the demand is going to
0:13:17 be unlimited and a bit of a head fake?
0:13:23 Or do you see the same sort of power constraints that these guys are seeing?
0:13:25 And what do you see as kind of the downstream impact of that?
0:13:36 I do not believe that power or GPU capacity is the main limiting factor for the kind of AI that
0:13:38 I have in mind.
0:13:47 Because if we’re going to make AI really serve our needs, I think it needs to have much more
0:13:52 human complementary domain-specific expertise.
0:14:01 It has to be able to be an aid to electricians, to accountants, to journalists, to academics.
0:14:10 And for that, high-quality domain-specific information, data, is going to be the real scarce resource.
0:14:18 Whereas right now, the energy demand is mostly from foundation models that are very impressive
0:14:25 in some ways, but also very, very expensive to run, and have not reliably reached that context-specific
0:14:28 domain-relevant expertise.
0:14:33 So I think we have to find a way of getting the best out of foundation models, but combine
0:14:35 them with domain-specific models.
0:14:40 What I’ve also seen is that a lot of the different LLMs are hitting sort of a technical
0:14:45 parity by most kind of, I don’t know, hardcore metrics.
0:14:46 Do you think there’s a scenario?
0:14:52 We had Robert Armstrong from the FT on several times, and he said that there’s certain technologies
0:14:59 where no one or small set of companies is able to ring-fence stakeholder and therefore
0:15:00 shareholder value.
0:15:06 That the airlines, PCs, vaccines were huge innovations, but you didn’t have a small number
0:15:09 of companies garnering trillions of dollars in market cap.
0:15:14 Do you think this might qualify as one of those industries where, even if it ends up being having
0:15:19 a huge impact on society, that we’re overestimating the ability of a small number of companies to
0:15:21 capture a ton of shareholder value?
0:15:26 I would be very worried about an industry that is so concentrated.
0:15:34 On the other hand, it’s a cutthroat industry, and we don’t understand what sort of industrial
0:15:36 organization of AI is going to emerge.
0:15:46 So it is almost certain that whoever is doing the foundation models advances is not going to
0:15:49 be the same one, same company that also does all of the applications.
0:15:51 So you’re going to form an AI stack.
0:16:00 So once you form that AI stack, where are the risks going to reside and which part, which
0:16:03 layer of the stack is going to get most of the returns?
0:16:04 I think that remains to be seen.
0:16:11 It’s going to depend on where the real bottleneck is in terms of doing useful things and whether
0:16:18 the foundation models are close substitutes for each other when it comes to serving as the
0:16:20 first layer of that stack.
0:16:22 I think those are really interesting questions.
0:16:31 And it is made more interesting by the fact that industries that look very competitive at some
0:16:38 point later on can be very non-competitive because early competition is about being the one that
0:16:39 controls things later on.
0:16:44 That’s why it’s so vicious because everybody thinks that they’re going to get the prize and
0:16:46 the prize is dominate the industry.
0:16:51 So I don’t know whether the competition that you’re seeing right now is going to repeat
0:16:57 itself in 10 years time or whether we’re going to go to a winner takes all sort of structure
0:16:59 at the foundation layer.
0:17:06 In your book, Power and Progress, you basically take us through history and the history of
0:17:07 technological progress.
0:17:15 And you make the point that technology has not necessarily been as beneficial as we think
0:17:21 of it because of how it has been distributed throughout societies, which seems extremely relevant
0:17:25 to what we’re likely about to see with AI.
0:17:27 Take us through that history.
0:17:30 What is your reasoning for that?
0:17:31 How does that play out?
0:17:39 Yeah, I think there are essentially several reasons why the full potential of a suite of
0:17:43 technologies may not be realized or may not be realized quickly.
0:17:44 One is monopoly.
0:17:55 If one or a few companies dominate everything and they use that in order to extract all the
0:18:02 rents, but also slow down innovation, that’s one recipe for many good things not happening.
0:18:09 So today, I think the digital world or communication world would be very different if AT&T had remained
0:18:12 like the sole monopoly.
0:18:16 So the breakup there probably opened the field for more shakeup.
0:18:27 The second is, you know, whether that technology is working with labor or is sort of just replacing
0:18:33 labor and worse, like during many parts of human history is becoming a tool for repressing labor.
0:18:36 Those don’t work out great.
0:18:39 I mean, slavery was not a very efficient system.
0:18:45 It wasn’t just bad for the coerced people, but it wasn’t actually generating economic dynamism.
0:18:53 At the time of the Civil War, the U.S. South was falling further and further behind while the number of patents,
0:18:59 innovations, industrial production, and all sorts of other things were advancing rapidly in other parts of the United States.
0:19:06 So I think all of these things we have to sort of take into account, is AI going to be a monitoring technology
0:19:14 where workers become more and more powerless because there’s so much data being collected about them?
0:19:17 That’s another concern that we don’t often talk about.
0:19:23 So there are many issues here that are intersecting, and we see parallels for each one of them in history.
0:19:25 None of those are perfect parallels.
0:19:31 We’ve never been confronted with a technology that’s so widespread in its potential applications.
0:19:35 But we’ve had other technologies that are quite transformative as well.
0:19:39 So how do we harness AI in a good way?
0:19:48 I mean, what does regulation look like in your view, such that it is a net benefit to society versus a negative six?
0:19:54 First of all, I think we need to be thinking about what it is that we want from AI.
0:20:04 Of course, not everybody’s going to agree on that, but at least that conversation needs to be had in a more open, constructive way.
0:20:09 And second, we also need to be clear about what we mean by regulation.
0:20:14 I think what most people mean by regulation is a reactive kind of regulation.
0:20:22 Something happens, AI companies do something, and then we are worried about certain additional harms, and then we regulate that.
0:20:26 I think instead, what we want is something more proactive.
0:20:38 Let’s think about where it is that AI can do most good and ask ourselves whether it is going in that direction and what the impediments are that keep it from going in that direction,
0:20:41 and see whether we can do certain things to facilitate that.
0:20:49 So, if we had not done those facilitating things, we would not have the internet because the government support there was quite important.
0:20:58 We would not have renewable technologies that are now, at least in certain applications, cost competitive with fossil fuels.
0:21:12 So, it’s not like we have a sort of law that says the market or, in particular, a few companies that are steering technology are necessarily going to choose the right paradigm.
0:21:24 I think the market is excellent in doing certain things, but market participants are also locked in a particular type of business model often.
0:21:33 They are going after a particular kind of prize, and it is possible to step back and say, well, is there another prize that we should be focusing on?
0:21:47 That’s what I’m arguing that human complementary AI, where we try to augment human capabilities, expand human capabilities, could have real benefit, and that’s not the direction in which we’re going.
0:21:57 We’ll be right back after the break, and if you’re enjoying the show so far, be sure to give Prof G Markets a follow wherever you get your podcasts.
0:22:09 This episode is brought to you by Peloton, a new era of fitness is here.
0:22:14 Introducing the new Peloton Cross-Training Tread Plus, powered by Peloton IQ.
0:22:20 Built for breakthroughs, with personalized workout plans, real-time insights, and endless ways to move.
0:22:25 Lift with confidence, while Peloton IQ counts reps, corrects form, and tracks your progress.
0:22:29 Let yourself run, lift, flow, and go.
0:22:33 Explore the new Peloton Cross-Training Tread Plus at OnePeloton.ca.
0:22:38 Support for the show comes from Workday, the to-do list of a small business leader.
0:22:41 Close the books, get your people paid, and bring on new hires.
0:22:46 Look, running a small or mid-sized business can be exciting, but it can also be chaotic.
0:22:48 That’s where Workday comes in.
0:22:51 Workday Go makes simplifying your business a whole lot simpler.
0:22:56 Imagine this, the important aspects of your company, HR and finance, all on one AI platform.
0:22:59 No more juggling multiple systems, no more worrying about growing too fast.
0:23:04 Just the full power of Workday helping small to mid-sized businesses like yours run more smoothly.
0:23:06 And Workday Go activates quickly.
0:23:09 You can be up and running in 30 to 60 business days.
0:23:11 So, simplify your business.
0:23:12 Go for growth.
0:23:13 Go with Workday Go.
0:23:16 Visit Workday.com slash go to learn more.
0:23:21 Hey, it’s me.
0:23:23 Hey, it’s me.
0:23:24 Hey, it’s me.
0:23:25 Um, you.
0:23:26 From the future.
0:23:29 Uh, big thanks for getting the HPV vaccine.
0:23:34 I mean, with that one move, you help protect us against several cancers later in life.
0:23:35 So, thank you.
0:23:37 Or, thank us?
0:23:40 I’ll just text you myself.
0:23:45 The HPV vaccine is safe, effective, and free for eligible youth.
0:23:48 Learn more at healthlinkbc.ca slash HPV.
0:23:50 A message from the government of BC.
0:23:58 We’re back with Prof G Markets.
0:24:08 Do you think the U.S. is going to be able to maintain – I mean, other than DeepSeek, it’s just very difficult to think of another AI player of almost any importance globally outside of the U.S.
0:24:14 Do you think that the U.S. is going to be able to maintain that type of lead in the AI ecosystem?
0:24:17 China has an engineering advantage.
0:24:20 They have a huge number of engineers.
0:24:34 They’re generally well-selected, meaning that more talented, quantitatively sort of skilled people enter into engineering because it’s a very prestigious thing, and they have exams that are relatively unbiased.
0:24:37 And engineers are highly regarded.
0:24:42 So, when it comes to pure engineering things, I think China could have an advantage.
0:24:46 On the other hand, the top-down system is hugely inefficient.
0:24:50 There are so many places where inefficiencies build up.
0:24:51 People are afraid of taking initiative.
0:24:56 There is no decentralized sort of process.
0:24:58 There, the U.S. has an advantage.
0:25:01 How that will shake out at the end would really depend.
0:25:03 DeepSeek is an engineering marvel.
0:25:05 They didn’t come up with any of the new methods.
0:25:10 The new methods that DeepSeek was using, some of them were invented by Google and OpenAI.
0:25:15 Many of them were invented 20 years before by machine learning scientists.
0:25:19 So, they just took them, but they combined them quite well.
0:25:21 How many more times can they do that?
0:25:22 That’s going to be one of the questions.
0:25:25 And then, of course, it’s not just U.S. and China.
0:25:28 Can other countries catch up?
0:25:30 Europe is behind, clearly.
0:25:35 But I don’t think there is a law that Europe has to be behind.
0:25:40 There are many talented AI scientists in Europe that just happen to be all working in Silicon Valley.
0:25:45 I think this is a good segue into your 2012 book, Why Nations Fail.
0:25:50 We’re discussing how America could fall behind in the AI race.
0:26:00 And it’s interesting, you’re kind of highlighting that on the one hand, there are some benefits to this top-down communist structure where you can set the agenda and the tone for the nation.
0:26:04 The tone being, we love engineers and we love people who build AI.
0:26:10 On the other hand, it can be a problem when you don’t have the competitive forces of capitalism at work.
0:26:18 And it seems as though this is going to be sort of the defining question of our time, which one works.
0:26:18 100%.
0:26:21 I think that’s very well put.
0:26:26 Which brings me to the question, why do nations fail?
0:26:31 I mean, you’ve done Nobel Prize-winning research on this, the role of institutions.
0:26:35 Why do nations fail?
0:26:36 It’s mostly institutions.
0:26:43 There are other factors, but many of these other factors, such as civil wars, are institutional as well.
0:27:04 And the role of institutions, both formal rules, but also informal arrangements and norms, they become much more important when we’re dealing with sectors that are forward-looking, innovative, require small players to scale up.
0:27:09 All of those are things that the U.S. was doing quite well.
0:27:19 You know, the United States had an ecosystem of startups that, you know, were extremely confident.
0:27:21 You know, people, when they opened, I mean, I knew many of them.
0:27:22 I know many of them.
0:27:31 Very few people think, well, if I launch a startup and if I’m successful, will I be shut down by courts?
0:27:33 Will I be crushed by my competitors?
0:27:38 Will I be able to get any contracts when my competitors are favored by the government?
0:27:39 Nobody thought about that.
0:27:50 Nobody thought about that, partly because I think people were on the optimistic side, but largely because U.S. institutions had a pretty good track record of not doing that.
0:27:53 I think we’re no longer sure.
0:27:56 There are favored companies and not favored companies.
0:28:00 Courts are much less impartial.
0:28:04 Scaling up may become much harder when there is more uncertainty.
0:28:13 So, I think there are a bunch of issues where the institutional advantage that the U.S. economy had is more of a question mark today.
0:28:19 And the problem is that when you mess up certain things, you pay the price right away.
0:28:27 If you mess up institutions, especially as they pertain to innovations, you don’t pay the price because the impact is not going to be felt for another five, 10 years.
0:28:46 So, if there are some fundamental negative effects from Trump’s attack on independent judiciary, the costs of that will be realized not in 2027 or 2028, but probably in the 2030s.
0:28:51 What is, like, the counterfactual to an institution-led society?
0:29:01 Like, when we talk about, it’s the societies that have had strong institutions, those are the ones that have worked out, and that’s what your research has explained to us.
0:29:04 What is the alternative?
0:29:09 What does a society that is not led by institutions actually look like?
0:29:10 What enters the void?
0:29:16 How does that lead to a less prosperous path?
0:29:24 Well, I mean, I would think that every society has forms of institutions, but let’s change your question to state-led versus not state-led.
0:29:25 Okay.
0:29:33 So, Soviet Union was state-led, but too much state, not enough market, not enough decentralization, and that was horrible.
0:29:43 But, you know, Somalia, where clans ruled for several decades after the collapse of the state, is the other opposite.
0:29:46 There are no state institutions that were functional.
0:29:48 There’s no third-party enforcement.
0:29:54 There’s anarchic ways in which even small problems would escalate.
0:29:57 Neither of these two are great.
0:30:05 Probably Somalia is worse than the Soviet Union for economic activity, although probably the Soviet Union makes up for it by killing people more effectively.
0:30:16 So, I think that happy medium where there is enough decentralization of especially economic activity, but also other things like communication, dissent, etc.
0:30:26 But there are state institutions that can be leveraged for doing good things, such as supporting innovation, providing public services, defense.
0:30:34 I think that happy medium is hard to maintain, but many societies did maintain it for, you know, several decades.
0:30:45 Yeah, if I could sort of summarize what I take away from your research, it’s something along the lines of this question of why are institutions even good for us?
0:30:57 It’s something along the lines of, they prevent extreme concentration of power into the hands of someone who may not know what they’re doing.
0:31:10 That, to me, is the defining difference as to why there are societies that work out, and there are societies that don’t, as proven with the Soviet Union.
0:31:18 You could argue maybe even today with Russia, where the institutions have been kind of taken over by Putin, and any other failed society.
0:31:20 Is that right?
0:31:33 Absolutely, and if you look at the data, you see that, for example, economic performance under dictatorships is not just worse than democracies, but it’s also more variable.
0:31:43 And that reflects exactly what you’re articulating, which is sometimes you’re going to end up with a complete idiot as your dictator, and that’s a real disaster.
0:31:51 But even very smart people can be very dangerous because their incentives are not aligned with the rest of society.
0:31:59 So Stalin didn’t do a huge amount of damage because he was an idiot.
0:32:10 He was definitely no genius, but he did know some of the things he was doing in terms of killing people and creating an enormously powerful secret service, secret police.
0:32:20 So somebody who has the wrong incentives and the wrong motivations, even when they are talented, could do a lot of damage.
0:32:33 So a democratic system, via checks and balances, civil society mobilization, ability to change and kick out politicians when they don’t do what they’re supposed to do,
0:32:38 I think creates a lot of pathways for not falling victim to that.
0:32:44 Given all of that, what do you make of what’s happening in America today?
0:32:48 What do you make of Trump’s administration thus far?
0:32:53 What do you think, what kinds of impacts do you think it will have on the economy?
0:32:59 I think institutions are really the secret sauce for the United States.
0:33:10 The U.S. is one of the most innovative economies in the world, and that comes because people are fairly confident that they can do new things and succeed.
0:33:18 The American advantage in finance is also about institutions.
0:33:26 During the global financial crisis, which was initiated, largely speaking, in the United States, what did foreign investors do?
0:33:28 They put more of their money in the U.S.
0:33:42 Because during a crisis, they believe U.S. assets, equities, corporate debt, government debt, are just much more reliable, much more liquid than the alternative.
0:33:43 That’s also institutional.
0:33:45 You don’t want to be subject to Chinese courts.
0:34:02 So all of those require a degree of independence in the judiciary and predictability and impartiality in the broader institutional rules.
0:34:24 Trump’s agenda, which is not unique to Trump, but is an extreme version, is to build a much more executive presidency, meaning the president has far greater power than what has been the norm, and the other branches of government and the agencies are not as powerful.
0:34:39 I think that comes with a serious risk that those institutional balances are going to be disrupted, and we’re already seeing that in terms of, you know, corruption, in terms of people in the administration enriching themselves.
0:34:47 But more importantly, I think a lot of uncertainty, for example, in the area of tariffs, you know, what’s going to happen next month to tariffs?
0:34:50 That’s the kind of uncertainty that strong institutions avoid.
0:34:57 If that happens, that secret sauce that has been so valuable to the American economy will start disappearing.
0:35:08 We’ve been surprised at how well the economy, or at least the markets have done, because we see the same issues you do, a lack of rule of law, lack of competition, regulatory capture.
0:35:17 But meanwhile, it looks as if the American economy, and there’s some warning signs, but we’ve been shocked at how well it continues to grind on.
0:35:29 A, are you surprised, and B, do you think that there’s just a lag, or that we’re not seeing kind of the real issues here?
0:35:35 Where is the state of the economy right now relative to what your perceptions would have been about, given some of the concerns you’ve raised?
0:35:36 All of the above.
0:35:37 All of the above.
0:35:42 So, I think some of it is that there are lags.
0:35:51 Some of it is that AI optimism is masking the problems.
0:35:58 Part of the reason why the economy and the stock market are booming is because there’s a huge amount of AI investment.
0:36:14 Part of it is that, you know, tax cuts would, if you did nothing else, nothing else changed, and you just did a tax cut that favored capital, that would lead to stock market valuations increasing.
0:36:18 And then the stock market is, of course, the incumbents.
0:36:32 So, if there are changes in the economy, such that startups start having a harder time, that may not be great for the economy, but it’s not going to be as bad for the incumbents.
0:36:33 They will be protected from the startups.
0:36:43 So, there are a number of layers here that I think are intersecting, but some of it, I think, is just that with tariffs, for example, we haven’t seen their full effects on prices.
0:36:49 We haven’t seen their full effects on supply chains because, you know, everything is changing so rapidly.
0:36:50 I’m surprised.
0:36:58 What I’m surprised by, I’ll tell you, Scott, is that people haven’t been spooked as much by uncertainty.
0:37:06 So, the belief among economists, macroeconomists, was that uncertainty spooks investment.
0:37:08 That hasn’t happened.
0:37:18 We’ve had a lot of uncertainty, and that hasn’t really translated into people saying, well, let me hold back on my investments because I don’t know what the future is going to look like.
0:37:21 So, that may be because of AI, maybe because of other things.
0:37:22 I don’t know.
0:37:28 When you look at America right now, what are your top concerns?
0:37:41 I mean, you mentioned that institutions are the secret sauce of America, and it appears that institutions are under attack in some form or another, whether that’s the BLS or whether it’s the Federal Reserve.
0:37:47 What are your major concerns for America right now?
0:37:55 There are several layers of institutions that worry me, starting from the top, not in terms of, you know, importance.
0:37:59 I think I would have to think a little bit harder to give you importance weights.
0:38:10 But first of all, our ability to control corruption, self-enrichment, enrichment of friends and family, those have become much weaker.
0:38:22 Independence of bedrock institutions, judicial branches, that’s much weaker.
0:38:32 Like FBI, like it or not, and there were many things not to like about it, but it had an ethos of independence and not being political.
0:38:34 That’s gone.
0:38:49 There is a network of information-provision institutions, from the Government Accountability Office, the Office of Management and Budget, BLS, BEA, the Census.
0:38:51 Those are being weakened.
0:38:55 So our ability to track the economy is going to be much weaker.
0:39:13 And then, you know, very fundamentally, because politics is becoming more conflictual and polarized, there are concerns that we’re going to be much less successful in the future in keeping politicians accountable.
0:39:17 We’ll be right back.
0:39:22 And for even more markets content, sign up for our newsletter at ProfGMarkets.com slash subscribe.
0:39:35 We know you love the thought of a vacation to Europe.
0:39:46 But this time, why not look a little further to Dubai, a city that everyone talks about and has absolutely everything you could want from a vacation destination.
0:39:56 From world-class hotels, record-breaking skyscrapers, and epic desert adventures, to museums that showcase the future, not just the past.
0:39:59 Choose from 14 flights per week between Canada and Dubai.
0:40:01 Book on Emirates.ca today.
0:40:07 20th Century Studios presents Springsteen, Deliver Me From Nowhere.
0:40:14 Witness a true story of risking it all.
0:40:17 These new songs, they’re the only thing making sense to me right now.
0:40:19 To fight for what you believe in.
0:40:20 This is not going to be good for Bruce.
0:40:22 I don’t need to be perfect.
0:40:23 I just want it to feel right.
0:40:26 Springsteen, Deliver Me From Nowhere.
0:40:27 Only in theaters Friday.
0:40:36 Did you lock the front door?
0:40:37 Check.
0:40:38 Closed the garage door?
0:40:38 Yep.
0:40:42 Installed window sensors, smoke sensors, and HD cameras with night vision?
0:40:42 No.
0:40:49 And you set up credit card transaction alerts, a secure VPN for a private connection, and continuous monitoring for our personal info on the dark web?
0:40:51 Uh, I’m looking into it?
0:40:53 Stress less about security.
0:40:57 Choose security solutions from Telus for peace of mind at home and online.
0:41:01 Visit telus.com slash total security to learn more.
0:41:02 Conditions apply.
0:41:11 We’re back with Prof G Markets.
0:41:16 When you, we talk, it seems like these discussions are always about the U.S. versus China.
0:41:22 I’m living in London right now, and you’re originally from Turkey.
0:41:30 Do you see any, do you see any other nations or academic institutions that you’re very hopeful on?
0:41:37 If you were to make a bet on an economy right now, other than the U.S. or China, which economies would you make a bet on?
0:41:44 Well, I think the problem is, Scott, that in many areas, especially in AI, scale matters.
0:42:04 You know, Swiss academic institutions are doing great relative to their scale, but Switzerland as a country is never going to be a rival by itself to the United States or to China, just in terms of resources, both human and financial.
0:42:15 So, you really need the European academic system to come together, and it has some great strengths, and it has many weaknesses.
0:42:18 It is not integrated in its best.
0:42:20 The American system was much, much more integrated.
0:42:32 You know, institutions in California and Cambridge, you know, thousands of miles apart, they’re still much more integrated than, you know, those in different parts of Europe were.
0:42:40 And that creating a hub like Silicon Valley or other places, at some point it was in Boston, that’s very, very important.
0:42:45 And you need that kind of concentration of energy.
0:42:49 I don’t think that it’s impossible for Europe to achieve that.
0:42:54 And there are several people who are thinking about that.
0:43:09 For example, Mario Draghi, of European Central Bank fame, recently came out with a report asking for much more investment in AI and digital technologies to sort of kickstart something at the European level.
0:43:11 You know, what will succeed?
0:43:19 And when it’s government-directed, there are many risks, especially when you have the national bureaucracies overlaid with European Commission bureaucracy.
0:43:24 So, I wouldn’t bet that it’s going to work out anytime soon, but I wouldn’t want to bet against it either.
0:43:37 But also for, you know, I think the world would be much healthier if we had a true multipolar world in terms of research, in terms of new ideas, not just U.S. versus China, but U.S., China, Europe, but also something from the developing countries.
0:43:48 There’s tremendous energy in India, in other parts of the world about new techs, new ideas, new entrepreneurship, new risk-taking.
0:43:56 I think the world would be just much richer if those countries also had their intellectual fingerprints on these advances.
0:44:19 We think a lot about the well-being of young men on this program and something that has, I won’t speak for Ed, has me suitably freaked out is the collision between AI and synthetic relationships that could potentially further sequester and isolate young men from relationships with their parents, diminish their desire to develop romantic relationships, establish friendships.
0:44:22 Are we catastrophizing here?
0:44:30 I see this as a disaster with no guardrails, and I can’t tell if it’s just as I get older, Professor, I’m getting more angry and depressed, or if I see something here.
0:44:34 What are your thoughts about character AI, synthetic relationships?
0:44:36 I’m very worried.
0:44:38 But not just for men.
0:44:42 I mean, look, for social media, actually, the effect has been larger on women.
0:44:53 I think many of these technologies that completely transform social relations have so many unforeseen consequences.
0:45:06 You know, despite many mistakes and misdeeds that, you know, Facebook, Meta, et cetera, Instagram did, they didn’t set out to create a mental health crisis.
0:45:08 That was a side effect.
0:45:16 Now, with some of these things like character AI, they’re actually intending to create, you know, completely artificial bubbles.
0:45:20 So, yes, I would definitely be worried about that.
0:45:32 But the only thing I have as a small source of sort of silver lining is that they’re not going to happen overnight.
0:45:43 So, these models are still going to be very clunky, so we still have some time, and that’s why it’s so important to ask the questions, what is it that we want from AI?
0:45:48 Do we want, you know, character AI and all of these synthetic relations and all of this isolation?
0:45:52 Do we want, you know, sort of ever-faster automation?
0:45:56 Or is there something we can do with this very promising technology?
0:46:02 What field or what specific application do you feel holds the most promise for AI?
0:46:08 I think by its nature, AI could be revolutionary for many fields.
0:46:14 Certainly, the process of science is already benefiting.
0:46:15 We can do many more things.
0:46:26 I think a lot of occupations that involve interaction with the real world in a problem-solving manner can really benefit from AI.
0:46:35 So, that’s like electricians, plumbers, blue-collar workers, because they’re engaged in a series of problem-solving tasks.
0:46:41 And AI-based, context-specific, reliable information could be a game-changer there.
0:46:47 A lot of other non-scientific creative occupations could get a boost as well.
0:46:50 So, there are actually a lot of different things that we could do.
0:46:56 Healthcare and education are becoming a bigger and bigger share of national income everywhere.
0:47:00 And those are the two sectors where we’ve had very little productivity gains.
0:47:04 So, macroeconomic productivity gains are held back because of these services.
0:47:12 If we can do anything with AI to kick-start faster productivity growth in these sectors, that would be a game-changer.
0:47:16 You’ve been described as the Wilt Chamberlain of economics.
0:47:24 You know, you are sort of a powerhouse academic, powerhouse economist.
0:47:26 You’ve written books.
0:47:27 You’ve taught at MIT.
0:47:28 You’ve won a Nobel Prize.
0:47:30 The list goes on.
0:47:37 I’d love to, on the subject of institutions, I’d love to get your views on just the future of academic institutions right now.
0:47:41 They’ve also been under attack in a variety of ways.
0:47:44 How do you feel about academia as a field?
0:47:46 Do you feel optimistic, pessimistic?
0:47:49 What does the future of academics actually look like?
0:47:59 An important part of the institutional fabric of this society is also to provide foundational inputs into innovation via the academic educational process.
0:48:01 And that’s also in danger.
0:48:07 Like, look, I think there were many things that were wrong with academia before Trump.
0:48:18 But the current attack on the autonomy of universities cannot be justified by those problems.
0:48:29 And I think it risks being more encompassing than those problems, whether you’re going to call them DEI or whatever they are, ever could have been.
0:48:42 Because now funding and government control are all levers that a centralized authority can exercise over universities and academics.
0:48:54 And whenever that has happened in other countries, for example, in my country of birth in Turkey or in Hungary, you see the costs have been quite extreme.
0:48:56 Yeah, how does that play out?
0:48:59 How do those costs materialize?
0:49:06 Well, first of all, I think very important projects that are forward-looking don’t get funded.
0:49:12 You know, today it’s very difficult, for example, to get funding for mRNA vaccines.
0:49:24 And those could potentially play an important role in fighting cancer and other diseases for which we could, you know, develop new vaccine-based approaches.
0:49:27 And that’s just the tip of the iceberg.
0:49:39 But even more importantly, I think when the autonomy of universities starts being threatened, people become more timid in their risk-taking.
0:49:46 And that risk-taking there involves going against established ways or things that powerful actors believe.
0:49:54 So, I mentioned earlier on, in China, the top-down structure is really very costly.
0:49:57 I think one of the costs you can see very clearly is in academia.
0:50:08 In Chinese academia, despite the fact that you have very talented people selected into it and a high degree of engineering or other expertise, it’s much more political.
0:50:18 So, nobody wants to rock the boat, which means people are not really exploring more controversial topics, whether it is in social sciences or in physical sciences.
0:50:22 And that environment, once it sets in, is very difficult to reverse.
0:50:32 One of the great things about American academia was that sort of very risk-taking attitude, exploring things that are at the edges.
0:50:34 You know, that could become much harder.
0:50:47 Just observationally or anecdotally, from my perspective, I feel like academia and pursuing a career in academia has very recently gotten kind of a bad rap.
0:50:59 It’s almost as if academia, the expert class, there’s this stuffy expert class that is either captured by wokeness or by, you know, they’re trapped in the ivory tower or whatever it is.
0:51:10 And I see this among my friends, many of whom were going to pursue careers in academia and then decided, no, this world rewards entrepreneurs.
0:51:14 This world rewards people who start companies and work for big tech.
0:51:17 I’m wondering if you’ve seen that yourself.
0:51:22 Have you seen that there is sort of a lesser interest in pursuing these kinds of careers?
0:51:23 I have, absolutely.
0:51:29 And I think some of it is real and some of it is a perception.
0:51:35 So, indeed, I think there is the right kind of technocracy and the wrong kind of technocracy.
0:51:46 As the world becomes more complex and there are more areas in which expertise, deep knowledge matters, you need technocracy.
0:51:54 But you need technocracy to be accountable and to be sort of in conversation with the rest of the community.
0:52:05 So, in some sense, yes, the wrong kind of technocracy has emerged to some degree in many countries, where, you know, experts, whether they are in universities or the bureaucracy, sometimes think they know best.
0:52:17 But I think some of what you’re describing is also an exaggerated feeling of, you know, in fact, you know, some of the things that those experts were saying are, in fact, true.
0:52:25 For example, on vaccines or climate change, it’s just that they did not communicate it very well and the whole thing became unnecessarily politicized.
0:52:27 And it’s not just in academia.
0:52:31 Look, for example, mRNA vaccines were not spearheaded by academia.
0:52:39 They were spearheaded by startups and companies that thought they could really break the mold by doing something very different.
0:52:43 So, it’s that whole ecosystem that’s being threatened right now.
0:52:56 It almost feels as if our culture is just so dead set on finding out who are the experts, who are the people who are telling us how it is, and can we find a way to discredit them and tell them that they’re wrong?
0:53:02 And in a lot of ways, it seems ridiculous to say, oh, it’s the university professors that are telling us.
0:53:06 It’s like, actually, these are not the people who are in control right now.
0:53:08 You’re being told a whole set of them.
0:53:08 They’ve never been.
0:53:09 They’ve never been.
0:53:16 I mean, they may be accused of being arrogant, that’s for sure, but that arrogance was never matched with real power.
0:53:17 Exactly.
0:53:21 Yeah, I wish more people could understand that.
0:53:39 My final question, do you have any advice for young academics, for people who are interested in pursuing a career in academia, who are maybe asking those questions that we just discussed, as someone who’s had so much success in the field, what would your advice be?
0:53:51 You know, I get very frustrated when, you know, people say, oh, you know, you should work on this topic because this topic has, you know, potential and there might be demand here.
0:53:57 No, I think the real secret sauce in academia is you should work on whatever you’re passionate about.
0:54:05 And the best academic research comes when it’s really sort of your own passions that you are following.
0:54:16 And that’s even more important when academia is under attack because some of the great research can be done even when budgets are slashed because you’re just committed to it.
0:54:19 And it’s not about the biggest lab.
0:54:26 It’s not about the best measurement instruments, but it’s about doggedly pursuing something that you feel is right and you can prove it.
0:54:33 Daron Acemoglu is an Institute Professor at MIT and co-director of the MIT Stone Center on Inequality and Shaping the Future of Work.
0:54:43 He is also the author of six books, including the New York Times bestselling Why Nations Fail and Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.
0:54:52 He was awarded the Nobel Prize in Economic Sciences in 2024 for his studies of how institutions are formed and affect prosperity.
0:54:55 Professor Acemoglu, we really appreciate your time.
0:54:59 And I think you’re the first Nobel Prize-winning guest we’ve had on the podcast.
0:55:00 So it’s a win for us, too.
0:55:02 My pleasure, Ed.
0:55:03 Thank you very much, Scott.
0:55:07 It’s great and very happy to have been able to join you guys.
0:55:08 It’s our pleasure.
0:55:09 Congrats again on your good work.
0:55:31 Thank you for listening to Prof G Markets from Prof G Media.
0:55:36 If you liked what you heard, give us a follow and join us for a fresh take on markets on Monday.
0:55:38 Thanks for listening to Prof G Media.
Ed Elson and Scott Galloway are joined by Nobel Prize–winning economist and MIT Professor Daron Acemoglu to discuss the economic consequences of AI. He breaks down his research on why nations fail, shares his biggest concerns about America’s future, and offers advice for the next generation of scholars.
Subscribe to the Prof G Markets newsletter
Order “The Algebra of Wealth” out now
Subscribe to No Mercy / No Malice
Follow Prof G Markets on Instagram
Follow Scott on Instagram
Learn more about your ad choices. Visit podcastchoices.com/adchoices