AI transcript
0:00:10 work of their lives. And here’s the encouraging part. When people have the right skills,
0:00:16 everything gets better. Collaboration improves, projects run smoother, innovation actually takes
0:00:21 off. That’s why Project Management Institute is such a powerful resource. As the home of the PMP
0:00:27 and globally recognized certifications, PMI gives teams the confidence and capability to turn big
0:00:33 ideas into real results. Invest in your people and watch remarkable things happen. Project Management
0:00:44 Institute. Learn more at PMI.org. There’s lots of domains. Like in finance, bonuses are a big part
0:00:51 of the pay. In many cases, it’s most of the pay. Do you know of any industry where negative bonuses
0:01:02 are common? None. And why? Because people hate them. So instead of paying somebody $100,000,
0:01:10 unless you do something bad, in which case it’s $90,000, pay them $90,000 and say, if you do something good,
0:01:17 you get $100,000. People will be much happier and they’ll keep working for you, which is necessary
0:01:22 for anything to work.
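The negative-bonus point maps onto loss aversion from prospect theory: the same $10,000 swing feels worse as a penalty than it feels good as a bonus. A minimal sketch, using the standard Kahneman-Tversky value function with their commonly cited parameter estimates (the numbers are illustrative, not from the episode):

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function: gains are valued concavely,
    losses are scaled up by the loss-aversion coefficient lam."""
    return x**alpha if x >= 0 else -lam * (-x) ** alpha

# The same $10,000 swing, framed from two different reference points:
bonus_feel = pt_value(10_000)     # gain: a bonus on top of a $90,000 base
penalty_feel = pt_value(-10_000)  # loss: a penalty off a $100,000 base

print(f"gain feels like {bonus_feel:+.0f}, loss feels like {penalty_feel:+.0f}")
assert abs(penalty_feel) > bonus_feel  # the penalty stings more than the bonus pleases
```

With these parameters the loss looms 2.25 times larger than the equivalent gain, which is why framing the range as "$90,000 plus a bonus" keeps people happier than "$100,000 minus a penalty."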
0:01:29 Good morning, everybody. This is Guy Kawasaki. And this is another episode of the Remarkable People
0:01:37 Podcast where we bring people like Jane Goodall and Tony Fauci and Nobel laureates like today’s guest,
0:01:46 Richard Thaler, also with Alex Imas. So these are two behavioral economists that have really changed the
0:01:52 world. And I got to tell you that they both teach at the University of Chicago Booth School of Business,
0:01:59 but their impact is worldwide. I’m a little bit of a fanboy today because interviewing Richard Thaler is a
0:02:06 big freaking deal for me. And there’s Jane Goodall, there’s Tony Fauci, there’s Richard Thaler,
0:02:12 there’s Angela Duckworth, Carol Dweck. These are all just game changers and you don’t get to talk to
0:02:18 people like this very often. So we’re going to talk about behavioral economics and the anomalies in
0:02:21 life. So welcome to the show, Alex and Richard.
0:02:29 It’s great to be here. Thanks, Guy. And Alex is going to join that company soon.
0:02:33 He is. He’s the next Richard Thaler.
0:02:37 I’m going to keep my aspirations in check.
0:02:40 Aim high, Alex. Come on.
0:02:45 That’s true. This is just for the podcast. I’m actually super ambitious.
0:02:55 I’ll start you off with an easy question. What’s the role of nudges in US society circa 2025? Because
0:03:02 it seems to me like nudge has come to shove and we’re a long way from just nudging people. So
0:03:04 what’s the role of a nudge today?
0:03:15 What is it or what it should be? I should say, so let’s define some terms. So in our book published
0:03:25 in 2008, Cass Sunstein and I coined two phrases, libertarian paternalism and choice architecture.
0:03:32 And somehow book publishers didn’t think either of those phrases would sell any books. So we called
0:03:41 the book nudge. And we defined a nudge as something that influences choices, but doesn’t force anyone to
0:03:50 do anything. And the world is full of nudges. We didn’t invent nudges. Adam and Eve were
0:03:57 nudging. So it’s been around for a while. But you’re right. And in the private sector,
0:04:07 there’s a lot of nudging for good and for evil. In the public sector, we’ve certainly seen the government
0:04:18 leaning against this sort of libertarian flavor of nudging, of giving people suggestions, but not telling
0:04:29 them what to do, and leaning toward shoving. And nowhere more than in the area of health with RFK Jr., you know,
0:04:35 vaccines for kids were not a nudge. They were a mandate. And it was a mandate with a way of getting
0:04:42 out of it. So if you claimed a religious objection, you could get out of it. But almost everybody,
0:04:53 almost everywhere followed the advice about vaccinating their kids. And now we’re almost
0:04:59 to the point of the guy in charge of health telling parents not to vaccinate their kids,
0:05:01 which is pretty horrific.
0:05:07 What are some nudges that we could do? Is it mandatory voting? Or when you register for your
0:05:15 car license, you automatically are registered to vote, or you can vote by mail? What could we do to increase the
0:05:21 viability of democracy?
0:05:26 One simple nudge that I think has been shown to be extremely effective is just
0:05:33 simplifying forms for people. People are busy, they have constrained attention, they have their lives to
0:05:39 live, and they get these giant forms to apply for a mortgage or something like that. And a lot of people
0:05:43 don’t have time to go through all the fine details, or they assume they trust the company. This is a bank that
0:05:48 they’ve been working with for years. And all of a sudden, they get this form with 20 pages of terms,
0:05:53 and then they just go through it and sign at the bottom line. So I think one really effective nudge
0:05:59 is just simplification, is just here are the things that you need to pay attention to. Here’s why they’re
0:06:05 important. Do you want to enter this contract? And as far as nudge versus not nudge, this is not
0:06:11 changing how people decide or what choices they’re facing. This is just simplifying the environment so
0:06:14 everyday people with their busy lives can make better decisions.
0:06:20 Yeah. And one of the sneaky aspects of the changes that have been introduced,
0:06:27 Cass and I refer to the opposite of nudge as sludge, where in order to do something,
0:06:36 you have to fill out a bunch of forms. And the changes to Medicaid are primarily
0:06:45 that in order to qualify the so-called work rules are mostly form filling out rules. And we only make
0:06:54 poor people fill out forms. Rich people file their tax return and claim a bunch of deductions, and
0:07:03 unless you get audited, it’s trust. And they’re firing half the people at the IRS. So we’re moving more in
0:07:12 the direction of trusting rich people to fill out their tax returns. But if you want to be on Medicaid,
0:07:23 you have to come and prove every six months that you’re working. And did you get a Real ID? To do that
0:07:32 in Illinois, you had to come and bring exactly the right set of forms. And it wasn’t really hard, but it
0:07:41 was a nuisance. And if you didn’t have the right bank statement on the right date, they could send you away.
0:07:47 Imagine doing that every six months just to get health care benefits.
0:07:52 Simplification is good. Complification is bad.
0:08:02 And am I being paranoid, or do you think it was done out of a legitimate desire to stop fraud, or
0:08:08 out of an illegitimate desire to reduce the number of poor people voting and acting?
0:08:12 Yeah. They haven’t done so much on the voting, but the health care,
0:08:18 it’s clear that this is meant to reduce enrollment. That’s completely clear.
0:08:23 And for SNAP too, for things like basic human welfare programs, like-
0:08:25 SNAP, which is food stamps. Right.
0:08:29 Right. The intention is clear. It’s basically stated in the documents that
0:08:33 some of the reason why you want to fill out this form is to say,
0:08:37 some people are just not going to enroll and we’re going to save money that way.
0:08:42 I don’t think the way to run a welfare program to give food to poor people is to
0:08:47 make the forms so long and onerous that some people just end up not filling them
0:08:55 out and not getting food for their kids. One of the policies that I greatly favor is that in many states,
0:09:05 it used to be that to get a free breakfast and lunch at school, you had to fill in forms in English.
0:09:14 And there was some stigma to being in the line of kids that got the free lunch. And gradually people
0:09:23 figured out that in a school where the average kid is poor, let’s give free food to everybody. And that
0:09:31 food is going to otherwise go to waste. If little Alex’s parents have forgotten to fill in the form,
0:09:38 that’s just going to be a hungry Alex. And it’s not going to be that food goes to somebody else. So
0:09:46 that’s become kind of universal. And a politician can say, well, that’s waste, fraud and abuse. No,
0:09:56 no. If accidentally we feed some kids who would be eligible, but whose parents wouldn’t have had their act
0:10:04 together to fill out a form, that’s fine. That’s good. And the marginal cost of feeding an extra kid is
0:10:13 basically zero. So I want to go back in history a little bit to the 1980s. I majored in psychology
0:10:21 in college, I went to Stanford and I was a disciple of Philip Zimbardo. But if I could do it all over
0:10:29 again, I would major in behavioral economics. I think it is such a fascinating field. So my question for you is
0:10:38 that why did it take until 1980 or 1990 to create this discipline called behavioral economics? Because
0:10:44 it seems to me that a lot of people are interested in why people do things and spend money. And a lot
0:10:50 of people are interested in the psychology of why people act the way they do. So why did it take so long
0:10:57 to put the two things together? When I was in grad school, which was around when you were an undergrad,
0:11:06 or a little earlier, I felt like I was the kid staring at the naked emperor. Because I was listening
0:11:14 to my professors describe these agents. They don’t even call them people in economics. You won’t see
0:11:22 the word people in an entire economics textbook. There are agents. And these are like fictional creatures.
0:11:31 They’re like what AI might be at some point if we program the AIs to be jerks. But they’re really
0:11:38 smart and they think fast like a computer. And I kept saying, “Really? Really? Those aren’t the people
0:11:46 that I see.” Not even my fellow grad students are like that. And the truth is that economics didn’t used to
0:11:55 be that way. You were just born too late. If you had studied with Adam Smith in 1776,
0:12:03 he was a behavioral economist. He talks about overconfidence. He talks about self-control problems.
0:12:11 And economics was kind of reasonable all the way up to World War II. If you read Keynes, who was writing
0:12:19 in the 1930s, he’s a behavioral economist. Then what happened is there was a mathematical revolution
0:12:28 starting right after World War II. And economists got busy making their arguments more mathematically
0:12:39 rigorous. And then we run into the bounded rationality of economists. Economists know more math than most
0:12:45 people, but they’re not the greatest mathematicians or they would have become mathematicians. Most of us
0:12:53 dropped out of math at some point. So the easiest models to write down are of people being very smart,
0:13:00 because then you can just write maximize and then say, “That’s what they do.” And Guy, you were a psychology
0:13:08 major, it’s not an accident that there aren’t many formal models in psychology. And the reason is so much is
0:13:17 going on and essentially everything matters. And writing that down as a formal model was hard.
0:13:25 So if you look at economics between say 1950 and 1980, those agents are just getting smarter and
0:13:28 smarter and smarter. But people weren’t.
0:13:34 I was a psychology and neuroscience major in undergrad, along with economics, but I viewed those as two
0:13:40 completely separate things. I had my economics, which was fully rational models, which I was like,
0:13:44 “That’s cute. Seems like this is not going to actually work in the real world.” And then I had
0:13:51 psychology as a separate field. And when I actually learned about behavioral economics after I graduated
0:13:56 undergrad, I was going to go to medical school. I had absolutely no intention of going into economics or
0:14:01 getting a PhD. And while I was applying to medical school, I actually heard an interview with Richard
0:14:06 on NPR. And he was talking about behavioral economics. And I just dropped everything and
0:14:11 applied to econ PhD programs, because I thought that this seemed a lot more realistic and something that
0:14:17 I actually wanted to work on as my career. And I think the sort of revolution that’s happened in
0:14:23 economics through behavioral economics has made these models more realistic. The people in the
0:14:28 models are gradually getting dumber again. And Alex, how did the conversation go with your
0:14:34 parents when you said, “Ah, I don’t think I’ll be a doctor anymore. I’ll just study behavioral economics.”
0:14:41 As I said earlier, I’m from Moldova, and immigrant kids with immigrant parents either need to be doctors,
0:14:47 sometimes a lawyer will suffice. And so the conversation did not go very well. When I told my parents, “Look,
0:14:53 I’ve worked for eight years to get into medical school, actually get into medical school, go through the
0:14:58 entire process, but hey, I’m going to drop all of that and get a PhD in something you’ve never heard of.”
0:15:03 That didn’t go very well. We kind of stopped talking about it for a long time. Finally, you know,
0:15:08 I’m a University of Chicago professor, my parents are from Chicago. I think they’re okay with the
0:15:12 decision now. But it’s taken a few years.
0:15:19 So Richard, did you like consciously set out to create a new discipline or it just
0:15:25 happened and bada bing, bada bang, one day there’s behavioral economics as a discipline?
0:15:35 Well, I set out to do something new. I explicitly decided not to create a new discipline in the sense that
0:15:42 I actively fought the idea of having a new journal. Somebody wanted to start a journal of behavioral
0:15:53 economics and I tried to prevent it. And the reason was that I wanted to do change from within.
0:16:02 And I thought that having to fight our way into the mainstream journals, like the American Economic
0:16:10 Review was a good discipline. I’m often quoted as saying that in all these years, I don’t think I
0:16:18 changed anyone’s mind. And what I mean by that is my professors, I didn’t change any of their minds.
0:16:25 My advisor, I once threatened that I’d write a book and thank him for teaching me everything
0:16:31 I knew. If he wasn’t nice to me, I was going to do that. And since I couldn’t change anyone’s mind,
0:16:39 I adopted the strategy of corrupting the youth. And Alex is an example.
0:16:48 That’s former youth. Still youth for me, Alex. But Danny Kahneman and I started a summer camp.
0:16:57 This is academics’ idea of fun. So this is what we call a summer camp. It has some formal name,
0:17:04 but we all call it summer camp. It’s two weeks of studying economics. And it’s for grad students all
0:17:13 around the world. And we did the first one, I think in 1994 in Berkeley. And it’s been going on every two
0:17:20 years ever since. Alex, I think Alex went twice, right? But did we hold you back or bring you back
0:17:22 as a TA or…
0:17:28 I failed the first one. No, I went there once. There are now multiple behavioral economics summer camps.
0:17:33 So I was faculty at one with Richard, and then I was a student at the first one.
0:17:40 So this is like when McDonald’s has a summer camp for promising basketball players,
0:17:44 and you have a summer camp for promising behavioral economists.
0:17:54 Right. And I must say, that is probably the single thing that Danny and I did that changed the profession
0:17:59 the most. More than anything we wrote. I guess if we hadn’t written anything, there wouldn’t have been
0:18:07 anything to teach. But there are now hundreds of economists scattered around the best departments
0:18:17 in the world. People like Alex that went through this education slash indoctrination program. And they’re
0:18:23 not in departments of behavioral economics. They’re in economics departments, publishing in mainstream
0:18:29 journals. And it’s something we talk about at the end of the book. We don’t think we’ve changed the
0:18:38 discipline. We’ve made people realize that there are these problems. But if you went back to school now and
0:18:46 took an economics 101 class, there might be one week on behavioral economics. And the rest of it would be
0:18:54 taught as if no one had ever heard of it. And in my humble opinion, that is totally ass backwards. Because
0:19:00 in day-to-day life, it’s more about behavioral economics than it is about marginal demand and marginal
0:19:06 costs and the intersection and the perfect clearing price. So that’s all bullshit. So anyway.
0:19:29 Starting a business comes with its share of ups and downs, which is why staying true to your vision
0:19:35 is essential. A non-negotiable for Romeo and Milka Bregali, Capital One business customers and co-owners of
0:19:40 Ra’s plant-based restaurant in New York. Romeo and Milka took a leap of faith when starting their own
0:19:46 restaurant, gutting an empty space and building it from the ground up. Every pipe, every wall, every
0:19:51 detail. But building from scratch came with a heavy financial burden, which is when they turned to
0:19:56 their Capital One business card. With the flexibility of the card’s no preset spending limit, they were able
0:20:02 to spend more and earn more rewards while bringing their vision to life. Today, Ra’s success is proof
0:20:06 that with passion and the right support, it’s possible to make your dreams a reality.
0:20:12 Learn more at CapitalOne.com/businesscards.
0:20:17 You’re listening to Remarkable People with Guy Kawasaki.
0:20:26 As you look back, have studies using college students as the experimental subjects proven to
0:20:32 extrapolate generally and accurately? Because it seems like a lot of the studies from your book
0:20:39 are from way back when, and they’re about college undergraduate students trying to make money or get credit.
0:20:44 So I think that was the goal of writing this book in the first place was to show the key part of the
0:20:51 title is then and now. And then was all the anomalies columns in the 90s. You see a lot of the studies
0:20:57 with college students, low stakes, and a lot of the criticisms from the broader profession was who
0:21:02 cares about college students? We care about market participants. They’re going to leave college and
0:21:06 they’re either going to participate in the market or we don’t care about them. And those who participate
0:21:11 in the market will be really smart. We don’t really care about them. And so one of the reasons to write
0:21:18 the book was to say after each anomaly, we have this update, where are we now on each of these
0:21:24 behavioral anomalies? And there’s two things that we emphasize in the book. One, the original experiments
0:21:29 all replicate. And we have a supplementary internet appendix where you can go look at the replications,
0:21:34 replicate everything yourself. But then the other thing is what’s happened to behavioral economics as
0:21:40 a field is it’s gone outside of the lab with college students and into the real world. So a lot of the
0:21:45 behavioral economics, you open up an econ journal, it’s not going to be college students for the most part.
0:21:49 It’s going to be professional traders who are making mistakes. It’s going to be professional sports
0:21:56 teams trying to bid for players who are making mistakes. So I would say that it’s been extremely
0:22:01 successful as far as the extrapolation from the original studies. And the original studies are also
0:22:10 very, very robust. You mentioned the word sports and I’m going to digress for a second. So I want you to
0:22:17 explain because I found this one of the most fascinating parts of the book. I want you to explain why a
0:22:24 professional golfer should care as much about putting for a par as a birdie.
0:22:34 Yeah. Okay. I guess I should handle that one since I’m the golf addict of the team. So here’s a basic
0:22:41 fact. In a golf tournament, the only thing that matters is the total number of strokes you’ve taken
0:22:50 at the end. The strokes don’t come with labels, right? If you’re on a par four, you hit a drive, you hit a
0:22:58 second shot, you hit a third shot, the ball goes in, that’s a par, it’s four. And that’s the only thing.
0:23:08 Now, sometimes a pro will be on the green in regulation. So in two, putting for three,
0:23:16 which historically is called a birdie, but it’s a three. It’s just a number three. And other times,
0:23:23 it’ll have taken for a golfer like me, it’s more likely it took three shots to get there instead of
0:23:30 two. And so you’re making that same attempted putt, but it’s going to score a four. It has the effect,
0:23:38 exactly the same effect on your total score. If you make it, you have one less stroke. If you miss it,
0:23:46 you’ll have one more. But one of the most important findings in behavioral economics, and this is one,
0:23:52 many of the things we learned from Danny Kahneman and Amos Tversky is this idea of loss aversion,
0:24:03 that when you lose, it feels more painful than when you win. And it’s just human. If you’re a golf pro,
0:24:11 you’re expecting to get a par in every hole, and they mostly do. And if you get one better than par,
0:24:20 you get a birdie, that’s good. If you miss par and you get a bogey, oh, that’s bad. And so the result,
0:24:26 and this was discovered by a colleague of ours, Devin Pope, who I don’t think has ever been on a golf
0:24:34 course. But he’s great with large data sets. He had a data set of essentially every putt any pro
0:24:43 had taken for 10 years. It’s millions of putts. And so he could analyze whether pros were more or less
0:24:53 likely to make exactly the same putt on the same green, depending on whether it was to score a birdie
0:25:01 or a par or bogey. And they make more par putts than birdie putts because, oh my god, if I miss
0:25:09 the par putt, that’s a loss. Whereas if I make the birdie putt, that’s just a gain. And it’s a really
0:25:16 good example of what we were just talking about of moving from the lab. So Danny and I had done
0:25:24 experiments with college students buying or selling coffee mugs. And now our friend Devin is writing this
0:25:33 paper with Tiger Woods as one of the subjects, one of the data points, and the results are just the same.
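The par-versus-birdie finding can be sketched with a toy version of Pope's putt-level analysis: identical putts, but a slightly higher make probability when par (a loss frame) is on the line than when a birdie (a gain frame) is. The make probabilities below are hypothetical placeholders, not the paper's estimates:

```python
import random

random.seed(0)

# Toy data: 100,000 identical putts per label, where pros bear down
# harder on par putts (avoiding a loss) than on birdie putts (a gain).
P_MAKE = {"par": 0.60, "birdie": 0.56}  # illustrative rates only

putts = [(label, random.random() < P_MAKE[label])
         for label in ("par", "birdie")
         for _ in range(100_000)]

def make_rate(label):
    results = [made for lbl, made in putts if lbl == label]
    return sum(results) / len(results)

gap = make_rate("par") - make_rate("birdie")
print(f"par putts made {gap:.1%} more often than identical birdie putts")
assert gap > 0
```

The real analysis controls for distance, green, and golfer; this sketch only shows the shape of the comparison: condition on the label of an otherwise identical putt and measure the difference in make rates.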
0:25:44 Okay. So I have to ask you a very general question, which is, is the assumption that people are rational,
0:25:48 is that a good assumption or a bad assumption with all your experience?
0:25:55 As we say in the book, I think it’s a good aspiration. Rationality is a great aspiration.
0:26:00 That’s how you make the best decisions. That’s the reason that it was written down in the models
0:26:05 in the first place. As far as predicting behavior, on the other hand, it’s not a great assumption.
0:26:10 That’s kind of the point of the book is showing here is the assumption of rationality. This is the
0:26:16 standard model. Let’s test it. Do people actually behave like that? And the goal of economics,
0:26:21 what Milton Friedman termed positive economics, is that the goal of writing down a model is to be
0:26:27 predictive of what people are actually going to do. And that’s where the models fail is that it’s not a
0:26:32 very good assumption to assume that people look, have all of the information in the world, know how to
0:26:39 process that information and move throughout their decision by just maximizing utility and not making
0:26:44 any mistakes. What we see is just systematic mistakes. So depending on what you want to do,
0:26:49 if you want to say, this is how you should behave, rationality is pretty good. If you want to say,
0:26:53 how will people actually behave, that gets a little bit worse.
0:27:00 A good example would be on your phone, you can download a chess program that unless you’re a grand
0:27:09 master, it will beat you. And amazingly, the entire program resides on your phone, right? You can play this
0:27:19 game offline. Now, what would be a good model of amateur chess players? It wouldn’t be that they play the
0:27:26 way that the thing on their phone does. Now, that would be a good model of Magnus Carlsen, right? The
0:27:38 question is, who are we trying to describe? Is it an expert? Or is it a consumer? And the more it’s
0:27:46 experts, the more it’s going to get close to the rational model. Although, Alex has a paper, Alex,
0:27:53 why don’t you tell them that professional traders are kind of like the golf pros, that they also suffer
0:27:59 from loss aversion. Yeah. So there’s two models of the stock market. There’s one that’s kind of the
0:28:02 fully rational model where there’s a bunch of traders, they have all of the information,
0:28:09 they’re maximizing their own utility, and prices in the stock market just reflect expected future cash
0:28:16 flows of the company. The other model is by Keynes, which is animal spirits, beauty contest. Essentially,
0:28:21 the way that prices work is, I think that somebody else is going to buy the stock. So therefore, maybe
0:28:26 the stock price is going to go up. So I’m going to buy the stock too. It’s a very simple model. This is
0:28:33 a GameStop model, essentially. And what we find in this paper called selling fast and buying slow is we
0:28:39 actually got a data set of institutional investors. So these aren’t even retail traders. These are people
0:28:45 with million- and billion-dollar portfolios. They’re trading actively every single day. So if you’re
0:28:50 thinking about a population that should be least likely to show an anomaly or some sort of behavioral
0:28:56 economic failure of rationality, this is where we should not be finding it. And we look at what they’re
0:28:59 buying and what they’re selling, and they’re actually really good at buying. They’re doing their
0:29:03 research, they’re paying attention to it. The stuff that they’re adding to their portfolio is really
0:29:08 good. But then they have to sell in order to buy, right? So in order to buy something that they want to
0:29:12 get, they have to actually get rid of something, raise the cash and buy it. And what we find is that
0:29:18 they not only do poorly on selling, they actually do worse than random. I can throw a dart at their
0:29:24 portfolio and actually do better than what they actually ended up doing. And what we find is that,
0:29:29 as Richard was saying, they found this endowment effect in the classroom with college students and
0:29:35 mugs. What we find is that on the selling side, they basically show an endowment effect with their stocks.
0:29:38 What are they selling? They’re selling the things that they’re least attached to.
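The dart-throw benchmark Alex describes can be made concrete with a tiny counterfactual: compare the subsequent return of the stock actually sold against the average return forgone by selling a random holding instead. All names and returns below are hypothetical, purely to show the shape of the comparison:

```python
# A hypothetical snapshot at one sell decision: each holding's return
# over the following year (illustrative numbers only).
portfolio = {"A": 0.12, "B": 0.07, "C": -0.03, "D": 0.18, "E": 0.05}
sold = "D"  # the manager dumps a position chosen with little attention

# Dart-throw benchmark: the expected forgone return if a holding
# were instead sold uniformly at random.
dart_benchmark = sum(portfolio.values()) / len(portfolio)
cost_vs_dart = portfolio[sold] - dart_benchmark

print(f"the actual sale forgoes {cost_vs_dart:+.1%} more return than a random one")
assert cost_vs_dart > 0  # selling this winner does worse than the dart throw
```

"Worse than random" in the paper means exactly this sign: the positions managers choose to sell go on to outperform what a uniformly random sale would have given up.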
0:29:44 So something that they just recently added to the portfolio. And it turns out the things that they’ve
0:29:48 added to the portfolio are the things that are actually gaining them money. These are going to
0:29:53 be the things that you should not be selling. And we find basically looking at the mechanism,
0:29:58 what are they doing? Well, they have some fixed amount of attention to spend on trading stocks, and
0:30:03 they’re putting all of that attention to buying and spending very little time on selling. And so
0:30:10 they end up losing a ton of money. So maybe now that we’re on the topic of stocks, you can shed some
0:30:19 light on this. For the life of me, I cannot understand why Tesla stock is doing well. It seems to me that
0:30:27 Tesla car sales are down. Its taxicab business is dubious compared to Waymo. It has a CEO that’s,
0:30:33 you know, going off the wall, but they’re trying to give him a trillion dollar package and the stock is
0:30:39 going up. So explain that to me in a market of professional traders who are rational and have
0:30:48 self-interest. I will say it’s not as overpriced as it used to be. But one of the things we talk about
0:30:59 in the book, it’s not possible to prove that the stock market is too high or too low or that anyone’s
0:31:09 security is too high or too low. Tesla is at least a valuable company. There are these meme stocks
0:31:17 that are companies that are not worth trillions, but are worth billions and just lose money every month.
0:31:28 But you can’t prove any of that. And so there’s no anomaly in our book where we show that the market
0:31:33 is too high or too low. If I knew how to do that, I wouldn’t bother talking to you guys.
0:31:42 Now, this is in spite of the fact that I’m a principal in a money management firm located quite near you,
0:31:51 actually in San Mateo. But we don’t try to say whether the market is too high or too low. We try to say
0:32:02 which set of stocks are more likely to go up than others. And we have no professional view
0:32:11 on whether the market will go up or down because historically we have no ability, we or nobody else,
0:32:19 no one has ever produced a forecasting machine for the level of the market that’s worth anything.
0:32:29 So my advice to young people has always been invest in a diversified portfolio of stocks
0:32:31 and then only read the sports section.
0:32:41 – Going off of that, Richard, I think… – You failed the second part, Alex.
0:32:46 – Yeah, I’m not reading the sports section, but I’m not reading the stock market section either.
0:32:51 But I think it’s hard to forecast mistakes, but it’s easy to go back and look at them and say,
0:32:58 hey, that was a mistake. So in the Tesla example, there’s a test that Matthew Rabin recently published,
0:33:05 basically saying, look, is the movement in prices driven by irrationality? Because it’s just excessive,
0:33:10 right? You see this giant jump in the Tesla price when recently there was news announced that Musk was
0:33:16 getting some more Tesla shares. He was buying some more Tesla shares. And you can look at that jump and
0:33:22 say, hey, is there something in the price that’s irrational? Because no amount of information should
0:33:29 basically move people’s beliefs so much that you have this jump. And when you look at these prices on
0:33:34 stocks that kind of have this meme-like quality, I would actually put Tesla into that bucket. We all
0:33:40 know the GameStop, the AMC. Tesla, if you look at these jumps, Musk or somebody else tweets something
0:33:45 or put something on the internet, all of a sudden the prices go absolutely nuts. That’s a good diagnostic
0:33:50 tool that says, hey, something’s up here. There’s some anomalous pricing in that stock.
0:33:56 We can say that those stocks are too volatile. And we can say that some of these stocks,
0:34:03 if you ask me, is that a bubble? I would say yes, but it could go up. So don’t sell short.
0:34:10 So speaking of bubbles that could go up, I’m almost afraid to ask you two guys,
0:34:14 so what do you think of crypto in general then? Alex?
0:34:24 So crypto in general, I mean, it has no intrinsic value, right? A company when you’re purchasing a
0:34:33 stock has cash flows that you’re pricing. Crypto has no cash flows. So the only way that crypto could
0:34:39 be worth something other than zero is if people literally think that this is going to be the next
0:34:45 gold or something like that, that this is going to be the next store of value for a country like the
0:34:51 United States. But okay, look at Dogecoin or something like that. Does anybody actually believe
0:34:57 that the United States will put its assets in Dogecoin as a hedge against global risk?
0:35:03 I think the answer is no. So if the price of Dogecoin or whatever, any of these crypto coins
0:35:11 is higher than zero, then this is clearly this idea that Keynes talked about this, right? So this is not
0:35:17 very new. This is a 60- or 70-year-old idea that the price of Dogecoin is simply reflecting the fact that
0:35:23 other people think that other people are going to buy Dogecoin and that’s it. It’s just purely a bubble.
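Alex's "no cash flows" point is just discounted cash flow valuation: a stock's fundamental value is the present value of what it pays out, and an asset that pays out nothing has a fundamental value of zero. A minimal sketch with made-up numbers:

```python
def dcf_value(cash_flows, r=0.05):
    """Present value of a stream of future cash flows at discount rate r."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))

stock = dcf_value([5.0] * 30)  # a firm paying $5 a year for 30 years
coin = dcf_value([0.0] * 30)   # a token with no cash flows at all

print(f"stock fundamental value: {stock:.2f}, coin fundamental value: {coin:.2f}")
assert coin == 0.0 and stock > 0  # any positive coin price rests on resale hopes
```

So whatever price the coin trades at above zero must come from the Keynesian channel: beliefs about what other buyers will pay, not from anything the asset itself produces.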
0:35:34 And worse than that, it’s a bubble that’s using enormous amounts of power. Bitcoin mining and AI
0:35:44 together are an enormous draw on the power supply. You can just say this just proves what idiots we are
0:35:51 because I would have given the same answer about Bitcoin when it was at $10.
0:35:57 Don’t take investment advice from me about crypto.
0:36:03 Richard, if I told you, Oh, this is a mysterious Japanese guy. We don’t really know if he exists.
0:36:10 And he says he’s making this thing that has only 21 million copies of it. And you’re going to buy it
0:36:16 because the investment thesis is there are always people more stupid than you. How much do you want,
0:36:26 Richard? My golf buddies and I, for years, we would say, I’ll make you a bet, one Bitcoin payable in 10
0:36:36 years. And the joke was, this was a zero bet. That’s a pretty funny joke now in hindsight. I’m just hoping no
0:36:40 one has recorded all those bets.
0:36:44 It’s going to be hard to explain that to your wife.
0:36:45 Yeah, it is.
0:36:54 When I was reading your book about rationality and self-interest, you know how there’s seemingly
0:36:59 exactly what people should do if they’re rational and self-interested. But it seems to me
0:37:07 there’s a complication: you could make the case that being cooperative and fair, while it may seem
0:37:15 irrational and not in your self-interest, in the long run is rational and is in your self-interest.
0:37:24 Where do you draw the line on the long run? That, you know, I might do something that might not be maximizing now,
0:37:26 but in the long run, it’ll be maximizing?
0:37:33 So first of all, let’s be careful with some terms. Economists assume that people are rational
0:37:40 and that they’re selfish. Those are different things. No one thinks that it’s irrational to care
0:37:48 about somebody else. If you make a donation to NPR or Doctors Without Borders, no one says,
0:37:58 “Oh, you’re irrational.” Now, economists have, for a long time, assumed that people are selfish.
0:38:07 They don’t say you have to be selfish to be rational. They just assume that I would rather keep the money
0:38:12 than give it to you. All right? So that’s a distinction. Is it rational to care about other
0:38:23 people? I happen to think so. But we don’t need to assume that. What we can show is that people who
0:38:33 cooperate with each other collectively do better. There’s the commons dilemma: if there’s a pasture
0:38:41 that everybody’s using, it will tend to get overused because everybody thinks of it as a free
0:38:50 good. And economists well understand that. Now, the solutions are for people to cooperate with one
0:38:58 another. And you can show that people who do cooperate with each other do better. And a good strategy
0:39:09 is to act in a way that will induce other people to cooperate with you. So we talk about several
0:39:16 simple games in the book. One is the ultimatum game. And the way this works is, I’m the experimenter,
0:39:25 I give Guy $100. And I tell him, Guy, you can share it with Alex, who you don’t know. And you make an offer to
0:39:34 Alex of some portion of the $100. He can say yes or no. Let’s say you offer him 10 bucks. Alex says
0:39:40 yes or no. If he says yes, he gets 10, you get 90. If he says no, you both get nothing. Now, if you think
0:39:49 about it, Alex is an economics professor. So he probably thinks $10 is more than zero. So you offer him $10 and
0:39:54 he’s going to take it for sure. If you run that experiment, offers of less than 20%
0:40:02 get rejected. Alex says, $10? Come on, Guy. You know, what the hell? What did I do to you?
0:40:14 I’d rather pay $10 to say no to you. So it turns out that the profit maximizing
0:40:21 offer in the ultimatum game is probably around 25%. It’s not $1. Now, this is a lesson that
0:40:33 President Trump could learn. He seems to act as if he’s playing the ultimatum game and that people will
0:40:42 take any offer that he makes. And it could be that, given the position of power he’s in, maybe that’s the right
0:40:52 strategy in some situations for him. But it’s a terrible strategy for the country because these are
0:40:58 what we call repeated games. And we make a big deal about signing a treaty.
0:41:07 And then if you haven’t been honoring the treaties that were there before, why should anybody think
0:41:12 you’re going to honor the next one? So basically, I think your book makes the case
0:41:19 that cooperation and fairness pays off, right? And you alluded to the fact that that’s not exactly our
0:41:24 policy right now. So what do you think is going to happen in the long run when we go and we
0:41:34 arrest 300 South Koreans and Hyundai says maybe we won’t make batteries in the US anymore? What’s the
0:41:35 long-term prognosis here?
0:41:44 Neither of us are experts in this area, but just common sense says that other countries
0:41:51 are not going to want to build factories here, especially technical factories, right? That was
0:42:01 a battery factory. And the reason why there were 300 Koreans there is that constructing such a factory
0:42:07 requires specialists. And we don’t want to start sneaker factories.
0:42:19 Right? It’s fine to let somebody else build all the sneakers. And if we want somebody to build
0:42:29 automobile factories or battery factories or chip factories, they’re going to need experts. And we better
0:42:39 let them hire them and not come in and arrest their entire workforce. I mean, talk about short-sighted behavior.
0:42:45 Here’s a challenge to any young economics student listening to this. Try and write down a model
0:42:55 in which that behavior is rational. Send that in and Alex and I will read it.
0:43:02 And then for extra credit, you could write down why it’s in your self-interest to do that too.
0:43:09 Yeah, feel free. I’ll write a whole book. And then Guy will put you on his podcast.
0:43:19 No, I won’t. Because by definition, you’re not remarkable if you believe that. I love the whole
0:43:28 discussion of bidding. And my takeaway, and please correct me if I’m wrong, is that more or less in an
0:43:36 auction, the winner usually overpaid and is the loser. Is that an accurate interpretation of what you wrote?
0:43:46 No. So it’s not accurate. Oh, it’s close. So the winner’s curse is a phrase that refers to the fact
0:43:54 that in some situations, specifically when everybody is bidding for something that’s worth the same
0:44:01 to everyone, a technical term for that is a common value auction, meaning the winner gets the same thing,
0:44:08 like oil leases. This is where it was discovered. Then you have hundreds of companies bidding.
0:44:18 In that situation, if there are a lot of bidders, the winner is likely the one whose engineers
0:44:26 had the most optimistic forecasts. And the way this was discovered was some engineers at Atlantic
0:44:34 Richfield realized that every one of the auctions that they won, there was less oil in the ground
0:44:42 than their engineers had predicted. And they thought they had good engineers. So they wondered what was going
0:44:50 on. And then they realized there’s this asymmetry that we shouldn’t care about winning the auction. We
0:45:00 should care about how much oil is in the ground on the auctions that we win. So the lab experiment versions
0:45:10 of that are, you get a jar of coins, and you count the money, and you auction that off in your class. And let’s
0:45:18 say the jar is worth $87. So the students bid. They don’t know what’s in there. They just
0:45:29 see a jar of coins. And there’s going to be a distribution of estimates. What you find is the students are a bit cautious.
0:45:37 So on average, they underestimate the amount of money in the jar. But the winner almost always has
0:45:46 overestimated. And that’s how he became the winner. And I choose the gender there carefully. I’ve probably
0:45:51 auctioned off at least 100 of those jars. I don’t think a woman has ever won.
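The jar-auction logic described here can be sketched as a quick simulation. This is an illustrative toy model, not something from the book: the distribution parameters (cautious mean estimates, noisy guesses) are assumptions, but the mechanism matches the story. Bidders underestimate on average, yet the winner is whoever drew the most optimistic estimate.

```python
import random

def jar_auction(true_value=87.0, n_bidders=30, seed=0):
    """Toy first-price auction for a jar of coins.

    Bidders' estimates are cautious on average (mean below the true
    value) but noisy; the winning bid is the highest estimate, which
    tends to have overshot the true value: the winner's curse."""
    rng = random.Random(seed)
    # Mean guess is 90% of the true value, with plenty of noise.
    estimates = [rng.gauss(0.9 * true_value, 15.0) for _ in range(n_bidders)]
    avg_estimate = sum(estimates) / n_bidders
    winning_bid = max(estimates)  # highest estimate wins the auction
    return avg_estimate, winning_bid
```

Averaged over many runs, the mean estimate sits below the jar’s value while the winning bid sits above it, which is exactly the asymmetry the Atlantic Richfield engineers noticed in their oil-lease auctions.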
0:46:01 Really? Why? Because on average, women tend to be less overconfident and a little less
0:46:14 risk-seeking. So there’s usually some YOLO guy in the class who bids a lot and wins the jar and
0:46:25 gives me money. And you can make a lot of money. We recommend this as a bar trick, but be careful.
0:46:35 We’re not guaranteeing how the patrons will treat you. But in this discussion of bidding, with ARCO,
0:46:43 they come to this conclusion. Well, at some level, if you never win a bid, you never drill. So then what
0:46:51 happens? This is really interesting. And there hasn’t been much written about it. I had a couple lines
0:46:59 in the original column I wrote about this, which was, what should you do? And maybe the smartest thing
0:47:08 to do is to write an article about it. And I’m deadly serious about that. Because imagine these engineers
0:47:17 go to the board and say, hey, we lose 10% every time we win. And they say, yeah, we’re an oil company.
0:47:25 We got to find oil. So telling everybody, hey, you should lower your bids. Now, another strategy,
0:47:33 which is almost certainly not legal, is to just collude. And this is what Major League Baseball did for a
0:47:41 while. They realized there was overbidding for players. And then one year they all just went to
0:47:48 some smoke-filled room and decided not to bid for any free agents. And that’s even better than writing
0:47:53 an article. But we don’t endorse collusion.
0:48:20 – Yeah, this is not legal advice or economic advice. And they lost an antitrust case, right? But I’ll give you an A for that question, Guy. It’s a very interesting question. And so maybe they did the right thing. But it’s a very subtle thing to realize: if I say to you, all right, we’re having this auction, and there are 20 people in the room, and now 20 more come in. Should you raise your bid or lower your bid?
0:48:42 – Yeah. So good answer. Lower is the correct answer. But it’s not intuitive. The intuitive thing is to say, wow, there are 40 people. I’m never going to win with this bid. I better raise my bid. But if I raise it enough, I’m pretty sure I’m going to win. And I’m pretty sure I’m going to have to give Thaler some money.
0:48:48 – Up next on Remarkable People.
0:49:08 – But the thing about selling cars is that there’s different models. Some models are easier to sell than others, but they might make the dealer less money. So basically, the dealers ended up gamifying the system to still get their bonuses, but they were just basically selling the easier cars that were making them more money.
0:49:14 – So that’s all to say that these things are really complicated, and you should run experiments.
0:49:28 – Meet Nicole Nicholas, Capital One business customer and co-owner of Aunts et Uncles, a plant-based restaurant and community space in Brooklyn, New York, that got its start from a need for unity.
0:49:40 – The inspiration, it was born from the desire to create a space that felt like home, where we can connect community culture, good food, and come together with family and friends. That’s how we birthed aunts and uncles.
0:49:48 – Nicole and her husband, Mike, were fulfilling their dream of bringing people together out of their home kitchen, but they soon learned that the demand for community was greater than they knew.
0:50:00 – It became overwhelming, and we were like, we need home, but not in our actual home. We realized that there was also a need in our community for something bigger in our neighborhood, so we had to find a place.
0:50:08 – Moving from a home operation into a storefront was a huge next step, but Nicole and Mike were able to take it on with the help of Capital One Business.
0:50:22 – It’s not for the weak. As a small business, finding resources is super important because that’s the way you’ll be able to manage and scale. We would have never done that without having Capital One to be able to help us along the way.
0:50:32 The cashback rewards are very helpful. You know, it just gave us that runway to be able to breathe a little bit. Then you get to focus on the cooking of the food and making the experience great.
0:50:36 – To learn more, go to CapitalOne.com/businesscards.
0:50:46 – Become a little more remarkable with each episode of Remarkable People. It’s found on Apple Podcasts or wherever you listen to your favorite shows.
0:50:51 – Welcome back to Remarkable People with Guy Kawasaki.
0:51:03 – Let’s hypothetically say that the CEO of eBay calls up you guys and says, “Listen, I read the book. I listened to the podcast about bidding.
0:51:11 You got any advice for me about how to make eBay, which is based on bidding, more successful?”
0:51:21 – Yeah, there’s a big literature on this, as Richard said. I think the key part is to understand who you are bidding against.
0:51:36 So that’s kind of at the center of everything that Richard was saying, is that the reason the winner’s curse is happening is because people are not taking into account the fact that other people are also bidding, they’re also smart, and they’re also acting on their own information.
0:51:44 So that means that if I just bid based on my estimate without taking into account what other people are doing, that means I’m going to lose money.
0:51:52 But if I take all of that into account and know who I’m bidding against, I can do better. Which is why Richard said you should write an article about it,
0:51:54 so there’s common knowledge about this, right?
0:51:59 So now everyone knows that everyone knows that they’re thinking in the same way.
0:52:01 Now bids might go down.
0:52:08 So in terms of what a company is trying to do when it’s aware of the winner’s curse, what is it trying to do?
0:52:15 They’re trying to figure out what their competitors are doing, what their competitors’ beliefs are, so they can react optimally to that.
0:52:27 So I think, in terms of a company asking us how to overcome the winner’s curse: understand who you’re bidding against, what they’re going to do, and what information they’re acting on.
0:52:32 Right. Now, meanwhile, there are other companies that are in the auction business.
0:52:37 And let’s go from eBay to casinos.
0:52:43 Casinos hire a lot of smart data scientists.
0:52:48 And essentially, their job is this: take slot machines.
0:52:53 Slot machines are the most profitable part of a casino.
0:53:00 Here’s the most useful advice that you’re going to hear on this podcast today.
0:53:02 Don’t bet slot machines.
0:53:12 The rate of return of betting on slot machines is minus 10% per pull, right?
0:53:13 It’s just horrible.
0:53:24 But casinos have spent a lot of money devising slot machines that get people to play more.
0:53:28 And we can talk about the ethics of that.
0:53:29 We’re not experts on that.
0:53:33 But it’s completely predictable.
0:53:41 If we have a casino business, people are going to spend money making it more attractive.
0:53:44 So they build these fancy casinos now.
0:53:46 They used to be scuzzy.
0:53:48 They’re now very attractive.
0:54:02 But you’re being lured there to engage in games in which the house always wins in the long run.
0:54:04 And that’s the business they’re in.
0:54:09 They’re not as bad as state lotteries that take half the money.
0:54:12 Slots are only taking 10%.
0:54:16 And look, that’s true for the whole internet.
0:54:19 You know what the largest economics department in the world is?
0:54:20 Amazon.
0:54:26 And I do not mean in any sense that those people are evil.
0:54:32 Amazon is the low cost provider for almost everything, right?
0:54:44 But they are running experiments every second, trying to devise a place where you will spend as much money as possible.
0:54:45 And they can do that.
0:54:49 And at the same time, there’s no place that’s cheaper.
0:54:57 There are lots of people trying to create an environment that will make money for the seller.
0:55:12 And as a buyer, buyer beware, there are things called penny auctions that I’m not going to go into except that if you ever come across one, just close your computer and go do something else.
0:55:26 I just want to interject a little side point that I can only think of one person who has failed in the casino business, which you just pointed out is kind of a way to print money.
0:55:27 Let’s not get into that.
0:55:32 And he happens to be running the largest economy in the world right now.
0:55:33 But I digress.
0:55:34 Yeah.
0:55:39 He’s not the only one that has failed, but he did famously fail at that business.
0:55:49 So, guys, I have to ask you, why is there a differential between the willingness to pay and the willingness to accept?
0:55:51 Why is there such a bridge there?
0:55:53 Or gap is the right word?
0:55:57 I think, Guy, what you’re referring to is the endowment effect.
0:56:11 The fact that basically what Richard, Danny Kahneman, and Jack Knetsch documented is that just simply owning something or having something on your desk, in their examples, increases the amount that you value it almost instantly.
0:56:19 And that leads to a gap between how much you would need to be compensated for it versus how much you’d be able to be willing to pay for it before getting it.
0:56:22 So, the classic example is, again, with a coffee mug.
0:56:31 If you put a coffee mug on a student’s table and ask them, how much are you willing to be compensated for this mug in order to get rid of it, in order to sell it?
0:56:34 So, people give an answer like three and a half bucks or four bucks.
0:56:42 And you ask a separate group of people who are looking at the exact same mug who randomly didn’t get it, how much are you willing to pay for that same mug?
0:56:45 And the answer is something like $2, maybe $1.50.
0:56:52 So, there’s this gap between how much I need to be compensated in order to sell it versus how much I’d be willing to pay to buy it.
0:57:01 And this gap called the endowment effect in the behavioral economics literature is explained through the concept that Richard had talked about earlier called loss aversion.
0:57:07 Essentially, when the mug is on my desk, now having to sell it comes at a loss.
0:57:13 So, I need to be compensated for that feeling of loss that I’m going to be experiencing when I’m selling it to somebody.
0:57:20 Whereas, if I don’t have the mug, there’s no loss because there’s no loss aversion over kind of things like cash.
0:57:22 And so, I’m willing to pay less for that mug.
0:57:36 So, loss aversion, this idea that we talked about with Tiger Woods and other professional golfers, shows up as a big gap between how much people are willing to pay for something versus how much they’re willing to sell that same thing for.
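As a back-of-the-envelope sketch of the mug example, loss aversion turns a buyer’s valuation into a roughly doubled seller’s asking price. The coefficient of 2 here is an assumed ballpark, not a number from the conversation, but it reproduces the $2-versus-$4 pattern Alex describes.

```python
def willingness_to_accept(wtp, loss_aversion=2.0):
    """Endowment-effect sketch: parting with the owned mug is coded as
    a loss, so the owner demands roughly loss_aversion times the price
    a non-owner would be willing to pay (wtp)."""
    return loss_aversion * wtp

# A buyer who would pay $2 becomes an owner who asks about $4.
print(willingness_to_accept(2.0))  # 4.0
```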
0:57:45 And there’s always a lot of discussion about evolution and how we got to this.
0:57:55 And Amos Tversky, who I had the privilege of knowing and is possibly the smartest person I ever met, he was famous for his one-liners.
0:58:10 And one of his one-liners was that there may have been species that did not exhibit loss aversion or the endowment effect, but they’re now extinct.
0:58:19 And you can see that if you’re at subsistence, you better fight for that pile of food that you’ve managed to get.
0:58:34 And I’m not suggesting that’s where it came from, but the point is that however we got to where we are, we’re hardwired now to be more sensitive to losing than to gaining.
0:58:39 Maybe in another million years, we’ll have outgrown that.
0:58:41 But for now, we’re stuck with that.
0:58:43 That’s the way our brain is wired.
0:58:46 And we have to resist it.
0:58:57 We have to teach ourselves, no, if I would only pay $2 to get that mug, then why am I asking $5 to sell it?
0:59:07 What we want people who read this book to do is to start asking themselves those kinds of questions.
0:59:13 Saying, oh, wait a minute, am I about to fall into one of these traps?
0:59:14 Yeah.
0:59:20 Okay, so Richard, from the dedication of your book, it’s clear you have a lot of grandchildren.
0:59:25 And I have a real practical question about the endowment effect.
0:59:32 So let’s say you want to encourage your grandchildren to do better in school.
0:59:37 So do you say to them, if you get an A, I will give you $50?
0:59:43 Or do you say to them, if you don’t get an A, I’m going to take $50 from you?
0:59:46 Which one is going to be more effective?
0:59:48 I would say neither.
0:59:54 I will say I was like a B student and my father was an A student.
0:59:57 He was an actuary and really good student.
0:59:59 And he was very frustrated.
1:00:06 And he started offering me $100 for a straight A report card, which I never, never collected.
1:00:11 But there is some relevant data on this, not for the students, but for the teachers.
1:00:28 So our colleague John List ran some experiments with teachers, trying to give them an incentive for the students to do better in the quantitative exams at the end of the year.
1:00:34 And in one condition, they would get an extra $1,000 if the students did well.
1:00:40 And in the other, they would lose $1,000 if the students didn’t do well.
1:00:49 And what they found was that the fine was more motivating, but the teachers hated it.
1:00:59 So in theory, having fines would be more motivating if anybody was still working for you.
1:01:15 But I don’t know of any school district that decided after those experiments to implement fines for the teachers whose classes didn’t do as well.
1:01:21 And in fact, look around the world, there’s lots of domains.
1:01:27 Like in finance, bonuses are a big part of the pay.
1:01:29 In many cases, it’s most of the pay.
1:01:36 Do you know of any industry where negative bonuses are common?
1:01:37 None.
1:01:39 And why?
1:01:41 Because people hate them.
1:01:56 So instead of paying somebody $100,000, unless you do something bad in which case it’s $90,000, pay them $90,000 and say, if you do something good, you get $100,000.
1:02:03 People will be much happier and they’ll keep working for you, which is necessary for anything to work.
1:02:07 But what about the impact of clawbacks?
1:02:09 Isn’t a clawback a fine?
1:02:13 I mean, look, clawbacks are when somebody has done something really bad.
1:02:16 But we should have more clawbacks.
1:02:25 And even for cases where people have done something really bad, there aren’t as many clawbacks as you might think.
1:02:32 So there’s actually a nice example that one of the largest car companies in the world, one of the big three.
1:02:34 I’m not allowed to say which one.
1:02:38 There’s a paper that recently came out: they wanted to implement these clawbacks for their car dealers.
1:02:47 Because it’s really hard to incentivize car dealers because the way that the system of car dealers works, they’re supposed to be completely separate from the actual car company.
1:02:54 The car company, say Ford or something like that, has very little control over how to incentivize the dealers, right?
1:02:59 They want them to sell more cars for them, but they can’t have a straight up contract of how to do that.
1:03:05 So the one thing that they ended up doing is asking dealers to introduce these clawback contracts.
1:03:13 If you do not sell above this amount for this particular model, you will not make the bonus, or you will have that bonus taken back from you.
1:03:25 And one of my colleagues, Alex Rees-Jones, he actually convinced this company to say, instead of doing this at scale at every single dealership, why don’t you run an experiment first just to see how it works?
1:03:27 The results were pretty crazy.
1:03:30 They lost a ton of money from the clawbacks.
1:03:33 But the reason that they lost money was not obvious.
1:03:37 The clawbacks were actually extremely, extremely motivating for people.
1:03:41 People really did not want to sell below that value.
1:03:44 But the thing about selling cars is that there’s different models.
1:03:51 Some models are easier to sell than others, but they might make the dealer less money.
1:04:01 So basically, the dealers ended up gamifying the system to still get their bonuses, but they were just basically selling the easier cars that were making them no money.
1:04:06 So that’s all to say that these things are really complicated and you should run experiments.
1:04:11 If a company comes to you and says, look, here’s the contract I want to implement.
1:04:13 I want to put some behavioral economics.
1:04:18 Be smart like Alex and say, why don’t you run an experiment and see what it’s like.
1:04:21 It could be that you’re missing something really important.
1:04:22 Yeah.
1:04:31 And here’s an anomaly we don’t talk about in the book, which is firms are really reluctant to run experiments.
1:04:33 And here’s an example.
1:04:38 I’ve been a professor for many years, longer than Alex has been alive.
1:04:50 And I have at various places suggested, why don’t we admit a portion of the student body in some other way?
1:04:53 Suppose our entering class is 500.
1:04:58 Let’s take 50 of them using some other criteria and see what happens.
1:05:02 I’ve convinced exactly no places to ever do that.
1:05:07 And it’s basically true in any line of work.
1:05:11 You could learn something by trying stuff.
1:05:12 And you know what?
1:05:15 We have armies of grad students.
1:05:24 If you want to run an experiment and you don’t have people at your company that know how to do it, let us know.
1:05:27 And we’ll send you a needy student.
1:05:33 And it’s crazy how little companies are willing to invest in learning.
1:05:44 You know, Richard, I know the chancellor of UC Santa Cruz, and I once told her, why don’t you try an experiment where you admit people purely at random?
1:05:46 You don’t care about their essay.
1:05:47 You don’t care about their GPA.
1:05:49 You don’t care about their score.
1:05:55 If they apply, you just accept them at random and see how that compares to the rest of the class.
1:05:57 She laughed at me.
1:05:58 Yeah.
1:06:03 I knew what the answer was going to be.
1:06:04 No.
1:06:05 Yeah.
1:06:10 So listen, we got to get to the expected utility theory and the prospect theory.
1:06:12 So you guys got to explain that.
1:06:14 I love that discussion.
1:06:18 I must admit that I don’t have Nobel-laureate-level intelligence.
1:06:21 So it was difficult for me to follow that.
1:06:23 So can you explain those two theories?
1:06:24 All right.
1:06:26 We better get the young guy to do it.
1:06:31 Richard, a little digression.
1:06:35 Have you heard the story of the physicist and the chauffeur?
1:06:36 No.
1:06:37 Go for it.
1:06:38 Okay.
1:06:42 So this physicist has written a book.
1:06:48 He’s on a book tour and he goes to a city and a chauffeur picks him up, takes him to all these meetings.
1:06:54 So the chauffeur, sitting in the back, listens to the first lecture, listens to the second lecture.
1:06:59 By the time to get to the fourth lecture, the physicist is really tired.
1:07:04 So the physicist says to the chauffeur, you’ve seen me do this three times already.
1:07:05 You know what I’m going to say?
1:07:06 Why don’t you do it?
1:07:08 And I’ll be the chauffeur sitting in the back.
1:07:09 So they do that.
1:07:14 And the chauffeur does it very well, but he ends too early.
1:07:17 So now the host says there’s time for question and answer.
1:07:20 So we’re going to take a question from the audience.
1:07:26 Somebody asks a question about physics, and the chauffeur says, that’s such a simple question,
1:07:29 I’m going to let my chauffeur in the back of the room answer it.
1:07:31 So Alex is your chauffeur.
1:07:35 It’s a great story.
1:07:37 That’s an anomaly.
1:07:39 That’s a great story.
1:07:41 Can you come pick me up after?
1:07:54 So the expected utility hypothesis is essentially that, look, we get utility, which basically means the pleasure that I get from eating an apple or something like that.
1:07:55 That’s just utility.
1:07:57 What is expected utility?
1:08:00 Expected utility says we have utility theory.
1:08:07 How do people behave given utility theory when there’s risk, when there’s lotteries and things like that going on?
1:08:14 And basically what expected utility theory says is you take the utility you get from something and multiply by the probability of getting it.
1:08:18 And that is the expected utility you get from getting that item.
1:08:26 So if I get five utility from apples and there’s a 30% chance I’m going to get an apple, you just multiply that five times 0.3, and that’s the expected utility.
1:08:28 So that’s expected utility in a nutshell.
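The apple example works out as a one-liner. This is a minimal sketch of the textbook formula, using the illustrative numbers from the conversation (5 utils, 30% chance):

```python
def expected_utility(lottery):
    """Expected utility: sum over outcomes of probability * utility."""
    return sum(p * u for p, u in lottery)

# 30% chance of an apple worth 5 utils, 70% chance of nothing.
print(expected_utility([(0.3, 5.0), (0.7, 0.0)]))  # 1.5
```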
1:08:34 The thing about expected utility theory, though, is that it makes assumptions of how I’m approaching the problem.
1:08:35 So let’s say I get the gamble.
1:08:38 There’s an upside of $2 and a downside of $1, right?
1:08:41 So this is a positive expected value gamble.
1:08:43 50-50 odds.
1:08:46 I should take the gamble if I’m risk neutral.
1:08:51 I don’t care about the risk, but all I care about is the expected value of the gamble, right?
1:09:03 And expected utility makes one main assumption: that I take all of my wealth into account when I’m making any decisions, including future wealth.
1:09:09 So when I’m given this little tiny lottery, I have to think about not how much money I have in my pocket right now.
1:09:13 I have to think about how much money I have in the bank, how much money I’m going to make in the future.
1:09:16 Usually that sum is a lot of money.
1:09:23 So I take this teeny tiny little gamble and integrate it with all of this wealth. And in expected utility,
1:09:33 Essentially, all of risk aversion, the fact that I don’t like risk, comes from the curvature of my utility function over my entire lifetime wealth.
1:09:44 So when I’m thinking about this little tiny gamble, if you zoom into that function, if you take a curve over a giant, giant space of wealth, if you zoom in, you know what it looks like?
1:09:45 It looks straight.
1:09:51 It doesn’t look curvy anymore, which means that I’m actually risk neutral over small amounts.
1:09:57 And that’s where prospect theory comes in, because what people found out is that people are not risk neutral.
1:10:02 People turn down these small bets, even if they have a positive expected value.
1:10:05 So what prospect theory says is that people aren’t doing that.
1:10:11 They’re not doing this crazy thing that when you give them a gamble, they’re thinking about the money in their bank account, how much they’re going to earn in the future.
1:10:12 They’re not doing that.
1:10:15 They’re just thinking like, do I like this gamble or not?
1:10:24 And when you take that gamble, instead of putting it on this giant function over lifetime wealth, you just say, I’m only looking at this gamble.
1:10:28 And I’m only thinking about, do I like losing a dollar?
1:10:36 If I’m loss averse, as we talked about before, I don’t like losing, I’m going to multiply that loss by loss aversion, because I don’t like losing.
1:10:40 And then how much do I need to be compensated for potentially losing?
1:10:41 Usually it’s more than $2.
1:10:50 So therefore, I’m going to reject that gamble because I’m not doing this whole giant calculus of my huge wealth and integrating that in the gamble.
1:10:52 I’m just thinking about it in isolation.
1:10:56 And the second part of prospect theory is that I don’t like losses.
1:10:58 I’m going to multiply that loss by loss aversion.
1:11:06 So what you get from prospect theory is the fact that people are going to be a lot more risk averse than expected utility suggests.
1:11:08 They’re going to be turning down these gambles.
1:11:14 They’re going to be not participating in the stock market as much as expected utility theory says.
1:11:24 They’re going to be doing a whole bunch of other kinds of very risk-averse-looking things because they’re not integrating all of those choices with everything else going on in their lives.
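The 50-50 win-$2/lose-$1 gamble from earlier can be scored both ways in a short sketch. The loss-aversion coefficient of 2.25 is the classic Kahneman-Tversky estimate, assumed here; the value function is kept linear for simplicity, so the curvature and probability weighting of full prospect theory are omitted.

```python
def expected_value(gamble):
    """Plain expected value: sum of probability * payoff."""
    return sum(p * x for p, x in gamble)

def prospect_value(gamble, loss_aversion=2.25):
    """Prospect-theory value of a gamble evaluated in isolation:
    gains count at face value, losses are scaled up by the
    loss-aversion coefficient (linear value function for simplicity)."""
    return sum(p * (x if x >= 0 else loss_aversion * x) for p, x in gamble)

gamble = [(0.5, 2.0), (0.5, -1.0)]  # 50-50: win $2 or lose $1
print(expected_value(gamble))  # 0.5   -> a risk-neutral agent accepts
print(prospect_value(gamble))  # -0.125 -> a loss-averse agent rejects
```

The expected value is positive, so a risk-neutral agent takes the bet; the loss-averse agent, weighing the dollar lost more than twice as heavily as a dollar gained, turns it down, which is what people actually do.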
1:11:43 Gentlemen, can I ask you if you ever worry that, you know, with these kinds of experiments you’re doing and these kinds of choices you’re giving people, I understand at a very basic level, like, what if it’s as simple as people don’t understand probability and math?
1:11:55 Have you heard the famous example that the one-third pounder failed against the quarter pounder because people thought that, since three is smaller than four,
1:11:59 the one-third pounder must be less meat than the quarter pounder?
1:12:01 That’s the level of intelligence we’re dealing with.
1:12:07 So aren’t you worried that maybe that’s why people are making these kinds of decisions?
1:12:23 So look, Guy, remember, we’re trying to convince people, our fellow economists, that those agents in their models may not be as smart as John von Neumann or Albert Einstein.
1:12:28 So that’s all we need.
1:12:35 If people were just B plus college students, that would be fine.
1:12:36 That would be fine.
1:12:43 And of course, the world is complicated and there are all kinds of mistakes people are making.
1:12:47 The economists don’t really have a problem with that.
1:12:57 I think if the mistakes people make sort of cancel out, that’s not really a problem because the theory will be right on average.
1:13:08 So the insight that I got from my friends Kahneman and Tversky, the one that really launched my career, was: yes, people make mistakes, and they’re predictable.
1:13:22 And like when I heard that, I had the big aha moment because what that meant was economists are happy to admit that they know people who are dumb.
1:13:27 And I’ll tell you a funny story.
1:13:30 I was having dinner once, years ago, with my witty friend Amos Tversky and a very famous economist who was a big believer in rational models.
1:13:52 And so at some point during dinner, Amos asks this guy how his wife’s decision making was.
1:13:56 And he regales us with all the dumb things she does.
1:14:02 And then he says, what about the president, whoever was president of the United States at the time?
1:14:04 Oh, more stories.
1:14:05 What about your students?
1:14:07 Oh, more stories.
1:14:08 All right.
1:14:14 So he had stories about the idiocy of anyone Amos could bring up.
1:14:22 And Amos is having him walk down this plank that he doesn’t realize he’s walking down.
1:14:28 And when he gets to the end of the plank, he says, so let me get this straight.
1:14:35 Basically, you think everybody is an idiot, but all the people in your models are geniuses.
1:14:37 What gives?
1:14:54 And in a sense, that story is the essence of behavioral economics.
1:14:56 I have two more questions.
1:15:03 I really want to ask you, but I think I may have to end the podcast here because it’s going to be hard to top that story.
1:15:08 My God, I think it is a good place to end.
1:15:12 And off the air, I’ll tell you who the economist was.
1:15:14 It wasn’t me.
1:15:15 I wasn’t alive.
1:15:19 You know what?
1:15:22 I’ve enjoyed this conversation immensely.
1:15:25 I live half the year in Berkeley.
1:15:27 Let’s do it again in person.
1:15:29 Oh, yeah, I would love to.
1:15:32 Now, let me just sign off here.
1:15:35 I want to thank you both for being on the podcast.
1:15:36 It’s been such a pleasure.
1:15:40 This is probably the longest episode of the podcast ever.
1:15:47 I just want you to know that because, if I were a football fan, this would be like interviewing Tom Brady.
1:15:48 But I digress.
1:15:58 And anyway, I want to thank Madison Neisman, co-producer; Jeff C, co-producer; Tessa Neisman, researcher; and Shannon Hernandez, design engineer.
1:16:01 But most of all, I want to thank you, Richard Thaler.
1:16:07 And I want to thank you, Alex, because, man, I hope you can see how much I enjoyed this.
1:16:08 And I hope you enjoyed it, too.
1:16:14 And you’re not going to get an interview like this from any other podcaster in the world for your book.
1:16:15 I agree.
1:16:16 Thank you very much.
1:16:17 I agree.
1:16:18 Thank you so much, Guy.
1:16:24 This is Remarkable People.
What makes humans so predictably irrational? Nobel Laureate Richard Thaler and Alex Imas join Guy Kawasaki to reveal the quirks that shape our decisions—from golf greens to stock markets. Drawing from their new book, The Winner’s Curse: Then and Now, they revisit the field they helped pioneer: behavioral economics. This episode is a masterclass in understanding why the smartest people make the strangest choices—and how awareness turns mistakes into wisdom.
—
Guy Kawasaki is on a mission to make you remarkable. His Remarkable People podcast features interviews with remarkable people such as Jane Goodall, Marc Benioff, Woz, Kristi Yamaguchi, and Bob Cialdini. Every episode will make you more remarkable.
With his decades of experience in Silicon Valley as a Venture Capitalist and advisor to the top entrepreneurs in the world, Guy’s questions come from a place of curiosity and passion for technology, start-ups, entrepreneurship, and marketing. If you love society and culture, documentaries, and business podcasts, take a second to follow Remarkable People.
Listeners of the Remarkable People podcast will learn from some of the most successful people in the world with practical tips and inspiring stories that will help you be more remarkable.
Episodes of Remarkable People organized by topic: https://bit.ly/rptopology
Listen to Remarkable People here: **https://podcasts.apple.com/us/podcast/guy-kawasakis-remarkable-people/id1483081827**
Like this show? Please leave us a review — even one sentence helps! Consider including your Twitter handle so we can thank you personally!
Thank you for your support; it helps the show!
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
