a16z Podcast: Innovating in Bets

AI transcript
0:00:06 Hi, everyone. Welcome to the a16z Podcast. I’m Sonal, and today Mark and I are doing another one
0:00:10 of our book author episodes. We’re interviewing Annie Duke, who’s a professional poker player and
0:00:17 World Series of Poker champ and is the author of Thinking in Bets, which is just out in paperback today.
0:00:21 The subtitle of the book is Making Smarter Decisions When You Don’t Have All the Facts,
0:00:25 which actually applies to startups and companies of all sizes and ages, quite frankly. I mean,
0:00:30 basically any business or new product line operating under conditions of great uncertainty,
0:00:34 which I’d argue is my definition of a startup and innovation. So that will be the frame for
0:00:39 this episode. Annie is also working on her next book right now and founded HowIDecide.org,
0:00:43 which brings together various stakeholders to create a national education movement around
0:00:48 decision education, empowering students to also be better decision makers. So anyway,
0:00:51 Mark and I interview her about all sorts of things in and beyond her book,
0:00:55 going from investing to business to life. But Annie begins with a thought experiment,
0:00:58 even though neither of us really know that much about football.
0:01:02 So what I’d love to do is kind of throw a thought experiment at you guys so that we can
0:01:06 have a discussion about this. So I know you guys don’t know a lot about football,
0:01:09 but this one’s pretty easy. You’re going to be able to feel this one. So do this thought
0:01:16 experiment: Pete Carroll calls for Marshawn Lynch to actually run the ball.
0:01:18 So we’re betting on someone who we know is really good.
0:01:21 Well, they’re all really good, but we’re betting on the play that everybody’s expected.
0:01:25 This is the default. This is the assumed rational thing to do.
0:01:30 Right. So he has Russell Wilson hand off to Marshawn Lynch. Marshawn Lynch goes to barrel
0:01:35 through the line. He fails. Now they call the time out. So now they stop the clock,
0:01:39 they get another play now, and they hand the ball off to Marshawn Lynch,
0:01:45 what everybody expects. Marshawn Lynch again, attempts to get through that line and he fails.
0:01:52 End of game, Patriots win. My question to you is, are the headlines the next day
0:01:58 “worst call in Super Bowl history”? Is Cris Collinsworth saying, I can’t believe the call,
0:02:04 I can’t believe the call, or is he saying something more like, that’s why the Patriots are so good,
0:02:09 their line is so great. That’s the Patriots line that we’ve come to see this whole season.
0:02:15 This will seal Belichick’s place in history. It would have all been about the Patriots.
0:02:21 So let’s sort of divide things into, we can either say the outcomes are due to skill or luck,
0:02:27 and luck in this particular case is going to be anything that has nothing to do with Pete Carroll.
0:02:31 And we can agree that the Patriots line doesn’t have anything to do with Pete Carroll. Belichick
0:02:34 doesn’t have anything to do with Pete Carroll. Tom Brady doesn’t have anything to do with Pete
0:02:38 Carroll as they’re sealing their fifth Super Bowl victory. So what we can see is there’s two
0:02:44 different routes to failure here. One route to failure, you get resulting. And basically what
0:02:50 resulting is, is that retrospectively, once you have the outcome of a decision, once there’s a
0:02:55 result, it’s really, really hard to work backwards from that single outcome to try to figure out
0:02:59 what the decision quality is. This is just very hard for us to do. They say, oh my gosh, the outcome
0:03:05 was so bad. This is clearly, I’m going to put that right into the skill bucket. This is because of
0:03:10 Pete Carroll’s own doing. But in the other case, they’re like, oh, you know, there’s uncertainty.
0:03:15 What could you do? Weird, right? Yeah. Okay. So you can kind of take that and you can say, aha,
0:03:21 now we can sort of understand some things. Like, for example, people have complained for a very
0:03:28 long time that in the NFL, they have been very, very slow to adopt what the analytics say that
0:03:32 you should be adopting, right? And even though now we’ve got some movement on like fourth down
0:03:36 calls and when are you going for two point conversions and things like that, they’re still
0:03:40 nowhere close to where they’re supposed to be. So they don’t make the plays corresponding to
0:03:45 the statistical probabilities? No. In fact, the analytics show that if you’re on your own one
0:03:51 yard line and it’s fourth down, you should go for it, no matter what. The reason for that is if you
0:03:54 kick it, you’re only going to be able to kick to midfield. So the other team is basically almost
0:03:59 guaranteed three points anyway. So you’re supposed to just try to get the yards.
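Her fourth-down argument is an expected-value comparison. Here is a toy sketch of its shape; every probability and point value is an invented illustration, not a real NFL analytics figure:

```python
# Toy expected-points comparison for fourth down on your own one-yard line.
# All numbers are invented for illustration, not real NFL analytics.

def expected_points_punt(opp_score_prob=0.9, field_goal_points=3.0):
    """Punting from your own 1 only reaches midfield, so assume the
    opponent turns that field position into a field goal most of the time."""
    return -opp_score_prob * field_goal_points          # -2.7 for us

def expected_points_go(convert_prob=0.6, keep_drive_value=1.0, fail_cost=7.0):
    """Going for it: convert and keep a drive worth a little on average;
    fail and the opponent is on your 1 with a near-certain touchdown."""
    return convert_prob * keep_drive_value - (1 - convert_prob) * fail_cost  # -2.2

print(expected_points_go() > expected_points_punt())  # True: going for it loses less
```

With these assumed inputs both choices have negative expected points, but going for it loses less, which is the analytics claim she is summarizing.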
0:04:03 Like, when have you ever seen a team on their own one yard line on fourth down be like, yeah,
0:04:08 let’s go for it. That does not happen. Okay. So we know that they’ve been like super slow
0:04:12 to do what the analytics say is correct. And so you sit here and you go, well, why is that?
0:04:18 And that thought experiment really tells you why, because we’re all human beings. We all
0:04:23 understand that there are certain times when we don’t allow uncertainty to bubble up to the surface
0:04:29 as the explanation. And there are certain times when we do. And it seems to be that we do when we
0:04:35 have this kind of consensus around the decision, there’s other ways we get there. And so, okay,
0:04:39 if I’m a human decision maker, I’m going to choose the path where I don’t get yelled at.
0:04:45 Yeah, exactly. So basically we can kind of walk back and we can say, are we allowing the uncertainty
0:04:49 to bubble to the surface? And this is going to be the first step to kind of understanding what
0:04:55 really slows innovation down, what really slows adoption of what we might know is good decision
0:04:58 making, because we have conflicting interests, right, making the best decision for the long run,
0:05:03 or making the best decision to keep us out of a room where we’re getting judged or
0:05:07 yelled at or possibly fired. So can I, let me propose the framework that I used to think about
0:05:12 this and see if you agree with it. So it’d be a two by two grid, and it’s consensus
0:05:17 versus non-consensus and it’s right versus wrong. And the way we think about it, at least in our
0:05:24 business is basically: consensus right is fine. Non-consensus right is fine. In fact,
0:05:29 generally you get called a genius. Consensus wrong is fine because you just, you know,
0:05:32 it’s just the same mistake everybody else made. You all agree, right, it was wrong.
0:05:36 Non-consensus wrong is really bad. It’s horrible. It’s radioactively bad.
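The two-by-two Mark describes can be laid out explicitly. A small sketch, with the verdicts paraphrased from the conversation:

```python
# The consensus/outcome two-by-two from the discussion: each quadrant
# mapped to the typical social verdict. Labels paraphrase the conversation.

reactions = {
    ("consensus", "right"):     "fine",
    ("non-consensus", "right"): "fine, generally you get called a genius",
    ("consensus", "wrong"):     "fine, same mistake everybody else made",
    ("non-consensus", "wrong"): "radioactively bad",
}

for (stance, outcome), verdict in reactions.items():
    print(f"{stance:>13} / {outcome:<5} -> {verdict}")
```

The asymmetry is the point: three quadrants are socially safe, and only non-consensus wrong carries the punishment, which is what pushes decision makers toward the consensus play.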
0:05:40 Right. And so, and then, and then as a consequence of that, and maybe this gets to the innovation
0:05:44 stuff that you’ll be talking about, but as a consequence of that, there are only two scripts
0:05:49 for talking about people operating in, in the non-consensus directions. One script is they’re
0:05:54 a genius because it went right and the other is they’re a complete moron because it went wrong. Is
0:05:58 that, does that map? That’s, that’s exactly, that’s exactly right. And I think that the problem
0:06:04 here is, what do right and wrong mean in your two by two? Wrong and right really just mean,
0:06:08 did it turn out well or not. Yeah, okay. And this is where we really get into this problem
0:06:13 because now what people are doing is they’re trying to swat the outcomes away and they understand,
0:06:19 just as you said, that on that consensus wrong, you will have like a cloak of invisibility over
0:06:24 you. Like, you don’t have to deal with it. Right. So, let’s think about other things besides
0:06:30 consensus. So, consensus is one way to do that, especially when you have like complicated cost
0:06:34 benefit analyses going into it. I don’t think that people, when they’re getting in a car,
0:06:41 are actually doing any kind of calculation about what the cost benefit analysis is to their own
0:06:47 productivity versus the danger of something very bad happening to them. Like, well, as a society,
0:06:50 someone’s done this calculation, we’ve all kind of done this together. And so therefore,
0:06:54 like getting in a car is totally fine. I’m going to do that. And nobody second guesses anybody.
0:06:57 Somebody dies in a car crash. You don’t say, wow, what a moron for getting in a car.
0:07:03 No. Another way that we can get there is through transparency. So, if the decision is pretty
0:07:09 transparent, another way to get there is status quo. So, like a good status quo example that I
0:07:14 like to give because everybody can understand it is, you have to get to a plane and you’re with
0:07:21 your significant other in the car, and you go your usual route,
0:07:26 literally the route that you’ve always gone, and there’s some sort of accident,
0:07:30 there’s bad traffic, you missed the plane and you’re mostly probably comforting each other
0:07:35 in the car. It’s like, what could we do? But then you get in the car and you announce to
0:07:41 your significant other, I’ve got a great shortcut. So, let’s take the shortcut to the airport. And
0:07:46 there’s same accident, whatever, horrible traffic, you missed the flight. And that’s like that status
0:07:50 quo versus non-status quo decision. Right. You’re going against what’s familiar and comfortable.
0:07:56 Exactly. If we go back to the car example, when you look at what the reaction is to a pedestrian
0:08:02 dying because of an autonomous vehicle versus because of a human, we’re very, very harsh with
0:08:07 algorithms. For example, if you get in a car accident and you happen to hit a pedestrian,
0:08:12 I can say something like, well, Mark didn’t intend to do that. Because I think that I understand
0:08:17 your mind is not such a black box to me. So, I feel like I have some insight into what your
0:08:23 decision might be, and so I’m more willing to allow some of the uncertainty to bubble up there. But if this
0:08:29 black box algorithm makes the decision, now all of a sudden, I’m like, get these cars off the road.
0:08:33 Never mind that the human mind is a black box itself. Of course. But we have some sort of
0:08:37 illusion that I understand sort of what’s going on in there, just like I have an illusion that I
0:08:40 understand what’s going on in my own brain. And you can actually see this in some of the
0:08:46 language around crashes on Wall Street, too, when you have a crash that comes from human
0:08:50 beings selling. People say things like, the market went down today. When it’s algorithms,
0:08:56 they say it’s a flash crash. So now they’re sort of pointing out, this is clearly in the
0:08:59 skill category. It’s the algorithm’s fault. We should really have a discussion about algorithmic
0:09:04 trading and whether it should be allowed. When obviously the mechanism for the market
0:09:08 going down is the same either way. So now if we understand that, so exactly your matrix,
0:09:12 now we can say, well, okay, human beings understand what’s going to get them in the room.
0:09:19 And pretty much anybody who’s living and breathing in the top levels of business at this point is
0:09:22 going to tell you, process, process, process. I don’t care about your outcomes, process, process,
0:09:27 process. But then the only time they ever have like an all hands on deck meeting is when something
0:09:31 goes wrong. Like let’s say that you’re in a real estate investing group. And so you invest in a
0:09:37 particular property based on your model. And the appraisal comes in 10% lower than what you
0:09:42 expected. Like everybody’s in a room, right? You’re all having a discussion. You’re all examining
0:09:46 the model. You’re trying to figure out, but what happens when the appraisal comes in 10% higher
0:09:50 than expected? Is everyone in the room going, what happened here? Now there is the obvious
0:09:54 reality, which is like, we don’t get paid in process. We get paid in outcomes. Poker players,
0:09:58 you don’t get paid in process, you get paid in outcome. And so there is an incentive alignment.
0:10:02 It’s not completely emotional. There’s also an actual, there’s a real component to it.
0:10:08 Yeah. So two things. One is you have to make it very clear to the people who work for you that
0:10:13 you understand that outcomes will come from good process. That’s number one. And then number two,
0:10:19 what you have to do is try to align the fact that as human beings, we tend to be outcome driven
0:10:28 to what you want in terms of getting an individual’s risk to align with the enterprise risk.
0:10:31 Because otherwise you’re going to get the CYA behavior. And the other thing is that we want
0:10:35 to understand if we have the right assessment of risk. So one of the big problems with the
0:10:39 appraisal coming in 10% too high: it could be that your model’s correct and you
0:10:44 just had a tail result, but it certainly is a trigger for you to go look and say, was there
0:10:48 risk in this decision that we didn’t know was there? And it’s really important for deploying
0:10:54 resources. I have a question about translating this to say non-investing context. So if in the
0:11:01 example of Mark’s Matrix, even if it’s a non-consensus wrong, you are staking money
0:11:06 that you are responsible for. In most companies, people do not have that kind of skin in the game.
0:11:12 So how do you drive accountability in a process-driven environment, so that the results actually
0:11:17 do matter? You want people to be accountable yet not overly focused on the outcome? How do you
0:11:23 calibrate that? So let’s think about how can we create balance across three dimensions
0:11:26 that makes it so that the outcome you care about is the quality of the forecast.
0:11:33 So first of all, obviously this demands that you have people making forecasts. You have to state
0:11:38 in advance: here’s what I think. This is my model of the world, here’s where all the pieces are going
0:11:45 to fall. So this is what I think. So now you’ve stated that, and whether the outcome is “good”
0:11:50 or “bad” is how close you are to whatever that forecast is. So now it’s not just like,
0:11:55 oh, you won it or you lost it; it’s, was your forecast good? So that’s piece number one is make
0:12:00 sure that you’re trying to be as equal across quality as you can and focus more on forecast
0:12:04 quality as opposed to like traditionally what we would think of as outcome quality.
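Grading the forecast rather than the win/loss needs a scoring rule. Duke doesn’t name one here; the Brier score is one standard choice, so this is a sketch of the idea, not her specific method:

```python
def brier_score(forecasts, outcomes):
    """Mean squared distance between stated probabilities and what actually
    happened (outcome = 1 if the event occurred, 0 if not). Lower is better;
    an always-50/50 hedger scores 0.25 no matter what happens."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A calibrated, confident forecaster beats the hedger on the same events:
sharp = brier_score([0.9, 0.8, 0.1], [1, 1, 0])   # 0.02
hedge = brier_score([0.5, 0.5, 0.5], [1, 1, 0])   # 0.25
print(sharp, hedge)
```

Because the score depends on the stated probability rather than just winning or losing, it rewards exactly what she describes: saying in advance where the pieces will fall, then measuring how close you came.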
0:12:12 So now the second piece is directional. So when we have a bad outcome and everybody gets in the room,
0:12:16 when was the last time that someone suggested, “Well, you know, we really should have lost more
0:12:24 here.” Like nobody’s saying that. But sometimes that’s true. Sometimes if you examine it, you’ll
0:12:29 find out that you didn’t have a big enough position. It turned out, okay, well maybe we should have
0:12:36 actually lost more. So you want to ask both up, down, and orthogonal. So could we have lost less?
0:12:42 Should we have lost more? And then the question of should we have been in this position at all?
0:12:46 So in venture capital, after a company works and exits, say it sells for a lot of money,
0:12:51 you do often say, “God, I wish we had invested more money.” You never, ever, ever, ever,
0:12:55 I have never heard anybody say on a loss we should have invested more money.
0:12:59 See, wouldn’t it be great if someone said that? Like wouldn’t you love for someone to come up and
0:13:03 say that to you? That would make you so happy. And what would be the logic of why they should say
0:13:06 that? I still don’t get the point. Exactly. Why does that matter? I don’t really understand that.
0:13:11 So let’s, can I just, like simple in a poker example. So let’s say that I get involved in a
0:13:20 hand with you and I have some idea about how you play. And I have decided that you are somebody
0:13:26 that if I, if I bet X, you will continue to play with me. Let’s say this is a spot where I know
0:13:32 that I have the best hand. But if I bet X plus C that you will fold. So if I go above X that I’m
0:13:36 not going to be able to keep you coming along with me, but if I bet X or below that you will. So I
0:13:43 bet X you call, but you call really fast in a way that makes me realize, oh, I could have actually
0:13:48 bet X plus C. You hit a very lucky card on the end and I happened to lose the pot. I should have
0:13:52 maximized at the point that I was a mathematical favorite. Your model of me was wrong, which is
0:13:56 a learning independent of the winner, the loss. Exactly. So you need to be exploring those questions
0:14:01 in a real honest way. Because it has to do with how you size future bets. This is exactly like a
0:14:05 company betting on a product line. Correct. And then, like, picking, you know, what the next
0:14:09 product line is going to be, and then not having had the information that would then drive a better
0:14:12 decision-making process around that. Right. So think about the learning loss that’s happening,
0:14:16 because we’re not exploring that. That’s the negative direction. And now you should do this on
0:14:22 wins as well. So if you do ever discuss a win, you always think like, how could I press? How
0:14:25 could I have won more? How could I have made this even better? How could I do this again in the
0:14:29 future? Should we have won less? We oversized the bet and then got bailed out by a fluke.
0:14:33 We should have actually had less in it. And sometimes not at all, because sometimes
0:14:37 the reasons that we invested turned out to be orthogonal to the reasons that it
0:14:41 actually ended up playing out in the way that it was. And so had we had that information,
0:14:45 we actually wouldn’t have bet on this at all, because it was completely orthogonal. Like,
0:14:50 we totally had this wrong. It just turned out that we ended up winning. And that can happen.
0:14:54 Obviously, that happens in poker all the time. But what does that communicate to the people on
0:14:59 your team? Good, bad, I don’t care. I care about our model. I want to know that we’re
0:15:03 modeling the world well and that we’re thinking about how do we incorporate the things that we
0:15:09 learn? Because we can generally think about stuff we know and stuff we don’t know. There’s
0:15:13 stuff we don’t know we don’t know, obviously. So we don’t worry about that, because we don’t know
0:15:18 we don’t know it. But then there’s stuff we could know and stuff we can’t know. It’s things like
0:15:22 the size of the universe or the thoughts of others. Or what the outcome will actually be.
0:15:28 We don’t know that. I have a question about this, though. What is a time frame for that forecast?
0:15:32 So let’s say you have a model of the world, a model of a technology, how it’s going to adopt,
0:15:38 how it’s going to play out. In some cases, there are companies that can take years to get traction.
0:15:42 You want to get your customers very early to figure that out, right? So you can get that data.
0:15:48 But how much time do you give? How do you size that time frame for the forecast? So you’re not
0:15:52 constantly updating with every customer data point. And so you’re also giving it enough time for your
0:15:57 model, your plan, your forecast to play out. You have to think very clearly in advance:
0:16:03 what’s my time horizon? How long do I need for this to play out? But also, don’t just do this
0:16:06 for the big decisions, because there’s things that you can forecast for tomorrow as well,
0:16:11 so that you end up bringing it into just the way that people think. And then once you’ve decided,
0:16:15 okay, this is the time horizon of my forecast. And you would want to be thinking about what
0:16:22 are forecasts we make for a year, two years, five years for the specific decision to play out.
0:16:27 And then just make sure that you talk in advance at what point you’ll revisit the forecast.
0:16:31 So you want to think in advance, what are the things that would have to be true
0:16:36 for me to be willing to come in and actually revisit this forecast? Because otherwise, you can
0:16:39 start, as you just said, over-reacting, which is super bad. You’re like a leaf in the wind.
0:16:44 Exactly, because then you get one bad customer and you suddenly over-rotate on that when,
0:16:49 in fact, it could have just been nothing. So if you include that in your forecast,
0:16:52 here are the circumstances under which we would come in and check on our model.
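The pre-committed circumstances she describes can be written down as explicit triggers. A minimal sketch with invented thresholds (the metric names and numbers are assumptions, not anything from the conversation):

```python
# Pre-committed "revisit triggers": conditions agreed on in advance that
# force a review of the forecast, so one noisy data point doesn't whipsaw
# the plan. All metric names and thresholds here are invented examples.

from dataclasses import dataclass

@dataclass
class RevisitTrigger:
    name: str
    fired: bool

def should_revisit(metrics, months_elapsed, horizon_months=12):
    triggers = [
        RevisitTrigger("horizon reached", months_elapsed >= horizon_months),
        RevisitTrigger("churn above ceiling", metrics.get("monthly_churn", 0) > 0.10),
        RevisitTrigger("three straight misses", metrics.get("consecutive_misses", 0) >= 3),
    ]
    return [t.name for t in triggers if t.fired]

# One bad month alone fires nothing; sustained misses do:
print(should_revisit({"monthly_churn": 0.04, "consecutive_misses": 1}, 2))  # []
print(should_revisit({"monthly_churn": 0.04, "consecutive_misses": 3}, 2))  # ['three straight misses']
```

Writing the triggers before the outcomes arrive is the point: the model only gets re-examined when a condition you chose in calm conditions fires, not whenever the latest data point stings.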
0:16:57 Then you’ve already got that in advance. So that’s actually creating constraints
0:17:01 around the reactivity, which is helpful. Two questions on practical implementation of the
0:17:05 theory. So what I’m finding is more and more people understand the logic of what you’re describing,
0:17:09 because people are getting exposed to these ideas, which are kind of expanding in importance.
0:17:12 And so more and more people intellectually understand this stuff. But there’s two kind of,
0:17:16 I don’t know, so-called emotion-driven warps or something that people just really have a hard
0:17:21 time with. So one is, you understand this could be true of investors, CEO, product line manager,
0:17:24 in a company, kind of anybody, in one of these domains, which is you can’t get the
0:17:27 non-consensus right results unless you’re willing to take the damage,
0:17:32 the risk on the non-consensus wrong results. But people cannot cope with the non-consensus
0:17:37 wrong outcome. They just emotionally cannot handle it. And they would like to think that
0:17:39 they can. And they intellectually understand that they should be able to. But as you say,
0:17:44 when they’re in the room, it’s such a traumatizing experience that it’s touching the hot stove,
0:17:48 they will do anything in the future to avoid that. Is that just a… And so one interpretation would
0:17:52 be, that’s just simply flat out human nature. And so to some extent, the intellectual understanding
0:17:56 of here doesn’t actually matter that much because there’s an emotional override. And so that would
0:18:00 be a pessimistic view on our ability as a species to learn these lessons. Or do you have a more
0:18:04 optimistic view of that? I’m going to be both pessimistic and optimistic at the same time.
0:18:09 So let me explain why. Because I think that if you move this a little bit, it’s a huge difference.
0:18:13 You sort of have two tasks that you want to take. One is, how much can you move the individual to
0:18:19 sort of train this kind of thinking for them? And that means naturally, they’re thinking in
0:18:24 forecasts a little bit more, that when they do have those kinds of reactions, which naturally
0:18:30 everybody will, they right the ship more quickly so that they can learn the lessons more quickly.
0:18:35 Right? I mean, I actually just had this happen. I turned in a draft of my next book, the first
0:18:38 part of my next book to my editor, and I just got the worst comments I’ve ever gotten back.
0:18:42 And I had a really bad 24 hours. But after 24 hours, I was like, you know what, she’s right.
0:18:48 Now, I still had a really bad 24 hours. And I’m, like, the give-me-negative-feedback queen,
0:18:52 because I’m a human being. But I got to it fast. Like, I sort of got through it pretty quickly
0:18:57 after this. I mean, you know, I was on the phone with my agent saying, I’m standing
0:19:01 my ground. This is ridiculous. And then he got a text the next day being like, no, she’s right.
0:19:05 And then I rewrote it. And you know what? It’s so much better for having been rewritten. And now
0:19:10 I can get to a place of gratitude for having the negative feedback. But I still had the really
0:19:15 bad day. So it’s okay. It doesn’t go away. Right. Yeah. And it’s okay. Like, we’re all human.
0:19:22 Like, we’re not robots. So number one is like, how much are you getting the individuals to say,
0:19:27 okay, I improved 2%. That’s so amazing for my decision making and my learning going forward.
0:19:32 And then the second through line is, what are you doing to not make it worse?
0:19:37 Because obviously, for a long time, people like to talk about I’m results oriented.
0:19:39 I mean, it’s like the worst sentence that could come out of somebody’s mouth.
0:19:43 Why is that the worst? I’ve heard that a lot. Because you’re letting people know that all you
0:19:47 care about is like, did you win or lose? For rote work, that’s fantastic: be results oriented all you want.
0:19:52 You should pay by the piece. You will get much faster work. But the minute that you’re asking
0:19:56 people to do intellectual work, results oriented is like the worst thing that you could say to
0:20:01 somebody. So I think that we need to take responsibility. And the people in our orbit,
0:20:06 we can make sure at minimum that we aren’t making it worse. And I think that that’s,
0:20:10 so that’s pessimistic and optimistic. I don’t think anyone’s making a full reversal here.
0:20:13 So the second question then goes to the societal aspect of this.
0:20:18 And so we’ll talk about the role of the storytellers or as they’re sometimes known,
0:20:22 the journalists and the editors and the publishers. And so the very
0:20:26 first reporter I ever met when I was a kid, this is Jared Sandberg at the Wall Street Journal.
0:20:30 The internet was first emerging. Like there were no stories in the press about the internet.
0:20:33 And I used to say, like, there’s all this interesting stuff happening. Why am I not
0:20:36 reading about any of it in these newspapers? And he’s like, well, because
0:20:39 the story of something is happening is not an interesting story. He said,
0:20:42 there are only two stories that sell newspapers. He said, one is, oh, the glory of it.
0:20:45 And the other is, oh, the shame of it. And basically he said, it’s conflict. So it’s
0:20:48 either something wonderful has happened or something horrible has happened. Like those
0:20:51 are the two stories. And then you think about business journalism as kind of our domain.
0:20:54 You got to think about it. You’re like, those are the only two profiles of a CEO
0:20:57 or a founder you’ll ever read. It’s just like, what a super genius for doing something,
0:21:01 presumably not consensus and right, or what a moron. Like what a hopeless idiot for doing
0:21:05 something, not consensus and wrong. And so, and so since I’ve become more aware of this,
0:21:09 like it’s actually gotten, it’s gotten very hard for me to actually read any of the coverage
0:21:12 of the people I know, because it’s like the people who got not consensus, right,
0:21:15 they’re being lavished with too much praise. And the people who got not consensus wrong,
0:21:19 they’re being damned for all kinds of reasons. The traits are actually the same in a lot of
0:21:25 cases. And so I guess as a consequence, like if you read the coverage, it really reinforces this
0:21:30 bias of being results oriented. And it’s like, it’s not our fault that people don’t want to
0:21:34 read a story that says, well, he tried something and it didn’t work this time, right?
0:21:34 Yes, exactly.
0:21:35 And so is there a…
0:21:38 But it was mathematically pretty good. If we go back to Pete Carroll,
0:21:43 this is a pretty great case. If we think about options theory, just quickly, the pass preserved
0:21:47 the option for two run plays. So if you want to get three tries at the end zone instead of two,
0:21:51 strictly for clock management reasons, you pass first.
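The clock-management logic can be sketched with a toy model: roughly 26 seconds, one timeout, and an incomplete pass stopping the clock for free. All durations below are rough assumptions, not the actual game clock:

```python
# Toy clock model of the pass-first logic: with ~26 seconds, one timeout,
# and the clock stopping on an incompletion, passing first preserves three
# snaps; running first burns the clock and leaves only two.

def max_plays(first_play, seconds=26, timeouts=1,
              play_time=5, runoff_after_run=40):
    plays = 0
    for i in range(3):  # at most three snaps in this end-of-game scenario
        if seconds < play_time:
            break
        seconds -= play_time
        plays += 1
        play = first_play if i == 0 else "run"
        if play == "pass":
            continue  # an incompletion stops the clock for free
        if timeouts > 0:
            timeouts -= 1  # stop the clock with the timeout
        else:
            seconds -= runoff_after_run  # clock keeps running after a run
    return plays

print(max_plays("pass"))  # 3
print(max_plays("run"))   # 2
```

Under these assumptions the pass buys a third try at the end zone, which is the option-preserving argument for the call, even though the outcome made it the most second-guessed play in Super Bowl history.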
0:21:54 Right. And that’s not going to lead off ESPN SportsCenter that night. And so optimistic or
0:22:00 pessimistic that the narrative, the public narrative on these topics will ever move.
0:22:08 I’m super, super pessimistic on the societal level, but I’m optimistic on if we’re educating
0:22:13 people better, that we can equip them better for this. So I’m really focused on how do we
0:22:19 make sure that we’re equipping people to be able to parse those narratives in a way that’s more
0:22:27 rational. And particularly, now there’s so much information. And it’s all about the framing
0:22:33 and the storytelling. And it’s particularly driven by what’s the interaction of your own
0:22:36 point of view. We could think about it as partisan point of view, for example, versus
0:22:40 the point of view of the communicator of the information and how is that interacting with
0:22:44 each other, in terms of how critically are you viewing the information, for example.
0:22:50 I think this is another really big piece of the pie and somewhat actually related to the question
0:22:53 about journalism, which is that third dimension of the space. So we talked about two dimensions,
0:22:57 which is sort of outcome quality, and how are you allowing that you’re exploring both
0:23:02 downside and upside outcomes in a way that’s really looking at forecast. How are you thinking
0:23:07 directionally, so that you’re more directionally neutral. But then the other piece of the puzzle
0:23:13 is how are you treating omissions versus commissions? So one of the things that we know
0:23:19 with this issue of resulting is, here’s a really great way to make sure that nobody ever
0:23:26 results on you. Don’t do anything. If I just don’t ever make a decision, I’m never going to
0:23:31 be in that room with everybody yelling at me for the stupid decision I made, because I had a bad
0:23:36 outcome. But we know that not making a decision is making a decision. We just don’t think about it
0:23:40 that way. And it doesn’t have to just be about investing. It can be a shadow decision in your own
0:23:44 personal life. So, you know, it’s really interesting. I remember I was giving somebody
0:23:51 advice who was like 23. And so obviously, you know, newly out of college had been in this position
0:23:56 for a year and was really, really unhappy in the position. And he was asking me like, I don’t know
0:24:01 what to do. I don’t know if I should change jobs. And I said, well, you know, so I did all the tricks,
0:24:04 you know, time traveling. And so I was like, well, okay, imagine it’s a year from now. Do you
0:24:09 think you’re going to be happy in this job? No. Okay, well, maybe you should go and choose this
0:24:14 other, maybe you should go and try to find another position. And this is what he said to me. And this,
0:24:18 I think, shows you how much people don’t realize that the thing you’re already doing, the status
0:24:23 quo thing, choosing to stay in that really is a decision. So he said to me, but if I go and find
0:24:28 another position, and then I have to spend another year, which I just spent trying to learn the ins and
0:24:33 outs of the company. And it turns out that I’m not happy there. I’ll have wasted my time. And I said
0:24:38 to him, okay, well, let’s think about this, though, the job you’re in, which is a choice to stay in,
0:24:44 you’ve now told me it’s 100% in a year that you will be sad. Then if you go to the new job,
0:24:49 yes, of course, it’s more volatile. But at least you’ve opened the range of
0:24:54 outcomes up. But he didn’t want to do it, because staying where he was didn’t
0:24:59 feel like somehow he was choosing it. So he felt like if he went to the other place and
0:25:04 ended up sad, somehow that would be his fault, a bad decision. So profound. In my case,
0:25:08 this is maybe getting a little too personal, but in my case, it was a decision that I didn’t know
0:25:13 I had made to not have kids. And it’s still an option, but it’s probably not going to happen.
0:25:18 And my therapist kind of told me that my not deciding was a choice. And I was like so blown
0:25:23 away by that, that it then allowed me to examine what was going on there in that
0:25:28 framework, in order to not do that in other arenas of my life, where I might actually
0:25:32 want something, or maybe I don’t, but at least it’s a choice that there’s intentionality behind
0:25:36 it. Well, I appreciate you sharing. I mean, I really want to thank you for that because I think
0:25:41 that people, first of all, should be sharing this kind of stuff so that people feel like they can
0:25:45 talk about these kinds of things. Number one, and number two, in my book, I’ve got all these
0:25:49 examples in there of like, how are you making choices about raising your kids when it feels so
0:25:52 consequential? You make decisions for other people. Right. And you’re trying to decide like,
0:25:56 should I have kids or shouldn’t I have kids? Or this school or that school or where am I supposed
0:26:03 to live? And the thing that I try to get across is, we can talk about investing like I’m putting
0:26:09 money into some kind of financial instrument, but we all have resources that we’re investing.
0:26:14 That’s right. It’s our time, your energy, your heart. It could be whatever, your friendships,
0:26:19 your relationships. So you’re deploying resources and like for the kind of decision that you’re
0:26:24 talking about, it’s like, if you choose to have children, you’re choosing to deploy
0:26:29 certain resources with some expected return, some of it good, some of it bad. And if you’re
0:26:34 choosing not to have children, that’s a different deployment of your resources toward other things.
0:26:38 And you need to know that there are limits. Everything isn’t a zero-sum game.
0:26:43 No. But the fact that evolution has approached the world as a zero-sum
0:26:48 game, and that our toolkit makes it a zero-sum game, means that we still need to view everything
0:26:52 as a zero-sum game when it comes to those trade-offs and resources. Because you are losing
0:26:56 something every time, even in a non-zero-sum game. Right. So I don’t feel like the world is a zero-sum
0:27:01 game in terms of like, most of the activities that you and I would engage in, we can both win too.
0:27:06 But it’s a zero-sum game to go back to your therapist. It’s a zero-sum game between you
0:27:10 and the other versions of yourself that you don’t choose. Exactly. Or an organization and
0:27:14 the other versions of itself it doesn’t choose. Exactly. So there’s a set of possible futures
0:27:20 that result from not making a decision as well. So on an individual decision, let’s put things
0:27:26 into three categories. Clear misses, near misses, and hits. There’s some that would just be a clear
0:27:30 miss, throw them out. And there’s some that I’m going to sort of really agonize over and I’m going
0:27:36 to think about it and I’m going to do a lot of analysis on it. And so the ones which become a
0:27:42 yes go into the hit category. And the other one is a near miss. I came close. What happens with
0:27:48 those near misses is they just go away. So what I realized is that on any given decision, let’s
0:27:52 take an investment decision. If I went to you or you came to me and said, well, tell me what’s
0:27:57 happening with the companies that you have under consideration. On a single decision,
0:28:02 when I explain to you why I didn’t invest in a company, it’s going to sound incredibly reasonable
0:28:07 to you. So you’ll only be able to see in the aggregate, if you look across many of those
0:28:13 decisions, that I tend to have this bias toward missing. Toward saying, you know what,
0:28:18 we’re not going to do it, because I don’t want to stick my neck out. Now this for you is incredibly
0:28:21 hard to spot because you do have to see it in the aggregate because I’m going to be able to
0:28:27 tell you a very good story on any individual decision. So the way to combat that and again,
0:28:31 get people to think about what we really care about around here is the forecast, not really outcomes,
0:28:36 is actually to keep a shadow book. The anti-portfolio should contain basically all of your
0:28:41 near misses, but then you have to take a sample of the clear misses as well, which nobody ever looks
0:28:46 at, because only the near misses tend to be a little in your periphery, if they happen to be big hits.
0:28:50 So here’s the problem. So, good news and bad news. The good news is we have actually done
0:28:54 this. And so we call it the shadow portfolio. Awesome. And the way that we do it is we make
0:28:59 the investment. We take the other equivalent deal of that vintage of that size that we almost did,
0:29:02 but didn’t do. We put that in the shadow portfolio and we’re trying to do kind of
0:29:07 apples to apples comparison. In finance theory terms, the shadow portfolio may well outperform
0:29:11 the real portfolio. And in finance terms, that’s because the shadow portfolio may be higher variance,
0:29:16 higher volatility, higher risk, and therefore higher return. Because the fear is the ones that
0:29:20 we’re picking are the ones that are less spiky, they’re less volatile, they’re less
0:29:25 risky. Right. So what’s wonderful about that, when you decide not to invest in a company,
0:29:29 you actually model out why that’s in there. It’s often, by the way, it’s often a single flaw that
0:29:34 we’ve identified. Yeah. Like it’s just like, oh, we would do it except for X. Right. Where X looks
0:29:37 like something that’s like potentially existentially bad. Right. And then that’s just
0:29:41 written in there. And so you know that. And then just make sure people look at those ones that
0:29:45 people are just rejecting out of hand. That’s my question. We never do that. But let me ask
0:29:48 you how to do that, though. That’s what we don’t do. And as you’re describing it,
0:29:51 I’m like, of course we should do that. I’m trying to think of how we would do that. Because the
0:29:57 problem is we reject 99 for every one we do. Yeah. So you just literally, it’s a sample. You just
0:30:00 take a random sample. A random sample. Okay. I mean, as long as it’s just sort of being kept
0:30:06 in view a little bit, because what that does is it basically just acts as a push against your model.
0:30:10 You’re just sort of getting people to have the right kind of discussion.
0:30:15 So all of that communicates to the people around you, like, I care about your model.
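The shadow-portfolio bookkeeping Marc and Annie describe, tracking near misses alongside actual investments and keeping a random sample of clear misses in view, could be sketched roughly like this. All deal names, outcomes, and return multiples below are invented purely for illustration:

```python
import random

# Hypothetical deal records: (name, decision, eventual multiple on capital)
deals = [
    ("DealA", "invested",   3.0),
    ("DealB", "near_miss", 12.0),  # agonized over it, then passed
    ("DealC", "clear_miss", 0.0),  # rejected out of hand
    ("DealD", "invested",   0.5),
    ("DealE", "near_miss",  1.1),
    ("DealF", "clear_miss", 8.0),
]

real_portfolio   = [d for d in deals if d[1] == "invested"]
shadow_portfolio = [d for d in deals if d[1] == "near_miss"]

# Annie's addition: keep a *random sample* of clear misses in view too,
# since nobody ever revisits those otherwise.
random.seed(0)
clear_misses = [d for d in deals if d[1] == "clear_miss"]
sampled_misses = random.sample(clear_misses, k=1)

def avg_multiple(portfolio):
    """Average return multiple across a set of deals."""
    return sum(d[2] for d in portfolio) / len(portfolio)

print("real portfolio  :", avg_multiple(real_portfolio))   # 1.75
print("shadow portfolio:", avg_multiple(shadow_portfolio))
```

Comparing the two averages over time is what surfaces the aggregate bias toward missing that no single well-told story about one decision can reveal.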
0:30:19 So let me ask you a different question, because you talk about sort of groups of decisions,
0:30:22 portfolios of decisions. So the other question is, early on in the firm,
0:30:25 I happen to have this discussion with a friend of mine. And he basically looked at me and he’s like,
0:30:29 you’re thinking about this all wrong. You’re thinking about this as a decision. You’re thinking
0:30:32 about invest or not. He said, that’s totally the wrong way to think about this. You should be thinking
0:30:38 about this as, is this one of the 20 investments of this kind, of this size class, that you’re
0:30:42 going to put in your portfolio. When you’re evaluating an opportunity,
0:30:47 you are kind of definitionally talking about that opportunity. But it’s very hard to
0:30:51 abstract that question from the broader concept of a portfolio or a basket.
0:30:54 Yeah. What I would suggest there is actually just doing some time traveling, so that as people
0:30:58 are really down in the weeds, you say, let’s imagine it’s a year from now, and what does the portfolio
0:31:04 look like of these investments of this kind. So I’m a big promoter of time traveling, of just
0:31:08 making sure that you’re always asking that question, what does this look like in a year?
0:31:11 What does this look like in five years? Are we happy? Are we sad?
0:31:15 If we imagine that we have this, what percentage of this do we think will have failed?
0:31:19 We understand that any one of these individual ones could have failed. So let’s remember that.
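The portfolio-level reframe, asking what fraction of a basket of 20 will have failed rather than agonizing over any single deal, is just expected-value arithmetic. The probabilities below are made up for illustration:

```python
# Thinking at the basket level instead of deal-by-deal.
n = 20            # investments of this class in the portfolio
p_fail = 0.70     # assumed chance any single one fails outright
p_big_hit = 0.10  # assumed chance any single one is a big hit

expected_failures = n * p_fail
expected_big_hits = n * p_big_hit

# Chance that at least one big hit lands somewhere in the basket
# (treating outcomes as independent):
p_at_least_one_hit = 1 - (1 - p_big_hit) ** n

print(f"expected failures out of {n}: {expected_failures:.0f}")
print(f"P(at least one big hit)    : {p_at_least_one_hit:.2f}")
```

Even with a 70% per-deal failure rate, the basket as a whole is very likely (about 88% here) to contain at least one big hit, which is exactly why remembering that any individual one could fail takes the pressure off each single decision.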
0:31:24 And I think that that really allows you to sort of get out of what feels like the biggest decision
0:31:29 on earth because that’s the decision you have to be making and be able to see it in the context of
0:31:35 kind of all of what’s going on. It’s fantastic. One of the most powerful things my therapist
0:31:39 gave me, and it was such a simple construct. It was sort of like doing certain things today is
0:31:45 like stealing from my future self. Oh, it blew me. It blew my mind. So beautiful.
0:31:49 It’s so beautiful. And it seems so like, you know, hokey, like personal self-help-y. But
0:31:55 actually I had never thought of it, because we’re on a continuum, but making discrete individuals
0:31:59 like Sonal in the past, Sonal today, Sonal this woman in the future I haven’t met yet.
0:32:06 Wow. Like the idea of stealing from her was like I… That’s really a lovely way to put it.
0:32:10 Yeah, she is. I have an amazing therapist. I like talking publicly about therapy because
0:32:15 I like to destigmatize it. No, I’m very, very open about like, let’s not hide it. It’s totally fine.
0:32:19 There’s no fucking reason to hide it. I totally agree. Some of the ways that we deal with this
0:32:23 is actually prospectively employing really good decision hygiene, which involves a couple of
0:32:28 things. One is some of this good time traveling that we talked about where you’re really imagining
0:32:32 what is this going to look like in the future so that that’s metabolized into the decision.
0:32:39 Two is making sure that you have pushback once there’s consensus reached. Great. Now let’s go
0:32:44 disagree with each other. Then the next thing is in terms of the consensus problem is to make
0:32:51 sure that you’re eliciting as much input as possible, not in a room with other people. So, you know,
0:32:55 when somebody has a deal they want to bring to everybody that goes to the people individually,
0:32:58 they have to sort of write their thoughts about it individually. And then it comes into the
0:33:01 room after that. As opposed to the pile on effect that just happened. As opposed to the pile on
0:33:06 effect. And that reduces the sort of effects of consensus anyway. So now this is how you then come
0:33:11 up with basically what your forecast of the future is that then is absolutely memorialized because
0:33:15 that memorializing of it acts as the prophylactic. First of all, it gives you your forecast, which
0:33:19 is what you’re trying to push against anyway. You’re trying to change the attitude to be that
0:33:24 the forecast is the outcome that we care about. And it acts as a prophylactic for those emotional
0:33:28 issues, right? Which is now you, it’s like, okay, well, we all talked about this and we had our
0:33:33 red team over here and we had a good steel man going on. And we kind of really thought about
0:33:39 why we were wrong. We questioned, you know, if somebody has the outside view,
0:33:45 what would this really look like to them? By eliciting the information individually, we were
0:33:51 less likely to be in the inside view anyway. We’ve done all of that good hygiene. And then that acts
0:33:56 as a way to, to protect yourself against these kinds of issues in the first place. Again,
0:34:02 you’re going to have a bad 24 hours. I’m just like, for sure. But you can get out of it more
0:34:07 quickly, more often and get to a place where you can say, okay, moving on to the next decision,
0:34:11 how do I, how do I improve this going forward? Yeah. So building on that, but returning real
0:34:16 quick to my optimism pessimism question, if society is not going to move on these issues,
0:34:19 but we can move as individuals. So one form of optimism would be more of us can move as
0:34:23 individuals. The other form of optimism could be there will just always be room in these
0:34:26 probabilistic domains for the rare individual who’s actually able to think about this stuff
0:34:30 correctly. Like there will always be an edge. There will always be certain people who are
0:34:34 like much better at poker than everybody else. There will. Oh, I think that’s for sure. Okay.
0:34:38 Because most people simply, most people just simply can’t or won’t get there. Like a few
0:34:42 people in every domain might be able to take the time and have the discipline or willpower to kind
0:34:46 of get all the way there. But most people can’t or won’t. I think that in some, in some ways, maybe
0:34:51 that, that’s okay. Like, I mean, I sort of think about from an evolutionary standpoint, that kind
0:34:55 of thinking was selected for, for a reason, right? Like it’s better for survival, likely better for
0:34:59 happiness. You mean the conventional wisdom? Yeah. Don’t touch the burnt stove twice.
0:35:03 Yeah. Or run away when you hear rustling in the leaves. Don’t sit around and say, well, it’s a
0:35:06 probabilistic world. I have to figure out how often is that a lion that’s going to come eat me?
0:35:09 Most people shouldn’t be playing in the World Series of Poker. I have people come up to me all
0:35:14 the time and be like, Oh, you know, I play poker, but it’s just a home game. You know, and I’m like,
0:35:17 what are you saying? Just a home game. Like there are different purposes to poker. Like
0:35:21 you probably have a great time doing that. And it brings you a tremendous amount of enjoyment.
0:35:24 And you don’t have an interest in becoming a professional poker player and why just be
0:35:30 proud of that. I think that that’s amazing. Like I play tennis. I’m not saying, Oh, but you know,
0:35:36 I’m just playing, you know, I’m just playing in like USTA, like 3.5. Like I’m really happy with
0:35:42 my tennis. I think it’s great. So I think we need to remember that like people have different things
0:35:48 that they love. And this kind of thinking, I think that I would love it if we could spread it more.
0:35:52 But of course, there are going to be some people who are going to be ending up in this
0:35:56 category more than others. And that’s okay. Like not everybody has to think like this. I think
0:36:00 it’s all right. So one of the things I get asked all the time is like, well, we can’t really do
0:36:06 this because people expect us to be confident in our choices. Don’t confuse confidence and certainty.
0:36:12 So I can express a lot of uncertainty and still convey confidence. Ready? I’m weighing these
0:36:16 three options, A, B and C. I’ve really done the analysis. Here’s the analysis. And this is what
0:36:22 I think. I think that option A is going to work out 60% of the time. Option B is going to work
0:36:27 out 25% of the time. And option C is going to work out 15% of the time. So option A is the clear
0:36:33 winner. Now, I just expressed so much uncertainty in that sentence. But also a lot of confidence.
0:36:37 But also a lot of confidence. I’ve done my analysis. This is my forecast. And all that I
0:36:42 ever ask people to do when they do that is make sure that they ask a question before they bank
0:36:46 the decision, which is, is there some piece of information that I could find out that would
0:36:51 reverse my decision, that would actually cause a reversal, not that would make it go from 60 to 57. I don’t
0:36:55 care about modulating so much. I care that you’re going to actually change. And your point is that organizations
0:36:59 can then bake that into their process. And not just in the forecasting, but in arriving at that
0:37:04 decision. So that then the next time they get to it right or wrong, they make a better decision.
0:37:10 And if the answer is yes, go find it. Or sometimes the answer is yes, but the cost is too high. It
0:37:15 could be time. It could be opportunity costs, whatever. Exactly. So then you just don’t. And
0:37:18 then you would say, well, then you all recognize as a group, we knew that if we found this out,
0:37:22 it would change our decision. But we’ve agreed that it would, the cost was too high. And so we
0:37:25 didn’t. So then if it reveals itself afterwards, you’re not sad. Well, you’ve talked a lot about
0:37:29 how people should use confidence intervals and communicating, which I love because we’re both
0:37:37 ex-PhD psychology people, neither of us finished. So I love that idea. One thing that I struggle with,
0:37:40 though, is again, in the organizational context, like if you’re trying to translate this to a
0:37:46 big group of people, not just one-on-one or small group decisions, how do you communicate a confidence
0:37:52 interval and all the variables in it in an efficient kind of compressed way? Like honestly,
0:37:57 part of communication and organizations is emails and quick decisions. And yes, you can have all
0:38:03 the process behind the outcome. But how do you then convey that even though the people were not
0:38:08 part of that room of that discussion? I think that there’s a simpler way to express uncertainty,
0:38:12 which is using percentages. Now, obviously, sometimes you can only come up with a range.
0:38:19 But for example, if I’m talking to my editor, and this is very quick in an email, I’ll say,
0:38:24 you’ll have the draft by Friday, 83% of the time. By Monday, you’ll have it 97% of the time.
0:38:27 Those are inclusive, right? That’s another way of doing a confidence interval, but without
0:38:32 making it so wonky. Without making it so wonky. So I’m just letting her know, most of the time,
0:38:37 you’re going to get it on Friday, but I’m building in, like, my kid gets sick or I have trouble with a
0:38:42 particular section of the draft or whatever it is, and I set the expectations that way.
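Annie’s phrasing ("83% by Friday, 97% by Monday") is effectively a cumulative forecast over delivery dates, and logging stated probabilities against actual outcomes lets you check your own calibration over time. A rough sketch, with all numbers invented:

```python
# Cumulative delivery forecast: P(draft delivered on or before day)
forecast = {"Friday": 0.83, "Monday": 0.97, "Wednesday": 1.00}

# Log of (stated probability, did it actually arrive by then?)
history = [
    (0.83, True), (0.83, True), (0.83, False),
    (0.83, True), (0.83, True), (0.83, False),
]

# Of the times you said "83%", how often was it actually on time?
hits = [arrived for stated, arrived in history if stated == 0.83]
observed_rate = sum(hits) / len(hits)

# If you keep saying 83% but deliver only ~67% of the time, your editor
# will rightly start discounting you: the boy who cried wolf.
print(f"stated 83%, observed {observed_rate:.0%}")
```

The same bookkeeping works for a podcast pipeline spreadsheet: attach a probability to the earliest and latest plausible dates for each episode, then compare stated percentages against what actually happened.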
0:38:45 That’s fantastic. I mean, we’ve been trying to do forecasting even for like timelines for
0:38:49 podcasts, editing and episodes. And I feel frustrated because I have like a set of frameworks,
0:38:55 like if there’s accents, if there’s more than two voices, if there’s, you know, complex
0:38:59 room tone, like interaction, feedback, sound effects. I know all the factors that can go into
0:39:05 my model, but I don’t know how to put a confidence interval in our pipeline spreadsheet for, you
0:39:09 know, all the content that’s coming out. Yeah. So one way to do it is think about what’s the range,
0:39:13 what’s the earliest that I could get it, and you put a percentage on that. And then you think about
0:39:17 the latest day, they’re going to get it. And you put a percentage on that. And so now,
0:39:23 what’s wonderful about that is that it’s a few things. One is I’ve set the expectations properly
0:39:28 now so that I’m not getting, you know, yelled at on Friday, like where the hell’s the draft.
0:39:32 Exactly. And I think that, and a lot of what happens is that because we’re sort of, we think
0:39:37 that we have to give a certain answer. It ends up like the boy who cried wolf, right? So that if I’m
0:39:44 telling her, I’m going to get it on Friday and, you know, 25% of the time, 25% of the time I’m
0:39:50 late, she just starts to not put much stock in what I’ve said anyway. So that’s number one.
0:39:54 Number two is what happens is that you really kind of infect other people with this in a good
0:39:59 way, where you get them, it just moves them off of that black and white thinking. So like,
0:40:02 I love that. One of the things that I love thinking about, and this is the difference
0:40:09 between a deadline or kind of giving this range, is that I think that we ask ourselves,
0:40:16 am I sure? And other people, are you sure? Way too often. It’s a terrible question to ask
0:40:20 somebody because the only answer is yes or no. So what should we be asking? How sure are you?
0:40:25 How sure are you? I have a quick question for you on this because earlier you mentioned uncertainty.
0:40:30 How do you as an organization build that uncertainty in by default? So first of all,
0:40:34 we obviously talked a little bit about time traveling and the usefulness of time traveling.
0:40:39 So one thing that I like to think about is not overvalue the decision that’s right at hand,
0:40:45 the things that are right sitting in front of us. So you can kind of think about it like,
0:40:50 how are you going to figure out the best path? As you think about what your goals are and obviously
0:40:54 the goal that you want to reach is going to sort of define for you what the best path is.
0:40:58 If you’re standing at the bottom of a mountain that you want to summit, let’s call the summit your
0:41:02 goal, all you can really see is the base of the mountain. So as you’re doing your planning,
0:41:06 you’re really worried about how do I get the next little bit, right? How do I start?
0:41:10 But if you’re at the top of the mountain, having attained your goal, now you can look at the whole
0:41:14 landscape. You get this beautiful view of the whole landscape and now you can really see what
0:41:18 the best path looks like. And so we want to do this not just physically like standing up on a
0:41:23 mountain, but we want to figure out a cognitive way to get there. And that’s to do this really good
0:41:28 time traveling. And you do this through backcasting and premortems. And now let’s look backwards
0:41:33 instead of forwards to try to figure out, this is now the headline. Let me think about why that
0:41:37 happened. So you could think about this like as a simple like weight loss goal. I want to lose a
0:41:41 certain amount of weight within the next six months. It’s the end of the six months. I’ve lost
0:41:47 that weight. What happened? You know, I went to the gym. I avoided bread. I didn’t eat any sweets.
0:41:53 I made sure that, you know, whatever. So you now have this list. Then in pairing with that,
0:41:59 you want to do a premortem, which is I didn’t get to the top of the mountain. I failed to lose
0:42:03 the weight. I failed to do whatever it is. And then all the things you can do to counter program
0:42:07 against that. Exactly. Because that’s going to reveal really different things. It’s going to reveal
0:42:13 some things that are just sort of luck, right? Let me think, can I do something to reduce the
0:42:17 influence of luck there? Then there’s going to be some things that have to do with your
0:42:22 decisions. Like I went into the break room every day and there were donuts there. And so I couldn’t
0:42:26 resist them. So now you can think about how do I counter that, right? How can I bring other people
0:42:30 into the process and that kind of thing? And then there’s stuff that’s just, you can figure out,
0:42:33 it’s just out of your control. It turned out I have a slow metabolism. And now what happens is that
0:42:36 you’re just much less reactive and you’re much more nimble because you’ve gotten a whole view of
0:42:40 the landscape and you’ve gotten a view of the good part of the landscape and the bad part of the
0:42:46 landscape. But I’m sure, as he told you, people are very loath to do these premortems, because I think
0:42:52 that the imagining of failure feels so much like failure that people are like, no, you should
0:42:56 do, you know, positive visualization. I mean, even in brainstorming meetings, everyone’s like,
0:43:00 don’t dump on an idea. But the exact point is you have to dump on an idea as part of the winnowing
0:43:06 of options. As part of the process, you should then be premorteming it. Exactly. There’s
0:43:11 wonderful research by Gabriele Oettingen that I really recommend people see; the references
0:43:18 are in my book. Across domains, what she’s found is that when people do this sort of positive
0:43:22 fantasizing, the chances that they actually complete the goal are just lower than if people
0:43:27 do this negative fantasizing. And then there’s research that shows that when people do this
0:43:32 time travel and this backwards thinking that increases identifying reasons for success or
0:43:39 failure by about 30%, you’re just more likely to see what’s in your way. So like, for example,
0:43:44 one of the simple studies she did was she asked people who were in college,
0:43:49 you know, who do you have a crush on that you haven’t talked to yet? She had one group who,
0:43:53 you know, it was all positive fantasies. So like, oh, I’m going to meet them and I’m going to ask
0:43:56 them out on a date and it’s going to be great and then we’re going to live happily ever after and
0:44:01 whatever. And then she had another group that engaged in negative fantasizing. What if I asked
0:44:06 them out and they say no? Like they said no and I was really embarrassed and so on and so forth.
0:44:12 And then she revisited them like four months later to see which group had actually gone out on a date
0:44:16 with the person that they had a crush on and the ones that did the negative fantasizing were much
0:44:20 more likely to have gone out on the date. It’s fantastic. Yeah. So one of the things that I
0:44:25 say is like, look, when we’re in teams to your point, we tend to sort of view people as naysayers,
0:44:32 right? But we don’t want to think of them as downers. So I suggest divide those up into two
0:44:37 processes. Have the group individually do a backcast, have the group individually write a narrative
0:44:42 about a pre-mortem. And what that does is when you’re now doing a pre-mortem, it changes the
0:44:47 rules of the game where being a good team player is now actually identifying the ways that you fail.
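One way to organize the paired backcast/premortem exercise is as a simple tagged list, where each imagined reason is classified as luck, a decision, or out of your control, following the weight-loss example earlier. The entries and category names here are illustrative:

```python
# Backcast: imagine you succeeded; why did it happen?
backcast = [
    ("went to the gym consistently", "decision"),
    ("avoided bread and sweets", "decision"),
]

# Premortem: imagine you failed; why did it happen?
premortem = [
    ("donuts in the break room every day", "decision"),
    ("got sick for two weeks", "luck"),
    ("slow metabolism", "out_of_control"),
]

# The premortem's decision-driven failure modes are the ones a team can
# counter-program against ahead of time; luck can sometimes be reduced,
# and out-of-control items are simply acknowledged.
actionable = [reason for reason, kind in premortem if kind == "decision"]
print(actionable)  # ['donuts in the break room every day']
```

Having each group member write their own narrative individually before comparing, as Annie suggests, also avoids the pile-on effect of doing this in a room together.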
0:44:51 I love what you said because it’s like having two modes as a way of getting into these two
0:44:55 mindsets. Right. Where you’re not stopping people from feeling like they’re a team player. And I
0:44:59 think that that’s the issue. As you said, it’s like, don’t sit there and like, you know, crap on
0:45:04 my goal. Well, because what are they really saying? You’re not being a team player. So change the
0:45:09 rules of the game. You had this line in your book about how regret is unproductive. The issue is
0:45:14 that it comes after the fact, not before. So the one thing that I don’t want people to do is think
0:45:18 about how they feel right after the outcome. Because I think that then you’re going to overweight
0:45:25 regret. So you want to think about regret before you make the decision. You have to get it within
0:45:29 the right timeframe. What we want to do instead is, right in the moment of the outcome when you’re
0:45:34 feeling really sad, you can stop and say, am I going to care about this in a year? Think about
0:45:39 yourself as a happiness stock. And so if we can sort of get that more 10,000 foot view on our own
0:45:45 happiness and we think about ourselves as we’re investing in our own stock, our own happiness
0:45:50 stock, we can get to that regret question a lot better. You don’t need to improve that much
0:45:57 to get really big dividends. You make thousands of decisions a day. If you can get a little better
0:46:04 at this stuff, if you can just, you know, de-bias a little bit, think more probabilistically,
0:46:10 really sort of wrap your arms around uncertainty to free yourself up from sort of the emotional
0:46:15 impact of outcomes. A little bit is going to have such a huge effect on your future decision making.
0:46:19 Well, that’s amazing, Annie. Thank you so much for joining the a16z Podcast.
0:46:21 -Thank you very much. -Yes, thank you.

with @annieduke, @pmarca, and @smc90

Every organization, whether small or big, early or late stage — and every individual, whether for themselves or others — makes countless decisions every day, under conditions of uncertainty. The question is, are we allowing that uncertainty to bubble to the surface, and if so, how much and when? Where do consensus, transparency, forecasting, backcasting, pre-mortems, and heck, even regret, usefully come in?

Going beyond the typical discussion of focusing on process vs. outcomes and probabilistic thinking, this episode of the a16z Podcast features Thinking in Bets author Annie Duke — one of the top poker players in the world (and World Series of Poker champ), former psychology PhD, and founder of national decision education movement How I Decide — in conversation with Marc Andreessen and Sonal Chokshi. The episode covers everything from the role of narrative — hagiography or takedown? — to fighting (or embracing) evolution. How do we go from the bottom of the summit to the top of the summit to the entire landscape… and up, down, and opposite?

The first step to understanding what really slows innovation down is understanding good decision-making — because we have conflicting interests, and are sometimes even competing against future versions of ourselves (or of our organizations). And there’s a set of possible futures that result from not making a decision as well. So why feel both pessimistic AND optimistic about all this?
