AI transcript
0:00:02 (upbeat music)
0:00:07 Pushkin.
0:00:12 There are these moments when people make
0:00:17 huge technical advances and it happens all of a sudden.
0:00:20 Or at least it feels like it happens all of a sudden.
0:00:22 You know, this is happening right now,
0:00:26 most obviously with AI, with artificial intelligence.
0:00:28 It happened not too long ago with rockets
0:00:32 when SpaceX dramatically lowered the cost of getting to space.
0:00:36 Maybe it’s happening now with crypto.
0:00:39 I’d say it’s probably too soon to say on that one.
0:00:42 In any case, you can look at technological breakthroughs
0:00:44 in different fields and at different times.
0:00:48 And you can ask, what can we learn from these?
0:00:51 You can ask, can we abstract certain, you know,
0:00:54 certain qualities, certain tendencies
0:00:56 that seem to drive toward these bursts
0:00:58 of technological progress?
0:01:03 There’s a recent book called Boom that asks this question.
0:01:05 And it comes up with an interesting answer.
0:01:08 According to the book, one thing that’s really helpful,
0:01:13 if you wanna make a wild technological leap, is a bubble.
0:01:16 (upbeat music)
0:01:21 I’m Jacob Goldstein and this is What’s Your Problem.
0:01:24 My guest today is Byrne Hobart.
0:01:26 He’s the author of a finance newsletter
0:01:28 called The Diff, and he’s also the co-author
0:01:32 of a book called Boom: Bubbles and the End of Stagnation.
0:01:35 When Byrne talks about bubbles,
0:01:37 he isn’t just talking about financial bubbles
0:01:40 where investors drive prices through the roof.
0:01:44 When he says bubble, largely he means social bubbles,
0:01:46 filter bubbles, little groups of people
0:01:49 who share some wild belief.
0:01:51 He really gets at what he means in this one sentence
0:01:54 where he and his co-author write, quote,
0:01:58 “Transformative progress arises from small groups
0:02:01 “with a unified vision, vast funding,
0:02:05 “and surprisingly poor accountability,” end quote,
0:02:07 basically living the dream.
0:02:10 Later in the conversation, Byrne and I discussed
0:02:14 the modern space industry and cryptocurrency and AI,
0:02:16 but to start, we talked about two case studies
0:02:19 from the US in the 20th century.
0:02:21 Byrne writes about them in the book, and he argues
0:02:24 that these two moments hold broader lessons
0:02:26 for how technological progress works.
0:02:28 The two case studies we talk about
0:02:30 are the Manhattan Project and the Apollo Missions.
0:02:34 So let’s start with the Manhattan Project.
0:02:37 And maybe one place to start it is with this famous
0:02:42 1939 letter from Albert Einstein and other scientists
0:02:46 to FDR, the president, about the possibility
0:02:48 of the Nazis building an atomic bomb.
0:02:53 – Right, so that letter, it feels like good material
0:02:55 for maybe not a complete musical comedy,
0:02:56 but at least an act of a musical comedy
0:02:57 because the whole–
0:03:01 – It’s kind of a Springtime for Hitler in Germany-ish.
0:03:02 – There is this whole thing
0:03:04 where you have this brilliant physicist,
0:03:08 but he is just kind of the stereotypical professor,
0:03:09 crazy hair, very absent-minded,
0:03:12 always talking about these things where no one,
0:03:14 no normal person can really understand
0:03:15 what he’s talking about.
0:03:19 And suddenly, instead of talking about spacetime
0:03:22 and the relationship between energy and matter,
0:03:24 suddenly he’s saying someone could build
0:03:25 a really, really big bomb
0:03:28 and that person will probably be a German.
0:03:31 And that has some very bad implications
0:03:31 for the rest of the world.
0:03:34 – So now here we are, the president decides,
0:03:35 okay, we need to build a bomb,
0:03:38 we need to spend a wild amount of money on it.
0:03:40 And this is a thing that you describe as a bubble,
0:03:41 which is interesting, right?
0:03:44 ‘Cause it’s not a bubble in the sense of market prices,
0:03:45 it’s the federal government and the military,
0:03:49 but it has these other bubble-like characteristics
0:03:51 in your telling, right?
0:03:53 Maybe other meanings of bubble,
0:03:56 the way we talk about a social bubble or a filter bubble.
0:03:56 Tell me about that.
0:04:00 Like why is the Manhattan Project a kind of bubble?
0:04:01 – Why is it a bubble?
0:04:02 Because there’s that feedback loop,
0:04:06 because people take the idea of actually building the bomb
0:04:09 more seriously as other people take it more seriously.
0:04:12 And the more that you have people like Oppenheimer
0:04:16 actually dedicating time to the project,
0:04:17 the more other people think the project
0:04:19 will actually happen, this is actually worth doing.
0:04:20 So you have this group of people
0:04:24 who start taking the idea of building a bomb more seriously.
0:04:27 They treat it as a thing that will actually happen
0:04:29 rather than a thing that is hypothetically possible
0:04:31 if this particular equation is right,
0:04:32 if these measurements are right, et cetera.
0:04:34 And then they start actually designing them.
0:04:37 – The Manhattan Project seems like this sort of point
0:04:40 that gets a bunch of really smart people to coalesce
0:04:42 in one place on one project at one time, right?
0:04:45 It sort of solves the coordination problem.
0:04:49 The way, you might say, AI today is doing that.
0:04:52 Like just brilliant people suddenly are all in one place
0:04:53 working on the same thing in a way
0:04:56 that they absolutely would not otherwise be.
0:04:57 – That is true.
0:05:00 And this was both within the US academic community
0:05:01 and then within the global academic community.
0:05:04 ‘Cause you had a lot of people who were in Central Europe
0:05:06 or Eastern Europe who realized that that is just
0:05:08 not a great place for them to be
0:05:13 and tried to get to the UK, US, other allied countries
0:05:15 as quickly as possible.
0:05:18 And so there was just this massive intellectual dividend
0:05:21 of a lot of the most brilliant people in Germany
0:05:25 and in Eastern Europe and in Hungary, et cetera.
0:05:29 They were all fleeing and all ended up in the same country.
0:05:32 So yeah, you have just this serendipity machine
0:05:35 where it was just, if you were a physicist,
0:05:37 it was an incredible place for just overhearing
0:05:40 really novel ideas and putting those ideas,
0:05:42 putting your own ideas to the test
0:05:44 because you had all the smartest people in the world
0:05:48 pretty much in this one little town in New Mexico.
0:05:52 – Right, so the Los Alamos piece is the famous part.
0:05:54 It’s the part one has heard of
0:05:57 with respect to the Manhattan Project.
0:05:59 There’s a less famous part that’s really interesting
0:06:01 and that also seems to hold some broader lessons as well.
0:06:06 And that is the kind of basically the manufacturing part.
0:06:09 How do we enrich enough uranium to build a bomb
0:06:11 if the physicists figure out how to design it?
0:06:14 Talk about that piece and the lessons there.
0:06:16 – Yes, so that one, you’re right.
0:06:19 It is often under-emphasized in the history.
0:06:21 It was more of an engineering project
0:06:22 than a research project.
0:06:23 Though there was a lot of research involved.
0:06:26 The purpose was to get enough enriched uranium,
0:06:30 the isotope that is actually prone
0:06:33 to these chain reactions.
0:06:38 Get it isolated and then be able to incorporate that
0:06:39 into a bomb.
0:06:41 They were also working on other physical materials
0:06:45 because there were multiple plausible bomb designs.
0:06:47 Some use different triggering mechanisms.
0:06:48 Some use different materials.
0:06:51 And there were also multiple plausible ways
0:06:54 to enrich enough of the fissile material
0:06:55 to actually build a bomb.
0:06:59 And so one version of the story is you just go down the list
0:07:01 and you pick the one that you think
0:07:03 is the most cost effective, most likely.
0:07:07 And so we choose one way to get just U-235
0:07:09 and we have one way to build a bomb.
0:07:11 – U-235 is the enriched uranium.
0:07:12 – Yes.
0:07:14 – And that, by the way, is the way normal businesses
0:07:16 do things in normal times.
0:07:18 You’re like, well, we got to do this really expensive thing.
0:07:19 We got to build a factory
0:07:20 and we don’t even know if it’s going to work.
0:07:23 Let’s choose the version that’s most likely to work.
0:07:25 Like that is the kind of standard move, right?
0:07:26 Yeah.
0:07:26 – Right.
0:07:29 And then the problem though is that if you try that
0:07:32 and you just got unlucky,
0:07:33 you picked the wrong bomb design
0:07:34 with the right fissile material,
0:07:36 or the right material with the wrong bomb design,
0:07:39 you’ve done a lot of work which has zero payoff.
0:07:40 – And you’ve lost time, right?
0:07:43 Like crucially there is a huge sense of urgency
0:07:45 present at this moment
0:07:48 that is driving the whole thing really.
0:07:49 – Right.
0:07:51 We could also do more than one of them in parallel
0:07:52 and that is what we did.
0:07:53 And on the manufacturing side
0:07:55 that was actually just murderously expensive.
0:07:57 If you are building a factory
0:07:59 and you build the wrong kind of factory
0:08:03 then you’ve wasted a lot of money and effort and time.
0:08:05 So they did more than one.
0:08:07 They did several different processes
0:08:10 for enriching uranium, and for producing plutonium.
0:08:11 – All at the same time, right?
0:08:13 And they knew they weren’t going to use all of them.
0:08:15 They just didn’t know which one was going to work.
0:08:18 So it was like, well, let’s try all of them at the same time
0:08:19 and hopefully one of them will work.
0:08:20 – Yes.
0:08:22 – Like that is super bubbly, right?
0:08:23 That is wild and expensive.
0:08:28 That is just throwing wild amounts of money at something
0:08:29 in a great amount of haste.
0:08:31 – Yes, yeah.
0:08:33 And so if you believe
0:08:35 that there is this pretty linear payoff,
0:08:37 then every additional investment you make
0:08:39 doesn’t qualitatively change things.
0:08:41 It just means you’re doing a little bit more of it.
0:08:43 But if you believe there’s some kind of nonlinear payoff
0:08:47 where either this facility basically doesn’t work at all
0:08:49 or it works really, really well,
0:08:51 then when you diversify a little bit,
0:08:54 you do actually get just this better risk adjusted return
0:08:57 even though you’re objectively taking more risk.
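The parallel-bets logic here can be sketched numerically. (A minimal illustration, not from the conversation: the 40% success probability and the assumption that the processes succeed or fail independently are both made up.)

```python
# Illustrative sketch of the parallel-bets argument (numbers are
# assumptions, not historical figures). If each process independently
# succeeds with probability p, running n of them in parallel succeeds
# with probability 1 - (1 - p)^n, even though most of the spending
# looks "wasted" in hindsight.

def p_at_least_one(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts succeeds."""
    return 1 - (1 - p) ** n

# One enrichment process with a 40% chance of working, vs. three in parallel.
print(p_at_least_one(0.4, 1))            # 0.4
print(round(p_at_least_one(0.4, 3), 3))  # 0.784
```

Under a nonlinear payoff (a working bomb vs. nothing), nearly doubling the chance of success can be worth tripling the spend.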
0:08:58 – Interesting, right.
0:09:01 So in this instance, it’s if the Nazis have the bomb
0:09:04 before we do, it’s the end of the world as we know it.
0:09:05 – Yes.
0:09:06 – And so we better take a lot of risk
0:09:08 and that’s actually rational.
0:09:12 – It reminds me a little bit of aspects
0:09:13 of Operation Warp Speed.
0:09:17 I remember talking to Susan Athey, the Stanford economist,
0:09:19 early in the pandemic who was making the case
0:09:22 to do exactly this with vaccine manufacturing
0:09:24 in like early 2020.
0:09:26 We didn’t know if any vaccine was gonna work
0:09:28 and it takes a long time to build a factory
0:09:30 to make a vaccine basically or to tailor a factory.
0:09:32 And she was like, just make a bunch of factories
0:09:34 to make vaccines because if one of them works
0:09:37 we wanna be able to start working on it that day.
0:09:39 Like that seems quite similar to this.
0:09:40 And it worked.
0:09:42 – Yeah, yeah, I think that’s absolutely true.
0:09:45 That you, you know, the higher the stakes are,
0:09:47 the more you wanna be running everything
0:09:49 that can plausibly help in parallel.
0:09:52 And depending on the exact nature of what you’re doing,
0:09:54 there can be some spillover effects.
0:09:57 It’s possible that you build a factory
0:10:00 for manufacturing vaccine A and vaccine A doesn’t work out
0:10:02 but you can retrofit that factory
0:10:03 and start doing vaccine B.
0:10:05 And you know, there are little ways
0:10:06 to shuffle things around a bit.
0:10:10 But you often wanna go into this basically telling yourself,
0:10:13 if we didn’t waste money and we still got a good outcome,
0:10:14 it’s because we got very, very lucky
0:10:16 and that we only know we’re being serious
0:10:17 if we did in fact waste a lot of money.
0:10:21 And I think that kind of inverting your view of risk
0:10:22 is often a really good way to think
0:10:25 about these big transformative changes.
0:10:26 And this is actually another case
0:10:29 where the financial metaphors do give useful information
0:10:31 about just real world behaviors
0:10:34 because at hedge funds, this is actually something
0:10:36 that risk teams will sometimes tell portfolio managers
0:10:38 is you are making money on too high a percentage
0:10:39 of your trades.
0:10:41 This means that you are not making all the trades
0:10:43 that you could, and if you took
0:10:47 your hit rate from 55% down to 53%,
0:10:49 we’d be able to allocate more capital to you
0:10:50 even though you’d be annoyed
0:10:52 that you were losing money on more trades.
0:10:54 – Interesting, because overall,
0:10:57 you would likely have a more profitable outcome
0:10:59 by taking bigger risks and incurring a few more losses
0:11:02 but your wins would be bigger and make up for the losses.
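That trade-off can be made concrete with a tiny expected-value sketch (all numbers here are invented for illustration, including the simplifying assumption of symmetric one-unit wins and losses):

```python
# Hypothetical numbers, for illustration only: expected profit of a
# book of trades where each winner makes `win` and each loser costs `loss`.

def expected_pnl(n_trades: int, hit_rate: float, win: float, loss: float) -> float:
    """Expected total profit across n_trades trades."""
    return n_trades * (hit_rate * win - (1 - hit_rate) * loss)

# Selective book: 100 trades at a 55% hit rate.
selective = expected_pnl(100, 0.55, 1.0, 1.0)
# Bigger book: 200 trades at a 53% hit rate, because marginal trades
# are taken too. More losing trades in absolute terms, higher total profit.
bigger = expected_pnl(200, 0.53, 1.0, 1.0)
print(round(selective, 2), round(bigger, 2))  # 10.0 12.0
```

The bigger book loses on more trades yet has higher expected profit, which is the risk team's point.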
0:11:03 – Yes, and this kind of thinking, you know,
0:11:05 it’s very easy if you’re the one sitting behind the desk
0:11:07 just talking about these relative trade offs.
0:11:09 It’s a lot harder if you are the first person
0:11:11 working with uranium in the factory
0:11:13 and we don’t quite know what the risks of that are
0:11:15 but it is just a generally true thing
0:11:18 about trade-offs and risk
0:11:20 that there is an optimal amount of risk to take,
0:11:23 and that optimal amount is sometimes dependent
0:11:26 on what the downside risk of inaction is.
0:11:29 And so sometimes if you’re too successful,
0:11:32 you realize that you are actually messing something up.
0:11:33 – Yeah, you’re not taking enough risk.
0:11:37 So we all know how the Manhattan project ends, it worked.
0:11:42 I mean, it is a little bit of a weird one to start with,
0:11:45 you know, the basic ideas like technological progress
0:11:46 is good, risks are good.
0:11:47 And we’re talking about building the atomic bomb
0:11:49 and dropping it on two cities.
0:11:53 And it’s, you know, it’s morally a much easier question
0:11:55 if you think it’s the Nazis, sorry,
0:11:56 but the Nazis are absolutely the worst
0:11:59 and I definitely don’t want them to have a bomb first.
0:12:01 You know, there is the argument
0:12:02 that more people would have died
0:12:05 in a conventional invasion without the bomb.
0:12:07 I don’t know.
0:12:11 I mean, how do you, what do you make of it?
0:12:13 Like, obviously the book is very pro technological progress.
0:12:16 This show is basically pro technological progress.
0:12:19 But like the bomb isn’t a happy one to start on.
0:12:21 Like, what do you make of it ultimately?
0:12:23 – Yeah, it’s one of those things where
0:12:27 it does make me wish that we could run the same simulation,
0:12:28 you know, a couple of million times
0:12:30 and see what the net lives
0:12:32 lost and saved are in different scenarios.
0:12:36 But with the bomb,
0:12:41 I guess from like a purely utilitarian standpoint,
0:12:44 I suspect that there have been net lives saved
0:12:47 because of less use of coal for electricity generation,
0:12:49 more use of nuclear power.
0:12:51 And that is directly downstream of the bomb
0:12:53 that you can build these, you know,
0:12:57 by design uncontrollable releases of atomic energy.
0:12:59 You can also build more controllable ones.
0:13:02 Without the bomb, getting the funding for that would have been a lot harder.
0:13:05 – And presumably we got nuclear power much sooner
0:13:06 than we otherwise would have
0:13:09 because of the incredibly rapid progress
0:13:12 of the Manhattan Project, that’s the case there.
0:13:12 Fair.
0:13:15 – Which, I don’t know, you know,
0:13:19 if you let me push the button where I drop an atomic bomb
0:13:22 on a civilian population in exchange
0:13:25 for fewer people dying of respiratory diseases
0:13:27 over the next couple of decades, you know,
0:13:29 I would have to give it a lot of thought.
0:13:32 – I don’t, I’m not gonna, I’m not gonna push that button,
0:13:34 but I’m never gonna have a job where I have to decide
0:13:35 ’cause I can’t deal.
0:13:39 Okay, a thing you mentioned in the book,
0:13:41 kind of in passing that was really interesting
0:13:46 and surprising to me was that nuclear power today
0:13:51 accounts for 18% of electric power generation in the US.
0:13:54 18%, like that is so much higher than I would have thought
0:13:57 given sort of how little you hear
0:13:59 about existing nuclear power plants, right?
0:14:01 Like that is a lot.
0:14:05 – Yeah, yeah, it is a surprisingly high number,
0:14:09 but also nuclear power, it is one of the most annoying
0:14:12 technologies to talk about in the sense that
0:14:14 it doesn’t do anything really, really exciting
0:14:17 other than provide essentially unlimited power
0:14:19 with minimal risk.
0:14:24 – And some amount of, some amount of scary tail risk,
0:14:25 right? – Yes.
0:14:27 – Like, I mean, that is what is actually interesting
0:14:30 to talk about, sort of unfortunately for the world
0:14:32 given that it has a lot of benefits.
0:14:34 There is this tail risk and once in a while
0:14:36 something goes horribly wrong.
0:14:39 Even though on the whole, it seems to be clearly less risky
0:14:41 than say a coal-fired power plant.
0:14:46 – Right, and the industry is aware of those risks,
0:14:49 and nobody wants to be responsible for that kind of thing
0:14:51 and nobody wants to be testifying before Congress
0:14:54 about ever having cut any corner whatsoever
0:14:56 in the event that a disaster happens.
0:14:59 So they do actually take that incredibly seriously
0:15:03 and so nuclear power does end up being in practice
0:15:05 much safer than other power sources.
0:15:07 And then you add in the externality
0:15:09 of doesn’t really produce emissions
0:15:13 and uranium exists in some quantities just about everywhere.
0:15:16 – Yeah, no climate change, no local air pollution
0:15:19 has a lot going for it, always on.
0:15:22 Okay, let’s go to the moon.
0:15:26 So you write also about the Apollo missions,
0:15:28 US going to the moon.
0:15:32 It’s the early ’60s, was it ’61.
0:15:35 Kennedy says we’re gonna go to the moon
0:15:36 by the end of the decade.
0:15:39 There’s the Cold War context.
0:15:41 Kennedy announces this goal.
0:15:45 What’s the response in the US when Kennedy says this?
0:15:46 – Yeah, so at first people are
0:15:50 somewhat hypothetically excited, but as they start
0:15:52 realizing how much it will cost,
0:15:54 they go from not especially excited
0:15:57 to actually pretty deeply opposed.
0:15:59 And this shows up in the fact that
0:16:01 someone coined the term moondoggle.
0:16:03 – Yeah, moondoggle, I loved moondoggle,
0:16:05 I learned that from the book.
0:16:08 It was Norbert Wiener, like a famous technologist,
0:16:11 not a crank, right?
0:16:12 Somebody who knew what he was talking about
0:16:16 was like, this is a crazy idea, it’s a moondoggle.
0:16:17 – Right, and this really worked its way
0:16:18 into popular culture.
0:16:23 If you go on Spotify and listen to the Tom Lehrer song,
0:16:26 Wernher von Braun, the recording that Spotify has,
0:16:28 it opens with a monologue that is talking about
0:16:30 how stupid the idea of the Apollo program is.
0:16:35 It’s, and this is again someone who is in academia,
0:16:37 who’s a very, very sharp guy,
0:16:40 and who just feels like he completely sees
0:16:43 through this political giveaway program
0:16:45 to big defense contractors
0:16:47 and knows that there’s no point in doing this.
0:16:51 – You write that NASA’s own analysis
0:16:55 found a 90% chance of failing
0:16:57 to reach the moon by the end of the decade.
0:17:00 Like it wasn’t just outside people being critical,
0:17:03 it was NASA itself that didn’t think it was going to work.
0:17:06 There’s a phrase you use in the book
0:17:09 to talk about these sort of bubble-like environments
0:17:10 that are of interest to you.
0:17:12 And I found it really interesting,
0:17:14 and I think we can talk about it
0:17:15 in the context of Apollo.
0:17:18 That phrase is definite optimism.
0:17:19 Tell me about that phrase.
0:17:21 – Yes, so definite optimism is the view
0:17:24 that the future can and will be better
0:17:27 in some very specific way.
0:17:30 That there will be, there is something we cannot do now,
0:17:31 we will be able to do it in the future,
0:17:33 and it will be good that we can do it.
0:17:34 – And why is it important?
0:17:38 Like it’s a big deal in your telling in an interesting way.
0:17:41 Why is it so important?
0:17:43 – It’s important because that is what allows you
0:17:45 to actually marshal those resources,
0:17:49 whether those are the people or the capital
0:17:51 or the political pull to put them
0:17:53 all in some specific direction
0:17:54 and say, we’re going to build this thing,
0:17:56 so we need to actually go step by step
0:17:59 and figure out what specific things have to be done,
0:18:01 what discoveries have to be made,
0:18:04 what laws have to be passed in order for this to happen.
0:18:06 And that is, so it’s definite optimism
0:18:07 in the sense that you’re saying
0:18:09 there is a specific thing we’re going to build.
0:18:11 It’s the kind of thing that can keep you going
0:18:13 when you encounter temporary setbacks.
0:18:15 And that’s where the optimism part comes in,
0:18:20 because if you have a less definitely optimistic view
0:18:22 about that project, you might say the goal
0:18:24 of the Apollo program is to figure out
0:18:27 if we can put a person on the moon.
0:18:29 But I think what that leaves you open to
0:18:31 is the temptation to give up at any point.
0:18:35 ‘Cause at any point you can have a botched launch
0:18:39 or an accident or you’re designing some component
0:18:41 and the math just doesn’t pencil out
0:18:43 you need, you know, it’s going to weigh too much
0:18:46 to actually make it on track.
0:18:47 And you could say, okay, well that’s how we figured out
0:18:48 that we’re not actually doing this.
0:18:52 But if you do just have this kind of delusional view
0:18:54 that know if there’s a mistake,
0:18:55 it’s a mistake in my analysis,
0:18:58 not in the ultimate plan here,
0:18:59 and that it is physically possible,
0:19:01 we just have to figure out all the details,
0:19:03 then I think that does set up a different kind of motivation
0:19:06 because at that point, you can view every mistake
0:19:09 as just exhausting the set of possibilities
0:19:10 and letting you narrow things down
0:19:12 to what is the correct approach.
0:19:14 What you sort of needed was this
0:19:16 very localized definite optimism
0:19:20 where you could imagine a researcher thinking to themselves
0:19:23 or an engineer or someone throughout the project
0:19:25 thinking to themselves that, okay,
0:19:28 this will probably not work overall.
0:19:30 But the specific thing I’m working on,
0:19:31 whether it is designing a space suit
0:19:33 or designing this rocket
0:19:35 or programming the guidance computer
0:19:39 that, one, I could tell that my part is actually going to work,
0:19:40 or at least I believe that I can make it work.
0:19:42 And two, this is my only chance
0:19:43 to work with these really cool toys.
0:19:46 So if the money is going to be wasted at some point,
0:19:48 let that money be wasted on me.
0:19:51 And I think that that kind of attitude of just,
0:19:52 you know that you have one shot
0:19:54 to actually do something really interesting,
0:19:56 you will not get a second chance.
0:19:58 If everyone believes that it does become
0:19:59 a coordinating mechanism
0:20:00 where now they’re all working extremely hard,
0:20:04 they all recognize that the success of what they’re doing
0:20:05 is very much up to them.
0:20:07 And then that ends up contributing
0:20:08 to this group’s success.
0:20:13 So it’s like this, if I’m going to do this,
0:20:14 I got to do it now.
0:20:15 Everybody’s doing it now.
0:20:16 We got the money now.
0:20:18 This is our one shot.
0:20:19 We better get it right.
0:20:21 We better do everything we can to make it work.
0:20:23 – Yes, fear of missing out.
0:20:25 – Yeah, FOMO, right.
0:20:25 So FOMO, it’s funny.
0:20:29 People talk about that as like a dumb investment thesis,
0:20:29 basically, right?
0:20:31 It’s like a meme stock idea,
0:20:36 but you talk about it in these more interesting contexts,
0:20:36 basically, right?
0:20:38 More meaningful, I would say.
0:20:43 – Yeah, so in the purely straightforward way,
0:20:45 the idea is there are sometimes
0:20:47 these very time-limited opportunities to do something.
0:20:49 And if you’re capable of doing that thing,
0:20:50 this may be your only chance.
0:20:51 And so missing out is actually something
0:20:53 you should be afraid of.
0:20:56 So if you actually have a really clever idea
0:20:58 for an AI company,
0:20:59 this is actually a time where you can at least
0:21:00 make the attempt.
0:21:03 So yeah, we do argue that missing out
0:21:04 is something you should absolutely fear.
0:21:06 – So what happens with the Apollo project?
0:21:10 Just in brief, like talking about just how big it is
0:21:13 and how risky it is, like it’s striking, right?
0:21:14 – Right, yeah.
0:21:16 So the expenses were running
0:21:20 at like a low single-digit percentage of GDP for a while.
0:21:23 So a couple percent of the value of everything
0:21:25 everybody in the country does
0:21:27 is going into the Apollo mission.
0:21:31 Just this one plainly unnecessary thing
0:21:33 that the government has decided to do.
0:21:35 – Right, and this is one of the cases
0:21:37 where there was a very powerful spillover effect
0:21:41 because the Apollo guidance computer
0:21:44 needed the most lightweight and least power consuming
0:21:47 and most reliable components possible.
0:21:49 And if you were building a computer conventionally
0:21:51 at that time and you had a budget,
0:21:53 you would probably build it out of vacuum tubes.
0:21:55 And you knew that the vacuum tubes,
0:21:56 they’re bulky, they consume a lot of power,
0:21:57 they throw off a lot of heat,
0:21:59 they burn out all the time,
0:22:01 but they are fairly cheap.
0:22:06 But in this case, there was an alternative technology.
0:22:08 It was extremely expensive,
0:22:12 but it was lightweight, didn’t use a lot of power
0:22:13 and did not have moving parts.
0:22:15 And that’s the integrated circuit.
0:22:17 So transistor-based computing.
0:22:19 – The chip, well, we know today as the chip.
0:22:21 – Yes, the chip.
0:22:23 – You write that in 1963,
0:22:28 NASA bought 60% of the chips made in the United States.
0:22:32 Just NASA, not the whole government, just NASA, 60%.
0:22:33 – They actually bought more chips than they needed
0:22:37 because they recognized that the chip companies
0:22:41 were run by very, very nice electrical engineering nerds
0:22:43 who just love designing tiny, tiny things
0:22:46 and that these people just don’t know how to run a business.
0:22:49 And so they were worried that Fairchild Semiconductor
0:22:51 would just run out of cash at some point.
0:22:54 And then NASA would have half of a computer
0:22:56 and no way to build the rest of it.
0:22:57 So they actually over-ordered.
0:23:00 They used integrated circuits for a few applications
0:23:02 that actually were not so dependent
0:23:04 on power consumption and weight and things.
0:23:06 So that critique of the Apollo program
0:23:07 was directionally correct.
0:23:10 It was money being splashed out to defense contractors
0:23:11 who were favored by the government.
0:23:12 But in this case, it was being done
0:23:15 in a more strategic and thoughtful way
0:23:17 and kind of kept the industry going.
0:23:21 – So you talk a fair bit in the book
0:23:26 about the sort of religious and quasi-religious aspects
0:23:28 of these little groups of people
0:23:30 that come together in these bubble-like moments
0:23:31 to do these big things.
0:23:35 And that’s really present in the Apollo section.
0:23:39 Like talk about the sort of religious ideas
0:23:41 associated with the Apollo mission
0:23:43 that the people working on the mission had.
0:23:46 – Yeah, I mean, you name it after a Greek God
0:23:49 and you’re already starting a little bit religious.
0:23:54 So there were people who worked on these missions
0:23:56 who felt like this is part of mankind’s destiny,
0:23:58 is to explore the stars
0:23:59 and that there’s this whole universe
0:24:01 that is a universe created by God.
0:24:02 And it would be kind of weird.
0:24:04 We can’t second-guess the divine,
0:24:06 but it’s a little weird for God to create
0:24:07 all of these astronomical bodies
0:24:09 that just kind of look good from the ground
0:24:11 and that you’re not actually meant to go visit.
0:24:13 – You talk about somewhat similar things
0:24:18 in other kind of less obviously spiritual dimensions
0:24:20 of people coming together
0:24:24 and having it kind of more than rational.
0:24:27 You use this word thymos from the Greek meaning spirit.
0:24:29 Like what’s going on there more broadly?
0:24:31 Why is that important more generally
0:24:33 for technological progress?
0:24:38 – Because thymos is part of this tripartite model
0:24:42 of the soul, where you have your appetites and your reason
0:24:45 and then your thymos, your longing for glory
0:24:50 and honor and this kind of transcendent achievement.
0:24:54 And logos, reasoning, only gets you so far.
0:24:55 You can reason your way
0:24:57 into some pretty interesting things,
0:24:58 but at some point you do decide
0:25:01 that the reasonable thing is probably
0:25:03 to take it a little bit easy
0:25:06 and not take certain risks.
0:25:10 And it still is just this pursuit of something greater
0:25:13 and something beyond the ordinary,
0:25:14 something really beyond the logos, right?
0:25:16 Like beyond what you could get
0:25:19 to just by reasoning one step at a time.
0:25:24 And I think that that is just a deeply attractive proposition
0:25:26 to many people.
0:25:29 And it’s also a scary one because at that point,
0:25:31 if you’re doing things that are beyond
0:25:34 what is the rational thing to do,
0:25:35 then of course you have no rational explanation
0:25:38 for what you did wrong if you mess up.
0:25:40 And you are sort of betting
0:25:41 on some historical contingencies.
0:25:43 – That’s the definite optimism part, right?
0:25:45 – Betting on historical contingencies
0:25:48 is another way of saying definite optimism, right?
0:25:52 – So, back to the moon.
0:25:55 So we get to the moon, in fact, against all odds,
0:25:57 we make it.
0:26:01 And there’s this moment where it’s like,
0:26:05 today the moon, tomorrow, the solar system.
0:26:07 But in fact, it was today the moon,
0:26:09 tomorrow, not even the moon.
0:26:10 – Right.
0:26:11 – Like what happened?
0:26:15 – Well, you had asked about what these mega projects
0:26:16 have in common with financial bubbles.
0:26:17 And one of the things they have in common is,
0:26:19 sometimes there’s a bust.
0:26:21 And sometimes that bust is actually an overreaction
0:26:23 in the opposite direction.
0:26:28 And people take everything they believed
0:26:33 in, say, 1969 about humanity’s future in the stars,
0:26:35 and they say, okay, this is exactly the opposite
0:26:37 of where things will actually go,
0:26:38 and the exact opposite of what we should care about,
0:26:40 that we have plenty of problems here on Earth,
0:26:43 and why would we, do we really wanna turn Mars
0:26:46 into just another planet that also has problems
0:26:49 of racism and poverty and nuclear war and all that stuff?
0:26:53 So maybe we should stay home and fix our own stuff.
0:26:53 In public policy, you’d actually need for there
0:26:58 to be some kind of resurgence in belief in space.
0:26:59 You need some kind of charismatic story.
0:27:02 And perhaps to an extent, we have that right now.
0:27:03 – Yes.
0:27:05 – Maybe Elon’s not the perfect front man for all of this,
0:27:07 but he is certainly someone who demonstrates
0:27:10 that space travel, it can be done, it can be improved,
0:27:13 and that it’s just objectively cool.
0:27:16 That it is just hard to watch a SpaceX launch video
0:27:18 and not feel something.
0:27:21 – Yes, so good.
0:27:26 I wanna talk more about space in a minute.
0:27:28 So it’s interesting, these two stories
0:27:30 that are kind of in the middle of your book,
0:27:31 they’re kind of the core of the book, right?
0:27:34 These two interesting moments, these non-financial bubbles,
0:27:37 when you have this incredible technological innovation
0:27:40 in a short amount of time, an unrealistically fast,
0:27:44 impressive outcome.
0:27:48 And they’re both pure government projects.
0:27:51 They’re both command and control economy.
0:27:55 It is not the private sector, it is not capitalism.
0:27:57 What do you make of that?
0:28:00 – I would say there’s a very strong indirect link
0:28:02 for a couple of reasons.
0:28:07 One is just the practical kind of reason
0:28:08 that personnel is policy,
0:28:13 and that in the 1930s
0:28:14 the US government was hiring
0:28:16 and the private sector mostly wasn’t.
0:28:18 And so basically all the ambitious people
0:28:20 in the country
0:28:21 tried to get government jobs.
0:28:23 And that is usually not the case.
0:28:25 And there are certainly circumstances
0:28:27 where that’s a really bad sign.
0:28:28 But in this case, it was great.
0:28:30 It meant that there were a lot of New Deal projects
0:28:31 that were staffed by the people
0:28:34 who would have been rising up the ranks at RCA
0:28:36 or General Electric or something a decade earlier.
0:28:38 Now they’re running New Deal projects instead.
0:28:40 And they’re again, rising up the ranks really fast,
0:28:42 having a very large real-world impact
0:28:43 very early in their careers.
0:28:47 And those people had been working together for a while
0:28:48 and they knew each other.
0:28:49 There was a lot of just institutional knowledge
0:28:52 about how to get big things done
0:28:53 within the US government.
0:28:55 And a lot of that institutional knowledge
0:28:56 could then be redirected.
0:28:59 So you have the New Deal and then the war effort.
0:29:00 And then you have this post-war economy
0:29:02 where there’s still, it takes a while
0:29:04 for the government to fully relax its control.
0:29:07 And then very soon we’re into the Korean War.
0:29:11 So, yeah, there was just a large increase in state capacity
0:29:13 and just in the quality of people making decisions
0:29:15 within the US government in that period.
0:29:20 – We’ll be back in a minute
0:29:23 to talk about bubble-esque things happening right now.
0:29:27 Namely, rockets, cryptocurrency, and AI.
0:29:30 (upbeat music)
0:29:37 – Okay, now to space today.
0:29:41 Byrne and I talked about SpaceX in particular
0:29:42 because, you know, it really is the company
0:29:45 that launched the modern space industry.
0:29:48 And there’s this one key trait that SpaceX shares
0:29:51 with the other projects Byrne wrote about in the book.
0:29:55 It brought together people who share a wild dream.
0:29:56 If you go to work at SpaceX,
0:29:58 it’s probably because you believe
0:30:01 in getting humanity to Mars.
0:30:03 – Yeah, yeah, it’s not just that you believe in the dream,
0:30:06 but when you get the job, you’re suddenly in an environment
0:30:07 where everyone believes in the dream.
0:30:09 And if you’re working in one of those organizations,
0:30:11 you’re probably not working nine to five,
0:30:15 which means you have very few hours in your day or week
0:30:17 where you’re not completely surrounded by people
0:30:21 who believe that people will be living on Mars
0:30:24 and that this is the organization that will make it happen.
0:30:26 And that just has to really mess with your mind.
0:30:30 Like, what is normal to an engineer
0:30:32 working at SpaceX in 2006 is completely abnormal
0:30:35 to 99.9% of the human population.
0:30:37 And, you know, most of the exceptions
0:30:38 are like six-year-old boys
0:30:40 who just watched Star Wars for the first time,
0:30:41 go Mars, it’s crazy.
0:30:43 – Yeah, I mean, really, as I went through the book,
0:30:45 I was like, oh, really the bubble you’re talking about
0:30:48 is a social bubble, like the meaningful bubble.
0:30:50 Like maybe there’s a financial bubble attached to it.
0:30:52 Maybe there isn’t, but what really matters
0:30:54 is you’re in this weird little social bubble
0:30:58 that believes some wild thing together
0:30:59 that believes it is not wild,
0:31:00 that believes it is gonna happen.
0:31:02 Like, that’s the thing.
0:31:03 – Yeah.
0:31:05 – And has money, and has the money
0:31:07 to act on their wild belief.
0:31:08 – Yes.
0:31:10 And so, you know, getting the money
0:31:13 does mean interacting with the normie sphere,
0:31:15 interacting with people who don’t quite buy into all of it,
0:35:20 but when you have these really ambitious plans
0:31:21 and you’re taking them seriously,
0:31:23 you’re doing them step by step,
0:31:25 some of those steps do have other practical applications.
0:31:27 And so that is the basic story.
0:31:29 It was not just a straight shot,
0:31:32 we are going to invest all the money Elon got from PayPal
0:31:32 into going to Mars,
0:31:34 and hopefully we get to Mars before we run out.
0:31:35 – Yeah.
0:31:38 – It was, you know, we’re going to build these prototypes,
0:31:39 we’re going to build reusable rockets,
0:31:43 we’re going to use those for existing use cases,
0:31:45 and we will probably find new use cases.
0:31:47 And then once we get really, really good at launching things
0:31:50 cheaply, well, there are a lot of satellites out there,
0:31:52 and perhaps we should have some of our own.
0:31:53 And if we can do it at sufficient scale,
0:31:56 then maybe we can just throw a global communications network
0:31:59 up there in the sky and see what happens next.
0:32:02 So, yeah, that’s, you know, the intermediate steps,
0:32:05 each one, it’s basically taking the thymos, the spirited part,
0:32:08 you know, here’s our grand vision of the future,
0:32:09 and you know, here’s my destiny,
0:32:11 and I was put on Earth to do this and say,
0:32:13 okay, well, the next step is have enough money
0:32:14 to pay rent next month.
0:32:17 – Right, what do I gotta do tomorrow to get to Mars?
0:32:22 So, is there a space bubble right now?
0:32:25 – I think so, I think there is,
0:32:30 I think there are people who look at SpaceX and say,
0:32:32 this is achievable, and that more is achievable.
0:32:34 They also look at SpaceX and say,
0:32:36 this is a kind of infrastructure
0:32:40 that enables things like doing manufacturing in orbit,
0:32:41 or doing manufacturing on the moon,
0:32:44 where in some cases that is actually the best place
0:32:45 to build something.
0:32:48 – Basically, because SpaceX has driven down the cost
0:32:50 so much of getting stuff into orbit,
0:32:54 new ideas that would have been economically absurd
0:32:58 20 years ago, like manufacturing in space are now plausible.
0:33:00 And so this is sort of bubble building on itself.
0:33:03 And like, why is it not just an industry now?
0:33:05 Why is it a bubble in your telling?
0:33:10 – It is the feedback loop where what SpaceX does
0:33:11 makes more sense if they believe
0:33:12 that there will be a lot of demand
0:33:16 to move physical things off of Earth and into orbit,
0:33:18 and perhaps further out.
0:33:21 If they believe that there’s more demand for that,
0:33:23 they should be investing more in R&D,
0:33:25 they should be building bigger and better rockets,
0:33:29 and they should be doing the big fixed cost investment
0:33:31 that incrementally reduces the cost of launches,
0:33:33 and only pays for itself if you do a lot of them.
0:33:35 And then if they’re doing that,
0:33:37 and you have your dream of,
0:33:39 we’re going to manufacture drugs in space,
0:33:42 and the marginal cost is low,
0:33:44 once you get stuff up there.
0:33:45 Well, that dream is a little bit more plausible
0:33:47 if you can actually plot that curve
0:33:49 of how much does it cost to get a kilogram into space,
0:33:54 and say, there is a specific year at which point
0:33:56 we would actually have the cost advantage
0:33:57 versus terrestrial manufacturing.
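That crossover logic, plot the declining launch cost per kilogram and find the year it undercuts terrestrial manufacturing, is easy to sketch. A minimal illustration, where every number (starting cost, decline rate, orbital premium) is made up for the example and none are real SpaceX figures:

```python
# Hypothetical sketch: find the first year a declining launch cost per kg
# drops below the premium a space-made product could command.
# All numbers here are invented for illustration, not real figures.

def crossover_year(start_year, start_cost_per_kg, annual_decline, max_premium_per_kg):
    """Return the first year the launch cost falls to or below the premium."""
    year, cost = start_year, start_cost_per_kg
    while cost > max_premium_per_kg:
        cost *= (1.0 - annual_decline)  # cost shrinks by a fixed fraction each year
        year += 1
    return year

# e.g. $2,700/kg today, falling 15% a year, product worth an extra $500/kg made in orbit
print(crossover_year(2025, 2700.0, 0.15, 500.0))  # → 2036 with these made-up inputs
```

The point of the sketch is just that a compounding cost decline turns "someday" into a specific, plannable year, which is what makes the feedback loop investable.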
0:34:00 – So it’s this sort of coordinating mechanism
0:34:03 that like you also write about with Microsoft and Intel
0:34:04 in the like 80s, 90s, where it’s like,
0:34:06 oh, they’re building better chips,
0:34:08 so we’ll build better software.
0:34:10 And then because they’re building better software,
0:34:11 we’ll build better chips.
0:34:14 So this is like a more exciting version of that, right?
0:34:17 Because it’s going to get even cheaper
0:34:18 to send stuff to space.
0:34:22 We can build this crazy factory to exist in space.
0:34:23 And then that tells SpaceX,
0:34:27 oh, we can in fact keep building, keep innovating,
0:34:28 keep spending money.
0:34:32 – Yes, and so someone has to do just half of that first,
0:34:34 the half of that that makes no sense whatsoever.
0:34:36 – That was SpaceX at the beginning, right?
0:34:39 That was like just a guy with a lot of money
0:34:40 and a crazy dream.
0:34:42 – Yeah, it just really helps to have someone
0:34:44 who’s eccentric and has a lot of money
0:34:46 and is willing to throw it at a lot of different things.
0:34:49 Like Musk, he spent some substantial fraction
0:34:52 of his net worth right after this PayPal sale
0:34:54 on a really nice sports car.
0:34:57 And then immediately took it for a drive and wrecked it,
0:34:59 had no insurance and was not wearing a seatbelt.
0:35:02 So the Elon Musk story could have just been this proverb
0:35:05 about dot com excess and what happened
0:35:07 when you finally gave these people money
0:35:09 as they immediately bought sports cars and wrecked them.
0:35:12 Instead, it’s a story about a different kind of excess,
0:35:15 but it’s still, I guess what that illustrates
0:35:17 is the risk tolerance. – Risk-seeking, yes, yes.
0:35:22 – Yeah, there’s a risk level where you are going for a joy
0:35:25 ride in your $2 million car and you haven’t bothered
0:35:28 to fill out all the paperwork or buy the insurance,
0:35:29 and that is the risk tolerance
0:35:32 of someone who starts a company like SpaceX.
0:35:35 – Okay, enough about space.
0:35:38 Let’s talk about crypto, formerly known as cryptocurrency.
0:35:42 Let’s talk about Bitcoin, and let’s talk about Bitcoin,
0:35:44 especially at the beginning, right,
0:35:47 before it was number go up,
0:35:50 when it was, it really was true believers, right?
0:35:52 It was people who had a crazy worldview,
0:35:55 like you’re talking about in these other contexts.
0:35:59 – Yes, so we still don’t know for sure
0:36:01 who Satoshi Nakamoto was,
0:36:02 and I think everyone in crypto
0:36:05 has at least one guess, sometimes many guesses,
0:36:07 but whoever Satoshi was, whoever they were,
0:36:09 this is the creator of Bitcoin
0:36:11 for the one person who doesn’t know yet.
0:36:16 – They had this view that one of the fundamental problems
0:36:20 in the world today is that if you are going to transfer value
0:36:21 from one party to another,
0:36:23 you need some trusted intermediary.
0:36:25 – You need a trusted intermediary
0:36:28 like a government and a bank.
0:36:31 Typically, with money, the way it works
0:36:33 in the world today, you need both governments and banks, right?
0:36:33 – Yes.
0:36:35 – And Satoshi happened to publish the Bitcoin White Paper
0:36:39 in October, 2008, which was a great moment to find people
0:36:42 who really didn’t want to have to deal with governments
0:36:44 and banks when they were dealing with money,
0:36:45 at the financial crisis, right?
0:36:48 Right in the teeth of the financial crisis.
0:36:50 – Yes, so it is in one sense
0:36:52 just this technically clever thing.
0:36:54 And then in another sense,
0:36:55 it’s this very ideological project
0:36:57 where he doesn’t like central banks,
0:36:59 he doesn’t like regular banks.
0:37:02 He feels like all of these institutions are corrupt
0:37:05 and your money is just an entry in somebody’s database
0:37:07 and they can update that database tomorrow
0:37:09 and either change how much you have
0:37:10 or change what it’s worth.
0:37:13 And we need to just build something new from a clean slate.
0:37:14 And there’s also,
0:37:17 I think there’s this tendency among a lot of tech people to,
0:37:21 when you look at any kind of communications technology
0:37:23 and money broadly defined as a communication technology,
0:37:24 you’re always looking at something
0:37:25 that has evolved from something simple
0:37:28 and it has just been patched and altered and edited
0:37:32 and tweaked and so on until it works the way that it works.
0:37:35 But that always means that you can easily come up
0:37:36 with some first principles view
0:37:39 that’s a whole lot cleaner, easier to reason about,
0:37:40 omits some mistakes.
0:37:42 And then you often find that, okay,
0:37:43 you omitted all the mistakes
0:37:44 that are really, really salient about fiat,
0:37:46 but then you added some brand new mistakes
0:37:49 or added mistakes that we haven’t made in hundreds of years.
0:37:51 And so they’re, it’s full of trade-offs.
0:37:53 – It gets complicated, but at the beginning, right?
0:37:57 So the white paper comes out and I covered,
0:37:59 I did a story about Bitcoin in 2011,
0:38:01 which was still quite early.
0:38:05 We were shocked that it had gone from $10 a Bitcoin
0:38:07 to $20 a Bitcoin; we thought we were reading it wrong.
0:38:10 And at that time, like I talked to Gavin Andresen,
0:38:13 who was very early in the Bitcoin universe,
0:38:15 like he was not in it to get rich, right?
0:38:17 Like he really believed, he really believed in it.
0:38:20 And that was the vibe then.
0:38:23 And like he thought it was gonna be money, right?
0:38:27 The dream was people will use this to buy stuff.
0:38:30 And one thing that is interesting to me
0:38:34 is yes, some people sort of use it to buy stuff,
0:38:36 but basically not, right?
0:38:39 Like that, it would go from $20 a Bitcoin
0:38:43 to $100,000 a Bitcoin without some crazy killer app,
0:38:44 without becoming the web,
0:38:47 without becoming something that everybody uses,
0:38:48 whether they care about it or not.
0:38:50 That I would not have guessed.
0:38:52 And it seems weird.
0:38:54 And plainly now crypto is full of some people
0:38:55 who are true believers and a lot of people
0:38:56 who just want to get rich.
0:38:59 And some of whom are pretty scammy.
0:39:01 – Yeah, yeah, there’s like the grifter coefficient
0:39:03 always goes up with the price.
0:39:05 And then the true believers are still there
0:39:07 during the next 80% drawdown.
0:39:09 And I’m sure there will be a drawdown,
0:39:10 something like that, at some point in the future.
0:39:12 It’s just that that’s kind of the nature
0:39:14 of these kinds of assets.
0:39:18 Bitcoin, it was originally conceived as more of a currency.
0:39:21 And Satoshi talked about some hypothetical products
0:39:23 you could buy with it.
0:39:27 And then the first Bitcoin killer app, to be fair,
0:39:29 was e-commerce, it was specifically drugs.
0:39:31 – Yeah, crime. – Yes.
0:39:33 – It is a very, very libertarian project in that way.
0:39:36 So it doesn’t work very well as a dollar substitute
0:39:39 for many reasons, most of the obvious reasons.
0:39:42 But it is interesting as a gold substitute
0:39:44 where part of the point of gold
0:39:46 is that it is very divisible
0:39:48 and your gold is the same as my gold.
0:39:51 And we’ve all kind of collectively agreed
0:39:54 that gold is worth more than its value
0:39:56 as just an industrial product.
0:40:00 And then the neat thing about gold is
0:40:01 it’s really hard to dig up anymore.
0:40:03 Gold supply is extremely inelastic.
0:40:07 – And Bitcoin is designed to have a finite supply,
0:40:08 right? – Yes.
0:40:09 – It’s an important analogy, yeah.
0:40:09 – Yes.
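The finite supply isn’t a policy promise; it falls out of the protocol’s halving schedule. The block reward starts at 50 BTC and halves every 210,000 blocks, so total supply is a geometric series that tops out just under 21 million coins. A quick sketch that checks the cap:

```python
# Bitcoin's supply cap from its halving schedule: the block subsidy starts
# at 50 BTC and halves every 210,000 blocks. Summing the series in satoshis,
# with integer halving as the protocol does, gives the ~21 million ceiling.

def total_supply_btc():
    satoshis_per_btc = 100_000_000
    subsidy = 50 * satoshis_per_btc   # initial block reward, in satoshis
    blocks_per_halving = 210_000
    total = 0
    while subsidy > 0:
        total += subsidy * blocks_per_halving
        subsidy //= 2                  # reward halves (integer division, like the protocol)
    return total / satoshis_per_btc

print(total_supply_btc())  # just under 21,000,000
```

That is the sense in which the gold analogy holds: the supply curve is not just inelastic, it is fixed in advance by arithmetic.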
0:40:12 – More generally, like, it’s what?
0:40:13 It’s a long time out now.
0:40:17 It’s 17 years or something since the white paper.
0:40:24 What do you make of the sort of costs and benefits
0:40:25 of cryptocurrency so far?
0:40:27 The costs are more obvious to me.
0:40:29 Like there’s a lot of grift.
0:40:32 It’s, you know, by design, very energy intensive.
0:40:35 Like I’m open to like better payment systems.
0:40:38 There’s lots of just like boring efficiency gains
0:40:42 you would think we could get that we haven’t gotten, right?
0:40:42 – Yeah.
0:40:43 – What do you think about the costs
0:40:45 versus the benefits so far?
0:40:49 – I think in terms of the present value of future gains,
0:40:50 probably better off.
0:40:52 I think in terms of, yeah, realized gains so far, worse off.
0:40:53 – Uh-huh, uh-huh.
0:40:55 So basically worse off so far,
0:40:57 but in the long run, we’ll be better off.
0:40:59 We just haven’t gotten the payoff yet.
0:41:01 This is actually a feature
0:41:03 of general purpose technologies:
0:41:05 there’s often a point early in their history
0:41:07 where the net benefit has been negative.
0:41:11 – What would make it clearly positive?
0:41:13 Like what’s the killer return you’re hoping to see
0:41:15 from cryptocurrency?
0:41:17 – Yeah, so I think the killer return would be
0:41:20 if there is a financial system that is open
0:41:23 in the sense that starting a financial institution,
0:41:25 starting a bank or an insurance company or something
0:41:27 is basically you write some code
0:41:31 and you click the deploy button and your code is running,
0:41:33 you have capitalized your little entity
0:41:35 and now you can provide whatever it is,
0:41:37 like mean tweet insurance.
0:41:38 You’re selling people, for a dollar a day, a promise that
0:41:42 you’ll pay them $100 if there’s a tweet that makes them cry.
0:41:43 You know, that kind of thing–
0:41:45 – Weird incentives in your insurance business,
0:41:46 I’m gonna tell you right now.
0:41:48 – You get to speed run all kinds of financial history,
0:41:50 I’m sure you learn all about adverse selection,
0:41:52 but like a financial system where anything,
0:41:54 anything can be plugged into something else
0:41:57 and basically everything is an API call away.
0:41:59 It’s just a really interesting concept
0:42:02 and the fiat system is moving in that direction, but slowly.
0:42:05 – And just to be clear, like why is it,
0:42:08 why is that better on balance?
0:42:10 So for it to be net positive,
0:42:12 that has to be not only interesting,
0:42:15 but that has to like lead to more human flourishing
0:42:18 and less suffering than we would have in its absence, right?
0:42:22 – Yeah, markets provide large positive externalities.
0:42:25 There’s a lot of effort in those markets that feels wasted,
0:42:29 but it is like markets transmit information better
0:42:30 than basically anything else
0:42:32 because what they’re always transmitting
0:42:34 is the information you actually care about.
0:42:36 So like oil prices,
0:42:40 you don’t have to know that oil prices are up
0:42:41 because there was a terrorist attack
0:42:44 or because someone drilled a dry hole or whatever.
0:42:47 You, what you respond to is just gas is more expensive
0:42:49 and therefore I will drive less
0:42:51 or you know, energy is cheaper or more expensive.
0:42:53 And so I need to change my behavior.
0:42:56 So it’s always transmitting the actually useful information
0:42:57 to the people who would want to use it.
0:42:59 And the more complete markets are
0:43:02 and the more things there are
0:43:04 where that information can be instantaneously transmitted
0:43:06 to the people who want to respond to it,
0:43:08 the more everyone’s real world behavior
0:43:11 actually reflects whatever the underlying material constraints
0:43:12 are on doing what we want to do.
0:43:17 – The sort of crypto dream there is just more financial markets,
0:43:21 more feedback, more market feedback,
0:43:24 better financial services as a result.
0:43:27 That’s the basic view you’re arguing for.
0:43:29 – And it’s just a really interesting way
0:43:33 to build up new financial products from first principles
0:43:35 and sometimes you learn why those first principles are wrong
0:43:37 but that itself is valuable.
0:43:41 Like there is actual value in understanding something
0:43:42 that is a tradition or a norm
0:43:44 and understanding why it works
0:43:46 and therefore deciding that that norm
0:43:47 is actually a good norm.
0:43:48 – Good.
0:43:51 Last one, you know what it’s gonna be.
0:43:54 You tell me what the last one is.
0:43:55 – Is AI a bubble?
0:43:59 – Yeah, but you sound so sad about it.
0:44:01 Of course we’ve got to talk about AI, right?
0:44:02 Are you sad? – Yeah, of course.
0:44:02 – You talk about AI?
0:44:06 Like it seems exactly like what you write about.
0:44:08 Yeah.
0:44:13 When you hear Sam Altman talk about creating open AI,
0:44:18 starting open AI, he’s like, we basically said,
0:44:21 you know, we’re gonna make AGI,
0:44:24 artificial general intelligence, come work with us.
0:44:25 And when he talks about it, it’s like,
0:44:27 there was a universe of people who were like,
0:44:30 the smartest people who really believed
0:45:31 that that’s what they wanted to do.
0:44:33 So they came and worked with us,
0:44:36 which seems like exactly your story.
0:44:37 – Yes.
0:44:39 It turns out that a lot of people have had that dream
0:44:41 and for a lot of people,
0:44:43 maybe it wasn’t what they were studying in grad school,
0:44:46 but it was why they ended up being the kind of person
0:44:47 who would major in computer science
0:44:48 and then try to get a PhD in it
0:44:52 and would go into a more researchy end of the software world.
0:44:56 So yeah, there were people for whom this was,
0:44:57 it was incredibly refreshing to hear
0:44:59 that someone actually wants to build the thing.
0:45:02 – So you have that kind of shared belief.
0:45:04 I mean, at this point, you have these other elements
0:45:05 of what you’re talking about, right?
0:45:09 Like a sense of urgency,
0:45:12 an incredible amount of money,
0:45:19 elements of spiritual or quasi-spiritual belief.
0:45:23 – Yes, there are pseudonymous open AI employees on Twitter
0:45:26 who will tweet about things like building God.
0:45:29 So yeah, they’re taking it in a weird spiritual direction,
0:45:31 but I think there is something,
0:45:35 it is interesting that a feature of the natural world
0:45:38 is that,
0:45:40 if you arrange refined sand and a couple of metals
0:45:44 in exactly the right way
0:45:46 and type in the right incantations
0:45:48 and add a lot of power,
0:45:50 you get something that appears to think
0:45:52 and that can trick someone into thinking
0:45:54 that it’s a real human being.
0:45:56 – The is it good or is it bad question
0:45:58 is quite interesting here.
0:46:00 Obviously too soon to tell,
0:46:04 but striking to me in the case of AI
0:46:06 that the people who seem most worried about it
0:46:09 are the people who know the most about it,
0:46:10 which is not often the case, right?
0:46:13 Usually the people doing the work, building the thing,
0:46:15 just love it and think it’s great.
0:46:17 In this case, it’s kind of the opposite.
0:46:21 – Yeah, I think the times when I am calmest about AI
0:46:24 and least worried about it taking my job
0:46:28 are times when I’m using AI products
0:46:31 to slightly improve how I do my job.
0:46:32 That is, better natural language search,
0:46:36 or actually most of it is processing natural language
0:46:39 when there are a lot of pages I need to read:
0:46:41 if it’s like a thousand pages
0:46:43 of which five sentences matter to me,
0:46:46 that is a job for the API and not a job for me.
0:46:49 But it is now a job that the API and I can actually get done
0:46:53 and my function is to figure out what those five sentences
0:46:55 are and figure out a clever way to find them.
0:46:57 And then the AI’s job is to do the grunt work
0:46:58 of actually reading through them.
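That division of labor, a human deciding what matters and the machine grinding through the pages, can be sketched without a live model. A real pipeline would send each chunk to an LLM API; as a self-contained stand-in, this toy version scores sentences by keyword overlap, and the function name and sample document are invented for the example:

```python
# Toy stand-in for the "find the five sentences in a thousand pages" workflow.
# A real pipeline would query an LLM API per chunk; here a simple keyword-overlap
# score plays the model's role and surfaces the top candidates for a human to read.

def top_sentences(text, keywords, n=5):
    """Rank sentences by how many query keywords they contain; return the top n."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    def score(sentence):
        words = set(sentence.lower().split())
        return sum(1 for k in keywords if k.lower() in words)
    return sorted(sentences, key=score, reverse=True)[:n]

doc = ("The filing covers many routine matters. "
       "Revenue guidance was unchanged. "
       "The company disclosed a material weakness in internal controls. "
       "Board membership did not change.")
print(top_sentences(doc, ["material", "weakness", "controls"], n=1))
```

The shape is the same at any scale: the human picks the query, the machine does the grunt reading, and only the few sentences that score come back for judgment.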
0:47:00 – That’s AI as useful tool, right?
0:47:03 That’s the happy AI story, yeah.
0:47:06 – And I actually think that preserving your own agency
0:47:08 is a pretty big deal in this context.
0:47:11 So I think that if you’re making a decision,
0:47:14 it needs to be something where you have actually formalized it
0:47:15 to the extent that you can formalize it
0:47:18 and then you have made the call.
0:47:20 But for a lot of the grunt work,
0:47:23 AI is just, it’s a way to massively parallelize
0:47:25 having an intern.
0:47:26 – Plainly, it’s powerful.
0:47:29 And you’re talking about what it can do right now.
0:47:32 I mean, the smartest people are like,
0:47:34 yes, but we’re gonna have AGI in two years,
0:47:36 which I don’t know if that’s right or not.
0:47:37 I don’t know how to evaluate that claim,
0:47:39 but it’s a wild claim.
0:47:43 It’s plainly not obviously wrong on its face, right?
0:47:44 It’s possible.
0:47:46 Can you even start to parse that?
0:47:49 You’re giving sort of little things today about,
0:47:50 oh, here’s a useful tool and here’s a thing
0:47:51 I don’t use it for.
0:47:53 But there’s a much bigger set of questions
0:47:54 that seem imminent.
0:47:56 – You know, there are certain kinds of radical uncertainty.
0:48:00 You know, I think it increases wealth inequality,
0:48:04 but also means that intelligence is just more abundant
0:48:08 and is available on-demand and is baked into more things.
0:48:11 I think that it’s, you know, you can definitely sketch out
0:48:12 really, really negative scenarios.
0:48:15 You can sketch out, you know, not end of the world,
0:48:19 but maybe might as well be for the average person scenarios
0:48:21 where every white collar job gets eliminated
0:48:22 and then a tiny handful of people
0:48:25 have just unimaginable wealth and, you know,
0:48:27 rearrange the system to make sure that doesn’t change.
0:48:30 But I think there are a lot of intermediate stories
0:48:33 that are closer to just the story of, say,
0:48:35 accountants after the rise of Excel,
0:48:37 where there were parts of their job
0:48:39 that got much, much easier
0:48:41 and then the scope of what they could do expanded.
0:48:43 – It was the bookkeepers who took it on the chin.
0:48:47 It turns out like Excel actually did drive bookkeepers
0:48:50 out of work and it made accountants more powerful.
0:48:53 – Yeah, so, you know,
0:48:55 I think within a kind of company function,
0:48:58 you’ll have specific job functions that do mostly go away.
0:49:00 And then a lot of them will evolve.
0:49:03 And so the way that AI seems to be rolling out
0:49:06 in big companies in practice is
0:49:09 they generally don’t lay off a ton of people.
0:49:12 They will sometimes end outsourced contracts,
0:49:15 but in a lot of the cases, they don’t lay people off.
0:49:18 They change people’s responsibilities.
0:49:20 They ask them to do less of one thing
0:49:22 and a whole lot more of something else.
0:49:24 And then in some cases, that means
0:49:25 they don’t have to do much hiring right now,
0:49:27 but they think that a layoff would be pretty demoralizing.
0:49:30 So they sort of grow into the new cost structure
0:49:32 that they can support.
0:49:33 And then in other cases, there are companies
0:49:35 where they realize, wait,
0:49:37 we can ship features twice as fast now.
0:49:38 And so our revenue’s going up faster.
0:49:40 So we actually need more developers
0:49:42 because our developers are so much more productive.
0:49:48 – We’ll be back in a minute with the lightning round.
0:49:58 Okay, let’s finish with the lightning round.
0:50:01 What’s the most interesting thing you learned
0:50:04 from an earnings call transcript in the last year.
0:50:08 – Most interesting thing from a transcript in the last year.
0:50:12 I would say there was a point,
0:50:13 this might have been a little over a year ago.
0:50:16 There was a point at which Satya Nadella
0:50:18 was talking about Microsoft’s AI spending.
0:50:21 And he said, “We are still at the point.”
0:50:23 And I think he and Zuckerberg both said something
0:50:25 to the same effect and in the same quarter,
0:50:27 which was very exciting for NVIDIA people.
0:50:28 But it was like, we’re at the point
0:50:30 where we see a lot more risk to underspending
0:50:34 than to overspending on AI specifically.
0:50:36 – That really speaks to your book, right?
0:50:39 That really is like a bubbly as hell
0:50:40 in the context of your book,
0:50:43 like overspending like the Apollo missions,
0:50:44 like the Manhattan Project,
0:50:47 like the big risk is that we don’t spend enough.
0:50:49 – And also they know that their competitors
0:50:51 are listening to these calls too.
0:50:55 So they were also saying that this is kind of a winnable fight,
0:50:57 that they do think that there is a level
0:51:00 of capital spending at which Microsoft can win
0:51:01 simply because they took it more seriously
0:51:02 than everybody else.
0:51:06 – So he’s like, yes, we’re gonna spend billions
0:51:08 and billions of dollars on AI
0:51:13 because we think we can win. And Zuckerberg implicitly, too.
0:51:22 What’s one innovation in history that you wish didn’t happen?
0:51:31 – I wish there were some reason that it was infeasible
0:51:34 to have really, really tight feedback loops
0:51:37 for consumer facing apps, particularly games.
0:51:42 – Is that a way of saying you wish games were less addictive?
0:51:44 – Yeah, I wish games were less addictive
0:51:48 or that they weren’t as good at getting more addictive.
0:51:50 So I wrote a piece in the newsletter about this recently
0:51:51 ’cause there was that wonderful article
0:51:54 on the loneliness economy in the Atlantic
0:51:55 a couple of weeks back that was talking about how
0:51:58 one of the pandemic trends
0:51:59 that has mean reverted the least
0:52:01 is how much time people spend alone.
0:52:02 And I think one of the reasons for that
0:52:05 is that all the things you do alone,
0:52:08 they are things that produce data for the company
0:52:10 that monetizes the time that you spend alone.
0:52:14 And so the fact that we all watched a whole lot of Netflix
0:52:16 in the spring of 2020 means that Netflix has a lot more data
0:52:18 on what our preferences are.
0:52:21 – So they got better at making us want to watch Netflix
0:52:25 and all the video games we’ve played on our phones
0:52:27 got better at making us addicted
0:52:29 to keep playing video games on our phones.
0:52:31 Yeah, that’s a bummer, it’s a bummer.
0:52:35 What was the best thing about dropping out of college
0:52:37 and moving to New York City at age 18?
0:52:43 – So I would say
0:52:45 it really meant that I could,
0:52:49 and had to, just take full responsibility for outcomes,
0:52:53 and that I get to take a lot more credit
0:52:55 for what I’ve done since then,
0:52:57 but also get a lot more blame
0:53:01 where there isn’t really a brand name to fall back on.
0:53:02 And so if someone hires me,
0:53:05 they can’t say this person got a degree from institution X.
0:53:06 You know, I dropped out
0:53:07 of a really bad school too.
0:53:10 So there’s not even the extra upside of, you know,
0:53:14 my startup was so great,
0:53:16 I just had to leave Stanford
0:53:17 after only a couple of semesters.
0:53:20 No, it was Arizona State and I didn’t even party.
0:53:25 So yeah, it’s that.
0:53:28 It’s just being a little more in control of the narrative
0:53:33 and also just knowing that it’s a lot more up to me.
0:53:35 – What was the worst thing about dropping out of college
0:53:36 and moving to New York at age 18?
0:53:39 – So one time I went through a really,
0:53:40 really long interview process
0:53:41 for a job that I really wanted.
0:53:45 And at the end of many, many rounds of interviews
0:53:49 and, you know, work sessions and lots of stuff,
0:53:51 the hiring committee rejected me
0:53:53 because I didn’t have a degree, and that was on my resume.
0:53:55 So that was kind of inconvenient.
0:53:57 I guess another downside,
0:54:01 like it might have been nice to spend more time
0:54:06 with fewer obligations and access to a really good library.
0:54:09 (upbeat music)
0:54:15 – Byrne Hobart is the co-author of “Boom:
0:54:18 Bubbles and the End of Stagnation.”
0:54:20 Today’s show was produced by Gabriel Hunter-Chang.
0:54:23 It was edited by Lydia Jean Kott
0:54:25 and engineered by Sarah Brugger.
0:54:29 You can email us at problem@pushkin.fm.
0:54:31 I’m Jacob Goldstein and we’ll be back next week
0:54:33 with another episode of “What’s Your Problem?”
0:54:36 (upbeat music)
0:54:39 (upbeat music)
0:54:49 [BLANK_AUDIO]
0:00:07 Pushkin.
0:00:12 There are these moments when people make
0:00:17 huge technical advances and it happens all of a sudden.
0:00:20 Or at least it feels like it happens all of a sudden.
0:00:22 You know, this is happening right now,
0:00:26 most obviously with AI, with artificial intelligence.
0:00:28 It happened not too long ago with rockets
0:00:32 when SpaceX dramatically lowered the cost of getting to space.
0:00:36 Maybe it’s happening now with crypto.
0:00:39 I’d say it’s probably too soon to say on that one.
0:00:42 In any case, you can look at technological breakthroughs
0:00:44 in different fields and at different times.
0:00:48 And you can ask, what can we learn from these?
0:00:51 You can ask, can we abstract certain, you know,
0:00:54 certain qualities, certain tendencies
0:00:56 that seem to drive toward these bursts
0:00:58 of technological progress.
0:01:03 There’s a recent book called Boom that asks this question.
0:01:05 And it comes up with an interesting answer.
0:01:08 According to the book, one thing that’s really helpful,
0:01:13 if you wanna make a wild technological leap, a bubble.
0:01:16 (upbeat music)
0:01:21 I’m Jacob Goldstein and this is What’s Your Problem.
0:01:24 My guest today is Bern Hobart.
0:01:26 He’s the author of a finance newsletter
0:01:28 called The Diff, and he’s also the co-author
0:01:32 of a book called Boom, Bubbles and the End of Stagnation.
0:01:35 When Bern talks about bubbles,
0:01:37 he isn’t just talking about financial bubbles
0:01:40 where investors drive prices through the roof.
0:01:44 When he says bubble, largely he means social bubbles,
0:01:46 filter bubbles, little groups of people
0:01:49 who share some wild belief.
0:01:51 He really gets at what he means in this one sentence
0:01:54 where he and his co-author write, quote,
0:01:58 “Transformative progress arises from small groups
0:02:01 “with a unified vision, vast funding,
0:02:05 “and surprisingly poor accountability,” end quote,
0:02:07 basically living the dream.
0:02:10 Later in the conversation, Bern and I discussed
0:02:14 the modern space industry and cryptocurrency and AI,
0:02:16 but to start, we talked about two case studies
0:02:19 from the US in the 20th century.
0:02:21 Bern writes about him in the book and he argues
0:02:24 that these two moments hold broader lessons
0:02:26 for how technological progress works.
0:02:28 The two case studies we talk about
0:02:30 are the Manhattan Project and the Apollo Missions.
0:02:34 So let’s start with the Manhattan Project.
0:02:37 And maybe one place to start it is with this famous
0:02:42 1939 letter from Albert Einstein and other scientists
0:02:46 to FDR, the president, about the possibility
0:02:48 of the Nazis building an atomic bomb.
0:02:53 – Right, so that letter, it feels like good material
0:02:55 for maybe not a complete musical comedy,
0:02:56 but at least an act of a musical comedy
0:02:57 because the whole–
0:03:01 – It’s kind of a springtime for Hitler in Germany-ish.
0:03:02 – There is this whole thing
0:03:04 where you have this brilliant physicist,
0:03:08 but he is just kind of the stereotypical professor,
0:03:09 crazy hair, very absent-minded,
0:03:12 always talking about these things where no one,
0:03:14 no normal person can really understand
0:03:15 what he’s talking about.
0:03:19 And suddenly, instead of talking about space time
0:03:22 and that energy and matter in their relationship,
0:03:24 suddenly he’s saying someone could build
0:03:25 a really, really big bomb
0:03:28 and that person will probably be a German.
0:03:31 And that has some very bad implications
0:03:31 for the rest of the world.
0:03:34 – So now here we are, the president decides,
0:03:35 okay, we need to build a bomb,
0:03:38 we need to spend a wild amount of money on it.
0:03:40 And this is a thing that you describe as a bubble,
0:03:41 which is interesting, right?
0:03:44 ‘Cause it’s not a bubble in the sense of market prices,
0:03:45 it’s the federal government and the military,
0:03:49 but it has these other bubble-like characteristics
0:03:51 in your telling, right?
0:03:53 Maybe other meanings of bubble,
0:03:56 the way we talk about a social bubble or a filter bubble.
0:03:56 Tell me about that.
0:04:00 Like why is the Manhattan Project a kind of bubble?
0:04:01 – Why is it a bubble?
0:04:02 Because there’s that feedback loop,
0:04:06 because people take the idea of actually building the bomb
0:04:09 more seriously as other people take it more seriously.
0:04:12 And the more that you have someone like,
0:04:14 the more that you have people like Oppenheimer
0:04:16 actually dedicating time to the project,
0:04:17 the more other people think the project
0:04:19 will actually happen, this is actually worth doing.
0:04:20 So you have this group of people
0:04:24 who start taking the idea of building a bomb more seriously.
0:04:27 They treat it as a thing that will actually happen
0:04:29 rather than a thing that is hypothetically possible
0:04:31 if this particular equation is right,
0:04:32 if these measurements are right, et cetera.
0:04:34 And then they start actually designing them.
0:04:37 – The Manhattan Project seems like this sort of point
0:04:40 that gets a bunch of really smart people to coalesce
0:04:42 in one place on one project at one time, right?
0:04:45 It sort of solves the coordination problem.
0:04:49 The way, whatever you might say, AI today is doing that.
0:04:52 Like just brilliant people suddenly are all in one place
0:04:53 working on the same thing in a way
0:04:56 that they absolutely would not otherwise be.
0:04:57 – That is true.
0:05:00 And this was both within the US academic community
0:05:01 and then within the global academic community.
0:05:04 ‘Cause you had a lot of people who were in Central Europe
0:05:06 or Eastern Europe who realized that that is just
0:05:08 not a great place for them to be
0:05:13 and tried to get to the UK, US, other allied countries
0:05:15 as quickly as possible.
0:05:18 And so there was just this massive intellectual dividend
0:05:21 of a lot of the most brilliant people in Germany
0:05:25 and in Eastern Europe and in Hungary, et cetera.
0:05:29 They were all fleeing and all ended up in the same country.
0:05:32 So yeah, you have just this serendipity machine
0:05:35 where it was just, if you were a physicist,
0:05:37 it was an incredible place for just overhearing
0:05:40 really novel ideas and putting those ideas,
0:05:42 putting your own ideas to the test
0:05:44 because you had all the smartest people in the world
0:05:48 pretty much in this one little town in New Mexico.
0:05:52 – Right, so the Los Alamos piece is the famous part.
0:05:54 It’s the part one has heard of
0:05:57 with respect to the Manhattan Project.
0:05:59 There’s a less famous part that’s really interesting
0:06:01 and that also seems to hold some broader lessons as well.
0:06:06 And that is the kind of basically the manufacturing part.
0:06:09 How do we enrich enough uranium to build a bomb
0:06:11 if the physicists figure out how to design?
0:06:14 Talk about that piece and the lessons there.
0:06:16 – Yes, so that one, you’re right.
0:06:19 It is often under-emphasized in the history.
0:06:21 It was more of an engineering project
0:06:22 than a research project.
0:06:23 Though there was a lot of research involved.
0:06:26 The purpose was get enough of the,
0:06:27 get enough enriched uranium.
0:06:30 So the isotope that is actually prone
0:06:33 to these chain reactions.
0:06:38 Get it isolated and then be able to incorporate that
0:06:39 into a bomb.
0:06:41 They were also working on other physical materials
0:06:45 because there were multiple plausible bomb designs.
0:06:47 Some use different triggering mechanisms.
0:06:48 Some use different materials.
0:06:51 And there were also multiple plausible ways
0:06:54 to enrich enough of the fissile material
0:06:55 to actually build a bomb.
0:06:59 And so one version of the story is you just go down the list
0:07:01 and you pick the one that you think
0:07:03 is the most cost effective, most likely.
0:07:07 And so we choose one way to get just U-235
0:07:09 and we have one way to build a bomb.
0:07:11 – U-235 is the enriched uranium.
0:07:12 – Yes.
0:07:14 – And that, by the way, is the way normal businesses
0:07:16 do things in normal times.
0:07:18 You’re like, well, we got to do this really expensive thing.
0:07:19 We got to build a factory
0:07:20 and we don’t even know if it’s going to work.
0:07:23 Let’s choose the version that’s most likely to work.
0:07:25 Like that is the kind of standard move, right?
0:07:26 Yeah.
0:07:26 – Right.
0:07:29 And then the problem though is that if you try that
0:07:32 and you just, you got unlucky.
0:07:33 You picked the wrong bomb design
0:07:34 and the right fissile material
0:07:36 or right material is wrong bomb design.
0:07:39 You’ve done a lot of work which has zero payoff.
0:07:40 – And you’ve lost time, right?
0:07:43 Like crucially there is a huge sense of urgency
0:07:45 present at this moment
0:07:48 that is driving the whole thing really.
0:07:49 – Right.
0:07:51 We could also do more than one of them in parallel
0:07:52 and that is what we did.
0:07:53 And on the manufacturing side
0:07:55 that was actually just murderously expensive.
0:07:57 If you are building a factory
0:07:59 and you build the wrong kind of factory
0:08:03 then you’ve wasted a lot of money and effort and time.
0:08:05 So they did more than one.
0:08:07 They did several different processes
0:08:10 for enriching uranium and for plutonium.
0:08:11 – All at the same time, right?
0:08:13 And they knew they weren’t going to use all of them.
0:08:15 They just didn’t know which one was going to work.
0:08:18 So it was like, well, let’s try all of them at the same time
0:08:19 and hopefully one of them will work.
0:08:20 – Yes.
0:08:22 – Like that is super bubbly, right?
0:08:23 That is wild and expensive.
0:08:28 That is just throwing wild amounts of money at something
0:08:29 in a great amount of haste.
0:08:31 – Yes, yeah.
0:08:33 And if you, so if you believe
0:08:35 that there is this pretty linear payoff
0:08:37 then every additional investment you make
0:08:39 has, it doesn’t qualitatively change things.
0:08:41 It just means you’re doing a little bit more of it.
0:08:43 But if you believe there’s some kind of nonlinear payoff
0:08:47 where either this facility basically doesn’t work at all
0:08:49 or it works really, really well,
0:08:51 then when you diversify a little bit,
0:08:54 you do actually get just this better risk adjusted return
0:08:57 even though you’re objectively taking more risk.
0:08:58 – Interesting, right.
0:09:01 So in this instance, it’s if the Nazis have the bomb
0:09:04 before we do, it’s the end of the world as we know it.
0:09:05 – Yes.
0:09:06 – And so we better take a lot of risk
0:09:08 and that’s actually rational.
0:09:12 – It reminds me a little bit of aspects
0:09:13 of Operation Warp Speed.
0:09:17 I remember talking to Susan Athea, Stanford economist
0:09:19 early in the pandemic who was making the case
0:09:22 to do exactly this with vaccine manufacturing
0:09:24 in like early 2020.
0:09:26 We didn’t know if any vaccine was gonna work
0:09:28 and it takes a long time to build a factory
0:09:30 to make a vaccine basically or to tailor a factory.
0:09:32 And she was like, just make a bunch of factories
0:09:34 to make vaccines because if one of them works
0:09:37 we wanna be able to start working on it that day.
0:09:39 Like that seems quite similar to this.
0:09:40 And it worked.
0:09:42 – Yeah, yeah, I think that’s absolutely true.
0:09:45 That you, you know, the higher the stakes are,
0:09:47 the more you wanna be running everything
0:09:49 that can plausibly help in parallel.
0:09:52 And depending on the exact nature of what you’re doing,
0:09:54 there can be some spillover effects.
0:09:57 It’s possible that you build a factory
0:10:00 for manufacturing vaccine A and vaccine A doesn’t work out
0:10:02 but you can retrofit that factory
0:10:03 and start doing vaccine B.
0:10:05 And you know, there are little ways
0:10:06 to shuffle things around a bit.
0:10:10 But you often wanna go into this basically telling yourself,
0:10:13 if we didn’t waste money and we still got a good outcome,
0:10:14 it’s because we got very, very lucky
0:10:16 and that we only know we’re being serious
0:10:17 if we did in fact waste a lot of money.
0:10:21 And I think that kind of inverting your view of risk
0:10:22 is often a really good way to think
0:10:25 about these big transformative changes.
0:10:26 And this is actually another case
0:10:29 where the financial metaphors do give useful information
0:10:31 about just real world behaviors
0:10:34 because at hedge funds, this is actually something
0:10:36 that risk teams will sometimes tell portfolio managers
0:10:38 is you are making money on too high a percentage
0:10:39 of your trades.
0:10:41 This means that you are not making all the trades
0:10:43 that you could and if you made,
0:10:47 if you took your hit rate from 55% down to 53%,
0:10:49 we’d be able to allocate more capital to you
0:10:50 even though you’d be annoyed
0:10:52 that you were losing money on more trades.
0:10:54 – Interesting, because overall,
0:10:57 you would likely have a more profitable outcome
0:10:59 by taking bigger risks and incurring a few more losses
0:11:02 but your wins would be bigger and make up for the losses.
0:11:03 – Yes, and this kind of thinking, you know,
0:11:05 it’s very easy if you’re the one sitting behind the desk
0:11:07 just talking about these relative trade offs.
0:11:09 It’s a lot harder if you are the first person
0:11:11 working with Uranium in the factory
0:11:13 and we don’t quite know what the risks of that are
0:11:15 but it is just a, it’s a generally true thing
0:11:18 about trade offs that if you, about trade offs and risk
0:11:20 that there is an optimal amount of risk to take
0:11:23 that optimal amount is sometimes dependent
0:11:26 on what the downside risk of inaction is.
0:11:29 And so sometimes if you’re too successful,
0:11:32 you realize that you are actually messing something up.
0:11:33 – Yeah, you’re not taking enough risk.
0:11:37 So we all know how the Manhattan project ends, it worked.
0:11:42 I mean, it is a little bit of a weird one to start with,
0:11:45 you know, the basic ideas like technological progress
0:11:46 is good, risks are good.
0:11:47 And we’re talking about building the atomic bomb
0:11:49 and dropping it on two cities.
0:11:53 And it’s, you know, it’s morally a much easier question
0:11:55 if you think it’s the Nazis, sorry,
0:11:56 but the Nazis are absolutely the worst
0:11:59 and I definitely don’t want them to have a bomb first.
0:12:01 You know, there is the argument
0:12:02 that more people would have died
0:12:05 in a conventional invasion without the bomb.
0:12:07 I don’t know.
0:12:11 I mean, how do you, what do you make of it?
0:12:13 Like, obviously the book is very pro technological progress.
0:12:16 This show is basically pro technological progress.
0:12:19 But like the bomb isn’t a happy one to start on.
0:12:21 Like, what do you make of it ultimately?
0:12:23 – Yeah, it’s one of those things where you do,
0:12:27 it does make me wish that we could run the same simulation,
0:12:28 you know, a couple of million times
0:12:30 and see what the net, you know,
0:12:32 loss and save lives are different scenarios.
0:12:36 But one thing, the bomb, if you,
0:12:41 I guess from like a purely utilitarian standpoint,
0:12:44 I suspect that the there’ve been net lives saved
0:12:47 because of less use of coal for electricity generation,
0:12:49 more use of nuclear power.
0:12:51 And that is directly downstream of the bomb
0:12:53 that you can build these, you know,
0:12:57 by design uncontrollable releases of atomic energy.
0:12:59 You can also build more controllable ones.
0:13:02 And then getting the funding for that would be a lot harder.
0:13:05 – And presumably we got nuclear power much sooner
0:13:06 than we otherwise would have
0:13:09 because the incredibly rapid progress
0:13:12 of the Manhattan Project, that’s the case there.
0:13:12 Fair.
0:13:15 – Which I don’t, I don’t think, you know,
0:13:19 if you let me push the button on what I drop an atomic bomb
0:13:22 on a civilian population in exchange
0:13:25 for fewer people dying of respiratory diseases
0:13:27 over the next couple of decades, you know,
0:13:29 I would have to give it a lot of thought.
0:13:32 – I don’t, I’m not gonna, I’m not gonna push that button,
0:13:34 but I’m never gonna have a job where I have to decide
0:13:35 ’cause I can’t deal.
0:13:39 Okay, a thing you mentioned in the book,
0:13:41 kind of in passing that was really interesting
0:13:46 and surprising to me was that nuclear power today
0:13:51 accounts for 18% of electric power generation in the US.
0:13:54 18%, like that is so much higher than I would have thought
0:13:57 given sort of how little you hear
0:13:59 about existing nuclear power plants, right?
0:14:01 Like that is a lot.
0:14:05 – Yeah, yeah, it is a surprisingly,
0:14:06 it’s a surprisingly high number,
0:14:09 but also nuclear power, it is one of the most annoying
0:14:12 technologies to talk about in the sense that
0:14:14 it doesn’t do anything really, really exciting
0:14:17 other than provide essentially unlimited power
0:14:19 with minimal risk.
0:14:24 – And some amount of, some amount of scary tail risk,
0:14:25 right? – Yes.
0:14:27 – Like, I mean, that is what is actually interesting
0:14:30 to talk about, sort of unfortunately for the world
0:14:32 given that it has a lot of benefits.
0:14:34 There is this tail risk and once in a while
0:14:36 something goes horribly wrong.
0:14:39 Even though on the whole, it seems to be clearly less risky
0:14:41 than say a coal-fired power plant.
0:14:46 – Right, and the industry has, they’re aware of those risks
0:14:49 and nobody wants to be responsible for that kind of thing
0:14:51 and nobody wants to be testifying before Congress
0:14:54 about ever having cut any corner whatsoever
0:14:56 in the event that a disaster happens.
0:14:59 So they do actually take that incredibly seriously
0:15:03 and so nuclear power does end up being in practice
0:15:05 much safer than other power sources.
0:15:07 And then you add in the externality
0:15:09 of doesn’t really produce emissions
0:15:13 and uranium exists in some quantities just about everywhere.
0:15:16 – Yeah, no climate change, no local air pollution
0:15:19 has a lot going for it, always on.
0:15:22 Okay, let’s go to the moon.
0:15:26 So you write also about the Apollo missions,
0:15:28 US going to the moon.
0:15:32 It’s the early ’60s, was it ’61.
0:15:35 Kennedy says we’re gonna go to the moon
0:15:36 by the end of the decade.
0:15:39 There’s the Cold War context.
0:15:41 Kennedy announces this goal.
0:15:45 What’s the response in the US when Kennedy says this?
0:15:46 – Yeah, so a lot of the response,
0:15:50 at first people are somewhat hypothetically excited
0:15:52 as they start realizing how much it will cost,
0:15:54 they go from not especially excited
0:15:57 to actually pretty deeply opposed.
0:15:59 And this shows up in,
0:16:01 there was someone coined the term moon doggal.
0:16:03 – Yeah, moon doggal, I loved moon doggal,
0:16:05 I learned that from the book.
0:16:08 It was Norbert Wiener, like a famous technologist,
0:16:11 not a crank, right?
0:16:12 Somebody who knew what he was talking about
0:16:16 was like, this is a crazy idea, it’s a moon doggal.
0:16:17 – Right, and this really worked its way
0:16:18 to popular culture.
0:16:23 If you go on Spotify and listen to the Tom Lehrer song,
0:16:26 Werner von Braun, the recording that Spotify has,
0:16:28 it opens with a monologue that is talking about
0:16:30 how stupid the idea of the Apollo program is.
0:16:35 It’s, and this is again someone who is in academia,
0:16:37 who’s a very, very sharp guy,
0:16:40 and who just feels like he completely sees
0:16:43 through this political giveaway program
0:16:45 to big defense contractors
0:16:47 and knows that there’s no point in doing this.
0:16:51 You write that NASA’s own analysis
0:16:55 found a 90% chance that a failure,
0:16:57 a failing to reach the moon by the end of the decade.
0:17:00 Like it wasn’t just outside people being critical,
0:17:03 it was NASA itself didn’t think it was good work.
0:17:06 There’s a phrase you use in the book
0:17:09 to talk about these sort of bubble-like environments
0:17:10 that are of interest to you.
0:17:12 And I found it really interesting,
0:17:14 and I think we can talk about it
0:17:15 in the context of Apollo.
0:17:18 That phrase is definite optimism.
0:17:19 Tell me about that phrase.
0:17:21 – Yes, so definite optimism is the view
0:17:24 that the future can and will be better
0:17:27 in some very specific way.
0:17:30 That there will be, there is something we cannot do now,
0:17:31 we will be able to do it in the future,
0:17:33 and it will be good that we can do it.
0:17:34 – And why is it important?
0:17:38 Like it’s a big deal in your telling in an interesting way.
0:17:41 Why is it so important?
0:17:43 – It’s important because that is what allows you
0:17:45 to actually marshal those resources,
0:17:49 whether those are the people or the capital
0:17:51 or the political pull to put them
0:17:53 all in some specific direction
0:17:54 and say, we’re going to build this thing,
0:17:56 so we need to actually go step by step
0:17:59 and figure out what specific things have to be done,
0:18:01 what discoveries have to be made,
0:18:04 what laws have to be passed in order for this to happen.
0:18:06 And that is, so it’s definite optimism
0:18:07 in the sense that you’re saying
0:18:09 there is a specific thing we’re going to build.
0:18:11 It’s the kind of thing that can keep you going
0:18:13 when you encounter temporary setbacks.
0:18:15 And that’s where the optimism part comes in,
0:18:20 because if you have a less definitely optimistic view
0:18:22 about that project, you might say the goal
0:18:24 of the Apollo program is to figure out
0:18:27 if we can put a person on the moon.
0:18:29 But I think what that leaves you open to
0:18:31 is the temptation to give up at any point.
0:18:35 ‘Cause at any point you can have a botched launch
0:18:39 or an accident or you’re designing some component
0:18:41 and the math just doesn’t pencil out
0:18:43 you need, you know, it’s going to weigh too much
0:18:46 to actually make it on track.
0:18:47 And you could say, okay, well that’s how we figured out
0:18:48 that we’re not actually doing this.
0:18:52 But if you do just have this kind of delusional view
0:18:54 that know if there’s a mistake,
0:18:55 it’s a mistake in my analysis,
0:18:58 not in the ultimate plan here,
0:18:59 and that it is physically possible,
0:19:01 we just have to figure out all the details,
0:19:03 then I think that does set up a different kind of motivation
0:19:06 because at that point, you can view every mistake
0:19:09 as just exhausting the set of possibilities
0:19:10 and letting you narrow things down
0:19:12 to what is the correct approach.
0:19:14 What you sort of needed was this sort of
0:19:16 very localized definite optimism
0:19:20 where you could imagine a researcher thinking to themselves
0:19:23 or an engineer or someone throughout the project
0:19:25 thinking to themselves that, okay,
0:19:28 this will probably not work overall.
0:19:30 But the specific thing I’m working on,
0:19:31 whether it is designing a space suit
0:19:33 or designing this rocket
0:19:35 or programming the guidance computer
0:19:39 that one, I could tell that my part is actually going to work
0:19:40 or at least I believe that I can make it work.
0:19:42 And two, this is my only chance
0:19:43 to work with these really cool toys.
0:19:46 So if the money is going to be wasted at some point,
0:19:48 let that money be wasted on me.
0:19:51 And I think that that kind of attitude of just,
0:19:52 you know that you have one shot
0:19:54 to actually do something really interesting,
0:19:56 you will not get a second chance.
0:19:58 If everyone believes that it does become
0:19:59 a coordinating mechanism
0:20:00 where now they’re all working extremely hard,
0:20:04 they all recognize that the success of what they’re doing
0:20:05 is very much up to them.
0:20:07 And then that ends up contributing
0:20:08 to this group’s success.
0:20:13 So it’s like this, if I’m going to do this,
0:20:14 I got to do it now.
0:20:15 Everybody’s doing it now.
0:20:16 We got the money now.
0:20:18 This is our one shot.
0:20:19 We better get it right.
0:20:21 We better do everything we can to make it work.
0:20:23 – Yes, fear of missing out.
0:20:25 – Yeah, FOMO, right.
0:20:25 So FOMO, it’s funny.
0:20:29 People talk about that as like a dumb investment thesis,
0:20:29 basically, right?
0:20:31 It’s like a meme stock idea,
0:20:36 but you talk about it in these more interesting contexts,
0:20:36 basically, right?
0:20:38 More meaningful, I would say.
0:20:43 – Yeah, so in the purely straightforward way,
0:20:45 the idea is there are sometimes
0:20:47 these very time-limited opportunities to do something.
0:20:49 And if you’re capable of doing that thing,
0:20:50 this may be your only chance.
0:20:51 And so missing out is actually something
0:20:53 you should be afraid of.
0:20:56 So if you actually have a really clever idea
0:20:58 for an AI company,
0:20:59 this is actually a time where you can at least
0:21:00 make the attempt.
0:21:03 So yeah, we do argue that missing out
0:21:04 is something you should absolutely fear.
0:21:06 – So what happens with the Apollo project?
0:21:10 Just in brief, like talking about just how big it is
0:21:13 and how risky it is, like it’s striking, right?
0:21:14 – Right, yeah.
0:21:16 So it was running, the expenses were running
0:21:20 on like a low single-digit percentage of GDP for a while.
0:21:23 So a couple percent of the value of everything
0:21:25 everybody in the country does
0:21:27 is going into the Apollo mission.
0:21:31 Just this one plainly unnecessary thing
0:21:33 that the government has decided to do.
0:21:35 – Right, and this is one of the cases
0:21:37 where there was a very powerful spillover effect
0:21:41 because the Apollo guidance computer
0:21:44 needed the most lightweight and least power consuming
0:21:47 and most reliable components possible.
0:21:49 And if you were building a computer conventionally
0:21:51 at that time and you had a budget,
0:21:53 you would probably build it out of vacuum tubes.
0:21:55 And you knew that the vacuum tubes,
0:21:56 they’re bulky, they consume a lot of power,
0:21:57 they throw off a lot of heat,
0:21:59 they burn out all the time,
0:22:01 but they are fairly cheap.
0:22:06 But in this case, there was an alternative technology.
0:22:08 It was extremely expensive,
0:22:12 but it was lightweight, didn’t use a lot of power
0:22:13 and did not have moving parts.
0:22:15 And that’s the integrated circuit.
0:22:17 So transistor-based computing.
0:22:19 – The chip, well, we know today as the chip.
0:22:21 – Yes, the chip.
0:22:23 – You read that in 1963,
0:22:28 NASA bought 60% of the chips made in the United States.
0:22:32 Just NASA, not the whole government, just NASA, 60%.
0:22:33 – They actually bought more chips than they needed
0:22:37 because they recognized that the chip companies
0:22:41 were run by very, very nice electrical engineering nerds
0:22:43 who just love designing tiny, tiny things
0:22:46 and that these people just don’t know how to run a business.
0:22:49 And so they were worried that Fairchild Semiconductor
0:22:51 would just run out of cash at some point.
0:22:54 And then NASA would have half of a computer
0:22:56 and no way to build the rest of it.
0:22:57 So they actually over-ordered.
0:23:00 They used integrated circuits for a few applications
0:23:02 that actually were not so dependent
0:23:04 on power consumption and weight and things.
0:23:06 So that critique of the Apollo program
0:23:07 was directionally correct.
0:23:10 It was money being splashed out to defense contractors
0:23:11 who were favored by the government.
0:23:12 But in this case, it was being done
0:23:15 in a more strategic and thoughtful way
0:23:17 and kind of kept the industry going.
0:23:21 – So you talk a fair bit in the book
0:23:26 about the sort of religious and quasi-religious aspects
0:23:28 of these little groups of people
0:23:30 that come together in these bubble-like moments
0:23:31 to do these big things.
0:23:35 And that’s really present in the Apollo section.
0:23:39 Like talk about the sort of religious ideas
0:23:41 associated with the Apollo mission
0:23:43 that the people working on the mission had.
0:23:46 – Yeah, I mean, you name it after a Greek God
0:23:49 and you’re already starting a little bit religious.
0:23:54 So there were people who worked on these missions
0:23:56 who felt like this is part of mankind’s destiny,
0:23:58 is to explore the stars
0:23:59 and that there’s this whole universe
0:24:01 that is a universe created by God.
0:24:02 And it would be kind of weird.
0:24:04 We can’t second-guess the divine,
0:24:06 but it’s a little weird for God to create
0:24:07 all of these astronomical bodies
0:24:09 that just kind of look good from the ground
0:24:11 and that you’re not actually meant to go visit.
0:24:13 – You talk about somewhat similar things
0:24:18 in other kind of less obviously spiritual dimensions
0:24:20 of people coming together
0:24:24 and having it kind of more than rational.
0:24:27 You use this word thymos from the Greek meaning spirit.
0:24:29 Like what’s going on there more broadly?
0:24:31 Why is that important more generally
0:24:33 for technological progress?
0:24:38 – Because, so thymos is part of this tripartite model
0:24:42 of the soul, where you have your appetites, your reason,
0:24:45 and then your thymos, so like your longing for glory
0:24:50 and honor and this kind of transcendent achievement.
0:24:54 And logos, reasoning, only gets you so far.
0:24:55 You can reason your way
0:24:57 into some pretty interesting things,
0:24:58 but at some point you do decide
0:25:01 that the reasonable thing is probably
0:25:03 to take it a little bit easy
0:25:06 and not take certain risks.
0:25:10 And thymos is just this pursuit of something greater
0:25:13 and something beyond the ordinary,
0:25:14 something really beyond the logos, right?
0:25:16 Like beyond what you could get
0:25:19 to just by reasoning one step at a time.
0:25:24 And I think that that is just a deeply attractive proposition
0:25:26 to many people.
0:25:29 And it’s also a scary one because at that point,
0:25:31 if you’re doing things that are beyond
0:25:34 what is the rational thing to do,
0:25:35 then of course you have no rational explanation
0:25:38 for what you did wrong if you mess up.
0:25:40 And you are sort of betting
0:25:41 on some historical contingencies.
0:25:43 – That’s the definite optimism part, right?
0:26:45 – Betting on historical contingencies
0:25:48 is another way of saying definite optimism, right?
0:25:52 – So, back to the moon.
0:25:55 So we get to the moon, in fact, against all odds,
0:25:57 we make it.
0:26:01 And there’s this moment where it’s like,
0:26:05 today the moon, tomorrow, the solar system.
0:26:07 But in fact, it was today the moon,
0:26:09 tomorrow, not even the moon.
0:26:10 – Right.
0:26:11 – Like what happened?
0:26:15 – Well, you had asked about what these mega projects
0:26:16 have in common with financial bubbles.
0:26:17 And one of the things they have in common is,
0:26:19 sometimes there’s a bust.
0:26:21 And sometimes that bust is actually an overreaction
0:26:23 in the opposite direction.
0:26:28 And people take everything they believed
0:26:33 in, say, 1969 about humanity’s future and the stars,
0:26:35 and they say, okay, this is exactly the opposite
0:26:37 of where things will actually go,
0:26:38 and the exact opposite of what we should care about,
0:26:40 that we have plenty of problems here on Earth,
0:26:43 and why would we, do we really wanna turn Mars
0:26:46 into just another planet that also has problems
0:26:49 of racism and poverty and nuclear war and all that stuff?
0:26:53 So maybe we should stay home and fix our own stuff.
0:26:54 In public policy, you’d actually need for there
0:26:58 to be some kind of resurgence in belief in space.
0:26:59 You need some kind of charismatic story.
0:27:02 And perhaps to an extent, we have that right now.
0:27:03 – Yes.
0:27:05 – Maybe Elon’s not the perfect front man for all of this,
0:27:07 but he is certainly someone who demonstrates
0:27:10 that space travel, it can be done, it can be improved,
0:27:13 and that it’s just objectively cool.
0:27:16 That it is just hard to watch a SpaceX launch video
0:27:18 and not feel something.
0:27:21 – Yes, so good.
0:27:26 I wanna talk more about space in a minute.
0:27:28 So it’s interesting, these two stories
0:27:30 that are kind of in the middle of your book,
0:27:31 they’re kind of the core of the book, right?
0:27:34 These two interesting moments that are non-financial bubbles
0:27:37 when you have this incredible technological innovation
0:27:40 in a short amount of time, this seemingly
0:27:44 unrealistically fast, impressive outcome.
0:27:48 And they’re both pure government projects.
0:27:51 They’re both command and control economy.
0:27:55 It is not the private sector, it is not capitalism.
0:27:57 What do you make of that?
0:28:00 – I would say there’s a very strong indirect link
0:28:02 for a couple of reasons.
0:28:07 One is just the practical kind of reason
0:28:08 that personnel is policy,
0:28:13 and that in the 1930s
0:28:14 the US government was hiring
0:28:16 and the private sector mostly wasn’t.
0:28:18 And so all the ambitious people,
0:28:20 basically all the ambitious people in the country
0:28:21 tried to get government jobs.
0:28:23 And that is usually not the case.
0:28:25 And there are certainly circumstances
0:28:27 where that’s a really bad sign.
0:28:28 But in this case, it was great.
0:28:30 It meant that there were a lot of New Deal projects
0:28:31 that were staffed by the people
0:28:34 who would have been rising up the ranks at RCA
0:28:36 or General Electric or something a decade earlier.
0:28:38 Now they’re running New Deal projects instead.
0:28:40 And they’re again, rising up the ranks really fast,
0:28:42 having a very large real-world impact
0:28:43 very early in their careers.
0:28:47 And those people had been working together for a while
0:28:48 and they knew each other.
0:28:49 There was a lot of just institutional knowledge
0:28:52 about how to get big things done
0:28:53 within the US government.
0:28:55 And a lot of that institutional knowledge
0:28:56 could then be redirected.
0:28:59 So you have the New Deal and then the war effort.
0:29:00 And then you have this post-war economy
0:29:02 where there’s still, it takes a while
0:29:04 for the government to fully relax its control.
0:29:07 And then very soon we’re into the Korean War.
0:29:11 So, yeah, there was just a large increase in state capacity
0:29:13 and just in the quality of people making decisions
0:29:15 within the US government in that period.
0:29:20 – We’ll be back in a minute
0:29:23 to talk about bubble-esque things happening right now.
0:29:27 Namely, rockets, cryptocurrency, and AI.
0:29:30 (upbeat music)
0:29:37 – Okay, now to space today.
0:29:41 Bern and I talked about SpaceX in particular
0:29:42 because, you know, it really is the company
0:29:45 that launched the modern space industry.
0:29:48 And there’s this one key trait that SpaceX shares
0:29:51 with the other projects Bern wrote about in the book.
0:29:55 It brought together people who share a wild dream.
0:29:56 If you go to work at SpaceX,
0:29:58 it’s probably because you believe
0:30:01 in getting humanity to Mars.
0:30:03 – Yeah, yeah, it’s not just that you believe in the dream,
0:30:06 but when you get the job, you’re suddenly in an environment
0:30:07 where everyone believes in the dream.
0:30:09 And if you’re working in one of those organizations,
0:30:11 you’re probably not working nine to five,
0:30:15 which means you have very few hours in your day or week
0:30:17 where you’re not completely surrounded by people
0:30:19 who believe that humanity will,
0:30:21 people will be living on Mars
0:30:24 and that this is the organization that will make it happen.
0:30:26 And that just has to really mess with your mind.
0:30:30 Like, what is normal to an engineer
0:30:32 working at SpaceX in 2006 is completely abnormal
0:30:35 to 99.9% of the human population.
0:30:37 And, you know, most of the exceptions
0:30:38 are like six-year-old boys
0:30:40 who just watched Star Wars for the first time,
0:30:41 go Mars, it’s crazy.
0:30:43 – Yeah, I mean, really, as I went through the book,
0:30:45 I was like, oh, really the bubble you’re talking about
0:30:48 is a social bubble, like the meaningful bubble.
0:30:50 Like maybe there’s a financial bubble attached to it.
0:30:52 Maybe there isn’t, but what really matters
0:30:54 is you’re in this weird little social bubble
0:30:58 that believes some wild thing together
0:30:59 that believes it is not wild,
0:31:00 that believes it is gonna happen.
0:31:02 Like, that’s the thing.
0:31:03 – Yeah.
0:31:05 – And has money, and has the money
0:31:07 to act on their wild belief.
0:31:08 – Yes.
0:31:10 And so, you know, getting the money
0:31:13 does mean interacting with the normie sphere,
0:31:15 interacting with people who don’t quite buy into all of it,
0:31:20 but when you have these really ambitious plans
0:31:21 and you’re taking them seriously,
0:31:23 you’re doing them step by step,
0:31:25 some of those steps do have other practical applications.
0:31:27 And so that is the basic story.
0:31:29 It was not just a straight shot,
0:31:32 we are going to invest all the money Elon got from PayPal
0:31:32 into going to Mars,
0:31:34 and hopefully we get to Mars before we run out.
0:31:35 – Yeah.
0:31:38 – It was, you know, we’re going to build these prototypes,
0:31:39 we’re going to build reusable rockets,
0:31:43 we’re going to use those for existing use cases,
0:31:45 and we will probably find new use cases.
0:31:47 And then once we get really, really good at launching things
0:31:50 cheaply, well, there are a lot of satellites out there,
0:31:52 and perhaps we should have some of our own.
0:31:53 And if we can do it at sufficient scale,
0:31:56 then maybe we can just throw a global communications network
0:31:59 up there in the sky and see what happens next.
0:32:02 So, yeah, that’s, you know, the intermediate steps,
0:32:05 each one, it’s basically taking the thymos, like the spirited,
0:32:08 you know, here’s our grand vision of the future,
0:32:09 and you know, here’s my destiny,
0:32:11 and I was put on Earth to do this and say,
0:32:13 okay, well, the next step is have enough money
0:32:14 to pay rent next month.
0:32:17 – Right, what do I gotta do tomorrow to get to Mars?
0:32:22 So, is there a space bubble right now?
0:32:25 – I think so, I think there is,
0:32:30 I think there are people who look at SpaceX and say,
0:32:32 this is achievable, and that more is achievable.
0:32:34 They also look at SpaceX and say,
0:32:36 this is a kind of infrastructure
0:32:40 that enables things like doing manufacturing in orbit,
0:32:41 or doing manufacturing on the moon,
0:32:44 where in some cases that is actually the best place
0:32:45 to build something.
0:32:48 – Basically, because SpaceX has driven down the cost
0:32:50 so much of getting stuff into orbit,
0:32:54 new ideas that would have been economically absurd
0:32:58 20 years ago, like manufacturing in space, are now plausible.
0:33:00 And so this is sort of bubble building on itself.
0:33:03 And like, why is it not just an industry now?
0:33:05 Why is it a bubble in your telling?
0:33:10 – It is the feedback loop where what SpaceX does
0:33:11 makes more sense if they believe
0:33:12 that there will be a lot of demand
0:33:16 to move physical things off of Earth and into orbit,
0:33:18 and perhaps further out.
0:33:21 If they believe that there’s more demand for that,
0:33:23 they should be investing more in R&D,
0:33:25 they should be building bigger and better rockets,
0:33:29 and they should be doing the big fixed cost investment
0:33:31 that incrementally reduces the cost of launches,
0:33:33 and only pays for itself if you do a lot of them.
0:33:35 And then if they’re doing that,
0:33:37 and you have your dream of,
0:33:39 we’re going to manufacture drugs in space,
0:33:42 and they will be, like the marginal cost is low,
0:33:44 once you get stuff up there.
0:33:45 Well, that dream is a little bit more plausible
0:33:47 if you can actually plot that curve
0:33:49 of how much does it cost to get a kilogram into space,
0:33:54 and say, there is a specific year at which point
0:33:56 we would actually have the cost advantage
0:33:57 versus terrestrial manufacturing.
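[Editor’s note: the cost-curve logic Bern describes, plotting $/kg to orbit and finding the year it undercuts terrestrial manufacturing, can be sketched in a few lines. Every number here (starting launch cost, annual decline rate, the cost premium space manufacturing would need to beat) is hypothetical, chosen purely for illustration.]

```python
def crossover_year(start_year, launch_cost_per_kg, annual_decline,
                   terrestrial_premium_per_kg):
    """Extrapolate a launch cost that falls by a fixed fraction each year,
    and return the first year it drops below the per-kg premium that
    space-based manufacturing would need to beat."""
    year, cost = start_year, launch_cost_per_kg
    while cost >= terrestrial_premium_per_kg:
        year += 1
        cost *= (1 - annual_decline)  # compound annual cost decline
    return year

# Hypothetical inputs: $1,500/kg today, falling 15%/yr,
# versus a $300/kg terrestrial cost premium.
print(crossover_year(2025, 1500.0, 0.15, 300.0))  # → 2035
```

The point of the exercise is the one Bern makes: once the curve exists, "someday" becomes a specific year you can plan around.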
0:34:00 – So it’s this sort of coordinating mechanism
0:34:03 that like you also write about with Microsoft and Intel
0:34:04 in the like 80s, 90s, where it’s like,
0:34:06 oh, they’re building better chips,
0:34:08 so we’ll build better software.
0:34:10 And then because they’re building better software,
0:34:11 we’ll build better chips.
0:34:14 So this is like a more exciting version of that, right?
0:34:17 Because it’s going to get even cheaper
0:34:18 to send stuff to space.
0:34:22 We can build this crazy factory to exist in space.
0:34:23 And then that tells SpaceX,
0:34:27 oh, we can in fact keep building, keep innovating,
0:34:28 keep spending money.
0:34:32 – Yes, and so someone has to do just half of that,
0:34:34 like the half of that that makes no sense whatsoever.
0:34:36 – That was SpaceX at the beginning, right?
0:34:39 That was like just a guy with a lot of money
0:34:40 and a crazy dream.
0:34:42 – Yeah, it just really helps to have someone
0:34:44 who’s eccentric and has a lot of money
0:34:46 and is willing to throw it at a lot of different things.
0:34:49 Like Musk, he spent some substantial fraction
0:34:52 of his net worth right after this PayPal sale
0:34:54 on a really nice sports car.
0:34:57 And then immediately took it for a drive and wrecked it,
0:34:59 had no insurance and was not wearing a seatbelt.
0:35:02 So the Elon Musk story could have just been this proverb
0:35:05 about dot com excess and what happened
0:35:07 when you finally gave these people money
0:35:09 as they immediately bought sports cars and wrecked them.
0:35:12 Instead, it’s a story about a different kind of excess,
0:35:15 but it’s still, I guess what that illustrates
0:35:17 is the risk tolerance. – Risk-seeking, yes, yes.
0:35:22 – Yeah, there’s a risk level where you are going for a joy
0:35:25 ride in your $2 million car and you haven’t bothered
0:35:28 to fill out all the paperwork or buy the insurance,
0:35:29 and that is the risk tolerance
0:35:32 of someone who starts a company like SpaceX.
0:35:35 – Okay, enough about space.
0:35:38 Let’s talk about crypto, formerly known as cryptocurrency.
0:35:42 Let’s talk about Bitcoin, and let’s talk about Bitcoin,
0:35:44 especially at the beginning, right,
0:35:47 before it was number go up,
0:35:50 when it was, it really was true believers, right?
0:35:52 It was people who had a crazy worldview,
0:35:55 like you’re talking about in these other contexts.
0:35:59 – Yes, so we still don’t know for sure
0:36:01 who Satoshi Nakamoto was,
0:36:02 and I think everyone in crypto
0:36:05 has at least one guess, sometimes many guesses,
0:36:07 but whoever Satoshi was, whoever they were,
0:36:09 this is the creator of Bitcoin
0:36:11 for the one person who doesn’t know yet.
0:36:16 – They had this view that one of the fundamental problems
0:36:20 in the world today is that if you are going to transfer value
0:36:21 from one party to another,
0:36:23 you need some trusted intermediary.
0:36:25 – You need a trusted intermediary
0:36:28 like a government and a bank.
0:36:31 Typically, in money, you need both governments and banks
0:36:33 the way it works in the world today, right?
0:36:33 – Yes.
0:36:35 – And Satoshi happened to publish the Bitcoin White Paper
0:36:39 in October, 2008, which was a great moment to find people
0:36:42 who really didn’t want to have to deal with governments
0:36:44 and banks when they were dealing with money,
0:36:45 during the financial crisis, right?
0:36:48 Right in the teeth of the financial crisis.
0:36:50 – Yes, so it is in one sense
0:36:52 just this technically clever thing.
0:36:54 And then in another sense,
0:36:55 it’s this very ideological project
0:36:57 where he doesn’t like central banks,
0:36:59 he doesn’t like regular banks.
0:37:02 He feels like all of these institutions are corrupt
0:37:05 and your money is just an entry in somebody’s database
0:37:07 and they can update that database tomorrow
0:37:09 and either change how much you have
0:37:10 or change what it’s worth.
0:37:13 And we need to just build something new from a clean slate.
0:37:14 And there’s also,
0:37:17 I think there’s this tendency among a lot of tech people to,
0:37:21 when you look at any kind of communications technology
0:37:23 and money broadly defined as a communication technology,
0:37:24 you’re always looking at something
0:37:25 that has evolved from something simple
0:37:28 and it has just been patched and altered and edited
0:37:32 and tweaked and so on until it works the way that it works.
0:37:35 But that always means that you can easily come up
0:37:36 with some first principles view
0:37:39 that’s a whole lot cleaner, easier to reason about,
0:37:40 omits some mistakes.
0:37:42 And then you often find that, okay,
0:37:43 you omitted all the mistakes
0:37:44 that are really, really salient about fiat,
0:37:46 but then you added some brand new mistakes
0:37:49 or added mistakes that we haven’t made in hundreds of years.
0:37:51 And so they’re, it’s full of trade-offs.
0:37:53 – It gets complicated, but at the beginning, right?
0:37:57 So the white paper comes out and I covered,
0:37:59 I did a story about Bitcoin in 2011,
0:38:01 which was still quite early.
0:38:05 We were shocked that it had gone from $10 a Bitcoin
0:38:07 to $20 a Bitcoin; we thought we were reading it wrong.
0:38:10 And at that time, like I talked to Gavin Andresen,
0:38:13 who was very early in the Bitcoin universe,
0:38:15 like he was not in it to get rich, right?
0:38:17 Like he really believed, he really believed in it.
0:38:20 And that was the vibe then.
0:38:23 And like he thought it was gonna be money, right?
0:38:27 The dream was people will use this to buy stuff.
0:38:30 And one thing that is interesting to me
0:38:34 is yes, some people sort of use it to buy stuff,
0:38:36 but basically not, right?
0:38:39 Like that, it would go from $20 a Bitcoin
0:38:43 to $100,000 a Bitcoin without some crazy killer app,
0:38:44 without becoming the web,
0:38:47 without becoming something that everybody uses,
0:38:48 whether they care about it or not.
0:38:50 That I would not have guessed.
0:38:52 And it seems weird.
0:38:54 And plainly now crypto is full of some people
0:38:55 who are true believers and a lot of people
0:38:56 who just want to get rich.
0:38:59 And some of whom are pretty scammy.
0:39:01 – Yeah, yeah, there’s like the grifter coefficient
0:39:03 always goes up with the price.
0:39:05 And then the true believers are still there
0:39:07 during the next 80% drawdown.
0:39:09 And I’m sure there will be a drawdown,
0:39:10 something like that, at some point in the future.
0:39:12 It’s just that that’s kind of the nature
0:39:14 of these kinds of assets.
0:39:18 Bitcoin, it was originally conceived as more of a currency.
0:39:21 And Satoshi talked about some hypothetical products
0:39:23 you could buy with it.
0:39:27 And then the first Bitcoin killer app, to be fair,
0:39:29 was e-commerce, it was specifically drugs.
0:39:31 – Yeah, crime. – Yes.
0:39:33 – It is a very, very libertarian project in that way.
0:39:36 So it doesn’t work very well as a dollar substitute
0:39:39 for many reasons, most of them obvious.
0:39:42 But it is interesting as a gold substitute
0:39:44 where part of the point of gold
0:39:46 is that it is very divisible
0:39:48 and your gold is the same as my gold.
0:39:51 And we’ve all kind of collectively agreed
0:39:54 that gold is worth more than its value
0:39:56 as just an industrial product.
0:40:00 And then the neat thing about gold is
0:40:01 it’s really hard to dig up anymore.
0:40:03 Gold supply is extremely inelastic.
0:40:07 – And Bitcoin is designed to have a finite supply,
0:40:08 right? – Yes.
0:40:09 – It’s an important analogy, yeah.
0:40:09 – Yes.
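[Editor’s note: the finite supply they’re agreeing on is a concrete, checkable schedule. Bitcoin’s block subsidy starts at 50 BTC and is cut in half (via integer arithmetic on satoshis) every 210,000 blocks, which caps total issuance just under 21 million coins:]

```python
SATS_PER_BTC = 100_000_000   # 1 BTC = 100 million satoshis
HALVING_INTERVAL = 210_000   # blocks between subsidy halvings

def total_supply_sats():
    """Sum the full issuance schedule: each halving era mints
    HALVING_INTERVAL blocks at the current subsidy, then the
    subsidy is halved (integer right-shift) until it hits zero."""
    subsidy = 50 * SATS_PER_BTC
    total = 0
    while subsidy > 0:
        total += HALVING_INTERVAL * subsidy
        subsidy >>= 1
    return total

print(total_supply_sats() / SATS_PER_BTC)  # just under 21 million BTC
```

That hard ceiling is what makes the gold analogy work: like gold, the supply can’t be inflated in response to higher prices.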
0:40:12 – More generally, like, it’s what?
0:40:13 It’s a long time out now.
0:40:17 It’s 17 years or something since the white paper.
0:40:24 What do you make of the sort of costs and benefits
0:40:25 of cryptocurrency so far?
0:40:27 The costs are more obvious to me.
0:40:29 Like there’s a lot of grift.
0:40:32 It’s, you know, by design, very energy intensive.
0:40:35 Like I’m open to like better payment systems.
0:40:38 There’s lots of just like boring efficiency gains
0:40:42 you would think we could get that we haven’t gotten, right?
0:40:42 – Yeah.
0:40:43 – What do you think about the costs
0:40:45 versus the benefits so far?
0:40:49 – I think in terms of the present value of future gains,
0:40:50 probably better off.
0:40:52 I think in terms of, yeah, realized gains so far, worse off.
0:40:53 – Uh-huh, uh-huh.
0:40:55 So basically worse off so far,
0:40:57 but in the long run, we’ll be better off.
0:40:59 We just haven’t gotten the payoff yet.
0:41:01 This is actually a feature
0:41:03 of general purpose technologies:
0:41:05 there’s often a point early in their history
0:41:07 where the net benefit has been negative.
0:41:11 – What would make it clearly positive?
0:41:13 Like what’s the killer return you’re hoping to see
0:41:15 from cryptocurrency?
0:41:17 – Yeah, so I think the killer return would be
0:41:20 if there is a financial system that is open
0:41:23 in the sense that starting a financial institution,
0:41:25 starting a bank or an insurance company or something
0:41:27 is basically you write some code
0:41:31 and you click the deploy button and your code is running,
0:41:33 you have capitalized your little entity
0:41:35 and now you can provide whatever it is,
0:41:37 like mean tweet insurance.
0:41:38 You’re selling people: for a dollar a day,
0:41:42 you’ll pay them $100 if there’s a tweet that makes them cry.
0:41:43 You know, that kind of thing–
0:41:45 – Weird incentives in your insurance business,
0:41:46 I’m gonna tell you right now.
0:41:48 – You get to speed run all kinds of financial history,
0:41:50 I’m sure you learn all about adverse selection,
0:41:52 but like a financial system where anything,
0:41:54 anything can be plugged into something else
0:41:57 and basically everything is an API call away.
0:41:59 It’s just a really interesting concept
0:42:02 and the fiat system is moving in that direction, but slowly.
0:42:05 – And just to be clear, like why is it,
0:42:08 why is that better on balance?
0:42:10 So for it to be net positive,
0:42:12 that has to be not only interesting,
0:42:15 but that has to like lead to more human flourishing
0:42:18 and less suffering than we would have in its absence, right?
0:42:22 – Yeah, markets provide large positive externalities.
0:42:25 There’s a lot of effort in those markets that feels wasted,
0:42:29 but it is like markets transmit information better
0:42:30 than basically anything else
0:42:32 because what they’re always transmitting
0:42:34 is the information you actually care about.
0:42:36 So like oil prices,
0:42:40 you don’t have to know that oil prices are up
0:42:41 because there was a terrorist attack
0:42:44 or because someone drilled a dry hole or whatever.
0:42:47 You, what you respond to is just gas is more expensive
0:42:49 and therefore I will drive less
0:42:51 or you know, energy is cheaper or more expensive.
0:42:53 And so I need to change my behavior.
0:42:56 So it’s always transmitting the actually useful information
0:42:57 to the people who would want to use it.
0:42:59 And the more complete markets are
0:43:02 and the more things there are
0:43:04 where that information can be instantaneously transmitted
0:43:06 to the people who want to respond to it,
0:43:08 the more everyone’s real world behavior
0:43:11 actually reflects whatever the underlying material constraints
0:43:12 are on doing what we want to do.
0:43:17 – The sort of crypto dream there is just more finance markets,
0:43:21 more feedback, more market feedback,
0:43:24 better financial services as a result.
0:43:27 That’s the basic view you’re arguing for.
0:43:29 – And it’s just a really interesting way
0:43:33 to build up new financial products from first principles
0:43:35 and sometimes you learn why those first principles are wrong
0:43:37 but that itself is valuable.
0:43:41 Like there is actual value in understanding something
0:43:42 that is a tradition or a norm
0:43:44 and understanding why it works
0:43:46 and therefore deciding that that norm
0:43:47 is actually a good norm.
0:43:48 – Good.
0:43:51 Last one, you know what it’s gonna be.
0:43:54 You tell me what the last one is.
0:43:55 – Is AI a bubble?
0:43:59 – Yeah, but you sound so sad about it.
0:44:01 Of course we’ve got to talk about AI, right?
0:44:02 Are you sad? – Yeah, of course.
0:44:02 – You talk about AI?
0:44:06 Like it seems exactly like what you write about.
0:44:08 Yeah.
0:44:13 When you hear Sam Altman talk about creating OpenAI,
0:44:18 starting OpenAI, he’s like, we basically said,
0:44:21 you know, we’re gonna make AGI,
0:44:24 artificial general intelligence, come work with us.
0:44:25 And when he talks about it, it’s like,
0:44:27 there was a universe of people who were like,
0:44:30 the smartest people who really believed
0:44:31 for whom that’s what they wanted to do.
0:44:33 So they came and worked with us,
0:44:36 which seems like exactly your story.
0:44:37 – Yes.
0:44:39 It turns out that a lot of people have had that dream
0:44:41 and for a lot of people,
0:44:43 maybe it wasn’t what they were studying in grad school,
0:44:46 but it was why they ended up being the kind of person
0:44:47 who would major in computer science
0:44:48 and then try to get a PhD in it
0:44:52 and would go into a more researchy end of the software world.
0:44:56 So yeah, there were people for whom this was,
0:44:57 it was incredibly refreshing to hear
0:44:59 that someone actually wants to build the thing.
0:45:02 – So you have that kind of shared belief.
0:45:04 I mean, at this point, you have these other elements
0:45:05 of what you’re talking about, right?
0:45:09 Like a sense of urgency,
0:45:12 an incredible amount of money,
0:45:19 elements of spiritual or quasi-spiritual belief.
0:45:23 – Yes, there are pseudonymous OpenAI employees on Twitter
0:45:26 who will tweet about things like building God.
0:45:29 So yeah, they’re taking it in a weird spiritual direction,
0:45:31 but I think there is something,
0:45:35 it is interesting that a feature of the natural world
0:45:38 is that if you
0:45:43 arrange refined sand and a couple of metals
0:45:44 in exactly the right way
0:45:46 and type in the right incantations
0:45:48 and add a lot of power,
0:45:50 that you get something that appears to think
0:45:52 and that can trick someone into thinking
0:45:54 that it’s a real human being.
0:45:56 – The is it good or is it bad question
0:45:58 is quite interesting here.
0:46:00 Obviously too soon to tell,
0:46:04 but striking to me in the case of AI
0:46:06 that the people who seem most worried about it
0:46:09 are the people who know the most about it,
0:46:10 which is not often the case, right?
0:46:13 Usually the people doing the work, building the thing,
0:46:15 just love it and think it’s great.
0:46:17 In this case, it’s kind of the opposite.
0:46:21 – Yeah, I think the times when I am calmest about AI
0:46:24 and least worried about it taking my job
0:46:28 are times when I’m using AI products
0:46:31 to slightly improve how I do my job.
0:46:32 That is better natural language search
0:46:36 or actually most of it is processing natural language
0:46:39 when there are a lot of pages I need to read,
0:46:41 which contain, if it’s like a thousand pages
0:46:43 of which five sentences matter to me,
0:46:46 that is a job for the API and not a job for me.
0:46:49 But it is now a job that the API and I can actually get done
0:46:53 and my function is to figure out what those five sentences
0:46:55 are and figure out a clever way to find them.
0:46:57 And then the AI’s job is to do the grunt work
0:46:58 of actually reading through them.
0:47:00 – That’s AI as useful tool, right?
0:47:03 That’s the happy AI story, yeah.
0:47:06 – And I actually think that preserving your own agency
0:47:08 is a pretty big deal in this context.
0:47:11 So I think that if you’re making a decision,
0:47:14 it needs to be something where you have actually formalized it
0:47:15 to the extent that you can formalize it
0:47:18 and then you have made the call.
0:47:20 But for a lot of the grunt work,
0:47:23 AI is just, it’s a way to massively parallelize
0:47:25 having an intern.
0:47:26 – Plainly, it’s powerful.
0:47:29 And you’re talking about what it can do right now.
0:47:32 I mean, the smartest people are like,
0:47:34 yes, but we’re gonna have AGI in two years,
0:47:36 which I don’t know if that’s right or not.
0:47:37 I don’t know how to evaluate that claim,
0:47:39 but it’s a wild claim.
0:47:43 It’s plainly not obviously wrong on its face, right?
0:47:44 It’s possible.
0:47:46 Can you even start to parse that?
0:47:49 You’re giving sort of little things today about,
0:47:50 oh, here’s a useful tool and here’s a thing
0:47:51 I don’t use it for.
0:47:53 But there’s a much bigger set of questions
0:47:54 that seem imminent.
0:47:56 – You know, there are certain kinds of radical uncertainty.
0:48:00 They’re, you know, I think it increases wealth inequality,
0:48:04 but also means that intelligence is just more abundant
0:48:08 and is available on-demand and is baked into more things.
0:48:11 I think that it’s, you know, you can definitely sketch out
0:48:12 really, really negative scenarios.
0:48:15 You can sketch out, you know, not end of the world,
0:48:19 but maybe might as well be for the average person scenarios
0:48:21 where every white collar job gets eliminated
0:48:22 and then a tiny handful of people
0:48:25 have just unimaginable wealth and, you know,
0:48:27 rearrange the system to make sure that doesn’t change.
0:48:30 But I think there are a lot of intermediate stories
0:48:33 that are closer to just the story of, say,
0:48:35 accountants after the rise of Excel,
0:48:37 where there were parts of their job
0:48:39 that got much, much easier
0:48:41 and then the scope of what they could do expanded.
0:48:43 – It was the bookkeepers who took it on the chin.
0:48:47 It turns out like Excel actually did drive bookkeepers
0:48:50 out of work and it made accountants more powerful.
0:48:53 – Yeah, so, you know,
0:48:55 I think within a kind of company function,
0:48:58 you’ll have specific job functions that do mostly go away.
0:49:00 And then a lot of them will evolve.
0:49:03 And so the way that AI seems to be rolling out
0:49:06 in big companies in practice is,
0:49:09 they generally don’t lay off a ton of people.
0:49:12 They will sometimes end outsourced contracts,
0:49:15 but in a lot of the cases, they don’t lay people off.
0:49:18 They change people’s responsibilities.
0:49:20 They ask them to do less of one thing
0:49:22 and a whole lot more of something else.
0:49:24 And then in some cases, that means
0:49:25 they don’t have to do much hiring right now,
0:49:27 but they think that a layoff would be pretty demoralizing.
0:49:30 So they sort of grow into the new cost structure
0:49:32 that they can support.
0:49:33 And then in other cases, there are companies
0:49:35 where they realize, wait,
0:49:37 we can ship features twice as fast now.
0:49:38 And so our revenue’s going up faster.
0:49:40 So we actually need more developers
0:49:42 because our developers are so much more productive.
0:49:48 – We’ll be back in a minute with the lightning round.
0:49:58 Okay, let’s finish with the lightning round.
0:50:01 What’s the most interesting thing you learned
0:50:04 from an earnings call transcript in the last year?
0:50:08 – Most interesting thing from a transcript in the last year.
0:50:12 I would say there was a point,
0:50:13 this might have been a little over a year ago.
0:50:16 There was a point at which Satya Nadella
0:50:18 was talking about Microsoft’s AI spending.
0:50:21 And he said, “We are still at the point.”
0:50:23 And I think he and Zuckerberg both said something
0:50:25 to the same effect and in the same quarter,
0:50:27 which was very exciting for NVIDIA people.
0:50:28 But it was like, we’re at the point
0:50:30 where we see a lot more risk to underspending
0:50:34 than to overspending on AI specifically.
0:50:36 – That really speaks to your book, right?
0:50:39 That really is like a bubbly as hell
0:50:40 in the context of your book,
0:50:43 like overspending like the Apollo missions,
0:50:44 like the Manhattan Project,
0:50:47 like the big risk is that we don’t spend enough.
0:50:49 – And also they know that their competitors
0:50:51 are listening to these calls too.
0:50:55 So they were also saying that this is kind of a winnable fight,
0:50:57 that they do think that there is a level
0:51:00 of capital spending at which Microsoft can win
0:51:01 simply because they took it more seriously
0:51:02 than everybody else.
0:51:06 – So he’s like, yes, we’re gonna spend billions
0:51:08 and billions of dollars on AI
0:51:13 because we think we can win, and Zuckerberg implicitly said the same.
0:51:22 What’s one innovation in history that you wish didn’t happen?
0:51:31 – I wish there were some reason that it was infeasible
0:51:34 to have really, really tight feedback loops
0:51:37 for consumer facing apps, particularly games.
0:51:42 – Is that a way of saying you wish games were less addictive?
0:51:44 – Yeah, I wish games were less addictive
0:51:48 or that they weren’t as good at getting more addictive.
0:51:50 So I wrote a piece in the newsletter about this recently
0:51:51 ’cause there was that wonderful article
0:51:54 on the loneliness economy in the Atlantic
0:51:55 a couple of weeks back, which was talking about how
0:51:58 one of the pandemic trends
0:51:59 that has mean-reverted the least
0:52:01 is how much time people spend alone.
0:52:02 And I think one of the reasons for that
0:52:05 is that all the things you do alone,
0:52:08 they are things that produce data for the company
0:52:10 that monetizes the time that you spend alone.
0:52:14 And so the fact that we all watched a whole lot of Netflix
0:52:16 in the spring of 2020 means that Netflix has a lot more data
0:52:18 on what our preferences are.
0:52:21 – So they got better at making us want to watch Netflix
0:52:25 and all the video games we’ve played on our phones
0:52:27 got better at making us addicted
0:52:29 to keep playing video games on our phones.
0:52:31 Yeah, that’s a bummer, it’s a bummer.
0:52:35 What was the best thing about dropping out of college
0:52:37 and moving to New York City at age 18?
0:52:43 – So I would say that it really meant
0:52:45 that I could, and had to,
0:52:49 just take full responsibility for outcomes,
0:52:53 and that I get to take a lot more credit
0:52:55 for what I’ve done since then,
0:52:57 but also get a lot more blame
0:53:01 where there isn’t really a brand name to fall back on.
0:53:02 And so if someone hires me,
0:53:05 they can’t say this person got a degree from institution X.
0:53:06 You know, I didn’t even,
0:53:07 I dropped out of a really bad school too.
0:53:10 So there’s not even the extra upside of, you know,
0:53:13 “my startup was so great,
0:53:16 I just had to leave Stanford
0:53:17 after only a couple of semesters.”
0:53:20 No, it was Arizona State and I didn’t even party.
0:53:25 So yeah, it’s that.
0:53:28 It’s just being a little more in control of the narrative
0:53:33 and also just knowing that it’s a lot more up to me.
0:53:35 – What was the worst thing about dropping out of college
0:53:36 and moving to New York at age 18?
0:53:39 – So one time I went through a really,
0:53:40 really long interview process
0:53:41 for a job that I really wanted.
0:53:45 And at the end of many, many rounds of interviews
0:53:49 and, you know, work sessions and lots of stuff,
0:53:51 the hiring committee rejected me
0:53:53 because I didn’t have a degree, and that was on my resume.
0:53:55 So that was kind of inconvenient.
0:53:57 I guess another downside,
0:54:01 like it might have been nice to spend more time
0:54:06 with fewer obligations and access to a really good library.
0:54:09 (upbeat music)
0:54:15 – Byrne Hobart is the co-author of
0:54:18 “Boom: Bubbles and the End of Stagnation”.
0:54:20 Today’s show was produced by Gabriel Hunter-Chang.
0:54:23 It was edited by Lydia Jean-Cott
0:54:25 and engineered by Sarah Brugger.
0:54:29 You can email us at problem@pushkin.fm.
0:54:31 I’m Jacob Goldstein and we’ll be back next week
0:54:33 with another episode of “What’s Your Problem?”
0:54:36 (upbeat music)
0:54:39 (upbeat music)
There are moments in history when people make huge technological advances all of a sudden. Think of the Manhattan Project, the Apollo missions, or, more recently, generative AI. But what do these moments have in common? Is there some set of conditions that lead to massive technological leaps?
Byrne Hobart is the author of a finance newsletter called The Diff, and the co-author of Boom: Bubbles and the End of Stagnation. In the book, Byrne makes the case for one thing that is really helpful if you want to make a wild technological leap: a bubble.
See omnystudio.com/listener for privacy information.