AI transcript
0:00:18 The universe wants us to basically be alive, the universe wants us to become more sophisticated, the universe wants us to replicate, and the universe feeds us an essentially unlimited amount of energy and raw materials with which to do that.
0:00:23 Yes, we dump entropy out the other side, but we get structure and life to basically compensate for that.
0:00:25 That's the thermodynamic underpinning of effective accelerationism.
0:00:31 Today, we're running a drop from the Hermitix podcast featuring a conversation with Marc Andreessen.
0:00:40 In this episode, they discuss AI, accelerationism, effective accelerationism or e/acc, energy, the future of technology, and much more.
0:00:42 Let’s get into it.
0:00:54 This episode, I’m joined by Marc Andreessen to discuss accelerationism, AI, technology, the future, energy, and more.
0:01:00 I’d like to say a big thank you to all my paying patrons and subscribers for making all of this work possible.
0:01:06 And if you’d like to support the podcast as it runs off patronage alone, then please find links in the description below.
0:01:08 Otherwise, please enjoy.
0:01:13 So, Marc Andreessen, thanks very much for joining us on the Hermitix podcast.
0:01:14 Hey, James.
0:01:15 Thanks for having me.
0:01:23 We are going to be discussing accelerationism, AI, technology, the future.
0:01:26 Technology is probably the key one here, I think.
0:01:34 But I want to begin with something that you probably aren't asked on the usual podcasts you go on.
0:01:38 A lot of people know who you are, but in the sphere I'm working in, people might not.
0:01:44 So, yeah, just tell us a little bit about yourself and what it is you do before we get started here.
0:01:48 Yeah, so I’m probably the polar opposite from your usual guests.
0:01:49 Exactly.
0:01:53 So, I’m bringing diversity to your production.
0:01:56 So, I'm an engineer.
0:02:02 My background is as a computer programmer, computer scientist, and computer engineer.
0:02:09 I was trained in the old school of computer science, where they teach you every layer of the system, including hardware and software.
0:02:15 And then I was a programmer and then an entrepreneur in the 90s.
0:02:25 Probably my main claim to fame is that I was present at the creation of what today you'd consider the internet, the modern consumer internet that people use.
0:02:42 My work, first at the University of Illinois and then later at a company I co-founded called Netscape, popularized the idea of ordinary people being online, and helped to build what today you experience as the modern web browser and the modern internet experience.
0:02:58 And then I was involved across a broad range of Silicon Valley waves over the course of the next 20 years, in the 90s and 2000s, including cloud computing and social networking, each of which I started a company in.
0:03:04 And then in 2009, with my long-time business partner, I started a venture capital firm.
0:03:14 Our firm, which is called Andreessen Horowitz, is now one of the firms at the center of funding all of the new generations of technology startups.
0:03:30 And maybe the main thing I'd underline there is that technology, quote unquote high tech, computer technology in particular, has always been interesting and important in the economy for the last 50 years or so.
0:03:41 But in the last 15 years, I think a lot of people feel that technology has really spread out and has become integral to many more aspects of life.
0:04:03 And so my firm today finds itself very involved in the application of technology to everything from education, housing, energy, national defense, and national security to artificial intelligence and robotics, every different dimension of how you might touch technology in your life.
0:04:10 You picked up on something that will come into the conversation in a couple of questions' time.
0:04:21 This notion of you being basically the complete opposite of the majority of guests, not in a bad way, but often it's a lot of philosophy, which is theory and not practice.
0:04:25 And also this notion of technology in relation to either pessimism or optimism.
0:04:32 And this is super, super key, I think, for the ongoing atmosphere of really the West, of where we're going to end up.
0:04:41 But before we get to these questions, I mean, this is a question I’m slowly phasing out, but I think it will work for the sake of our conversation because we’re talking more broadly around themes.
0:04:46 I know you've listened to the podcast before, so it is the Hermitix question.
0:04:50 You can place three thinkers, living or dead, into a room and listen in on the conversation.
0:04:51 Who do you pick?
0:04:57 Yeah, maybe I'll give you two versions of the answer, and then maybe I can combine them.
0:05:00 So there's kind of the timeless answer.
0:05:06 The timeless answer would be something like Plato or Socrates, and then Nietzsche.
0:05:12 And then maybe I'd throw in one of your favorite people, Nick Land.
0:05:14 I think that would be interesting.
0:05:23 The somewhat more applied version of that, and this is maybe a little more topical these days with this movie Oppenheimer that just came out, would be John von Neumann, who was one of the co-inventors of both the atomic bomb and the computer.
0:05:37 Alan Turing, who became famous a few years ago with another movie, The Imitation Game.
0:05:45 And then let's throw Oppenheimer in there also, because those three guys were present at the creation of what we would consider to be the modern technological world.
0:05:54 Literally, those guys, especially von Neumann and Turing, were at the center of World War II, the atomic bomb,
0:06:06 and the sort of information warfare, the whole decryption phenomenon, which a lot of people think won World War II along, ultimately, with the A-bomb.
0:06:12 And then also, right precisely at that time, with those people, the birth of the computer and everything that followed.
0:06:18 So is that more of a practical room for you, or is it about a vision going forward into the future?
0:06:23 Or is there something else going on there between those sort of six figures?
0:06:38 It's almost impossible to overstate how smart and visionary and far-seeing those guys were. There's actually a von Neumann biography that came out recently called The Man from the Future.
0:06:44 And if anything, von Neumann is a more interesting character than Oppenheimer in a lot of ways, because he touched a lot more of these fields.
0:06:51 Among the people who knew them, von Neumann was always considered the smartest of what were called the Martians at that time, right?
0:06:56 Which was the sort of group of super geniuses that originated in Hungary in that era.
0:07:01 And so, look, they were very, very conceptual thinkers.
0:07:05 I'll just give you one example of how conceptual they were, how profoundly smart they were.
0:07:11 They basically birthed the idea of artificial intelligence right in the middle of the heat of World War II.
0:07:14 The minute they created the computer, right? They created the electronic computer as we know it today in the heat of World War II.
0:07:20 And then they immediately said, aha, this means we can build electronic brains.
0:07:24 And they immediately began theorizing and developing designs for artificial intelligence.
0:07:29 And in fact, the core algorithm of artificial intelligence is this idea of neural networks, right?
0:07:35 Which is this idea of a computer architecture that mirrors, in some ways, the sort of mechanical operation of the human brain.
0:07:39 That was literally an idea from that era, in the early 1940s.
0:07:47 Two other guys who were in this world wrote a paper in 1943 outlining the theory of neural networks.
0:07:54 And that literally is the same technology that is the core idea behind what you see when you use ChatGPT today, 80 years later.
0:08:02 And so there was a very, very deep level of intellectual and philosophical, I don't know what it is.
0:08:06 They tapped into or discovered or developed a very deep well that we're still drawing out of today.
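A minimal sketch of the idea being described here, the 1940s neural network abstraction as it might look in present-day Python with NumPy. The layer sizes, weights, and nonlinearity are illustrative stand-ins, not anything from the conversation:

```python
import numpy as np

def relu(x):
    # Simple nonlinearity; the 1943 formulation used a hard threshold,
    # modern networks typically use smooth or piecewise-linear functions.
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """One forward pass through a tiny fully connected network.

    Each layer computes a weighted sum of its inputs plus a bias and then
    applies a nonlinearity: the same basic unit sketched in the 1940s and
    still at the core of systems like ChatGPT, just vastly scaled up.
    """
    activation = x
    for W, b in zip(weights, biases):
        activation = relu(activation @ W + b)
    return activation

# Illustrative toy network: 4 inputs -> 8 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]
biases = [np.zeros(8), np.zeros(2)]
print(forward(rng.normal(size=4), weights, biases))
```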
0:08:10 I was going to ask that immediately, but you covered it.
0:08:17 I mean, are there any significant changes between AI then and AI now, or is it really just a matter of practicality?
0:08:21 Like, we've got more resources and more ability to create it.
0:08:24 Yeah, we're at this fairly shocking moment.
0:08:32 For people who haven't been following this, it's one of these amazing things where there's this 80-year overnight success that all of a sudden is paying off.
0:08:39 There were 80 years of scholars and researchers and projects and attempts to build electronic brains.
0:08:42 And at every step of the way, people thought that they were super close.
0:08:46 There was this famous seminar on the campus of, I think it was Dartmouth,
0:08:49 in 1956, where they got this grant to spend 10 weeks together.
0:08:53 They got all the AI scientists together in 1956, because they thought that after that, they'd have AI.
0:08:56 And it turned out they didn't.
0:09:00 But it's starting to work, right?
0:09:05 And so when you use ChatGPT today, or on the artistic side you use something like Midjourney or Stable Diffusion,
0:09:08 you're seeing the payoff from that.
0:09:12 I think the way to think about it is that it's the deep thinking that took place up front,
0:09:18 and then obviously a tremendous amount of scientific and technological thinking, development, and elaboration that took place since then.
0:09:25 But then there are two other key things that are making AI work today, and there's a combination of incremental but also step-function breakthroughs along the way.
0:09:33 So one is data.
0:09:37 It turns out a big part of getting a neural network to work is feeding in enough data.
0:09:43 And the analogy is irresistible, right? If you're trying to educate a student, you want to feed them a lot of material, in the human world also.
0:09:53 And it just turns out there's this thing with neural networks and data where, as they say, quantity has a quality all its own.
0:10:01 And you really needed the internet to get to that scale of data.
0:10:04 You needed internet-scale data: you needed the web to generate enough text data,
0:10:08 and you needed Google Images and YouTube to generate enough video and imagery to be able to train on.
0:10:13 So we're kind of getting a payoff from the internet itself, combined with the neural networks.
0:10:17 And then the third is the advances in semiconductors.
0:10:21 This is the famous Moore's law, the phenomenon we refer to as, quote unquote, teaching sand to think.
0:10:30 This idea that you can literally convert silicon, sand, rocks into chips and then ultimately into brains is kind of this amazing thing.
0:10:44 And actually, I don't know if you follow this stuff, but as we're recording right now, there's this amazing phenomenon happening in the world of semiconductors and physics,
0:10:53 which is that we may have just discovered the first room-temperature superconductor.
0:10:59 I’ve been seeing this, but I’m not smart enough.
0:11:03 Can you give me a brief overview of why this is so important?
0:11:06 I mean, I’m guessing, is this a resource input issue?
0:11:10 So basically, every time you build a circuit today, any kind of circuit, a wire, a chip, anything like that, an engine, a motor, you have basically this process.
0:11:21 And by the way, this actually relates to the philosophy of accelerationism that we'll talk about.
0:11:24 You have this sort of thermodynamic process where you're taking in energy on the one side, right?
0:11:28 And then you have a system, right? Like an electrical transmission line or a computer chip or something.
0:11:35 You have a system that's basically using that energy to accomplish something.
0:11:41 And then that system is inefficient, and that system is dumping heat out the other end.
0:11:45 And this is why, when you use your computer, if you've got an older laptop computer, the fan turns on at a certain point.
0:11:51 If you have a newer laptop computer, it just starts to get hot.
0:11:55 You've probably noticed your phone starts to get hot.
0:11:57 Batteries every once in a while do what they call a cook-off: lithium-ion batteries will explode, right?
0:12:03 There's always a by-product of heat, and therefore sort of increased entropy, coming out the other side of any electrical or mechanical system.
0:12:13 And that's just because, running energy through wires of any kind, you have a level of inefficiency.
0:12:18 By the way, the human body does the same thing, right?
0:12:22 We take in energy, and we're sitting here, we don't feel it, but we're humming along at, whatever, 98.6 degrees Fahrenheit, significantly higher than room temperature,
0:12:32 because our actual biochemical process of life, bioelectrical, is generating heat and dumping it out.
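As a rough illustration of the energy-in, work-out, heat-out bookkeeping being described, here is a minimal sketch in Python. The wattage and efficiency figures are made-up round numbers for illustration, not measurements from the conversation:

```python
def waste_heat(power_in_watts: float, efficiency: float) -> float:
    """Split input power into useful work and dissipated heat.

    Any real circuit or engine has efficiency < 1, so some fraction of the
    input power always leaves as heat (and therefore as entropy dumped
    into the environment).
    """
    useful = power_in_watts * efficiency
    heat = power_in_watts - useful
    return heat

# Hypothetical example: a 65 W laptop running at 80% efficiency
# still has to shed 13 W of heat, which is why the fan spins up.
print(waste_heat(65.0, 0.80))  # -> 13.0
```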
0:12:39 Anyway, so the idea of the superconductor, think about it in the abstract as a wire that transmits with basically perfect fidelity, perfect conservation of energy, without dumping any heat into the environment.
0:12:51 And it turns out that if you could do that at room temperature, then all of a sudden you can have incredibly more efficient kinds of batteries, electrical transmission, motors, computer chips.
0:13:05 And so you can start to think about, for example, and this is just an example people talk about: if you covered the Sahara desert in solar panels, you could basically power the entire planet's energy needs today.
0:13:19 The problem is there's no way to transfer that power from the Sahara to the rest of the world with existing transmission line technology.
0:13:27 With superconducting transmission lines, all of a sudden you could.
0:13:31 Quantum computers exist today, but they're sharply limited because they have to be operated at these super-cool temperatures in these very carefully constructed labs.
0:13:38 With superconductors, in theory, you have desktop quantum computers.
0:13:42 You have levitating trains.
0:13:47 You have a very broad cross-section of things.
0:13:49 You have handheld MRIs, right?
0:13:52 Every doctor, every nurse has an MRI and they can just take a scan wherever they need to, on the fly,
0:13:59 like the Star Trek tricorder kind of thing.
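A small sketch of why resistance in the wires is the bottleneck here, using the standard resistive-loss relation (loss equals current squared times resistance). The power, voltage, and resistance figures are hypothetical placeholders, not data from the conversation:

```python
def transmission_loss_fraction(power_w: float, voltage_v: float,
                               resistance_ohms: float) -> float:
    """Fraction of transmitted power lost as heat in the line.

    For a line carrying power P at voltage V, the current is I = P / V
    and the resistive loss is I^2 * R. A superconducting line has R ~ 0,
    so the loss fraction goes to zero regardless of distance.
    """
    current = power_w / voltage_v
    loss = current ** 2 * resistance_ohms
    return loss / power_w

# Hypothetical long-distance line: 1 GW delivered at 800 kV over a line
# with 50 ohms of total resistance loses roughly 7.8% of the power as heat.
print(transmission_loss_fraction(1e9, 800e3, 50.0))
# A superconducting line with effectively zero resistance loses ~0%.
print(transmission_loss_fraction(1e9, 800e3, 0.0))
```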
0:14:02 And so anyway, it's fascinating.
0:14:05 So sitting here today, there are these reports of this breakthrough,
0:14:10 and there are these almost UFO-style videos of this material levitating where it's not supposed to be levitating, as a consequence of this breakthrough.
0:14:19 And there are betting markets on scientific progress, and the betting markets as of this morning have the odds of this being a real breakthrough at exactly 50/50.
0:14:25 Not the worst odds.
0:14:27 No, but it's funny, if you think about it.
0:14:31 Because our entire world right now, from a physics standpoint, is like Schrodinger's cat.
0:14:36 We live, sitting here today, in a superposition of two worlds:
0:14:40 one in which we now have room-temperature superconductors and one in which we don't.
0:14:45 And these are radically different potential futures for humanity, right?
0:14:49 If it turns out it's true, it's an amazing step-function breakthrough.
0:14:52 If not, it'll set us back, and people will go back to trying to figure it out.
0:14:58 But between the time we record and the time we release, we may even find out whether the cat, the superconducting cat in the box, is alive or dead.
0:15:05 That alive-or-dead state, these two separate futures, is really something that I see.
0:15:09 When I was reading your blog, when I was looking at effective accelerationism and accelerationism, which we'll get to, these two futures, I think, is the big question that I want to ask you.
0:15:24 Because you've lived through this time, which goes through the optimism of the nineties especially. You mentioned Nick Land at the start.
0:15:32 I mean, you see that in philosophy, you see that in technology, you see that in the history:
0:15:38 this huge, let's call it a cyberpunk optimism regarding our technological future.
0:15:42 And I would say now, I don't know whether or not you agree with me, please let me know,
0:15:54 we have entered into what Land himself called a slump. From the late 2000s, there seems to be within the air a sort of cynicism, a sort of pessimism, that we've just ended up in this place of stagnance.
0:16:07 And, I mean, if you agree with me in terms of those two possibilities, and I think I would be right in saying you're an optimist,
0:16:17 do you see us now re-entering into a new phase of optimism regarding technology and regarding the future?
0:16:22 Well, there are several layers to this question, and I'd be happy to go through them.
0:16:27 We can spend as much time on this as you want. And by the way, I totally acknowledge, I think this is a great topic, and your observations are very real.
0:16:38 The core thing that I would go to, to start with, is not the social, political, philosophical dimension.
0:16:43 The core thing I would go to, to start with, is the technological dimension.
0:16:48 In other words, at the substantive level, what is the actual rate of technological change in our world?
0:16:54 And you'll note that on the social dimension, we seem to whip back and forth between, oh my God, there's too much change and it's destabilizing everything,
0:17:02 and then we whip right around to, oh my God, there's not enough change and we're stagnant, right? And that's horrible.
0:17:08 So there are kind of dystopian mindsets in the air in both directions.
0:17:15 So anyway, I would start with the technological, substantive layer to it.
0:17:20 And there, the observation, and this is not an original observation on my part,
0:17:24 Peter Thiel and Tyler Cowen in particular have gone through this in a lot of detail in their work,
0:17:30 is that if you look at the long arc of technological development, which effectively started with the Enlightenment, right,
0:17:38 so practically speaking, you're starting around 1700 and projecting forward to today,
0:17:44 it's about 300 years' worth of what we would consider systematic technological development.
0:17:50 If you look at that long arc, you can actually measure the pace of technological development in the economy with a metric that economists call productivity growth.
0:18:05 And basically the way that works is that economic productivity is defined as output per unit of input, right?
0:18:11 And your inputs could be whatever you want: energy, raw materials, and so on.
0:18:18 And then output is actual output: more cars, more chips, more this, more that, more clothes, more food, more houses.
0:18:27 And so basically what economists will tell you is that the rate of productivity growth in the economy, which they measure annually, is basically the rate of technological change in the system, right?
0:18:34 And so if technology is paying off, if the advances are real, then your economy is able to generate more output with the same inputs.
0:18:43 If your technological development is stagnant, then that's not the case.
0:18:46 It's an aggregate measure, but it's a good measure overall.
0:18:51 If you look at those statistics, basically what you find is that, taking more recently, in the last century, we had very rapid productivity growth in the West for basically the first half of the 20th century.
0:19:02 So from what was called the second industrial revolution, which started around 1880, 1890, through to basically the mid-sixties, we had actually a very rapid rate of technological development.
0:19:14 And by the way, in that era we got the car, the interstate highway system, the power grid, telegraph, telephone, radio, television.
0:19:23 We got computers, and we got both atomic weapons and also nuclear power technology.
0:19:32 So there was this tremendous technological surge that took place in that, call it 1880 to 1960, 1965 kind of period.
0:19:41 And productivity growth ran through that era at two to four percent a year, which in the aggregate is very fast for the economy overall. That's a very fast pace of change.
0:19:52 Basically since the mid-sixties, early seventies, the rate of productivity growth took a sharp deceleration.
0:19:57 And so in basically the 50, 52 years now that I've been alive, it's been a step lower.
0:20:02 It's been one or two percent a year.
0:20:05 It's been kind of persistently too low relative to what it should be.
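A minimal sketch of the two ideas in play here: productivity as output per unit of input, and how much the gap between the growth rates mentioned above compounds to over roughly a working lifetime. The specific rates and horizon are just round numbers drawn from the ranges in the conversation:

```python
def productivity(output_units: float, input_units: float) -> float:
    # Economic productivity in the sense used here: output per unit of input.
    return output_units / input_units

def compound_growth(rate_per_year: float, years: int) -> float:
    # Multiplier on productivity after compounding a constant annual rate.
    return (1.0 + rate_per_year) ** years

# Fast-growth era (roughly 2-4% a year) vs. slow era (roughly 1-2% a year),
# compounded over ~50 years:
print(compound_growth(0.03, 50))  # ~4.4x output per unit of input
print(compound_growth(0.01, 50))  # ~1.6x output per unit of input
```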
0:20:08 And I think there are a bunch of possible explanations for that.
0:20:14 But I think the most obvious one is that the world of technology bifurcated in the seventies and eighties into two domains.
0:20:20 One domain is the domain of bits, the domain of computers and the internet, where there has obviously been very rapid technological development, potentially now culminating in AI.
0:20:30 But then there's also the world of atoms, and the diagnosis, at least, that I would apply is that we essentially outlawed technological development and innovation in the realm of atoms, basically since the 1970s.
0:20:42 There are many examples of how we've done this. You can look at things like housing policy and see it quite clearly, but you can also see it very specifically in energy.
0:20:49 Which is, we discovered nuclear power, right?
0:20:55 We discovered a source of unlimited, zero-emissions energy that, compared to every other form of energy, is ultra safe.
0:21:01 Nuclear energy is by far the safest form of energy that we know of.
0:21:04 And in the 1970s, we essentially made it illegal.
0:21:06 Just totally banned it.
0:21:11 We'll talk more about that, but that was a draconian thing that has consequences through to the world we live in today.
0:21:18 And so we live in this sort of, and you mentioned cyberpunk, and this is actually kind of the cyberpunk ethos that I think reflects something real,
0:21:24 which is, if you're in the virtual world, it's like, wow, right? It's amazing.
0:21:30 Everything is spectacular. And look, even a podcast like yours would have been inconceivable 30 years ago, right?
0:21:37 Information, transmission, communication, coordination, all these things have taken huge leaps forward.
0:21:44 But then the minute you get into a car, or the minute you plug something into the wall, or the minute you eat food,
0:21:48 you're still living in the 1950s.
0:21:52 And so I think we live in a schizophrenic world with respect to that question.
0:21:54 Why then?
0:22:00 So you write about this in your blog post on AI, which we'll get to, but you draw in Prometheus, right?
0:22:05 This consistent historical cycle where, when there is a new technology, it's going to destroy us, everything's going to end, it's the worst thing ever, we need to be careful of it.
0:22:12 You know, the TV is going to burn your eyeballs out of your sockets.
0:22:15 The vacuum cleaner is going to, I don't know, explode or whatever.
0:22:21 But every time there is a cyclic change, a new technological innovation, it's this Promethean thing where we're pretty terrified of it and we want it to go away.
0:22:27 And then eventually we're like, oh, actually, no, that's pretty helpful.
0:22:31 But there seems to be, as you said, something that happened in the 1970s where we just pushed away the atomic world in favor of the bits, which makes sense.
0:22:41 But why, I mean, there are probably a lot of governmental reasons for this as well, but it seems like a fear, really, the way you talk about it.
0:22:52 Why were we, in a way, scared to develop the atomic world in the way we had the bit world?
0:22:55 Yeah.
0:22:59 So I'd start even deeper, I think, which is that there's a deep fear in the human psyche, and I think probably in the human animal, of new knowledge.
0:23:06 Like, it's even a level deeper: technology is an expression of knowledge, right?
0:23:10 The Greeks already had this term techne, which is where the word technology comes from,
0:23:14 but I think the underlying meaning is more like general knowledge.
0:23:18 And the key to Christian theology, right, what was the original sin?
0:23:26 It was eating the apple from the tree of knowledge, right?
0:23:28 It was mankind learning that which he was not supposed to learn.
0:23:34 And so the Greeks had the Prometheus myth, and the Christians have the snake in the Garden of Eden and the tree of knowledge.
0:23:40 There's something very, very deep there.
0:23:43 There's an asymmetry, I think, wired deeply into the human brain, right, between fear and hope,
0:23:51 which from an evolutionary standpoint would make a lot of sense, right?
0:23:54 Which is, okay, if you're living in, let's say, prehistoric times, in this long evolutionary landscape that we lived in, is new information likely to be good or bad?
0:24:03 Probably, over the sweep of the billions of years of evolution that we went through, most new information was bad, right?
0:24:08 Most new information was the predators coming over the hill to kill you.
0:24:15 And so I think there's something deeply resonant about the idea that new is bad.
0:24:18 And by the way, look, in the West, I think from a historical and maybe comparative standpoint, we're actually quite enamored of new things as compared to a lot of traditional societies.
0:24:28 And so, if anything, we've overcome some of our natural instincts on this, but that impulse is still deep.
0:24:36 And then if you go up one level, to the social level, I'm quite bought into an explanation on this that was provided by a philosopher and historian of science named Elting Morison at MIT in the first half of the 20th century, who talked about this.
0:24:52 And he said, look, you need to think about how technology intersects with social systems.
0:24:57 When a new technology intersects with a social system, basically what it does is it threatens to upend the social order, right?
0:25:04 At any given moment in time, you have a social order, right, with status hierarchies and people who are in charge of things.
0:25:11 And basically what he says is that the social order of any time, in sort of modern, Enlightenment, Western civilization, is a function of the technologies that led up to it, right?
0:25:20 And so you have a certain way of organizing the military, a certain way of organizing industrial society, a certain way of organizing political affairs.
0:25:27 And they are the consequence of the technologies up to that point.
0:25:33 And then you introduce a new technology, and the new technology basically threatens to upend that status hierarchy.
0:25:37 And the people who are in power all of a sudden aren't, and there are new people in power.
0:25:41 And of course, what is the thing that people will fight the hardest to maintain? Their status in the hierarchy.
0:25:47 And then he goes through example after example of this throughout history, including this incredible example of the development of the first naval gun that adjusted for the roll of the ship in battle,
0:25:58 which increased the firing accuracy of naval guns by like 10x.
0:26:02 It was one of the great decisive breakthroughs in modern weaponry.
0:26:08 And it still took both the US and British navies 25 years to adopt it,
0:26:15 because the entire command and status hierarchy of how naval combat vessels were run, and how gunnery systems worked, and how tactics and strategy worked for naval battles, had to be upended with the invention of this new gun.
0:26:27 Anyway, so he would basically say, it's actually, duh: you roll out this new technology, it causes people who used to have power to no longer have power, and it puts new people in power.
0:26:36 In modern terms, the language that we would use to describe this is gatekeepers, right?
0:26:42 So why is the traditional journalism press so absolutely furious about the internet, right?
0:26:48 It's because the internet gives regular people the opportunity to be on at least a peer relationship, if not, in the case of somebody like Joe Rogan, a superior relationship, right?
0:26:57 And so it's an upending of the status hierarchy.
0:27:01 And basically, one of the ways to interpret the story of our time, from a social standpoint, is that all of the gatekeepers who were strong in the sixties and seventies are basically being torn down.
0:27:10 But I'll give you another obvious example: political parties, right?
0:27:15 Why are so many Western political parties in a state of some combination of freak-out and meltdown right now?
0:27:20 It's because, in an era of radio and television, they were able to broadcast a top-down message and they were able to tell voters basically what to think.
0:27:26 In the new model, voters are deciding what they think based on what they read online.
0:27:30 And then they're reflecting that back up and finding their politicians wanting, right?
0:27:34 And so therefore, the re-rise of populism, and the blowing out of both left-wing and right-wing ideologies, right, this sense that the center is not holding.
0:27:40 So anyway, that would be another example in Morison's framework.
0:27:45 And then I'll just close on this.
0:27:48 Morison has this fascinating observation: as a consequence of the fact that technology changes social hierarchies, he says there's a predictable three-stage process to the reaction to any new technology by the status quo, by the people in power at that time.
0:28:00 He says step one is ignore.
0:28:05 Just pretend it doesn't exist, which by the way is actually a pretty good strategy, because most technologies don't upend social orders; most new technologies don't work at the time that they're first presented.
0:28:13 So maybe ignoring it is actually a rational strategy.
0:28:16 Step two is what he calls rational counterargument.
0:28:20 And so that's where you get the laundry list of all the things that are wrong with the new technology, right?
0:28:24 And then he says step three is when the name-calling begins.
0:28:25 Mm-hmm.
0:28:29 I mean, I've watched a couple of your other interviews recently, and this relates to, I know you've been talking about Nietzsche's master and slave morality recently.
0:28:39 And this seems to tie to that, this notion of Nietzschean ressentiment, where he does the typical philosophical thing of taking a French word and drawing it out, but ressentiment, right?
0:28:50 Instead of just having a look at nuclear power and seeing where it would go, and allowing that power to unfold within society,
0:28:56 you invert the morals.
0:29:00 So you say, well, actually, because these people don't have the will to power, because they don't have the ability or the engineering skills, I guess in your own case, to utilize the thing, they invert the morals
0:29:13 and say, well, actually, the good thing to do is the inverse, is to not have it. This is bad.
0:29:18 And that then immediately puts them in the good camp.
0:29:23 But it seems like, to be honest, especially with AI and also now with nuclear power, now that, especially in Germany, certain things have been tried,
0:29:30 and now it's like, okay, this was a really bad mistake in terms of energy,
0:29:36 the cat's out of the bag, and there's now this force of having to move.
0:29:39 You were talking about the second and third stages there.
0:29:43 It's almost like, with AI especially, the cat's out of the bag, like we have to move.
0:29:47 There's no choice of ignoring it or reacting against it.
0:29:49 Now you either deal with it or you don't.
0:29:49 Yeah.
0:29:52 So let's spend one more moment on nuclear power and then go to AI.
0:29:55 So nuclear power is so interesting because nuclear power is the tell.
0:29:59 I always look for the little signals that people don't really mean what they say, or that their moral system doesn't quite line up properly.
0:30:07 And so nuclear power is this amazing thing.
0:30:10 It's like, literally, okay, you build this thing, it generates power.
0:30:12 It generates a small amount of nuclear waste.
0:30:16 It generates steam, but it generates zero emissions, right?
0:30:17 Zero carbon, right?
0:30:22 And so you have this amazing phenomenon. And let's just take them completely at face value.
0:30:25 This is not me questioning carbon emissions or global warming.
0:30:30 I'm going to assume that everything the environmentalists say about carbon emissions, climate change, all that stuff, is totally real.
0:30:35 Let's just grant them all of that.
0:30:41 It's like, okay, so how can you solve the climate crisis, the carbon emissions crisis?
0:30:45 Well, you have the silver bullet technology you could roll out in the form of nuclear fission today.
0:30:49 You could generate unlimited power.
0:30:53 Richard Nixon, by the way, the heavily, heavily condemned Richard Nixon, in 1972 proposed something he called at the time Project Independence.
0:31:01 Project Independence was going to be the United States building a thousand new civilian nuclear power plants by the year 1980, and cutting the entire U.S. energy grid, including the transportation system, cars, everything, home heating, everything, over to nuclear power by 1980, going zero-emission in the U.S. economy.
0:31:17 And by the way, geopolitically removing us from the Middle East, right?
0:31:19 So no Iraq, no Afghanistan, all this stuff, just completely unnecessary, right?
0:31:29 And you'll note that Project Independence did not happen, right?
0:31:31 We don't live in that world today.
0:31:33 And so it's like, okay, you've got this crisis,
0:31:36 you've got this silver bullet solution for it,
0:31:39 and you very deliberately have chosen to not adopt that solution.
0:31:43 And there's actually a very interesting split in the environmental movement today.
0:31:46 It's really kind of bizarre.
0:31:48 It's like a 99-to-one split.
0:31:51 You ask 99% of environmental activists about nuclear power, and they just sort of categorically dismiss it: of course that's not an option.
0:31:59 You do have this kind of radical fringe, with people like Stewart Brand, who are basically now pointing out that it is a silver bullet answer, but most of them are saying, no, it's not an answer.
0:32:06 And it's like, okay, well, why are they doing that?
0:32:09 Well, what is it that they're saying they want to do?
0:32:12 And what they're saying they want to do is what they call degrowth, right?
0:32:14 They want to decarbonize the economy, they want to de-energize the economy, they want to de-grow the economy.
0:32:21 And then, when you get down to it and you ask them a very specific question about the implications of this, basically what you find is that the general model is they want to reduce the human population on the planet to about 500 million people.
0:32:31 That's kind of the answer that they ultimately come down to.
0:32:34 And so ultimately the big agenda is to reduce the human herd, basically, quite sharply.
0:32:40 And they kind of dance around this a little bit, but when they really get down to it, this is what they talk about.
0:32:45 And of course, Paul Ehrlich is kind of one of the famous icons of this; he's been talking about it for decades.
0:32:52 I think it was Jane Goodall who used the 500 million number recently in public.
0:32:57 And so then you've got this very interesting technological, philosophical, moral question, which is, well, what is the goal here, right?
0:33:06 Is the goal to solve climate change, or is the goal to depopulate the planet, right?
0:33:10 And to the extent that free, unlimited power would interfere, to the extent that that's a problem, it would only be a problem if the actual agenda is to depopulate the planet.
0:33:18 And I would like this to not be the case.
0:33:22 Again, taking everything else that they say at face value, you'd like to solve carbon emissions and climate change and everything else.
0:33:29 But you might also say you want a planet on which there are not only 8 billion people; maybe people are good, right?
0:33:35 Maybe you actually should have 20 billion or 50 billion people.
0:33:38 And we have the technology to do that, and we're choosing not to do it.
0:33:44 So this is the thing: this gets into these very deep questions, right,
0:33:48 to your point, very deep questions about morality, and how did we maneuver, or, per Nietzsche, how did we reverse ourselves, into a situation where we're actually arguing against human life?
0:33:59 And of course, and we'll get to it, this is then a big part of the origin of the idea of effective accelerationism, which is basically, let's go sharply in the other direction.
0:34:06 Oh, and then, yeah, AI.
0:34:10 AI is playing out much the same way; AI is already playing out the same way.
0:34:14 And here you've got this incredible phenomenon happening where it looks like we have a key breakthrough to basically increase the level of intelligence all throughout society and around the world,
0:34:26 basically for the first time directly applying new general intelligence to the world.
0:34:33 And there is this incredibly aggressive movement, which is actually having tangible impact today in the halls of power in Washington, DC, and in the EU, and other places, that is seeking to stop and reverse it as aggressively as they possibly can.
0:34:47 And so we're going through, I would say, a suddenly accelerated and very sharp and aggressive version of exactly what happened with nuclear power, happening with AI right now.
0:34:58 I mean, this is the thing. Well, there are two questions, because on your blog, it's really refreshing to see that you're pretty to the point when you say, look, AI is code.
0:35:10 It's code written by people, by human beings, on computers developed by human beings. Like, we're in control of this.
0:35:19 I think there was, you know, Musk signed a big thing where a thousand people signed to say, we need to hold this back, the whole Roko's Basilisk thing, AI is going to become the Terminator and come blow us up with robots, et cetera, et cetera.
0:35:27 It's going to kill us all.
0:35:29 You're very much like, no, this is code.
0:35:31 This is just an intelligence for us to use.
0:35:36 Now, that's one question: I guess, why isn't AI going to kill us all?
0:35:37 And I know you've spoken about that a lot, so that answer can be brief.
0:35:45 But secondly, this whole idea of trying to reverse it: to me, it seems inherent within AI as a thing that it wants. The cat's out of the bag.
0:35:55 Once it's here, outside of really draconian measures, you can't reverse it,
0:35:59 because how do you hold back an intelligence which is growing, right?
0:36:03 Well, except, you know, they did stall out nuclear power, right?
0:36:06 Like, they did. It worked.
0:36:08 So why did Project Independence not happen?
0:36:11 Why do we not have unlimited nuclear power today?
0:36:15 The reason is because it was blocked by the political system, right?
0:36:18 And so Richard Nixon, who I mentioned, proposed this.
0:36:21 He also created the Environmental Protection Agency and the Nuclear Regulatory Commission.
0:36:25 And actually, this has been a big week.
0:36:30 The first newly designed nuclear power plant in the last 50 years just went online in Georgia, $20 billion over budget.
0:36:39 And it's a story of its own, but at least we got one online.
0:36:43 It's the first new nuclear power plant design ever authorized by the Nuclear Regulatory Commission since Nixon created that commission, right?
0:36:51 And so we put in place a regulatory regime around nuclear power in the 1970s that all but made it impossible.
0:36:55 By the way, you alluded to the Germany thing earlier; I'll just touch on that for a second.
0:36:59 So I'm sure you've heard of the idea of the precautionary principle, right?
0:37:05 Which is this idea that scientists and technologists have a moral obligation to think through all the possible negative consequences of a new technology before it's rolled out.
0:37:14 The precautionary principle, and we could talk about that, including whether scientists and technologists are actually qualified to do that.
0:37:22 And this was also a central theme of Oppenheimer. But the precautionary principle was invented by the German Greens in the 1970s.
0:37:27 And it was invented specifically to stop nuclear power.
0:37:30 And it is just amazing.
0:37:35 We're sitting here in 2023, and we in the West are effectively at war with Russia, right?
0:37:43 It's a proxy war right now that hopefully doesn't turn into a real war, but who knows; proxy wars have a disconcerting pattern of spilling over into becoming real wars.
0:37:55 And a lot of this is a tale of energy.
0:38:01 The Russian economy is like 70% energy exports, right? Oil and gas exports.
0:38:07 The major buyer of that energy historically has been Europe, and specifically Germany.
0:38:12 Europe, and Germany specifically, essentially have funded the Russian state, the Putin state, and that funding is what basically built and sustains their military engine, which is what they've used to invade Ukraine.
0:38:25 And so there's this counterfactual, right, where the German Greens did not do what they did in the 1970s.
0:38:29 Nuclear power was not blocked.
0:38:32 Germany and France and the rest of Europe today would be fully energy independent, running on nuclear power.
0:38:38 The Russian state would be greatly weakened, because the value of their exports would be enormously diminished.
0:38:44 And they would not have the wherewithal to invade other countries or to threaten Europe.
0:38:48 And so these decisions have real consequences.
0:38:55 And these people, and I use that in the pejorative sense, are so confident that they can step into these debates, these questions around new technologies and how they should be applied and what the consequences are.
0:39:05 They can step in and they can use the political machine to basically throw sand in the gears and stop these things from happening.
0:39:09 So, like AI, this is what's happening to AI right now.
0:39:13 In the sort of theoretical position where AI is this potentially runaway thing, right, maybe it can't be constrained.
0:39:19 In the real world, it very much can be constrained.
0:39:23 And the reason it can be constrained in the real world is because it uses physical resources, right?
0:39:28 It has a physical layer to it.
0:39:30 And that layer is energy usage.
0:39:32 And that layer is chips.
0:39:34 And that layer is telecom bandwidth.
0:39:38 And that layer is data centers, physical data centers, right?
0:39:43 And by the way, that layer also includes the actual technologists working in the field and their ability to actually do what they do.
0:39:51 And there are a very large number of control points and pressure points that the state can put on those layers to prevent them from being used for whatever it wants to prevent.
0:40:02 And look, the EU is on the verge; the EU has this anti-AI bill that it looks like is going to pass, that is extremely draconian, and may result in Europe not even having an AI industry, and may result in American AI companies not even operating in Europe.
0:40:15 And then in the U.S. we have a very similar push happening, as what I would describe as the anti-AI zealots are in the White House today, right, arguing that this is bad, it should be stopped.
0:40:30 And it's amazing. How many times are we going to run through this loop?
0:40:34 How many times are we going to repeat history here?
0:40:37 How many times are we going to be kind of self-defeating like this?
0:40:40 Apparently, the impulse to be self-defeating, we have not worked it out of our system.
0:40:44 You don’t want to be self-defeating though.
0:40:49 I mean, let’s move into this, uh, peculiar four letters, which is found at the, uh, at the
0:40:53 moment at the end of your Twitter name and the end floating around Twitter, mostly E slash
0:40:56 act or effect effective accelerationism.
0:40:59 And this, like, this is just beautiful to me.
0:41:01 It’s like the, the accelerationist renaissance.
0:41:02 I’ve been set talking about it in that way.
0:41:07 I don’t want to gatekeep it too much, but you know, I wrote my master’s thesis on accelerationism.
0:41:07 Like, I love it.
0:41:08 I love talking about it.
0:41:10 You don’t want any of this holding back.
0:41:12 You don’t want to hold anything back.
0:41:13 You want to accelerate.
0:41:15 So firstly, I mean, there’s two questions there.
0:41:20 What is it for you to accelerate and what is effective accelerationism?
0:41:22 Yeah.
0:41:24 So let me just say where it came from.
0:41:27 I'll reverse it; I'll answer the second one first and then go to the broader topic.
0:41:29 So it's a combination.
0:41:33 There are, you know, kind of two words there: effective and accelerationism.
0:41:36 So the accelerationism part of it is obviously building on what you've talked about and what Nick Land and others have talked about for a long time.
0:41:43 And of course, as you've talked about, there are all these different versions of accelerationism.
0:41:47 And so this is, you know, proposing one that is, you know, like the closest to what you would call right accelerationism, although, you know, maybe without some of the political overtones.
0:41:54 And so there is that component.
0:41:56 There’s also the effective part of it.
0:42:00 And the effective part of it, it’s sort of a half humorous reference, obviously, to effective
0:42:00 altruism.
0:42:05 And it’s a little bit tongue in cheek because it’s like, of course, if you’re going to have
0:42:07 a philosophy, of course, you would like it to be effective.
0:42:12 But, you know, also, look, EAC's enemy, right, the oppositional force, the thing that EAC was sort of formed to fight, is actually, you know, specifically effective altruism.
0:42:22 Right.
0:42:27 And so, you know, you also sort of use the term effective to kind of make that point: like, this is in that world, and this is opposed to that.
0:42:36 Um, and the reason why this is happening now, the reason why the concept of effective accelerationism, you know, has kind of come into being, and by the way, this is not originally my formulation.
0:42:43 There are, you know, kind of ultra-smart Twitter characters, um, who I think are still mostly operating under assumed names.
0:42:55 Um, Beff Jezos, uh, and Bayeslord are the two of them that I know.
0:43:01 Um, and, you know, these are like top Silicon Valley, you know, engineers, scientists, technologists.
0:43:05 Um, but, you know, at least for now they're operating kind of undercover, under pseudonyms.
0:43:11 Um, so the reason this is happening now is because of what I was describing earlier with AI, which is you have this other movement, what's sometimes called by different terms: AI risk, AI safety, AI alignment.
0:43:26 Um, sometimes you'll hear the term X-risk.
0:43:27 And this is sort of directly attached; this is all part of the, you know, EA world, the effective altruism world.
0:43:34 Um, and then, you know, the central characters of this other world are, you know, Nick Bostrom, Eliezer Yudkowsky, um, you know, the Open Philanthropy organization, a bunch of these kind of, you know, what we call the AI doomers.
0:43:48 The AI doomer movement is basically part and parcel with the effective altruism movement.
0:43:56 Um, and, you know, AI existential risk has always been kind of the boogeyman of effective altruism, going back, you know, over the 20-year development of EA.
0:44:05 Um, and so anyway, that EA movement is the movement, by the way, with lavish funding by, like, EA billionaires, which is part of the problem, by the way, who made all their money in tech, which is also amazing.
0:44:18 Um, but, you know, so you've got this funding complex, you've got this EA movement, you've got this attached AI risk and safety movement, and now you've got, like, active lobbying, you know, and a sort of anti-AI PR campaign.
0:44:30 And so anyway, effective accelerationism is intended to be the polar opposite of that.
0:44:34 It's intended to, you know, head boldly and firmly and strongly and confidently into the future.
0:44:42 Um, you know, why this form of positive accelerationism? There's a couple of different layers to it.
0:44:48 Um, the founders of the concept of EAC have a thermodynamic, you know, kind of thing, which we could talk about, but it's kind of one layer down from where I operate.
0:44:54 Um, the layer I operate at is more at the level of engineering.
0:44:58 And when I think about it, I think essentially, fundamentally, in terms of material conditions.
0:45:05 So human flourishing, uh, quality of life, standard of living of, of, of human beings on earth.
0:45:09 Um, and back to that concept of productivity growth, you know, the application of technology,
0:45:14 uh, to be able to cause the economy to be more productive and therefore cause more material
0:45:18 wealth, higher levels of material welfare, you know, for people all over the world, by the
0:45:19 way, also with reduced inputs.
0:45:20 Right.
0:45:23 And, and so not, not just greater levels of development, uh, and greater levels of
0:45:25 advance, but also greater levels of efficiency.
0:45:29 Um, and the nature of technology as a lever on the physical world is you can have your cake and eat it too: you can get higher levels of output with lower levels of input.
0:45:34 And the result of that is a much higher standard of living.
0:45:39 So the philosophical grounding I kind of adopt is sort of, you know, I don't know what you might call it, like a positive materialism or something.
0:45:46 Um, you know, which is like, I think the thing that we, the thing that the technology
0:45:49 industry does best is improve material quality of life.
0:45:53 Um, I think that, that we should accelerate as hard into that as we possibly can.
0:45:57 And I think the quote-unquote risks around that are greatly exaggerated, if not false.
0:46:03 Um, and, you know, I think the forces against basically technological progress, you know, like the environmental movement I described, they're fundamentally, sort of at some deep level, anti-human. Um, you know, they want fewer people and they want a lower quality of living on earth.
0:46:15 And, like, I just very much disagree with both of those.
0:46:16 Mm-hmm.
0:46:19 And what is this at the thermodynamic level?
0:46:22 Is this, is this the, you know, we are, our ultimate enemy is entropy?
0:46:28 So there’s, there’s a, there’s a, the thermodynamic part gets complicated and this is not my, my field.
0:46:32 So there’s, there’s other people that you should probably have on to talk about this, but the
0:46:37 effective accelerationism version of the thermodynamic thing is, is based on the work of this, uh,
0:46:41 physicist, uh, named Jeremy England, um, who is this very interesting character.
0:46:47 Um, actually, he’s actually trained by one of my partners, um, uh, um, and, um, is now, um,
0:46:52 basically, um, he’s an MIT, you know, physicist, um, uh, you know, biologist.
0:46:55 And by, by the way, and also, by the way, interesting guy, I don’t know him, but very interesting
0:46:56 guy from a distance.
0:46:57 He’s also a trained rabbi.
0:47:00 Um, and so he, he’s, he’s an interesting cat.
0:47:04 Um, and so he basically has this theory that, sort of, life is the direct result; like, the phenomenon of life itself is a direct consequence of thermodynamics.
0:47:16 Um, and, you know, the way he describes it is basically: if you take the universe, with a level of energy that's washing around and raw materials, and you sort of apply kind of natural selection at a very deep level, you know, even at the level of just the formation of materials on a planet or something, you basically have this thing where matter wants to organize itself into states where it's able to absorb energy and achieve higher levels of structure.
0:47:44 Um, and so you have absorption of energy, you have achievement of higher levels of structure.
0:47:48 In the case of organic life, that, you know, starts with basic RNA, and then it kind of works its way up to, you know, full living systems.
0:47:54 Um, and then on the other side of that, as we talked about before, the result is that you're dumping heat, which is to say entropy, you know, kind of out into the broader system.
0:48:06 And so it's almost like saying the second law of thermodynamics has an upside, right?
0:48:11 Which is basically, yes, entropy in the universe is increasing over time, but a lot of that increase is the result of structures forming that are basically absorbing energy and then exporting entropy.
0:48:21 Um, and one form of that structure is actually life.
0:48:25 And this is actually a thermodynamic, you know, biomechanical, bioelectrical kind of explanation of how organic life actually works.
0:48:29 Like, this is what we are.
0:48:33 We are machines for gathering energy, um, you know, forming increasingly, you know, complicated biological machines, replicating those machines, right?
0:48:40 And of course, you know, he talks about, like, natural selection; like, it's not surprising that natural selection is so oriented around replication, right?
0:48:46 Because replication is the easiest way to generate more structure, right?
0:48:51 Like, for a system that is basically in the business of generating structure, replication is the way that it can most efficiently generate more structure.
0:48:59 Um, and so anyway, basically the universe wants us to basically be alive.
0:49:02 The universe wants us to become more sophisticated.
0:49:07 Um, you know, the universe wants us to replicate; um, you know, the universe feeds us essentially a limited amount of energy and raw materials with which to do that.
0:49:16 Um, you know, yes, we dump entropy out the other side, but we get structure and life, you know, to basically compensate for that.
0:49:24 The universe is, uh, the universe is pronatalist and kind of Nietzschean there as well.
0:49:26 Yeah, exactly.
0:49:26 A hundred percent.
0:49:27 Yeah.
0:49:31 So anyway, that's the thermodynamic underpinning of effective accelerationism.
0:49:37 Uh, the people who have encountered effective accelerationism, some of them get very deeply into that.
0:49:43 And there's a very deep kind of well there to draw from.
0:49:45 Actually, you'll appreciate this.
0:49:46 This guy, Jeremy England, has a book out.
0:49:48 And the title of the book is something like Every Life Is on Fire.
0:49:51 Um, and it’s actually funny.
0:49:55 Cause it’s like, if you read Heraclitus, um, you’re like, oh my God, you know, he saw it.
0:49:57 Um, right.
0:50:01 Like it’s like, there’s something very, very deep going on here with this sort of intersection
0:50:02 of energy and life.
0:50:06 Um, but, uh, so he's got this book out, which apparently is quite good.
0:50:10 Uh, and so some people in effective accelerationism kind of go deep.
0:50:13 There’s a tongue in cheek reference to the so-called thermodynamic God, right.
0:50:16 Which is not, you know, which is not a literal, you know, religious.
0:50:20 In the literal religious sense, like, uh, uh, you know, sort of a conscious God or a sentient
0:50:24 God, but more of this, this idea that the universe is, is, is, is sort of designed to express itself,
0:50:28 uh, in the forms, you know, basically in higher and higher forms of life.
0:50:31 Um, yeah, to, to your point, like there’s an obviously direct Nietzschean connection.
0:50:35 Um, uh, you know, so maybe he saw a lot of this too.
0:50:39 Um, you know, and obviously he, you know, he was obviously writing and thinking at the same
0:50:42 time Darwin was figuring a lot of this out on the, on the natural selection evolution
0:50:42 side.
0:50:46 Um, yeah, so there's that. But having said that, like I said, my take on it is more, you know, I find that stuff fascinating, but I'm more naturally inclined, as an engineer, towards the material side.
0:50:58 Um, and so I just more naturally think in terms of the social systems and the technological development and the impact on material quality of life.
0:51:06 Um, and so I think you can also just take it at that level and not have to get, uh, all the way down into thermodynamics if you don't want to.
0:51:12 I mean, there's an odd, yeah, drawing it down to this level of engineering, well, not down to, but just to this level of engineering, there's this odd learned helplessness.
0:51:20 And I mean, just to take the two examples we've given so far, and, you know, they work quite well actually: nuclear energy on the atomic side and AI on the bit side of things, uh, I guess, virtual reality and reality.
0:51:32 You posted this really interesting essay on your blog about availability cascades, which is basically, in short, if I'm getting this right, about why so many people are interested in a particular thing or view, whatever the opinion or idea is that's floating around.
0:51:51 And it seems on both of those, both nuclear energy and AI, we have that same opinion, which has, like, memetically infected culture: a sort of learned helplessness.
0:51:59 Like, oh no, you know, we’ve already spoken about this a bit, but like, oh no, we need to
0:51:59 get rid of this.
0:52:00 We can’t deal with this.
0:52:05 But do you think, on the engineering side of things, and I guess it overlaps also into the social, in terms of how you engineer and how you promote these ideas socially, as tools, as things that people use, that there is an attempt to, like, invert that availability cascade and begin some mimesis on the side of: it's okay to want a better quality of living?
0:52:28 It's okay to want to grow.
0:52:30 It's okay to want energy.
0:52:36 Like, you don't have to be almost submissive to whatever this strange, self-defeating learned helplessness is that we have in terms of technology, and our, like, weird allegiance to this stagnant comfort that we've had for too long.
0:52:48 Yeah, that’s, that’s, that’s right.
0:52:49 That’s exactly right.
0:52:51 And, like we talked about earlier, I think there's a natural human impulse deeply wired into, like, the limbic system or something, which is basically, right,
0:52:58 fear over hope.
0:52:59 Right.
0:53:02 Um, you know, like, what's most likely to come over the ridge, right?
0:53:06 A saber-toothed tiger to eat you, or, like, something warm and cuddly that wants to be your friend?
0:53:06 Right.
0:53:08 Um, I guess a quokka or something like that.
0:53:08 Right.
0:53:09 So, right.
0:53:10 It’s, it’s probably the tiger.
0:53:10 Right.
0:53:13 And, you know, there's a sort of, you know, false positive, false negative, right, two ways of making mistakes.
0:53:18 And you definitely, from an evolutionary standpoint, want to err in the direction of, you know, overestimating the rate of saber-toothed tigers, right,
0:53:25 um, to survive.
0:53:28 So that impulse is deep.
0:53:28 Yeah.
0:53:31 But then, you know, what we have is, you know, we have sentience; we're not just limbic systems anymore.
0:53:36 We have the ability to control our environment, uh, the ability to build tools.
0:53:38 We're not afraid of saber-toothed tigers anymore.
0:53:42 Um, and so, yeah, we have the ability to shape our world.
0:53:46 Um, you know, we developed rationality and the Enlightenment and science and technology and markets and everything else to be able to control the world, um, you know, to our benefit.
0:53:55 Um, and so, you know, we don't have to live cowering in fear anymore, uh, you know, as much as that might be, like, grimly satisfying; we don't actually have to do that.
0:54:03 And there’s actually a, you know, very, there are many, many good reasons over the last
0:54:06 300 years to believe that, you know, there’s, there’s a much better way to live.
0:54:10 Um, yeah, but look, somebody has to say, you know, somebody has to actually say that.
0:54:15 Um, and then, look, I think the other part is, um, I think there's a big divide.
0:54:19 I'll pull out my Burnham on this a little bit: there's a big divide on this stuff between what you'd describe as the elites and the masses, um, that has turned out to be pretty interesting.
0:54:32 So I would say this problem, this problem of fear of technology, um, or hatred of technology, or desire to stop technology, I think is primarily a phenomenon of the elites.
0:54:40 Um, I actually don't think it's particularly shared by the masses.
0:54:44 Um, and I'll just take AI as an obvious example.
0:54:49 One of the amazing things about AI is it's, like, freely available for use by everybody in the world right now, today, fully state of the art; like, the best AI in the world is on, you know, websites from OpenAI and Google and Microsoft.
0:55:01 Um, and you can go on there and you can use it for free today.
0:55:05 Um, and already a hundred, two hundred million people, something like that, around the world are already doing this.
0:55:06 Right.
0:55:10 And if you talk to anybody, you know, if you talk to any teacher, right, um, you know, they'll already tell you they've got students using GPT to write essays and so forth.
0:55:15 Right.
0:55:19 Um, and so you’ve got this amazing thing where, you know, like the internet before it
0:55:24 and like the personal computer before it and like the smartphone before it, AI is, it’s,
0:55:26 it’s like immediately democratized, right?
0:55:29 Like it’s immediately available in its full state of the art version.
0:55:35 Like there’s no more advanced version of like GPT that I can buy for a million dollars than
0:55:39 you can get for free or by paying 20 bucks for the upgraded version on, on, on, on the,
0:55:40 on the open AI website.
0:55:44 Like the, the, the state of the art stuff is fully available for free.
0:55:48 Um, and so, and this is one of my sources of optimism that the AI doomers are going to lose almost by definition, right:
0:55:54 you have people all over the world who are just already using this, and they're getting, you know, great value out of it in their daily lives.
0:55:57 They love it.
0:55:58 They’re having a huge amount of fun with it.
0:55:59 Um, you know, it’s great.
0:56:03 They’re, you know, making new art and they’re, you know, doing all kinds of, you know, asking
0:56:05 all kinds of things and it’s helping them in their jobs and in school and everything
0:56:05 else.
0:56:06 And they, and, and they love it.
0:56:11 So I think there's this thing where, like, I actually think that what we're actually talking about from a social standpoint is basically essentially a corrupt elite, right?
0:56:21 A corrupt oligarchic elite that basically has been in a position of gatekeeping power, you know, basically in its modern form, for 60 years.
0:56:27 Um, and every new technology development that comes along is a threat to that.
0:56:31 Um, and back to the Morrison thing: that's why they hate and fear new technology.
0:56:35 Um, you know, they would very much like to control it.
0:56:37 Um, it's like social media: they're all just, like, completely furious about social media, but, you know, 3 billion people use social media every day and they love it.
0:56:45 Um, and so it's only the elites that are constantly kind of raging against it.
0:56:48 The problem is the elites are actually in charge, right?
0:56:52 From a formal, you know, government standpoint, they actually have the ability to write laws.
0:57:15 By the way, you also see this in polls. If you poll on, like, there’s two very interesting kind of phenomena kind of unfolding if you do these broad-based polls on trust in institutions. And there’s organizations, Gallup in particular, and then there’s another organization called Edelman that does these polls every year of basically, essentially, they poll regular people and the question is like, which institutions do you trust?
0:57:42 And institutions here includes everything from the military to, you know, religion to schools to government to, you know, journalism to, you know, big companies, big tech, and so forth, small business. And basically, there are two big themes you see in those polls. One is that ordinary people's trust in institutions, trust in any sort of centralized gatekeeping function, has been in basically secular decline since the 1970s, corresponding to the period we've been talking about.
0:58:02 And so generally, people as a whole have kind of had it with the gatekeepers, which is very interesting. And by the way, the beginning of that phenomenon actually predates the internet and social media. It traces back to the early 70s, which I think is not an accident. It's just, like, it's where the current regime basically, essentially, took control.
0:58:18 And then the other thing that’s so striking is that, you know, although you can sit and read the news all day long, and where they just like hate on tech companies all day long, if you do the poll of, you know, basically businesses by category, tech polls by far at the top.
0:58:18 And so again, ordinary people are just like, wow, my iPhone's pretty cool. I kind of like it. And this ChatGPT thing seems really nifty. And so I do think there's this weird, like, I do think there is this aspect of this where, like, it's a cliche to say the elites are out of touch. Of course, the elites are out of touch. The elites are always out of touch.
0:59:01 But, like, it seems like the elites are particularly out of touch right now, including on this issue. And another way kind of through this knothole is, you know, they may just simply discredit themselves. Like, you know, the EU is a great example. The EU may pass this anti-AI law, and the population of Europe might just be like, what the hell? Right? And so that would be another white pill against what otherwise looks like a deep kind of drive in our society for stagnation.
0:59:14 It would also be really strange to try and find a way to, like, define AI in that sense. Because it’s not like we haven’t been using it in a minor form before all this for a while, right? So I don’t know how they’d go about defining that in that way.
0:59:29 Yes. So, yes. So do you ban linear algebra? Right? Do you ban linear algebra? And it’s actually really funny, because I don’t know if you know this, there actually is a push underway to quote-unquote ban algebra. And it’s literally in California. There’s a big push underway to drive it out of the schools in California.
0:59:45 So there’s a big push. First, it started with a push to drive calculus out of the schools in California. Now it’s extended to drive algebra out. And of course, this is being done under the so-called rubric of equity, right? Because it turns out, you know, test scores for advanced math, you know, vary by, you know, vary by group.
0:59:54 And so, you know, there’s this weird thing where, like, in California, we’re trying to push algebra out of the school. In Washington, we’re trying to push algebra, like, out of tech.
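To make the "do you ban linear algebra?" point concrete, here is a minimal, purely illustrative sketch (the function name and sizes are made up for the example, not taken from the conversation): the core computation inside a modern neural-network layer is just a matrix multiplication plus a simple nonlinearity.

```python
import numpy as np

# One neural-network layer is just linear algebra:
# y = activation(W @ x + b); large models are stacks of this operation.
def dense_layer(x, W, b):
    return np.maximum(0.0, W @ x + b)  # ReLU nonlinearity

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # toy input vector (4 features)
W = rng.normal(size=(3, 4))   # toy weight matrix (4 inputs -> 3 outputs)
b = np.zeros(3)               # bias vector
print(dense_layer(x, W, b))
```

Restricting "AI" at the level of the computation would, in effect, mean restricting operations this ordinary, which is the point being made here.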
1:00:05 Like, the whole thing is, and this is where I get, like, really, you know, this is where I start to get emotional. Because it’s like, really? Like, we spent, you know, 500 years climbing our way out of, you know, primitivism and getting to the point where we have, like, advanced science and math.
1:00:14 And we’re literally going to try to ban it. I was involved. I was involved. I don’t know if you remember this. There was actually a similar push like this. There was a push in the 1990s to ban cryptography.
1:00:25 To ban the idea of codes, right? And ciphers, right? And as you probably know, like, codes and ciphers are just math. Like, all they are is math, right?
1:00:34 And there was a move in the 1990s where people who thought that cryptography, obviously, you know, there’s all these anti-cryptography arguments because, like, bad guys can use it to hide and so forth.
1:00:39 And so, there was this, like, concerted effort by the U.S. government and other Western governments to ban cryptography in the 90s.
1:00:43 And it took us years to fight and defeat that. And I was like, okay, that was so stupid. That will certainly never happen again.
1:00:46 And, like, we’re literally back at trying to ban math again.
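And on the "codes and ciphers are just math" point, a minimal illustrative sketch (a toy XOR one-time pad, not any specific scheme from the 90s debate): encryption here is nothing but byte-wise arithmetic.

```python
import secrets

# A cipher really is just arithmetic: XOR each message byte with a key byte.
# Applying the same key again undoes the operation.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"just math"
key = secrets.token_bytes(len(message))   # random key as long as the message
ciphertext = xor_cipher(message, key)
assert xor_cipher(ciphertext, key) == message  # decryption is the same operation
print(ciphertext.hex())
```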
1:01:01 Well, that does lead me to the final question here, which is to do with the future. I mean, whether you're optimistic or pessimistic in relation to it, you know, I guess it would draw on what we've just been talking about there.
1:01:09 How do you envision the short-term future, which I’ve put down here, like, 10 to 50 years? And then how do you, what do you foresee for the year 3000 AD?
1:01:22 Oh, boy. So, I should start by saying I’m not a utopian. So, you know, we talked a little bit earlier about kind of these impulses that kind of drive people to these kind of extreme points of view.
1:01:29 Like, the way I think about it is, like, there's a natural drive: a lot of people have what Thomas Sowell called the unconstrained vision.
1:01:35 So, they’ve got these kind of very broad-based kind of visions. And, you know, those visions kind of then split into, like, a utopian vision.
1:01:43 And that might be, you know, for AI, that might be something like the singularity, right? Or in the 1990s, these were called the extropians, right?
1:01:50 Which is sort of this idea of kind of a material utopia as a consequence of, like, AI and, let’s say, nanotechnology on the one hand.
1:01:55 And that’s where, by the way, the idea of the singularity came from, right? Which is Ray Kurzweil and Werner Binge.
1:02:01 They were like, at some point, you get this kind of, you know, point of no return, which is like a utopian point of no return.
1:02:04 But then, of course, the flip side of every utopia is, you know, apocalypse.
1:02:13 And so that's where, you know, a lot of the singularitarians of 20 years ago have become the AI doomers of today.
1:02:16 And, you know, they have sort of the same utopian impulse.
1:02:18 They’ve just flipped a bit and made it negative.
1:02:21 So, I should say, like, I’m not one of those.
1:02:29 I’m probably more of a materialist and a little bit more of a, like I said, an engineer where, you know, things, for example, have constraints in the real world.
1:02:34 So, I don’t think we tend to get the extreme, quite the extreme outcomes.
1:02:36 But I do think we get, you know, we get change.
1:02:42 Like, we get change in the margin and then, you know, change in the margin that compounds over time can become, you know, quite striking.
1:02:54 So, look, over 10 to 50 years, you know, look, sitting here today, like, if we want it, you know, you can imagine the next 50 years to be characterized by, you know, the rise of AI.
1:02:56 Looks like we kind of figured that out now.
1:03:01 You know, this superconductor thing, if it’s real, that’s a, you know, turning point moment.
1:03:06 And, by the way, if it’s not real, it may be, you know, that this result points us in the direction of something that becomes real in the next few years.
1:03:19 And so, you can imagine some combination of AI, superconductors, you know, biotech, you know, you know, all these new techniques for, you know, bio-optimization, gene editing, you know.
1:03:22 And then, you know, nuclear, you know, if we get our act together on nuclear fission.
1:03:26 By the way, there’s a lot of really smart people working on nuclear fusion right now.
1:03:32 You know, fusion is, you know, would be an even bigger, you know, kind of opportunity for unlimited clean energy.
1:03:38 You know, now, you know, my cynical, the cynic in me would say if fission is illegal, then they’re certainly going to make fusion illegal.
1:03:40 But, you know, that, you know, that’s a choice.
1:03:43 We all get to decide whether we want to live in a world where fusion is illegal.
1:03:45 So, you know, we get nuclear fusion.
1:03:56 And so, sitting here 50 years from now, you know, we're basically like, wow, you know, we are all much smarter than we were, because we have these smart machines working with us and everything.
1:04:04 You know, we have solved whatever environmental problems we thought we had, you know, with, we have abundant energy in an increasingly clean environment.
1:04:08 You know, we’re curing diseases at a rapid pace.
1:04:11 And, you know, new babies are born that are immune to disease.
1:04:19 And so, you know, not quite a material utopia, but, like, you know, a significant, meaningful step-function upgrade in human quality of life.
1:04:23 Like, I think that’s all very, over a 50-year period for sure, like that’s all very possible.
1:04:28 Over the year 3000, over whatever, a 1,000-year period.
1:04:37 I mean, look, you do get into these questions, you know, you do, if you’re going to talk about 1,000 years, like you do get into these questions of like, you know, for example, merger of man and machine, right?
1:04:42 So, over that time frame, you do have to start thinking about things like, you know, Neuralink, like where Neuralink takes you.
1:04:48 And, you know, over that period of time, you'll definitely have, like, you know, neural augmentation.
1:04:50 So, you know, do you have shifting definitions of humanity?
1:04:57 You know, where is the transhumanist movement actually taking us, you know, becomes a very interesting question over that time frame.
1:05:06 Obviously, you have lots of questions over that time frame of space, you know, exploration, getting to other planets, you know, other life, you know, either other life in the universe or not other life in the universe.
1:05:17 So, kind of the spread of the, you know, the spread of our civilization more broadly, you know, so there you truly get into science fiction scenarios, you know, then, yeah, that’s always fun to talk about.
1:05:20 I will admit, I am much more focused on the next 50 years.
1:05:27 Yeah, I mean, is there anything you’d like to add into the conversation that you feel, you know, is key that we haven’t touched upon?
1:05:31 Yeah, no, I think that’s a good, I think that was a good, it covered a lot of ground.
1:05:40 Yeah, so for effective accelerationism, if you just Google it, there's a number of good websites and Substacks already talking about that.
1:05:42 A lot of the conversation is happening on Twitter.
1:05:50 And I already dropped the names of the EAC guys, Beff Jezos and Bayeslord, so definitely follow those guys.
1:05:59 You know, I’ve not met Nick Land, but I would definitely give a shout out and say for anybody who hasn’t encountered his work, they should definitely read up on it.
1:06:25 He is, I think pretty clearly, like, the philosopher of our time, and not even because, you know, I agree or disagree with him on everything he's said, and of course he's changed his views on a lot of things over time, but just the framework that he operates in, like, his willingness to actually go deep and actually think through the consequences of the kinds of technologies that I deal with every day, you know, is just, I think, way beyond most other people in his field.
1:06:30 And so it’s, and I know he took kind of a long road to get here, so it’s fun to see.
1:06:32 You know, it’s fascinating to read that.
1:06:33 Oh, I’ll point to one other thing.
1:06:37 So I already mentioned the Jeremy England book, and I’ll point to one other book that people might find interesting.
1:06:50 So a lot of Land's work and a lot of accelerationism, right, is based on the ideas of this field called cybernetics, which is interesting because it's kind of this lost field of engineering.
1:07:03 It was super hot as an engineering field from the 1940s to the 1960s, and it basically was sort of the original computer science, and then it was sort of, it was also sort of the original artificial intelligence.
1:07:07 A lot of the AI people of that era kind of called themselves cyberneticians or cyberneticists.
1:07:13 But it really is an engineering field that kind of went away or got a lot more sedate after the 60s.
1:07:24 But as I mentioned, like a lot of the ideas around AI and, you know, world of machines and thermodynamics, a lot of those ideas were being explored as far back as the 30s and 40s.
1:07:28 So the cybernetics people of that era thought a lot about a lot of these questions.
1:07:29 Anyway, there’s this great book.
1:07:37 There’s a lot of original source material on this, and, you know, the key character of that movement was Norbert Wiener, and there’s a bunch of books by him and about him.
1:07:48 But there’s also a great book came out recently called Rise of the Machines by an author named Thomas Ridd, and it sort of reconstructs the archaeology of cybernetics and sort of makes clear how relevant those ideas are today.
1:07:54 And so if you read that in conjunction with Nick Land’s work, I think you’ll find it pretty interesting.
1:07:59 I’ll be sure to put the link for your Twitter and your blog in the description below as well.
1:08:01 But yeah, I think that’s a good place to finish up.
1:08:03 Mark Andreessen, thanks very much.
1:08:04 Good, James.
1:08:04 A pleasure.
1:08:05 Thank you.
1:08:10 Thanks for listening to the A16Z podcast.
1:08:16 If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com slash A16Z.
1:08:18 We’ve got more great conversations coming your way.
1:08:19 See you next time.
1:08:29 This information is for educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product.
1:08:37 This podcast has been produced by a third party and may include paid promotional advertisements, other company references, and individuals unaffiliated with A16Z.
1:08:44 Such advertisements, companies, and individuals are not endorsed by AH Capital Management LLC, A16Z, or any of its affiliates.
1:08:50 Information is from sources deemed reliable on the date of publication, but A16Z does not guarantee its accuracy.
Marc Andreessen, cofounder of Andreessen Horowitz, joins the Hermitix podcast for a conversation on AI, accelerationism, energy, and the future.
From the thermodynamic roots of effective accelerationism (E/acc) to the cultural cycles of optimism and fear around new technologies, Marc shares why AI is best understood as code, how nuclear debates mirror today’s AI concerns, and what these shifts mean for society and progress.
Timecodes:
0:00 Introduction
0:51 Podcast Overview & Guest Introduction
1:45 Marc Andreessen’s Background
3:30 Technology’s Role in Society
4:44 The Hermitix Question: Influential Thinkers
8:19 AI: Past, Present, and Future
10:57 Superconductors and Technological Breakthroughs
15:53 Optimism, Pessimism, and Stagnation in Technology
22:54 Fear of Technology and Social Order
29:49 Nuclear Power: Promise and Controversy
34:53 AI Regulation and Societal Impact
41:16 Effective Accelerationism Explained
47:19 Thermodynamics, Life, and Human Progress
53:07 Learned Helplessness and the Role of Elites
1:01:08 The Future: 10–50 Years and Beyond
Resources:
Marc on X: https://x.com/pmarca
Marc’s Substack: https://pmarca.substack.com/
Become part of the Hermitix community:
On X: https://x.com/Hermitixpodcast
Support: http://patreon.com/hermitix
Find James on X: https://x.com/meta_nomad
Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.