Will AI Radically Change the World by 2027?… from Risky Business

AI transcript
0:00:27 What are the consequences of getting on Europe’s bad side?
0:00:30 The rationale is more like, f**k those guys.
0:00:33 How is stealth wealth changing retail?
0:00:35 You can have taste and still buy dupes.
0:00:40 What does a $6.2 million banana have to do with any of us?
0:00:46 People don’t like the attribution of serious financial value to comedy.
0:00:49 Join me, Felix Salmon, and my co-hosts Emily Peck and Elizabeth Spiers
0:00:54 as we talk about the most important and obscure stories in business and finance.
0:00:56 Follow Slate Money wherever you like to listen.
0:01:00 Hey, it’s Jacob.
0:01:03 There is another Pushkin show called Risky Business.
0:01:07 It’s a show about decision-making and strategic thinking.
0:01:10 It’s hosted by Nate Silver and Maria Konnikova,
0:01:14 and they just had a really interesting episode about the future of AI.
0:01:18 So we’re going to play that episode for you right now.
0:01:22 We’ll be back later this week with a regular episode of What’s Your Problem?
0:01:31 Welcome back to Risky Business, a show about making better decisions.
0:01:33 I’m Maria Konnikova.
0:01:34 And I’m Nate Silver.
0:01:37 Nate, today on the show is going to be a little bit doom-tastic.
0:01:41 Yeah, I mean, I don’t know if it’s, like, worse than thinking about, like,
0:01:47 the global economy going into a recession because of dumb-fuck tariff policies.
0:01:50 This is all about how we’re all going to die in seven years instead.
0:01:51 No, I’m just kidding.
0:01:56 This is a very, very intelligent and well-written and thoughtful report
0:01:59 called AI 2027 that we’re going to spend the whole show on
0:02:02 because I think it’s such an interesting subject to talk about.
0:02:07 But that, you know, includes some dystopian possibilities, I would say.
0:02:08 It does indeed.
0:02:14 So let’s get into it and hope that you guys are all still here to listen to us in seven years.
0:02:25 The contrast is interesting between, like, all the chaos we’re seeing with tariff policy
0:02:29 in terms of starting a trade war with China and then other types of chaos.
0:02:32 And it’s interesting to kind of look at this.
0:02:36 I mean, I wouldn’t call it a more optimistic future exactly,
0:02:42 but, like, on a different trajectory of, like, a future that’s going to change very fast
0:02:48 according to these authors with profound implications for, you know, everything.
0:02:52 The human species, these researchers and authors are saying that
0:02:54 everything is going to change profoundly.
0:03:03 Certainly. And even though there is some hedging here, this is kind of their base case scenario.
0:03:08 And, like, you know, base case number one and base case number two differ.
0:03:11 There’s, like, kind of a choose-your-own-adventure at some point in this report.
0:03:11 But they’re both very different from the status quo, right?
0:03:17 And the notion that we hear from AI researchers is that, like, everything becomes different
0:03:22 if AIs become substantially more intelligent than human beings.
0:03:25 People can debate, and we will debate on this program, what that means.
0:03:27 But, yeah, do you want to contextualize this, Maria?
0:03:29 Do you want to tell people who the authors are of this report?
0:03:31 Absolutely. Absolutely.
0:03:38 So, the report is authored officially by five people, but I think unofficially there’s also a sixth.
0:03:47 We’ve got Eli Lifland, who’s a superforecaster, and he was ranked first on RAND’s Forecasting Initiative.
0:03:54 So, he is someone who is very good at kind of looking at the future, trying to predict what’s going to happen.
0:03:58 You have Jonas Vollmer, who is a VC at Macroscopic Ventures.
0:04:05 Thomas Larsen was the former executive director of the Center for AI Policy.
0:04:12 So, that is a center that advises both sides of the aisle on, you know, how AI is going to go.
0:04:18 And Romeo Dean, who is part of Harvard’s AI safety student team.
0:04:26 So, someone who is still a student, still learning, but kind of the next generation of people looking at AI.
0:04:35 And finally, we have Daniel Kokotajlo, who basically had written a report back in 2021.
0:04:41 He was an AI researcher, and he looked at predictions for AI for 2026.
0:04:45 And it turns out that his predictions were pretty spot on.
0:04:50 And so, OpenAI actually hired Daniel as a result of this report.
0:04:51 He’s since left and been critical of them.
0:04:52 Yes, and then he left.
0:04:53 Exactly.
0:04:57 And importantly, there’s also Scott Alexander.
0:04:57 Exactly.
0:05:01 So, I was saying he’s the person who kind of is in the background.
0:05:06 And he’s, you guys might know him as the author behind Astral Codex Ten.
0:05:13 And I know Scott; he’s one of the kind of fathers of what you might call rationalism.
0:05:22 I think Scott, when I interviewed him for my book, was happy enough with that term and accused me or co-opted me into also being a rationalist.
0:05:27 These people are somewhat adjacent to the effective altruists, but not quite, right?
0:05:44 They’re just trying to apply a sort of thoughtful, rigorous, quantitative lens to big picture problems, including existential risk, of which most people in this community believe that AI is both an existential risk and also kind of an existential opportunity, right?
0:05:45 That it could transform things.
0:05:50 You talk to Sam Altman, he’ll say, we’re going to cure cancer and eliminate poverty and whatever else, right?
0:05:53 And Scott’s also an excellent writer.
0:05:56 And so, let me disclose something, which is slightly important here.
0:06:02 So, I actually was approached by some of the authors of this report a couple of months ago.
0:06:04 I guess it was in February-ish.
0:06:06 Just to give feedback and chat with them.
0:06:09 So, I’m working off the draft version, right?
0:06:11 Which I do not believe they changed very much.
0:06:13 So, my notes pertain to an earlier draft.
0:06:15 I did not have time this morning to go back and reread it.
0:06:19 So, I have – I was not on the inside loop.
0:06:20 So, I did not get an earlier draft.
0:06:23 And I’ve read this draft.
0:06:31 And basically, just to kind of big picture sum it up, it outlines two scenarios, right?
0:06:37 Two major scenarios for how AI might change the world as soon as 2030.
0:06:43 Now, important note, like, that date is kind of hedged.
0:06:44 It might be sooner.
0:06:45 It might be later.
0:06:48 There’s kind of a – there’s a confidence interval there.
0:06:57 But the two different scenarios: in one, basically, we’re doomed – by 2030, humanity disappears and is taken over by AI.
0:07:05 The positive scenario is that in 2030, basically, we get AIs that are aligned to our interests.
0:07:13 And we get kind of this AI utopia where AIs actually help make life much better for everyone and make the standard of living much higher.
0:07:16 But the crucial turning point is before 2030.
0:07:35 And the crucial kind of question at the center of this is will we be able to design AIs that are truly aligned to human interests rather than just appear to be aligned and kind of lying to us while actually following their own agenda?
0:07:39 And how we handle that is kind of the linchpin.
0:07:56 And it’s actually interesting, Nate, that you started out with China because a lot of the policy choices and a lot of what they see as kind of the decision points that will affect the future of humanity actually hinge on the U.S.-China dynamic, how they compete with each other,
0:08:05 and how that sometimes might basically clash against safety concerns because no one wants to be left behind.
0:08:06 Can we manage that effectively?
0:08:12 And can kind of that transition work in our favor as opposed to against us?
0:08:14 I think that this is kind of one of the big questions here.
0:08:19 And so it’s funny that we’re seeing all of this trade war right now as this report is coming out.
0:08:27 Yeah, look, I think this exercise is partly just a forecasting exercise, right?
0:08:37 I mean, you know, obviously there’s this kind of like fork at the bottom where we either have an AI slowdown or we kind of press fully on the accelerator, right?
0:08:41 Like in some ways the two scenarios are like not that different, right?
0:08:51 Either one assumes remarkable rates of technological growth that I think even AI, I’m never quite sure who to call an optimist or a pessimist, right?
0:08:56 Even AI believers, you know, might think is a little bit aggressive, right?
0:09:03 But what they want to do is they want to have like a specific fleshed out scenario for how the world would look like.
0:09:11 Like it’s kind of like a modal scenario and like I think they’d say that like we’re not totally sure about either of these necessarily, right?
0:09:18 And I don’t think they’d be as like pedantic as to say if you do X, Y, and Z, then we’ll save the world and have utopia.
0:09:21 And if you don’t, then we’ll all die, right?
0:09:25 I think they’d probably say it’s unclear and there’s kind of like risk either way.
0:09:30 And we wanted to go through the scenario of like fleshing out like what the world might look like, right?
0:09:37 I do think one thing that’s important is that whatever decisions are made now could get locked in, right?
0:09:45 That you pass certain points of no return and it becomes very hard to decelerate like an arms race.
0:09:48 This is, you know, what we found during the Cold War, for example.
0:09:56 I mean one of the big things I look at is like do we force the AI to be transparent in its thinking with humans, right?
0:10:00 Like now there’s been a movement toward the AI actually explicating its thinking more.
0:10:02 I’ll ask it a query – OpenAI does this.
0:10:05 The Chinese models do this too, right?
0:10:10 And they’ll say I am thinking about X, Y, and Z and I’m looking up PD and Q and now I’m reconsidering this.
0:10:12 It actually has this chain of thought process, right?
0:10:14 Which is explicated in English.
0:10:24 One concern is that the AIs just kind of communicate with one another in these implicit vectors inferred from all the text they have.
0:10:27 It’s kind of unintelligible to human beings, right?
0:10:33 It may be kind of quote-unquote thinking in that way in the first place and then does us the favor of like translating back.
0:10:39 So it goes from English to kind of this big bag of numbers as one AI researcher called it, right?
0:10:44 And then it translates it back into English or whatever language you want really in the end.
0:10:48 What if it just cuts out that last step, right?
0:10:51 Then we can’t kind of like check what AI is doing.
0:10:54 Then it can behave deceptively more easily.
0:10:57 So, you know, so that part seems to be important.
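[A toy sketch, not from the report or any real model API: all the function names and the eight-number “latent” vector below are invented. It just illustrates the point being made here – if the model only “thinks” in an opaque vector and skips the step of translating that back into English, there is nothing readable left for a human overseer to audit.]

```python
import random

def think_in_latents(query: str) -> list[float]:
    """Stand-in for internal reasoning: an opaque vector of numbers."""
    rng = random.Random(query)  # deterministic toy "thoughts" per query
    return [rng.uniform(-1.0, 1.0) for _ in range(8)]

def translate_to_english(latents: list[float]) -> str:
    """Stand-in for the readable chain of thought the model chooses to show us."""
    strongest = max(range(len(latents)), key=lambda i: abs(latents[i]))
    return f"I weighed {len(latents)} considerations; factor {strongest} mattered most."

def answer(query: str, show_reasoning: bool):
    latents = think_in_latents(query)
    trace = translate_to_english(latents) if show_reasoning else latents
    return {"answer": "(some decision)", "trace": trace}

# With the translation step, a human gets something readable to check;
# without it, the only trace is the "big bag of numbers."
print(answer("approve the transaction?", show_reasoning=True)["trace"])
print(answer("approve the transaction?", show_reasoning=False)["trace"])
```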
0:11:03 I want to hear your first impressions before I kind of poison the well too much.
0:11:11 Well, my first impressions is that the alignment problem is a very real one and an incredibly important one to solve.
0:11:21 And what I got from this is that actually the problem that I’ve had with like these initial AI LLMs is the kernel of what they’re seeing there, right?
0:11:24 So you and I have talked about this on the show in the past.
0:11:38 And I’ve said, well, my problem is that when I’m a domain expert, right, I start seeing some inaccuracies and I start seeing like places where like it either just didn’t do well or made shit up or whatever it is.
0:11:42 Now, I think it’s very clear that those problems are going to go away, right?
0:11:44 That that is going to get much, much better.
0:12:00 However, the kernel of it’s showing me something, but that might just be, you know, I have no way of verifying if that’s what’s going on, what it’s reading, like how it’s, I don’t want to say thinking about it, even though in the report they do use thinking, but it’s.
0:12:01 No, no, no, no, no, no, no, no, no, no, no.
0:12:03 I think thinking is correct.
0:12:03 I think it’s.
0:12:04 Okay.
0:12:06 We’ll stick to that language.
0:12:06 Yeah.
0:12:22 So how it’s thinking about it, that those little problems and like the little, the glitches and the things that it might be doing where it starts actually glitching on purpose are not going to be visible to the human eye.
0:12:35 And so one of the main things that they say here is that as AI internal R&D gets rapidly faster, so that means basically AI is researching AI, right?
0:12:49 And so internally they start developing new models and as they kind of surpass human ability to monitor it, it becomes progressively more difficult to figure out, okay, is the AI actually doing what I want it to do?
0:13:02 Is the output that it’s giving me its actual thought process and is it accurate or is it like trying to deceive me, but it’s actually kind of inserting certain things on purpose because it has different goals, right?
0:13:07 Because it is actually secretly misaligned, but it’s very good at persuading me that it’s aligned.
0:13:22 Because one of the things that actually came out of this report, and I was like, huh, you know, this is interesting, is if we get this remarkable improvement in AI, it will also remarkably improve at persuading us, right?
0:13:24 See, this is one part I don’t buy.
0:13:25 Yeah.
0:13:28 So this is, but I’d never even thought about that.
0:13:40 I was like, okay, fine, but one of the things that I do buy is that it’s going to be very difficult for us to monitor it and to figure out, like, is it truly aligned with human wants, with human desires, with human goals?
0:13:48 And the experts who are capable of doing that, I think, are actually going to dwindle, right, as AI starts proliferating in society.
0:13:54 And so to me, that is something that is actually quite worrisome, and that is something that we really need to be paying attention to.
0:14:10 Now, just to fast forward a little bit, in their doomsday scenario, in 2030, when AI takes over, it basically, like, suddenly releases some chemical agents, right, and humanity dies, and the rest of the stragglers are taken care of by drones, et cetera.
0:14:12 I don’t even, like…
0:14:17 It’s a quick and painless death, I will say, on the positive side.
0:14:17 Let’s hope.
0:14:19 We don’t know what the chemical agents are.
0:14:21 Might not be quick and painless.
0:14:25 Some chemical agents are actually a very painful death, mate, so let’s hope.
0:14:27 Let’s hope it’s quick and painless.
0:14:28 Quick.
0:14:28 Quick.
0:14:29 Yes, quick.
0:14:30 Okay, hopefully.
0:14:33 Some chemical agents are not quick.
0:14:36 Let’s hope it’s quick and painless.
0:14:44 But if they’re actually capable of deception at that high level, then you technically don’t even need them to do it.
0:14:50 If we’re trusting medicine and all sorts of things to the AIs, it’s pretty easy for it to actually manipulate something
0:15:02 and actually insert something into codes, et cetera, that will fuck up humanity in a way that we can’t actually figure out at the moment, right?
0:15:10 Like, the way I think of it, and this is not from the paper, but this is just the way that, like, my mind processed it, is, like, think about DNA, right?
0:15:15 Like, you have these remarkably complex, huge strands of data.
0:15:23 And as we’ve found out, but it’s taken forever, one tiny mutation can actually be fatal, right?
0:15:24 But you can’t spot that mutation.
0:15:30 Sometimes that mutation isn’t fatal immediately, but will only manifest at a certain point in time.
0:15:35 That’s the way that my mind tried to kind of try to conceptualize what this actually means.
0:15:40 And so I think that, you know, that would be easy for a deceptive AI to do.
0:15:51 And to me, like, that’s kind of the big takeaway from this report is that we need to make sure that we are building AIs that will not deceive, right?
0:16:04 That their capabilities, they explain them in an honest way, and that honesty and trust is actually prioritized over other things, even though it might slow down research, it might slow down other things.
0:16:12 But that that kind of alignment step is absolutely crucial at the beginning, because otherwise, humans are human, right?
0:16:13 They’re easily manipulated.
0:16:19 And we often trust that computers are, quote, unquote, rational because they’re computers.
0:16:20 But they’re not.
0:16:21 They have their own inputs.
0:16:22 They have their own weights.
0:16:23 They have their own values.
0:16:27 And that could just lead us down a dark path.
0:16:31 So let me follow up with this pushback, I guess, right?
0:16:36 Like, first of all, I don’t know that humans are so easily persuaded.
0:16:44 This is my big critique with, like, all the misinformation people who say, well, misinformation is the biggest problem that society faces.
0:16:51 It’s like people are actually pretty stubborn and they’re kind of, to sound pretentious, they’re kind of Bayesian
0:16:53 in how they formulate their beliefs, right?
0:16:54 They have some notion of reality.
0:16:59 They’re looking at the credibility of the person who is telling them these remarks.
0:17:07 If it’s an unpersuasive source, it might make them less likely to believe it; they’re balancing it with other information, with their lived experience, so-called, right?
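[To put a formula on the “kind of Bayesian” point – this is our gloss, not something from the episode or the report: how far a listener moves depends on how much likelier the evidence is if the claim is true than if it isn’t, and a low-credibility source flattens that ratio.]

```latex
% Posterior belief in claim H after hearing evidence E from a given source:
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
% If the source is not credible, it would assert E whether or not H is true,
% so P(E \mid H) \approx P(E \mid \neg H) and the posterior stays near the prior P(H):
% the message barely persuades.
```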
0:17:16 You know, part of the reason that, like, I am skeptical of AIs being super persuasive is, like, you know that it’s an AI and you know it’s trying to persuade you.
0:17:17 You know what I mean?
0:17:23 So, like, if you go and play poker against, like, a really chatty player, Phil Hellmuth or Scott Seiver or someone like that, right?
0:17:29 You know on some level that the best play is just to totally ignore it, right?
0:17:34 You know that they are trying to sweet talk you into doing exactly what they want you to do.
0:17:43 And so the best play is to disengage or literally you can randomize your moves if you have some notion of what the game theoretical optimal play might be, right?
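[A minimal sketch of what “randomizing your moves” could look like in practice; the action frequencies below are made up for illustration, not solver output or anything from the episode. Because the choice is sampled at random from a fixed strategy, an opponent’s table talk can’t steer you toward any particular deviation.]

```python
import random

# Made-up action frequencies, purely for illustration.
mixed_strategy = {"fold": 0.25, "call": 0.55, "raise": 0.20}

def pick_action(strategy):
    """Sample one action according to the strategy's probabilities."""
    actions = list(strategy)
    weights = list(strategy.values())
    return random.choices(actions, weights=weights, k=1)[0]

print(pick_action(mixed_strategy))
```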
0:17:49 Or salesmen or politicians have reputations for being, oh, he’s a little too smooth.
0:17:52 Gavin Newsom, a little too fucking smooth, right?
0:17:54 I don’t find Gavin Newsom persuasive at all, right?
0:17:58 He’s a little too, like, from the hair gel to the constantly shifting vibes.
0:18:04 I mean, I don’t really find Gavin Newsom persuasive at all, even though, like, an AI might say, boy, Gavin Newsom’s a good-looking guy.
0:18:06 He’s a little gravelly-throated, but, you know, whatever.
0:18:12 I mean, look, the big critique I have of this project – and by the way, I think this is an amazing project.
0:18:17 In addition to, like, wonderful writing, if you view it on the web, not your phone.
0:18:22 There are these very cool, like, little infographics that update everything from, like, the market value.
0:18:23 They don’t call it OpenAI.
0:18:26 They call it OpenBrain, I guess, is what they settle on for a substitute, right?
0:18:32 Yeah, they call everything something else just to make sure that they’re not stepping on any toes.
0:18:36 So they have OpenBrain, and then they have DeepCent from China.
0:18:38 I wonder which one that could be.
0:18:39 I wonder.
0:18:43 But it’s beautifully presented and written.
0:18:48 And, like, I appreciate their going out on a limb here, you know?
0:18:51 I mean, I think they have – it’s been fairly well-received.
0:18:56 They’ve gotten some pushback, both from inside and, I think, outside the AI safety community, right?
0:18:58 But they’re putting their necks on the line here.
0:19:04 They will look – if things look pretty normal, the world looks pretty normal in 2032 or whatever, right?
0:19:08 And they will look dumb for having published this.
0:19:15 Well, and they actually have that, right, as a scenario, that you end up looking stupid if everything goes well.
0:19:16 But that’s okay.
0:19:19 Now, can I push back on the persuasion thing a little bit?
0:19:21 Just on two things.
0:19:32 So, first of all, the poker example is not actually a particularly applicable one here because you know that you’re playing poker and you know that someone is trying to get information and deceive you.
0:19:37 The tricky thing – so this is kind of when I spent time with con artists.
0:19:39 The best con artists aren’t Gavin Newsom.
0:19:40 Like, they’re not car salesmen.
0:19:44 You have no idea they’re trying to persuade you to do something.
0:19:48 They are just, like, nice, affable people who are incredibly charismatic.
0:19:58 And even in the poker community, by the way, like, some of the biggest grifters who, like, it comes out later on were just, like, stealing money and doing all of these things are charming, right?
0:19:59 They’re not sleazy-looking.
0:20:02 Like, they have no signs of, oh, I’m a salesman.
0:20:03 I’m trying to sell you something.
0:20:09 The people who are actually good at persuasion, you do not realize you’re being persuaded.
0:20:17 And I think people are incredibly easy to kind of – to subtly lead in a certain direction if you know how to do it.
0:20:23 And I think AIs could do that, and they might persuade you when you don’t even think they’re trying to persuade you.
0:20:26 You might just ask, like, can you please summarize this research report?
0:20:34 And the way that it frames it, right, the way that it summarizes it just subtly changes the way that you think of this issue.
0:20:41 We see that in psych studies all the time, by the way, where you have different articles presented in slightly different order, slightly different ways.
0:20:52 And people from the same political beliefs, you know, same starting point, come away with different impressions of what kind of the right course of action is or what this is actually trying to tell you.
0:20:57 Because the way the information is presented actually influences how you think about it.
0:21:01 It’s very, very easy to do subtle manipulations like that.
0:21:14 And if we’re relying on AI on a large scale for a lot of our lives, I think that if it has, like, a quote-unquote master plan, you know, the way that they present in this report, then persuasion in that sense is actually going to be pretty simple.
0:21:16 But you’ll know you’re being manipulated, right?
0:21:17 That’s the issue.
0:21:17 No, you don’t.
0:21:18 No, that’s the thing.
0:21:19 You don’t know you’re being manipulated.
0:21:23 I don’t know, but I don’t know.
0:21:30 I honestly, like, Nate, I applaud your belief in humans’ ability to adjust to this, but I don’t know that they will.
0:21:41 Because I’ve just seen enough people who are incredibly intelligent fall for cons and then be very unpersuadable that they have been conned, right?
0:21:44 Instead, doubling down and saying, no, I have not.
0:21:57 So humans are stubborn, but they’re also stubborn in saying, I have not been deceived, I have not been manipulated, when in fact they have, to protect their ego and to protect their view of themselves as people who are not capable of being manipulated or deceived.
0:22:02 And I think that that is incredibly powerful, and I think that that’s going to push against your optimism.
0:22:06 I hope you’re right, but from what I know, I don’t think you are.
0:22:20 I’m not quite sure I call it optimism, so I guess maybe we do, like, slightly different views of human nature, but, like, there’s not yet a substantial market for, like, AI-driven art or writing, and I’m sure there will be one eventually, right?
0:22:23 But, like, people understand that context matters, right?
0:22:28 That you could have AI create a rip-off of the Mona Lisa, but you can also buy a rip-off of the Mona Lisa on Canal Street for five bucks, right?
0:22:36 And, like, you know, so it’s the intentionality of the act and the context of the speaker.
0:22:39 Now I sound, like, super woke, I guess, right?
0:22:42 Like, where you’re coming from and what your lived experience is.
0:22:44 I think that actually is how humans communicate.
0:22:50 Like, art that might be pointless drivel coming from somebody can be something different coming from a Jackson Pollock or whatever, you know?
0:22:51 Absolutely.
0:22:53 I think that that’s a really important point, by the way.
0:22:55 I think it’s a different point, but I think that that is a very important point.
0:22:56 I think context does matter.
0:23:51 I was buttering up this report before.
0:23:57 My big critique of it is, like, where are the human beings in this?
0:24:01 Or put another way, kind of like, where is the politics, right?
0:24:07 They’re trying not to use any remotely controversial real names, right?
0:24:09 So you have Open Brain, for example.
0:24:13 So where is President Trump?
0:24:16 Let me just do a quick search to make sure the name Trump does not appear.
0:24:19 So they do actually, I don’t know if this existed.
0:24:22 Maybe they took your criticism in this.
0:24:26 But they do have, like, the vice president and the president.
0:24:29 Like, they do put politicians in this version of the report.
0:24:29 They don’t have names.
0:24:35 But they say the vice president, in one of these scenarios, you know, handily wins the election of 2028.
0:24:36 We have one vice president.
0:24:37 They have general secretary, I think.
0:24:38 I’m not sure if they, I mean.
0:24:39 General secretary.
0:24:42 The general secretary resembles Xi.
0:24:44 Yep.
0:24:49 And the vice president kind of resembles J.D. Vance, right?
0:24:53 I don’t think the president resembles Trump at all, right?
0:24:55 It’s kind of this composite character.
0:24:58 Yeah, they tried to sidestep that as much as possible.
0:25:05 If I think this will happen in the next four years, then, you know, presidential politics matter quite a bit.
0:25:06 I mean, I don’t know.
0:25:11 This is such a fucking, you know, I was jogging earlier on the east side.
0:25:15 And I was listening to the Ezra Klein interview with Thomas Friedman.
0:25:18 It’s such a fucking yuppie fucking thing, right?
0:25:19 It’s okay.
0:25:20 It’s okay.
0:25:21 You’re allowed to be a yuppie, Nate.
0:25:23 Sometimes you just got to embrace it.
0:25:28 Not a huge fan of his necessarily, but, like, you know, he’s well-versed on, like, geopolitics and, like, China issues.
0:25:29 He’s like, yeah, China.
0:25:30 He’s just been back from China.
0:25:32 He’s like, yeah, China’s kind of winning.
0:25:33 You know what I mean?
0:25:35 And, like, I’m not sure.
0:25:45 How Trump’s hawkishness on China, but, like, kind of imbecilically executed hawkishness on China.
0:25:48 Like, I’m not sure how that figures into this, right?
0:26:01 If we’re reducing U.S.-China trade, that probably does produce an AI slowdown, maybe more for us if we’re, like, if they’re not exporting their, like, rare earth materials and so forth.
0:26:05 But we’re making it harder for them to get NVIDIA chips, so they probably have, like, lots of workarounds and things like that.
0:26:08 Maybe Trump tariffs are good.
0:26:12 I’d like to ask the authors of this report because it means that we’re going to, like, have slower AI progress.
0:26:13 I’m not joking, right?
0:26:20 On the other hand, it, like, increases the hostility between the U.S. and China in the long run, right?
0:26:31 I mean, if we rescind all the tariffs tomorrow, I think we still permanently, or let’s not say permanent, let’s say at least for a decade or so, have injured U.S. standing in the world.
0:26:33 And so I don’t know how that figures in.
0:26:39 And I’m, like, I’m also not sure, like, kind of, quote, what the rational response might be.
0:26:43 But one thing they tracked, let me make sure that they kept this into their report, right?
0:26:58 So they actually have their implied approval rating for how people feel about OpenBrain, which is their not-very-subtle proxy for OpenAI.
0:27:02 I think this actually is some feedback that they took into account, right?
0:27:08 They originally had it slightly less negative, but they have this being persistently negative and then getting more negative over time.
0:27:12 It was a little softer in the previous version that I saw.
0:27:16 So they did change that one thing at some stage.
0:27:27 But, like, the fact that, like, AI scares people, it scares people for both good and bad reasons, but I think mostly for valid reasons, right?
0:27:42 Like, that the fear is fairly bipartisan, that the biggest AI accelerators are now these kind of Republican techno-optimists who are not looking particularly wise given how it’s going with the first 90 days, or wherever we are, of the Trump administration.
0:27:51 And the likelihood of, like, the likelihood of a substantial political backlash, right, which could lead to dumb types of regulations.
0:28:00 But, like, you know, part of it, too, is like, okay, AI, they’re saying, can do not just computer desk jobs, but, like, all types of things, right?
0:28:04 And, like, humans kind of play this role initially as supervisors.
0:28:11 And then, literally within a couple of years, people start to say, you know what, am I really adding much value here, right?
0:28:15 You kind of have, like, these legacy jobs.
0:28:16 And there is a lot of money.
0:28:26 I think most of these scenarios imagine very fast economic growth, although maybe very lumpy, right, for some parts of the world and not others.
0:28:30 But we’re kind of just sitting around with a lot of idle time.
0:28:33 It might be good for live poker, Maria, right?
0:28:40 All of a sudden, all these smart people, their open earth or open brain, excuse me, stock is now worth billions of dollars, right?
0:28:43 And, like, nothing to do because the AI is doing all their work, right?
0:28:46 They have a lot of fucking time to play some fucking Texas Hold’em, right?
0:28:52 That is one way of thinking about it.
0:29:10 Let’s go back to your earlier point, which I actually think is an important one, because, obviously, they were trying to do, as all, you know, super forecasting tries to do, is you try to create a report that will work in multiple scenarios, right?
0:29:13 You can’t tie it too much to, like, the present moment.
0:29:16 Otherwise, your forecasts are going to be quite biased.
0:29:29 However, I do think that what you raise, kind of our current situation with China, et cetera, has very real implications, given that this is kind of the central dynamic of this report that their predictions are based on.
0:29:40 I think that it’s incredibly valid to actually speculate, you know, how will, if at all, this affect the timeline of the predictions, the likelihood of the two scenarios.
0:29:50 And I will also say that one of the things in the report is that all of these negotiations on, like, will we slow down, will we not, how aligned is it, this all takes place in secret, right?
0:29:59 Like, we don’t know, we the humans don’t know that it’s going on, we don’t know what’s happening behind the scenes, and we don’t know what the decision makers are kind of thinking.
0:30:09 And so, for all we know, you know, President Trump is meeting with Sam Altman and trying to, trying to kind of do some of these things.
0:30:17 And it’s funny, because we were kind of pushing for transparency in one way, but there’s a lot of things here that are very much not transparent.
0:30:19 Yeah, it’s kind of the deep state, right?
0:30:25 But also, a lot of the negotiations are now AI versus AI, right?
0:30:32 And, look, I’m not sure that AIs will have that trust, both with the external actor and internally.
0:30:33 I’m skeptical of that, right?
0:30:43 If that does happen, they kind of think this might be good, because the AIs will probably behave in, like, a literally game theory optimal way, right?
0:30:49 And understand these things and make, I guess, like, fewer mistakes than humans might?
0:30:52 If they’re properly aligned, like, that’s a crucial thing.
0:30:58 Because in the doomsday scenario, AI negotiates with AI, but they conspire to destroy humanity, right?
0:30:59 So there are two scenarios.
0:31:03 One, it’s actually properly aligned, so AI negotiates with AI.
0:31:10 Game theory works out, and we end up with, you know, democracy and wonderful things.
0:31:18 But in the other one, where they’re misaligned, AI negotiates with AI to create a new AI, basically, and destroy humanity.
0:31:23 So it can go one way or the other, depending on that alignment step, first of all.
0:31:27 I mean, the utopia didn’t seem that utopian to me, right?
0:31:27 I’m not sure they had a job anymore.
0:31:28 No, it didn’t.
0:31:30 It actually seemed quite dystopian to me.
0:31:31 Like, it seemed incredibly dystopian.
0:31:33 It’s kind of like, you know…
0:31:34 But at least we’re still alive.
0:31:36 We’ll have cures.
0:31:37 We’ll probably live longer.
0:31:40 And again, lots of poker.
0:31:44 The AI, I’ll be writing a silver bulletin and hosting our podcast, right?
0:31:53 Let me back up a little bit, because I think we maybe take for granted that some of these premises are kind of controversial, right?
0:31:57 So they have a break point, I think, in 2026.
0:32:02 Well, it says why our uncertainty increases substantially beyond 2026, right?
0:32:03 So that’s kind of the break point.
0:32:05 It’s like, 2027 is an inflection point.
0:32:07 I think I’m using that term correctly in this context.
0:32:14 You know, so I’m reading this report, and up to 2026, and like, thumbs up, yeah, yeah, yeah, yeah.
0:32:22 This seems, like, very smart and detailed about, like, you know, how the economy is reacting and how politics are reacting and the race dynamic with China.
0:32:24 Maybe there needs to be a little bit more Trump in there.
0:32:27 I understand why politically they didn’t want to get into that mess, right?
0:32:32 But, like, so there’s kind of three different things here, right?
0:32:37 One is a notion of what’s sometimes called AGI or artificial general intelligence.
0:32:42 And if you asked 100 different researchers, you’d get 100 different definitions of what AGI is.
0:32:56 But, you know, I think it is basically, like, being able to do a large majority of things that a human being could do competently, assuming we’re limiting it to kind of, like, desk job-type tasks, right?
0:33:01 Anything that can be done remotely, or through remote work, is sometimes the definition that is used, right?
0:33:08 Because clearly AIs are inferior to humans at, like, sorting and folding laundry or things like that, right?
0:33:09 That requires a certain type of intelligence, right?
0:33:16 If you use the kind of desk job definition, then, like, AI is already pretty close to AGI, right?
0:33:21 I use large language models all the freaking time, and they’re not perfect for everything.
0:33:37 I felt like, you know, in terms of, like, being able to do the large majority of desk work at levels ranging from competent intern to super genius, like, on average, it’s probably pretty close to being generally intelligent by that definition, right?
0:33:41 If you’re the one using it, I just want to, like, once again point that out.
0:33:53 Because one of the things that they say in the report is that as it gets more and more involved, what we’re asking AI to do, it’s, like, the human process to evaluate whether it’s accurate and whether it’s making mistakes will get longer and longer.
0:34:05 And I think they say, like, for every, like, basically one day of work, it’ll take several, it’s, like, a two-to-one ratio at the beginning for how long it will take humans to verify the output, right?
0:34:19 So you think, like, you think you save time by having AI do this, but if you want it to actually develop correctly, then you need a team, and it takes them twice as long to verify that what the AI did is actually true and actually valid and actually aligned, et cetera, et cetera.
0:34:33 Now, you’re not asking it to do things that require that amount of time, but there do need to be little caveats to how we think about their usefulness and how you are able to evaluate the output versus other scenarios.
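[A back-of-the-envelope way to see why that verification overhead matters. The numbers below are made up purely for illustration; only the two-to-one verification ratio echoes the figure discussed above.]

```python
# Hypothetical numbers, chosen for illustration (not figures from the report).
human_solo_days = 5.0    # days for a human to do the task unaided
ai_work_days = 1.0       # "days of work" worth of output the AI produces
verify_ratio = 2.0       # human days spent verifying each day of AI output (the 2:1 ratio discussed)

human_verify_days = verify_ratio * ai_work_days
print(f"Human effort doing it directly: {human_solo_days:.1f} days")
print(f"Human effort just verifying AI: {human_verify_days:.1f} days")
print(f"Effort saved by delegating:     {human_solo_days - human_verify_days:.1f} days")
```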
0:34:43 When I use AI, the things that it’s best with are things that, like, save me time, right, where I feed it a bunch of different names for different college basketball teams.
0:34:44 We’re working on our NCAA model.
0:34:47 I’m, like, take these seven different naming conventions.
0:34:53 They’re all different and create a cross-reference table of these, which is kind of like a hard task, right?
0:34:58 You need to have a little context about basketball, and it did that very well, right?
0:34:59 That’s something I could have done.
0:35:06 It might have taken an hour or two, but, you know, instead I could do it in a few minutes, and it gets faster.
0:35:12 It’s like, oh, I’ve learned from this from you before, Nate, so now I can be faster doing this type of task in the future.
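[For a sense of what that kind of cross-referencing task looks like if you do it by hand, here is a rough sketch using fuzzy string matching. The team names, naming conventions, and aliases are invented for illustration; this is not how Nate’s actual model pipeline works.]

```python
import difflib

# Two hypothetical naming conventions for the same schools (invented for illustration).
source_a = ["St. Mary's (CA)", "NC State", "Miami (FL)", "UConn"]
source_b = ["Saint Mary's College", "North Carolina State", "Miami Florida", "Connecticut"]

# A few hand-maintained aliases help where fuzzy matching alone would fail.
aliases = {"UConn": "Connecticut", "NC State": "North Carolina State"}

def cross_reference(names_a, names_b, alias_map):
    """Map each source-A name to the closest source-B name."""
    table = {}
    for name in names_a:
        key = alias_map.get(name, name)
        match = difflib.get_close_matches(key, names_b, n=1, cutoff=0.4)
        table[name] = match[0] if match else None
    return table

for a, b in cross_reference(source_a, source_b, aliases).items():
    print(f"{a:18s} -> {b}")
```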
0:35:21 I was at the poker tournament down in Florida last week, and, like, you know, I asked Open Research, or excuse me, oh, God, is that, why?
0:35:24 See, exactly, right?
0:35:25 It all becomes.
0:35:27 Deep research.
0:35:28 Deep research.
0:35:29 Deep research.
0:35:30 I asked deep research.
0:35:34 Reading too many of these 2027 reports, your brain gets awful.
0:35:43 To pull a bunch of stock market data for me, and then I’m playing a poker hand, and, like, I make a really thin, sexy value bet with, like, fourth pair.
0:35:45 No one knows what that means, right?
0:35:48 I bet a very weak hand, because I thought the other guy would call with an even weaker hand.
0:35:53 And I was right, and I feel like I’m such a fucking stud here, value betting fourth pair, while AI does the work for me.
0:35:56 And then, of course, I, like, bust out of the tournament an hour later.
0:36:01 And meanwhile, you know, deep research bungles this particular task, but in general, AI has been very reliable.
0:36:10 But the point is, like, there’s, like, a inflection point where, like, I’m asking you to do things that, like, are just a faster version of what I could do myself.
0:36:20 I wouldn’t, at the moment, ask AI, be like, I want you to design a new NCAA model for me with these parameters, because, like, I wouldn’t know how to test it.
0:36:22 But anyway, I’m being long-winded here.
0:36:28 So AGI, we’re going to get AGI, or at least we’re going to get someone calling something AGI soon, right?
0:36:39 Artificial superintelligence, where it’s doing things much better than human beings, I think this report takes for granted – or not takes for granted.
0:36:47 It has lots of documentation about its assumptions, but it’s saying, okay, this trajectory has been very robust so far, and people make all types of bullshit predictions.
0:36:53 So the fact that these guys in particular have made accurate predictions in the past is certainly worth something, I think, right?
0:37:04 But they’re like, okay, you kind of follow the scaling law, and before too much longer, you know, AI starts to be more intelligent than human beings.
0:37:11 You can debate what intelligent means if you want, but do superhuman types of things, and or do them very fast, which I think might be different, right?
0:37:12 And AI can do things very fast.
0:37:16 Maybe it’s certainly a component of intelligence, right?
0:37:26 But I don’t take for granted that, like, quote-unquote, AI can reliably extrapolate beyond the data set.
0:37:30 I just think that, like, it’s not an absurd leap of logic.
0:37:38 It may even be, like, the base case or close to the base case, but, like, it’s not assumable from first principles, I don’t think.
0:37:40 We’ve all seen lots of trend charts.
0:37:51 You know, if you looked at a chart of Japan’s GDP in the 1980s, you might have said, okay, well, Japan’s going to take over the world, and people bought this.
0:37:56 And now the economy hasn’t grown for, like, 40 years, basically, right?
0:38:02 And so, like, we’ve all seen lots of curves that go up, and then it’s actually an S-curve, whatever the fuck you call it, where, like, it begins to bend the other way at some point.
0:38:04 And we can’t tell until later, right?
0:38:12 The other thing is, like, the ability of AI to plan and manipulate the physical world.
0:38:28 I mean, some of these things where they’re talking about, like, you know, brain uploading and Dyson swarms and nanobots, like, you know, there I would literally wager money against this happening on the timescales that they’re talking about.
0:38:29 About, right?
0:38:32 They double the timescale, okay, then I might start to give some more probability.
0:38:33 And look, I’m willing to be wrong about that.
0:38:37 I guess we’ll all be dead anyway, 50% likelihood in this scenario.
0:38:38 But, like…
0:38:40 In this scenario, 50% p(doom).
0:38:49 But, like, you know, the physical world requires sensory input and lots of parts of our brain that AI is not as effective at manipulating.
0:38:54 It also requires, like, being given or commandeering resources somehow.
0:38:59 By the way, this is, like, a little bit of a problem for the United States.
0:39:07 I mean, we are behind China by, like, quite a bit in robotics and related things, right?
0:39:13 So, like, I don’t know what happens if, like, we have the brainier, smarter AIs.
0:39:15 But, like, they’re very good at manufacturing and machinery.
0:39:20 So, like, what if we have the brains and they have the brawn, so to speak, right?
0:39:26 And they have maybe a more authoritarian but functional infrastructure.
0:39:28 So, I don’t know what happens then, right?
0:39:38 Like, but the ability of AIs to commandeer resources to control the physical world, to me, seems far-fetched on these timelines, in part because of politics, right?
0:39:50 I mean, the fact that it takes so long to build a new building in the U.S. or a new subway station or a new highway, and the fact that our politics is kind of sclerotic, right?
0:40:01 And, look, I mean, I don’t want to sound, like, too pessimistic, but if you read the book I recommended last week, Fight, I mean, we basically did not have a fully competent president for the last two years.
0:40:04 And I would argue that we don’t have one for the next four years, right?
0:40:10 So, like, all these kind of things that we have to plan for, like, who’s doing that fucking planning?
0:40:15 Our government’s kind of dysfunctional, and, you know, maybe that means we just lose to China, right?
0:40:16 Maybe that means we lose to China.
0:40:18 At least we’ll have, like, nice cars, I guess.
0:40:24 We’ll be back right after this.
0:41:17 I think that your point about the growth trajectories not necessarily being reliable is a very valid one.
0:41:19 The Japan example is great.
0:41:25 You know, Malthusian population growth is another big one, right?
0:41:30 We thought that population would explode, and instead we’re actually seeing population decline.
0:41:33 So, you know, the world does change.
0:41:48 The thing that I think they rely on is that the AIs are capable of designing just this incredible technology much more quickly so that our building process and all of that gets sped up hundredfold from what it is right now.
0:41:54 But it still, at least at this point, needs humans to implement it, right, and needs all of these different workers.
0:42:01 And so, yeah, I think there are some assumptions built into here that I hope, like, I hope that that timeline isn’t feasible.
0:42:04 And I do think that there are things that are holding us back.
0:42:07 All the same, I think it’s really – I think it’s interesting.
0:42:19 One of the reasons I like this report is that it forces you to think about these things, right, and try to game out some of these worst-case scenarios to try to prevent them, which I think is always an important thought exercise.
0:42:32 I do want to go back to kind of their good scenario, which just – so, bad scenario is, you know, we’re all wiped out by chemical warfare that the AIs release on us.
0:42:41 Good scenario is that, you know, everyone gets a universal basic income, and AI does everything, and no one has to do anything, and we can just live –
0:42:43 You know, happily – play poker, Maria.
0:42:44 Play poker, yeah.
0:42:55 And that just, as you suggested, that seems actually like a very dystopian scenario where people can become much more easy to brainwash, control, et cetera, et cetera.
0:43:04 It’s like a dumbing down, right, where we’re not challenged to produce good art, to advance in any sort of way.
0:43:08 Just, to me, it does not seem like a very meaningful way of life.
0:43:24 The question is, there’s another – if you read AI 2027, which I highly recommend that you read, there’s also another post by a pseudonymous poster called L Rudolf L, who wrote something called A History of the Future, 2025 to 2040,
0:43:39 which is very detailed but goes through kind of like what this looks like at like more of a human level, how society evolves, how the economy evolves, how work evolves, right?
0:43:47 And like very detailed, just like AI 2027 is, but kind of focuses on the parts of AI 2027 that I think it kind of deliberately ignores.
0:43:50 Maybe you can call them mild blind spots or whatever, right?
0:43:53 But that’s interesting because that kind of thinks about like what types of jobs are there in the future?
0:43:55 There are probably lots of lawyers actually, right?
0:44:03 Because, you know, the law is very sluggish to change, especially in a constitutional system where there are lots of veto points, right?
0:44:05 Probably high-end service sector.
0:44:08 You know, you go to a restaurant because everyone’s – or a lot of people are rich now, right?
0:44:13 You’re flattered by the attractive young server and things like that.
0:44:17 So it’s kind of highly kind of like catered and curated experiences.
0:44:29 I guess I have some faith in humanity’s ability to fight back, quote-unquote, against like two scenarios that I might not really like either one.
0:44:30 You know what I mean?
0:44:36 And like the scenario where like AI is producing like 10 percent GDP growth or whatever, right?
0:44:40 I mean it’s great if you own stocks that are exposed to AI and tech companies probably, right?
0:44:56 But it’s also making that money on the backs of mass job displacement and like, you know, economists are confident in the long run that human beings find productive things to do and, you know, mass unemployment has been predicted many times and never really occurred, right?
0:45:02 But like – but it’s not occurring this fast where they think the world ends in six years or whatever they’re predicting or we have utopia in six years.
0:45:11 And like just the ability of like human society to like deal with that change at these timescales leads to like more chaos than I think they’re predicting.
0:45:14 But I think also – and I told them this too, right?
0:45:18 I think also leads to more constraints, right?
0:45:20 That you hit bottlenecks.
0:45:31 If you have five things you have to do, right, and you have the world’s fastest computer, et cetera, et cetera, but there’s like a power outage in your neighborhood, right?
0:45:33 And that’s a – that’s a bottleneck, right?
0:45:39 Maybe there are ways around it if you’re like go to Home Depot and buy a generator or – you know what I mean?
0:45:44 But like – but the point is that like you’re often defined by like the slowest link and politics are sometimes the slowest link.
0:46:00 But also by like – also, you know, I think the report maybe understates – and I think kind of in general the AI safety community like maybe understates the ability of like human beings to cause harm to other human beings with AI, right?
0:46:04 That concern kind of gets brushed off as like too pedestrian or like –
0:46:05 I was going to say too pedestrian.
0:46:08 It was the exact word I was thinking of.
0:46:17 I think that’s a good – I mean, I think that’s a great place to end it because, yes, we do need to be concerned about all of these things about AI.
0:46:20 But like that phrase I think is very crucial.
0:46:24 Like do not underestimate the ability of humans to cause harm to other humans.
0:46:30 And I think that that’s – you know, it’s not a very – it’s not a very pleasant place to end.
0:46:33 But I think it’s a really important place to end.
0:46:37 And I think that that’s a very valid kind of way of reflecting on this.
0:46:40 Or to trust AIs too much, right?
0:46:44 I generally think that concern is like somewhat misplaced.
0:46:54 But like if we’re handing over critical systems to AI, right, it can cause problems if it’s very smart and deceives us and doesn’t like us very much.
0:47:06 It can also cause problems if it has hallucinations, bugs in critical areas where it isn’t as robust and hasn’t really been tested yet that are outside of its domain.
0:47:07 Yep.
0:47:10 Or there could be espionage.
0:47:20 Anyway, we will have plenty of time, although maybe only seven more years actually to like explore these scenarios.
0:47:29 Yes, and in seven years we’ll be like, welcome back to the final episode of Risky Business because the prediction is we’re all going to be dead tomorrow.
0:47:32 But yeah, this was an interesting exercise.
0:46:38 And I think my p(doom) has slightly gone up as a result of reading this.
0:47:43 But I also remain optimistic that humans can do good as well as harm.
0:47:50 Yeah, my interest in learning Chinese has increased as a result of recent developments.
0:46:51 I don’t know about my p(doom).
0:47:52 All right.
0:47:52 Yeah.
0:47:55 Let’s do some language immersion.
0:47:56 I’m with you.
0:48:03 That’s it for today.
0:48:10 If you’re a premium subscriber, we will be answering a question about whether MFAs can ever be plus EV right after the credits.
0:48:13 And if you’re not a subscriber, it’s not too late.
0:48:21 For $6.99 a month, the price of a mid-beer, you get access to all these conversations and all premium content across the Pushkin network.
0:48:26 Risky Business is hosted by me, Maria Konnikova.
0:48:27 And by me, Nate Silver.
0:48:30 The show is a co-production of Pushkin Industries and iHeartMedia.
0:48:33 This episode was produced by Isabel Carter.
0:48:35 Our associate producer is Sonia Gerwitt.
0:48:37 Sally Helm is our editor.
0:48:39 And our executive producer is Jacob Goldstein.
0:48:40 Mixing by Sarah Bruguer.
0:48:43 If you like the show, please rate and review us.
0:48:46 You know, we like, we take a four or a five.
0:48:47 We take the five.
0:48:48 Rate and review us to other people.
0:48:49 Thank you for listening.

This week, Nate and Maria discuss AI 2027, a new report from the AI Futures Project that lays out some pretty doom-y scenarios for our near-term AI future. They talk about how likely humans are to be misled by rogue AI, and whether current conflicts between the US and China will affect the way this all unfolds. Plus, Nate talks about the feedback he gave the AI 2027 writers after reading an early draft of their forecast, and reveals what he sees as the report’s central flaw.

Enjoy this episode from Risky Business, another Pushkin podcast.

The AI Futures Project’s AI 2027 scenario: https://ai-2027.com/


Get early, ad-free access to episodes of What’s Your Problem? by subscribing to Pushkin+ on Apple Podcasts or Pushkin.fm. Pushkin+ subscribers can access ad-free episodes, full audiobooks, exclusive binges, and bonus content for all Pushkin shows.

Subscribe on Apple: apple.co/pushkin
Subscribe on Pushkin: pushkin.com/plus

See omnystudio.com/listener for privacy information.
