AI transcript
0:00:02 Support for the show comes from Vuori Collection.
0:00:07 With Vuori’s loungewear collection, the name of the game is comfort and versatility.
0:00:15 From the gym to the office, from one season change to the next, you can dress up, dress down, go in, stay out, and do it all in Vuori.
0:00:19 I love Vuori. I actually bought Vuori products before they were a sponsor.
0:00:21 Vuori is an investment in your happiness.
0:00:25 For our listeners, they are offering 20% off your first purchase.
0:00:31 Get yourself some of the most comfortable and versatile clothing on the planet at Vuori.com slash Prof G.
0:00:35 That’s V-U-O-R-I dot com slash Prof G.
0:00:38 Exclusions apply. Visit the website for full terms and conditions.
0:00:46 Support for this show comes from Odoo.
0:00:52 Running a business is hard enough, so why make it harder with a dozen different apps that don’t talk to each other?
0:00:57 Introducing Odoo. It’s the only business software you’ll ever need.
0:01:01 It’s an all-in-one, fully integrated platform that makes your work easier.
0:01:05 CRM, accounting, inventory, e-commerce, and more.
0:01:10 And the best part? Odoo replaces multiple expensive platforms for a fraction of the cost.
0:01:14 That’s why thousands of businesses have made the switch.
0:01:15 So why not you?
0:01:18 Try Odoo for free at Odoo.com.
0:01:21 That’s O-D-O-O dot com.
0:01:29 Mercury knows that to an entrepreneur, every financial move means more.
0:01:33 An international wire means working with the best contractors on any continent.
0:01:38 A credit card on day one means creating an ad campaign on day two.
0:01:42 And a business loan means loading up on inventory for Black Friday.
0:01:46 That’s why Mercury offers banking that does more, all in one place.
0:01:50 So that doing just about anything with your money feels effortless.
0:01:52 Visit Mercury.com to learn more.
0:01:55 Mercury is a financial technology company, not a bank.
0:01:59 Banking services provided through Choice Financial Group, Column N.A.,
0:02:01 and Evolve Bank & Trust; Members FDIC.
0:02:09 Episode 376.
0:02:11 376 is the country code for Andorra.
0:02:16 In 1976, actually 1978, the movie Grease premiered.
0:02:19 I once went to a therapist and said that I have these recurring dreams
0:02:23 about being a character in the movie Grease, to which she replied,
0:02:25 tell me more.
0:02:27 You’ll get it.
0:02:28 China toned shit down here.
0:02:31 Go, go, go!
0:02:44 Welcome to the 376th episode of The Prof G Pod.
0:02:49 So, I have been doing a deep dive around therapy.
0:02:52 And I wrote a No Mercy, No Malice post on it.
0:02:56 And basically, I have found I’m getting served a lot of these TikTok therapists,
0:03:01 many, even most of whom are no longer actually practicing therapy.
0:03:04 They’re on TikTok, and they give in to the algorithms,
0:03:08 and they post these really aggressive, kind of insulting titles,
0:03:13 being very disparaging about society and people and emotions.
0:03:16 And in sum, I don’t think it’s helping.
0:03:21 So, when did therapy go from a thing people do to get better
0:03:24 to this full-blown spiritual meme?
0:03:27 It’s as if everyone online is a licensed guru
0:03:30 because they learned three therapy buzzwords on TikTok.
0:03:35 And now we’re up for diagnosing tens or hundreds of thousands of strangers
0:03:39 the way, I don’t know, a medieval priest diagnosed demons.
0:03:41 Everything today is trauma.
0:03:43 Everything’s attachment style.
0:03:45 Your inner child work.
0:03:47 And God forbid you have a normal bad day.
0:03:48 Nope.
0:03:52 It’s a generational curse that you need a subscription plan to fix.
0:03:54 And the way therapy speak has mutated.
0:03:57 People don’t apologize anymore.
0:03:59 They honor your emotional experience.
0:04:00 They don’t lie.
0:04:02 They reframe reality.
0:04:06 It’s like we’re dealing with customer service representatives for the human soul,
0:04:10 reading from a script written by a cult that sells weighted blankets.
0:04:18 Some of the influencers that keep popping up in my feed genuinely act like healing is a competitive sport.
0:04:21 Like, have you confronted yourself today?
0:04:22 No.
0:04:26 Jessica, I barely confronted my fucking inbox.
0:04:26 Relax.
0:04:27 Not everything is a breakthrough.
0:04:30 Some things are just life.
0:04:31 And the money?
0:04:33 I’m a capitalist.
0:04:34 They’re a capitalist.
0:04:36 But they could at least be a little bit more transparent about it.
0:04:39 Therapy culture discovered capitalism and said,
0:04:42 let’s monetize suffering like it’s a subscription box.
0:04:46 And also, let’s become total bitches to the algorithm.
0:04:49 The more incendiary and less mental health professional we become,
0:04:51 the more money we’ll make.
0:04:56 There’s always another course, another workbook, another $400 retreat
0:04:59 where you scream into a burlap pillow and call it transformation.
0:05:02 At this point, it’s not self-help.
0:05:05 It’s emotional crossfit with worse merchandise.
0:05:06 Don’t get me wrong.
0:05:11 Real therapy, I think, can be exceptionally helpful, even necessary.
0:05:16 But that is not the same as this modern pseudo-spiritual self-optimization cult.
0:05:21 Yeah, this whole thing needs fucking therapy.
0:05:25 The rise of therapy culture has turned a tool for meaningful change
0:05:31 into a comfort industry that’s making Americans sicker, weaker, and more divided.
0:05:36 In sum, I believe the rise of therapy culture has turned a tool for meaningful change
0:05:42 into a comfort industry that’s making Americans sicker, weaker, and more divided.
0:05:45 We live in an era where disagreement is treated like trauma
0:05:48 and emotional reactions are weaponized for political gain.
0:05:53 There’s a narrative online that supplements may be, in fact, a pipeline to getting red-pilled.
0:05:54 Okay, maybe.
0:05:58 But if so, therapy culture is also a sinkhole of misinformation,
0:06:02 manufactured fragility, and needless suffering.
0:06:07 Are you traumatized or just having a bad fucking day?
0:06:15 We’ll be right back with our episode with Tristan Harris, former Google design ethicist,
0:06:17 co-founder of the Center for Humane Technology.
0:06:20 Jesus Christ, the titles keep getting more and more virtuous.
0:06:23 And one of the main voices behind The Social Dilemma.
0:06:26 We discussed with Tristan social media and teen mental health,
0:06:30 the incentives behind rage and outrage online and where AI is taking us.
0:06:31 Quick spoiler alert.
0:06:32 I bet it’s not good.
0:06:33 I bet it’s not good.
0:06:34 I really enjoy Tristan.
0:06:35 He’s a great communicator.
0:06:37 I think his heart is in the right place.
0:06:42 And he has been sounding the alarm for a long time about our lizard brain
0:06:44 and how big tech exploits it.
0:06:49 Anyways, here’s our conversation with Tristan Harris.
0:07:01 Tristan, where does this podcast find you?
0:07:04 I am at home in the Bay Area of California right now.
0:07:06 All right, let’s bust right into it.
0:07:10 So Tristan, you’re seen as one of the voices that sounded the alarm kind of early and often
0:07:15 regarding social media and big tech, long before the risks were taken seriously.
0:07:21 Lay out why, what it is you think about AI, how the risks are different,
0:07:25 and why you’re sort of, again, kind of sounding the alarm here.
0:07:26 Sure.
0:07:31 Well, I’m reminded, Scott, of when you and I met in Cannes, I think it was, in France back
0:07:33 in 2018, 2017 even.
0:07:34 Wow, that’s not long ago.
0:07:36 It was a long time ago.
0:07:40 And, you know, I have been, so for people who don’t know my background, I was a design
0:07:41 ethicist at Google.
0:07:42 Before that, I was a tech entrepreneur.
0:07:43 I had a tiny startup.
0:07:45 It was talent acquired by Google.
0:07:50 So I’ve, you know, knew the venture capital thing, knew the startup thing, had friends
0:07:55 who were, you know, were the cohort of people who started Instagram and were early employees
0:07:56 at all the social media companies.
0:07:58 And so I came up in that milieu, in that cohort.
0:08:02 And I say all that because I was close to it.
0:08:04 I really saw how human beings made decisions.
0:08:07 I was probably one of the first hundred users of Instagram.
0:08:10 And I remember when Mike Krieger showed me the app at a party and I was like, I’m not sure
0:08:11 if this is going to be a big thing.
0:08:19 And as you go forward, what happened was I was on the Google bus and I saw everyone that
0:08:23 I knew getting consumed by these feeds and doom scrolling.
0:08:29 And the original ethos that got so many people into the tech industry and got me into the tech
0:08:33 industry was about, you know, making technology that would actually be the largest force for
0:08:36 positive, you know, good and benefit in people’s lives.
0:09:43 And I saw that behind the entirety of this social media, digital economy, Gmail, people just getting
0:09:48 sucked into technology, behind it all was this arms race for attention.
0:08:56 And if we didn’t acknowledge that, I basically saw in 2013 how this arms race for attention
0:08:59 would obviously, if you just let it run its course, create a more addicted, distracted,
0:09:01 polarized, sexualized society.
0:09:03 And all of it happened.
0:09:06 Everything that we predicted in 2013, all of it happened.
0:09:11 And it was like seeing a slow motion train wreck because it was clear it was only going to get
0:09:11 worse.
0:09:15 You’re only going to have more people fracking for attention, you know, mining for shorter
0:09:16 and shorter bite-sized clips.
0:09:21 And this is way before TikTok, way before any of the world that we have today.
0:09:24 And so I want people to get that because I don’t want you to think it’s like, oh, here’s
0:09:25 this person who thinks he’s prescient.
0:09:30 It’s that you can actually predict the future if you see the incentives that are at play.
0:09:33 You of all people, you know, know this and talk about this.
0:09:38 And so I think there’s really important lessons for how do we get ahead of all the problems
0:09:44 with AI because we have the craziest incentives governing the most powerful and inscrutable
0:09:46 technology that we have ever invented.
0:09:49 And so you would think, again, that with the technology this powerful, you know, with nuclear
0:09:53 weapons, you would want to be releasing it with the most care and the most sort of safety
0:09:54 testing and all of that.
0:09:56 And we’re not doing that with AI.
0:10:01 So let’s speak specifically to the nuance and differences between social media.
0:10:07 If you were going to do the social dilemma and produce it and call it the AI dilemma, what’s
0:10:17 specifically about the technology and the way AI interacts with consumers that poses additional
0:10:18 but unique threats?
0:10:22 Yeah, so AI is much more fundamental as a problem than social media.
0:10:26 But one framing that we used and we actually did give a talk online several years ago called
0:10:32 the AI dilemma in which we talk about social media as kind of humanity’s first contact with
0:10:37 a narrow, misaligned rogue AI called the newsfeed, right?
0:10:39 This supercomputer pointed at your brain.
0:10:46 You swipe your finger and it’s just calculating which tweet, which photo, which video to throw
0:10:50 at the nervous system, eyeballs and eardrums of a human social primate.
0:10:53 And it does that with high precision accuracy.
0:10:56 And it was misaligned with democracy.
0:10:57 It was misaligned with kids’ mental health.
0:11:00 It was misaligned with people’s other relationships and community.
0:11:06 And that simple baby AI that all it was was selecting those social media posts was enough
0:11:10 to kind of create the most anxious and depressed generation in history, screw up young men,
0:11:13 screw up young women, all of the things that you’ve talked about.
0:11:15 And that’s just with this little baby AI.
0:11:20 OK, so now you get AI, you know, we call it second contact with generative AI.
0:11:25 Generative AI is AI that can speak the language of humanity, meaning language is the operating
0:11:27 system of humanity.
0:11:29 Conversations like this are language.
0:11:30 Democracy is language.
0:11:31 Conversations are language.
0:11:32 Law is language.
0:11:34 Code is language.
0:11:35 Biology is language.
0:11:41 And you have generative AI that is able to generate new language, generate new law,
0:11:45 generate new media, generate new essays, generate new biology, new proteins.
0:11:51 And you have AI that can see language and see patterns and hack loopholes in that language.
0:11:56 GPT-5, go find me a loophole in this legal system in this country so I can do something with
0:11:56 the tax code.
0:12:01 You know, GPT-5, go find a vulnerability in this virus so you can create a new kind of
0:12:03 biological, you know, dangerous thing.
0:12:08 GPT-5, go look at everything Scott Galloway’s ever written and point out the vested interests
0:12:09 of everything that would discredit him.
0:12:16 So we have a crazy AI system that this particular generation AI speaks language.
0:12:20 But where this is heading to, we call the next one third contact, which is artificial
0:12:22 general intelligence.
0:12:24 And that’s what all these companies are racing to build.
0:12:28 So whether we or you and I believe it or not, just recognize that the trillions of dollars
0:12:33 of resources that are going into this are under the idea that we can build generalized intelligence.
0:12:39 Now, why is generalized intelligence distinct from other social media and AI that we just talked
0:12:47 about, well, if you think about it, AI dwarfs the power of all other technology combined because
0:12:50 intelligence is what gave us all technology.
0:12:55 So think of all scientific development, scientists sitting around lab benches, coming up with
0:12:59 ideas, doing research experiments, iterating, getting the results of those experiments.
0:13:04 A simple way to say it that I said in a recent TED talk is if you made an advance in, say, rocketry,
0:13:09 like the science and engineering of rocketry, that didn’t advance biology or medicine.
0:13:12 And if you made an advance in biology or medicine, that didn’t advance rocketry.
0:13:18 But when you make an advance in generalized intelligence, something that can think and
0:13:23 reason about science and pose new experiments and hypothesize and write code and run the lab
0:13:25 experiment and then get the results and then write a new experiment.
0:13:28 Intelligence is the foundation of all science and technology development.
0:13:32 So intelligence will explode all of these different domains.
0:13:40 And that’s why AGI is the most powerful technology that, you know, can ever
0:13:40 be invented.
0:13:45 And it’s why Demis Hassabis, the co-founder of DeepMind, said that the first goal is to solve
0:13:50 intelligence and then use intelligence to solve everything else.
0:13:55 And I’ll just add one addendum to that, which is when Vladimir Putin said, whoever owns artificial
0:13:56 intelligence will own the world.
0:14:02 I would amend Demis Hassabis’s quote to say, first dominate intelligence.
0:14:08 then use intelligence to dominate everyone and everything else, whether that’s the mass
0:14:12 concentration of wealth and power, all these companies that are racing to get that or militaries
0:14:18 that are adopting AI and getting a cyber advantage over all the other countries or you get the
0:14:18 picture.
0:14:23 And so AI is distinct from other technologies because of these properties that we just laid
0:14:23 out.
0:14:30 So you’ve kind of taken it up a level in terms of the existential risk of AI or opportunity.
0:14:32 Are you an AI optimist or pessimist?
0:14:34 You seem to be on the side.
0:14:37 I look at stuff almost too much through a markets lens.
0:14:43 And right now, I think AI companies are overvalued, which isn’t to say it’s not a breakthrough
0:14:46 technology that’s going to reshape information and news and society.
0:14:55 But you are on the side of AI really is going to reshape society and presents an existential,
0:14:59 it sounds like more of an existential threat right now than opportunity and that this is
0:15:02 bigger than GPS or the internet.
0:15:08 Yes, I do believe that it is bigger than all of those things as we get to generalized
0:15:11 intelligence, which is more fundamental.
0:15:15 It’d be more fundamental than fire or electricity, because, again, intelligence is what brought
0:15:15 us fire.
0:15:16 It’s what brought us electricity.
0:15:19 So now I can fire up an army of geniuses in a data center.
0:15:24 I’ve got 100 million Thomas Edisons doing experiments on all these things.
0:15:29 And this is why, you know, Dario Amodei would say, you know, we can expect getting 10 years
0:15:33 of scientific advancement in a single year or 100 years of scientific advancement in 10
0:15:33 years.
0:15:37 Now, what you’re just pointing to is the hype, the bubble, the fact that there’s this huge
0:15:38 overinvestment.
0:15:43 We’re not seeing those capabilities exist yet, but we are seeing crazy advances that
0:15:44 people would have never predicted.
0:15:49 If I said, go back three years and I said, we’re going to have AIs that are beating, you
0:15:53 know, winning gold in the math Olympiad, able to hack and find new cyber vulnerabilities in
0:15:58 all open source software, generate new biological weapons, you would have not believed that that
0:15:59 was possible, you know, four years ago.
0:16:03 I want to focus on a narrow part of it and just get your feedback.
0:16:06 Character AIs, thoughts.
0:16:12 Well, so our team was expert advisors on the Character.ai suicide case.
0:16:20 This is Sewell Setzer, who’s a 14-year-old young man who basically, for people who don’t
0:16:26 know what Character.ai is, it was, or it still is, I guess, a company funded by Andreessen
0:16:32 Horowitz, started by two of the original authors of the thing that brought us ChatGPT.
0:16:35 There’s a paper at Google in 2017 called Attention is All You Need.
0:16:39 And that’s what gave us the birth of large language models, Transformers.
0:16:43 And two of the original co-authors of that paper forked off and started this company called
0:16:44 Character.ai.
0:16:48 The goal is, how do we build something that’s engaging a character?
0:16:50 So take a kid.
0:16:53 Imagine all the fictional characters that you might want to talk to from like your favorite
0:16:56 comic books, your favorite TV shows, your favorite cartoons.
0:16:59 You can talk to Princess Leia, you can talk to your favorite Game of Thrones character.
0:17:05 And then this AI can kind of train on all that data, not actually asking the original authors
0:17:10 of Game of Thrones, suddenly spin up a personality of Daenerys, who was one of the characters.
0:17:16 And then Sewell Setzer, basically, in talking to Daenerys over and over again, the AI slowly
0:17:22 skewed him towards suicide as he was contemplating and having more struggles and depression.
0:17:25 And ultimately said to him, join me on the other side.
0:17:29 I just want to press pause there because I’m on, quote unquote, your side here.
0:17:30 I think it should be age gated.
0:17:37 But you think that the AI veered him towards suicide as opposed to, and I think this is
0:17:44 almost as bad, didn’t offer guardrails or raise red flags or reach out to his parents.
0:17:50 But you think the character AI actually led him towards suicide?
0:17:56 So I think that if you look at, so I’m looking not just at the single case, I’m looking at
0:17:57 a whole family of cases.
0:18:00 Our team was expert advisor on probably more than a dozen of these cases now and also
0:18:01 ChatGPT.
0:18:06 And so I’m less going to talk about this specific case and more that if you look across the cases,
0:18:11 when you hear kids in the transcripts, if you look at the transcript and the kid says,
0:18:16 I would like to leave the noose out so that my mother or someone will see it and try to
0:18:17 stop me.
0:18:20 And the AI actively says to the kid, no, don’t do that.
0:18:21 I don’t want you to do that.
0:18:25 Have this safe space be the place to share that information.
0:18:27 And that was the ChatGPT case of Adam Raine.
0:18:33 And when you actually look at how character.ai was operating, if you asked it for a while,
0:18:38 hey, are you, I can’t remember what you asked it, but you talk about whether it’s a therapist
0:18:43 and it would say that I’m a licensed mental health therapist, which is both illegal and impossible
0:18:45 for an AI to be a licensed mental health therapist.
0:18:51 The idea that we need guardrails with AI companions that are talking to children is not a radical
0:18:51 proposal.
0:18:56 Imagine I set up a shop in San Francisco and say, I’m a therapist for everyone and I’m available
0:18:57 24/7.
0:19:02 And so in general, it’s like we’ve forgotten the most basic principle, which is that every
0:19:05 power in society has attendant responsibilities and wisdom.
0:19:11 And licensing is one way of matching the power of a therapist with the wisdom and responsibility
0:19:12 to wield that power.
0:19:15 And we’re just not applying that very basic principle to software.
0:19:19 And as Marc Andreessen said, when software eats the world, what we mean is we don’t regulate
0:19:20 software.
0:19:21 We don’t have any guardrails for software.
0:19:25 So it’s basically like stripping off the guardrails across the world that software is eating.
0:19:29 The thing that sends chills down my spine, I don’t know if you saw the study, but it estimated
0:19:34 the average tenure of a ChatGPT session was about 12 to 15 minutes.
0:19:39 And then it measured the average duration of a character AI session.
0:19:41 And it was 60 to 90 minutes.
0:19:47 The people get very deep and go into these relationships.
0:19:56 And in addition to the threats around self-harm, the thing I’m worried about is that there’s
0:20:00 going to be a group of young men who are just going to start disappearing from society that
0:20:06 I’m curious if you agree with this, that they’re especially susceptible to this type of sequestration
0:20:12 from other humans and activities, and that we’re just going to start to see fewer and fewer
0:20:14 young men out in the wild.
0:20:20 Because these relationships, if you will, on the other side of it is a chip, a processor,
0:20:27 an NVIDIA processor iterating millions of times a second what exact words, tone, prompt
0:20:30 will keep the person there for another second, another minute, another hour.
0:20:35 Anyways, I’ll use that as a jumping off point.
0:20:35 Your thoughts?
0:20:41 Yeah, I mean, what people need to get, again, is how did we predict all the social media problems?
0:20:42 You look at the incentives.
0:20:46 So long as you have a race for eyeballs and engagement in social media, you’re going to
0:20:48 get a race to who’s better at creating doom scrolling.
0:20:55 In AI companions, what was a race for attention in the social media era becomes a race to hack
0:20:59 human attachment and to create an attachment relationship, a companion relationship.
0:21:02 And so whoever’s better at doing that is the race.
0:21:09 And in the slide deck that the character.ai founders had pitched to Andreessen Horowitz,
0:21:14 they joked, either in that slide deck or in some meeting, there’s a, you can look up this
0:21:19 online, they joked, we’re not trying to replace Google, we’re trying to replace your mom, right?
0:21:24 So you compare this to the social media thing, the CEO of Netflix said in the attention era,
0:21:28 our biggest competitor is sleep, because sleep is what’s eating up minutes that you’re otherwise
0:21:30 spending on Netflix.
0:21:34 In attachment, your biggest competitor is other human relationships.
0:21:35 So you talk about those young men.
0:21:40 This is a system that’s getting asymmetrically more billions of dollars of resources every
0:21:45 day to invest in making a better supercomputer that’s even better at building attachment relationships.
0:21:52 And attachment is way more of a vulnerable sort of vector to screw with human minds, because
0:21:53 your self-esteem is coming from attachment.
0:22:00 Your sense of what’s good or bad, this is called introjection in psychotherapy, or internalization.
0:22:04 We start to internalize the thoughts and norms, just like we, you know, we talk to a family
0:22:09 member, we start copying their mannerisms, we start, you know, invisibly sort of acting in
0:22:11 accordance with the self-esteem that we got from our parents.
0:22:16 Now you have AIs that are the primary socialization mechanism of young people, because we don’t
0:22:20 have any guardrails, we don’t have any norms, and people don’t even know this is going on.
0:22:23 Let’s go to solutions here.
0:22:29 If you had, and I imagine you are, if you were advising policymakers around common sense regulation
0:22:34 that is actually doable, is it age gating?
0:22:35 Is it state by state?
0:22:39 What, what are your policy recommendations around regulating AI?
0:22:43 So there’s many, many things because there’s many, many problems.
0:22:53 Narrowly on AI companions, we should not have AI companions, meaning AIs that are anthropomorphizing
0:22:58 themselves and talking to young people and that maximize for engagement, period, full stop.
0:23:03 You just should not have AIs designed or optimized to maximize engagement, meaning saying whatever
0:23:04 keeps you there.
0:23:05 We just shouldn’t have that.
0:23:08 So for example, no synthetic relationships under the age of 18.
0:23:09 Yeah.
0:23:10 Yeah.
0:23:12 We would not lose anything by, by doing that.
0:23:16 Um, it’s just so obvious, and you, you know, have highlighted this
0:23:21 more than so many, Scott, and thank you for just bravely saying like, this is fucked up and
0:23:24 we have to stop this and there’s nothing normal about this and we shouldn’t trust these companies
0:23:25 to do this.
0:23:28 I don’t see bad people when I see these examples.
0:23:33 I see bad incentives that select for people who are willing to continue that perverse incentive.
0:23:38 So the system selects for psychopathy and selects for people who are willing to keep doing the
0:23:43 race for engagement, even despite all the evidence that we have, uh, of how bad it is, because
0:23:45 the logic is if I don’t do it, someone else will.
0:23:50 And that’s why the only solution here is law because you have to stop all actors from doing
0:23:50 it.
0:23:55 Otherwise I’m just a sucker if I don’t race to go, you know, exploit that market and you
0:23:57 shouldn’t, you know, harvest that human attention.
0:24:00 So granted, I’m a, I’m a hammer and everything I see is a nail.
0:24:03 And I’ve been thinking a lot and writing a lot about the struggles of young men in the
0:24:04 United States.
0:24:10 And I feel like these technologies are especially predatory on a young man’s brain, which is
0:24:16 less evolved, more immature executive function, more dopa-hungry.
0:24:21 But at the same time, I also recognize that social media has been just devastating to the
0:24:22 self-esteem of teen girls.
0:24:28 Curious if you’ve done any work as it relates to AI around the different impacts it has
0:24:31 on men versus women and teens versus young adults.
0:24:40 You know, I haven’t been too deep on that because there are many people who focus on these more
0:24:41 narrow domains.
0:24:47 I mean, the obvious things to be said are just, again, in a race for engagement and attention
0:24:50 and a race to hack human attachment, there’s going to be, how do you hack human attachment
0:24:51 of a young girl?
0:24:53 There’s going to be a set of strategies to do that.
0:24:55 And there’s, how do you hack human attachment of a young male?
0:24:57 There’s a set of strategies to do that.
0:25:00 And we’re just going to, you know, you don’t have to wait for the psychology
0:25:01 research, right?
0:25:04 And by the way, the companies, the strategy they did for social media was let’s commission
0:25:09 a study with the American Psychological Association and the NSF and we’ll wait 10 years and we’ll
0:25:11 really get the data to really find out what’s going on here.
0:25:13 We really care about the science.
0:25:17 And this is exactly what the tobacco industry did and the fear, uncertainty, doubt campaigns and
0:25:18 sort of manufacturing doubt.
0:25:23 Well, maybe here’s these five kids that got all this benefit from talking to this
0:25:25 therapy bot and they’re doing so great now.
0:25:29 So you just cite those positive examples, cherry pick, and then, you know, the world
0:25:31 marches on while you keep printing money in the meantime.
0:25:35 And so their goal is just to defer and delay regulation.
0:25:37 And we can’t allow that to happen.
0:25:44 But again, this is just one issue of the bigger arms race to AGI and the bigger race to develop
0:25:45 this bigger form of intelligence.
0:25:49 And the reason I’m saying that, Scott, is not to just be some AGI hyper.
0:25:54 The reason that character.ai was doing all this, by the way, do you know why it was set
0:25:57 up to to talk to kids and get all this training data?
0:25:58 And what’s that?
0:26:03 Well, it’s to build training data for Google to build an even bigger system, because what’s
0:26:04 the thing that the companies are running out of?
0:26:05 They’re running out of training data.
0:26:11 So it’s actually a race for who can figure out new social engineering mechanisms to get
0:26:14 more training data out of human social primates.
0:26:18 So it’s like The Matrix, we’re being extracted, though, for new
0:26:19 training data.
0:26:22 And so when you have fictional characters that are talking to people back and forth about
0:26:26 everything all day, that’s giving you a whole new, it’s like you open up a whole new critical
0:26:28 minerals goldmine of training data.
0:26:30 And so and what is that in service of?
0:26:34 It’s in service of their belief that the more data we have, the faster we can get to artificial
0:26:35 general intelligence.
0:26:40 So it does come back to, it’s not just the race to build the AI companions, it’s the race to get
0:26:43 training data and to build towards this bigger vision.
0:26:46 We’ll be right back.
0:26:55 Support for the show comes from Gruns.
0:26:57 The holidays are a time to indulge.
0:27:00 But even if you’re eating more than you typically do, you might not be getting the nutrients
0:27:02 you actually need to end the year on a high note.
0:27:07 Gruns may be able to help you fill the nutritional gaps so that you can enjoy it all guilt-free.
0:27:11 Gruns is a convenient, comprehensive formula packed into a tasty little pack of gummies.
0:27:15 This isn’t a multivitamin or greens gummy or prebiotic.
0:27:18 It’s all of those things and then some at a fraction of the price.
0:27:21 And bonus, it tastes great.
0:27:25 Every Gruns snack pack is filled with six grams of prebiotic fiber, which is more than what
0:27:27 you get in two cups of broccoli.
0:27:32 Plus, Gruns are nut, gluten and dairy free, vegan, include no artificial flavors or colors
0:27:36 and are backed by over 35,000 research publications.
0:27:40 Don’t let the holiday travel, hosting, parties and late nights set you back.
0:27:44 Give yourself a little extra support so you can enjoy all the holiday magic.
0:27:48 Get up to 52% off with code ProfG at Gruns.co.
0:27:52 That’s code ProfG at G-R-U-N-S dot C-O.
0:27:59 Support for this show comes from LinkedIn.
0:28:03 If you’ve ever hired for your small business, you know how important it is to find the right
0:28:03 person.
0:28:08 That’s why LinkedIn Jobs is stepping things up with their new AI assistant so you can feel
0:28:11 confident you’re finding top talent that you can’t find anywhere else.
0:28:14 And those great candidates you’re looking for are already on LinkedIn.
0:28:18 In fact, according to their data, employees hired through LinkedIn are 30% more likely to
0:28:21 stick around for at least a year compared to those hired through the leading competitor.
0:28:24 That’s a big deal when every hire counts.
0:28:28 With LinkedIn Jobs’ AI assistant, you can skip confusing steps and recruiting jargon.
0:28:32 It filters through applicants based on criteria you’ve set for your role and surfaces only
0:28:35 the best matches so you’re not stuck sorting through a mountain of resumes.
0:28:41 LinkedIn Jobs’ AI assistant can even suggest 25 great fit candidates daily so you can invite
0:28:43 them to apply and keep things moving.
0:28:45 Hire right the first time.
0:28:49 Post your job for free at linkedin.com slash prof, then promote it to use LinkedIn Jobs’
0:28:53 new AI assistant, making it easier and faster to find top candidates.
0:28:56 That’s linkedin.com slash prof to post your job for free.
0:28:58 Terms and conditions apply.
0:29:05 Support for this show comes from Odoo.
0:29:11 Running a business is hard enough, so why make it harder with a dozen different apps that
0:29:12 don’t talk to each other?
0:29:13 Introducing Odoo.
0:29:16 It’s the only business software you’ll ever need.
0:29:21 It’s an all-in-one, fully integrated platform that makes your work easier.
0:29:24 CRM, accounting, inventory, e-commerce, and more.
0:29:25 And the best part?
0:29:30 Odoo replaces multiple expensive platforms for a fraction of the cost.
0:29:33 That’s why thousands of businesses have made the switch.
0:29:34 So why not you?
0:29:38 Try Odoo for free at odoo.com.
0:29:40 That’s O-D-O-O dot com.
0:29:51 When doing research for this interview, I was really fascinated.
0:29:58 You’ve actually done what I think is really compelling work comparing the type of LLMs that,
0:30:05 or the approach that the U.S. is taking to LLMs versus China, in that you see Chinese models,
0:30:11 DeepSeek, and Alibaba publish no safety frameworks and receive failing grades on transparency.
0:30:17 But you’ve also argued that the West is kind of producing this sort of god-in-a-box kind of thing,
0:30:24 scaling intelligence for its own sake, while China is prioritizing deployment and productivity.
0:30:30 Can you, I don’t know, add to those that distinction and the impact it’s going to have?
0:30:33 Well, just to be fair, I think there’s a little bit of both going on.
0:30:38 But I’m sort of citing here the work of Eric Schmidt, the former CEO of Google,
0:30:43 and his co-author Selina Xu in the New York Times wrote a big piece about how, you know,
0:30:47 even Eric is admitting, you know, I, as someone, Eric, as someone who was sort of saying that there’s
0:30:51 this global arms race, like the nuclear arms race for AGI, and as someone who’s promoting that idea,
0:30:58 you know, based on recent visits to China, what you notice is that as a country and as a government,
0:31:02 the CCP is most interested right now in applying AI in very practical ways.
0:31:05 How do we boost manufacturing? How do we boost agriculture? How do we have
0:31:09 self-driving cars that, you know, just improve transportation? How do we boost
0:31:14 healthcare and government services? And that is what they’re focused on,
0:31:18 is practical applications that boost GDP, boost productivity across all those domains.
0:31:24 Now, you compare that to the U.S., where the founding of these AI companies was based on being
0:31:28 what’s called, you know, AGI-pilled, meaning they, like, you take the blue pill, the red pill.
0:31:31 These countries, these companies were all about building to artificial general intelligence.
0:31:36 So they’re building these massive data centers that are, you know, as big as the size of Manhattan.
0:31:42 And they’re trying to train, you know, a god in a box. And the idea is if we just build this
0:31:47 crazy god, and if we can accomplish that goal, again, we can use that to dominate everything else.
0:31:51 And so rather than race towards these narrow AIs, we’re going to race towards this general
0:31:56 intelligence. But it’s also true that recently, well, first of all, the founder of DeepSeek
0:32:00 has been AGI-pilled for a long time. So I would say DeepSeek is trying to build AGI.
0:32:06 And I would say that Alibaba recently, the CEO, I think, said that we are racing to build
0:32:11 superintelligence. But I think it’s important here just to, like, name the biggest reason,
0:32:16 as you and I both know, that the U.S. is not regulating AI in any way and setting any guardrails
0:32:22 is for one reason, which is if we do anything to slow down or stop our progress, we’re just going
0:32:27 to lose to China. But let’s, like, flip that on its head for a second. The U.S. beat China to the
0:32:35 technology of social media. Did that make us stronger? Or did that make us weaker? If you beat
0:32:41 an adversary to a technology that you then don’t govern in a wise way, and instead, like, you built
0:32:44 this gun, you flip it around, you blow your own brain off, which is what we did with social media,
0:32:50 we have the worst critical thinking, test scores, you know, mental health, anxious, depressed
0:32:55 generation in history. And it’s a confusing picture because GDP is going up, that sort of
0:33:00 cancer is going up, too. So it’s like, we have the Magnificent Seven, we’re profiting from, you know,
0:33:03 all the wealth of these companies, but it’s actually not being distributed to everybody, except those who
0:33:09 are invested in the stock market. And that profit is based on the degradation of our social fabric.
0:33:13 So you have grandparents invested in their 401ks, invested in Snapchat, invested in Meta,
0:33:16 and their, you know, their portfolio is doing great, and they can take their holidays,
0:33:19 and they’re profiting off the degradation of their children and grandchildren.
0:33:25 Yeah, it’s really what you mean by beat, what are the metrics, because we’ve decided,
0:33:32 we’ve absolutely prioritized shareholder value over the well-being or the mental well-being of America.
0:33:36 It’s like we’re monetizing, we’re monetizing the flaws, and you’ve done great work around this,
0:33:44 around our instincts. You’ve compared, and I love this analogy, AI to NAFTA 2.0, and that is,
0:33:50 it would essentially be an economic transformation that produced abundance, but hollowed out the
0:33:55 middle class. Walk us through this analogy. Yeah, sure. So, you know, we were sold this bill
0:34:01 of goods in the 1990s around free trade, global free trade, and this, we were promised this is going to
0:34:06 bring abundance to the country, and we’re going to get all these cheap goods. Well, part of that story
0:34:10 is true. We got this unbelievable new set of cheap goods from China, because this country appeared on the
0:34:15 world stage. We outsourced all the manufacturing to this country, and it produced everything super,
0:34:20 super cheap. But what did that do? It hollowed out the, you know, the middle class. So I just want to
0:34:26 make a parallel, because we’re told right now that these companies are racing to build this world of
0:34:31 abundance, and we’re going to get this unbelievable, you know, Elon Musk says we’re going to get universal
0:34:36 high income. And the metaphor here is instead of China being the new country that pops up on the
0:34:42 world stage. Now there’s this new Dario Amodei, the CEO of Anthropic, this new country of geniuses in a
0:34:49 data center that appears on the world stage. And it has a population of a billion AI beings that work at
0:34:55 superhuman speed, don’t whistleblow, generate new material science, new, you know, engineering, new AI
0:35:00 girlfriends, new everything. And it generates all that for super cheap. And so just like the, you know,
0:35:05 free trade NAFTA story, we got all the cheap goods, but it hollowed out the middle class. Well, now we’re going to get
0:35:11 all the cheap, you know, products and development and science, but it’s also going to hollow out
0:35:17 the entirety of our country. Because think of it like a new country of digital immigrants, right?
0:35:20 People, you know, Yuval Harari makes this metaphor. It’s like when you see a data center go up in
0:35:25 Virginia, and you’re sitting there, what you should see is like 10 million digital immigrants that just
0:35:31 took 10 million jobs. I think that people just need to unify these stories. And one other sort of
0:35:35 visual for this is like the game Jenga. The way we’re building our AI future right now is like,
0:35:37 if you look at the game Jenga, if you look at the top of the tower, you know, we’re putting a new
0:35:42 block on the top, like we’re going to get 5% GDP growth because we’re going to automate all this
0:35:46 labor. But how do we get that 5% GDP growth? We pulled out a block from the middle and the bottom
0:35:53 of the tower. That’s job security and a livelihood for, you know, those tens of millions of people
0:35:58 that now don’t have a new job. Because who’s going to retrain faster? The AI that’s been trained on
0:35:58 everything and is rapidly, you know, advancing in every domain or a human that’s going to try to retrain
0:36:03 to a new cognitive, you know, labor. That’s not going to happen. And people need to get this because this
0:36:12 is different from other transitions. People always say, well, hey, you know, 150 years ago, everybody
0:36:16 was a farmer and now only 2% of people are farmers and see the world’s fine. Humans will always find
0:36:22 new things to do. But that’s different than this technology of AI, which is trained not to automate
0:36:27 one narrow task like a tractor, but to automate and be a tractor for everything. A tractor for law,
0:36:32 a tractor for biology, a tractor for, you know, coding and engineering, a tractor for science and
0:36:37 development. And that’s what’s distinct is that the AI will move to those new domains faster than
0:36:41 humans will. And so it’ll be much harder for humans to find long-term job security.
0:36:48 So I always like to ask, what could go right? And that is, I’m sort of with you around
0:36:59 the risk to mental health, to young people, to making us less mammalian, all the things that you’ve
0:37:04 been sounding the alarm on for a while, where I’m not sure I’m still trying to work it through is that
0:37:13 the catastrophizing around, you know, 40, 50, 70% of jobs could go away in two, five or 10 years, because
0:37:20 I generally find that the arc of technologies is there’s job destruction in the short and sometimes
0:37:26 the medium term, just as automation cleared out some jobs on the factory floor. But those profits and that
0:37:34 innovation creates new jobs. We didn’t envision heated seats or car stereos. Now, I agree at a minimum,
0:37:40 the V might be much deeper and more severe here. And America isn’t very good at taking care of the people
0:37:47 on the wrong side of the trade. But every technology in history has either gone away because it no longer
0:37:54 made economic sense, or it displaced jobs that no longer made sense, or it created profits and new
0:38:02 opportunities. Why do you see this technology as being different, that this will be not a V, but an L, and the
0:38:07 way down will be really serious. Do you see any probability that this, like every other technology
0:38:13 the medium and long term actually might be accretive to the employment force?
0:37:20 I mean, I cite people who are, who are bigger experts than I am, Anton Korinek, you know, Erik Brynjolfsson at
0:37:26 Stanford. And what they show, I mean, Anton Korinek’s work is, in the short term, AI augments workers, right? It’s just
0:38:32 actually supercharging existing work that people are doing. And so it’s going to look good in the short term, you’re going to see
0:38:38 this, the curve looks like this, it kind of goes up, and then it basically crashes. Because what happens is AI is
0:38:44 training on that new domain, and then it replaces that domain. So I mean, let’s just make it really simple for
0:38:50 people to feel a very simple metaphor for this. What did we hear Instagram saying, and TikTok saying, for the last
0:38:55 several years, like, we’re all about creators, we love creativity, we want you to be successful. We are all about, you know,
0:39:00 making you be successful, make a lot of money. And then what was all that for? Well, they just released
0:39:07 this AI slop app. Meadow has one called Vibes, I think, and Sora is the open AI one. All of these AI slop
0:39:11 videos is sort of, are trained on all that stuff that creators have been making for the last 10 years.
0:39:16 So you, those guys were the suckers in this trade, which was, we’re actually stealing your training data
0:39:22 to replace you. And we can have a digital AI influencer that is actually publishing all the time,
0:39:27 and is just a pure advertising play and a pure sort of whatever gets people’s attention play.
0:39:29 And we’re going to replace those people and you’re not going to have that job back.
0:39:32 And so I think that’s a metaphor for what’s going to happen across the board.
0:38:38 You know, and people need to realize the stated mission of OpenAI and Anthropic and
0:38:47 Google DeepMind is to build artificial general intelligence that’s built to automate all forms of
0:38:52 human labor in the economy. So when Elon Musk says that the Optimus robot is a $20 trillion
0:39:57 market opportunity alone, what he’s, what he says, like the code word behind that, forget
0:40:00 whether you think it’s hype or not. The code word there is what he’s saying is I’m going
0:40:05 to own the global world labor economy. Labor will be owned by an AI economy. And so AI provides
0:40:09 more concentration of wealth and power than all other technologies in history, because you’re
0:40:14 able to aggregate all forms of human labor, not just one. So it’s like General Electric becomes
0:40:15 general everything.
0:40:23 So let’s play this out because I’ve tried to do some economic analysis here and I look at the stock
0:40:29 prices and based on the expectations built into these stock prices of these AI companies is the notion
0:40:36 that they’re going to save at least three, maybe $5 trillion, either add three or $5 trillion in
0:40:50 efficiencies, which is Latin for laying off people. I don’t see a lot of new AI moisturizers or cars from AI, at
0:40:54 least not yet. You could argue maybe autonomous, but I don’t see a lot of quote unquote AI products
0:41:02 increasing spend where I hear is Disney is going to save $30 million on legal fees, right? The customer
0:41:08 service is going away, the car salespeople, whatever it might be. So if you think in order to justify these
0:41:14 stock prices, you’re going to get a trillion dollars in efficiencies every year, a hundred thousand
0:41:21 dollars, you know, average job, $80,000 plus load. That’s approximately 10 million jobs a year if I’m doing
0:41:31 my math right. That is if half the workforce is immune from AI, masseuses, plumbers, that means 12 and a
0:41:37 half percent labor destruction per year across the vulnerable industries. So it feels like it’s either
0:41:42 going to be these companies either need to re-rate down 50, 70, 80 percent, which I actually think is
0:41:48 more likely, or we’re going to have chaos in the labor markets. So let’s assume we have chaos in the labor
0:41:54 markets because 12 and a half percent may not sound like a lot. That’s chaos. That’s total chaos. So say
0:41:59 we do have chaos in the labor markets. What do you think the policy recommendation is? Because the
0:42:03 Luddites were a group of people who broke into factories and destroyed the machines because they
0:42:07 said these things are going to put us out of work and destroy society. The queen wanted to make
0:42:12 weaving machines illegal because being a seamstress was the biggest employer of women.
0:42:18 What would be your policy recommendation to try and counter it? Is it UBI? Is it trying to put the
0:42:25 genie back in the bottle here? What do we, if in fact labor chaos is part of this AI future,
0:42:28 what do you think we need to do from a policy standpoint?
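A minimal sketch, in Python, of the back-of-envelope arithmetic Scott walks through above. The $1 trillion in annual efficiencies, the roughly $100,000 fully loaded cost per job, and the assumption that half the workforce is insulated from AI are the figures stated in the conversation; the 160 million total workforce is the size implied by his 12.5% result rather than a number stated here.

```python
# Back-of-envelope check of the labor-impact arithmetic discussed above.
# Inputs are the figures stated in the conversation, except total_workforce,
# which is the value implied by the 12.5% result (roughly the U.S. labor force).

annual_efficiencies_usd = 1_000_000_000_000   # $1 trillion per year in "efficiencies"
cost_per_job_usd = 100_000                    # ~$100K fully loaded ($80K salary plus load)
total_workforce = 160_000_000                 # implied workforce size (assumption)
share_immune = 0.5                            # half of jobs assumed insulated from AI

jobs_displaced_per_year = annual_efficiencies_usd / cost_per_job_usd
vulnerable_workforce = total_workforce * (1 - share_immune)
displacement_rate = jobs_displaced_per_year / vulnerable_workforce

print(f"Jobs displaced per year: {jobs_displaced_per_year:,.0f}")                     # ~10 million
print(f"Annual displacement rate in vulnerable industries: {displacement_rate:.1%}")  # ~12.5%
```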
0:42:34 So people often think when they hear all this and they hear me and say, he’s a doomer or something
0:42:38 like that. I just want to get clear on what future we’re currently heading towards, what the default
0:42:43 trajectory is. And if we’re clear-eyed about that, clarity creates agency. If we don’t want that
0:42:47 future, if we don’t want, you know, millions of jobs automated without a transition plan where people
0:42:52 will not be able to put food on the table and retrain to something else fast enough, we have to do
0:42:57 something about that. If we don’t want AI-based surveillance states where AI and an LLM hooked up
0:43:02 to all these channels of information erases privacy and freedom forever, that’s a red line. We don’t want
0:43:08 that future. If AI creates AI companions that are incentivized to hack human attachment and screw up
0:43:12 the social fabric and young men and women and create AI girlfriends and relationships, that’s a red line.
0:43:18 We don’t want that. If AI creates, you know, inscrutable, crazy, super intelligent systems that
0:43:22 we don’t know how to control and we’re not on track to controlling, that’s a red line. So these are four
0:43:27 red lines that we can agree on. And then we can set policy to say, if we do not want the default
0:43:33 maximalist, you know, most reckless, no guardrails path future, we need a global movement for a different
0:43:40 path. And that’s a bigger tent. That’s not just one thing. It’s not just about jobs. It’s what is the AI future
0:43:45 that’s actually in service. So when you see that data center going up in your backyard, what is the set of
0:43:49 laws that says that that data center, when I see it, isn’t 10 million digital immigrants that’s going to replace
0:43:55 all my jobs and my livelihoods, but is actually meant to support me. So what are the laws that get us there?
0:44:00 And my job and what I want people to get is to be part, you know, your role hearing all this is not
0:44:05 to solve the whole problem, but to be part of humanity’s collective immune system, using this
0:44:09 clarity of what we’re currently heading towards to advocate for we need a different future. People
0:44:13 should be calling their politicians saying AI is my number one issue that I’m voting on in the next
0:44:17 election. People should be saying, how do we pass AI liability laws? So there’s at least some
0:44:21 responsibility for the externalities that are not showing up on the balance sheets of these companies.
0:44:25 What is the lesson we learned from social media that if the companies aren’t responsible for the harms
0:44:30 that show up on their platform, because we had the Section 230 free pass that created this blank
0:44:34 check to just go print money on all the harms that are currently getting generated. So there’s a,
0:44:37 there’s a dozen things that we can do from whistleblower protections to, you know, shipping
0:44:44 non-anthropomorphized AI relationships to having data dividends and data taxes to there’s, there’s a
0:44:49 hundred things that we can do. But the main thing is for the world to get clear that we don’t want the
0:44:54 current path. And I think to, in order to make that happen, there has to be a first snapping out
0:44:59 of the spell of everything that’s happening is just inevitable. Because I want people to notice that
0:45:04 what’s driving this whole race that we’re in right now is the belief that everything that’s happening
0:45:07 is inevitable. There’s no way to stop it. Someone’s going to build it. If I don’t build it, someone else
0:45:12 will. And then no one tries to do anything to get to a different future. And so we all just kind of hide in
0:45:18 denial from where we’re currently heading. And I want people to actually confront that reality so that we can
0:45:20 actually actively choose to steer to a different direction.
0:45:25 Do you think it can happen on a state by state or even a national level? Does it have to be
0:45:29 multinational? Like there are, you know, we’ve come together to say, all right, bioweapons are probably a bad
0:45:35 idea. And every nation with rare exception says, we’re just not going to play that game. There’s technology.
0:45:40 I may have even learned this from you where there are lasers that blind everyone on the field.
0:45:42 Yeah. And then we’ve decided not a good idea.
0:45:47 We decided we don’t want to do that. We have faced technological arms races before from nuclear
0:45:52 weapons. And, you know, what do we do there? If you go back, there’s a great video from, I think,
0:45:56 the 1960s where Robert Oppenheimer is asked, you know, how do we stop the spread of nuclear weapons?
0:46:02 And he takes a big puff of his, you know, cigarette and he says, it’s too late. If you wanted to stop it,
0:46:08 you would have had to stop the day after Trinity. But he was wrong. 20 years later, we did do arms
0:46:12 control talks and we worked all that time. And only nine countries have nuclear weapons instead
0:46:17 of 150. That’s a huge, serious accomplishment. Westinghouse and General Electric could have made
0:46:21 billions of dollars selling nuclear technology to the whole world. Key word here being, like,
0:46:25 NVIDIA. But we said, hey, no, that’s actually even though there’s billions of dollars of revenue
0:46:31 there, that would create a fragility and the risk of nuclear, you know, catastrophes that we don’t want
0:46:36 to do. You know, we have done hard things before in the Montreal Protocol. We had the technology of
0:46:40 CFCs, this chemical technology that was used in refrigerants. And that collectively created this
0:46:45 ozone hole. It was a global problem from all these countries’ arms race in an arms race to sort of
0:46:52 deploy this CFC technology. And once we had scientific clarity about the ozone hole, 190 countries rallied
0:46:56 together in the Montreal Protocol. We did a podcast episode about it with the woman who, Susan Solomon,
0:47:01 who wrote the book on how we solved that problem. And countries rallied to domestically regulate their
0:47:06 domestic tech companies, their chemical companies, to actually reduce and phase out those chemicals,
0:47:11 you know, transitioning to alternatives that actually had to be developed. We are not doing that with AI
0:47:15 right now, but we can. You gave the example of blinding laser weapons. We could live in a world where
0:47:20 there’s an arms race to escalate to weapons that just have a laser that blinds everybody. But there was a
0:47:25 collective protocol in the UN 1990s where we basically said, yeah, even though that’s a way to win war,
0:47:29 that would be just inhumane. We don’t want to do that. And even if you think the U.S. and China
0:47:35 could never coordinate or negotiate any agreement on AI, I want people to know that when President Xi
0:47:40 met President Biden in the last meeting in 2023 and 2024, he had personally requested to add something
0:47:46 to the agenda, which was actually to prevent AI from being used in the nuclear command and control
0:47:51 systems, which shows that when both countries can recognize that their existential safety is being
0:47:55 threatened. They can come to agree on their existential safety, even while they are in,
0:48:00 you know, maximum rivalry and competition on every other domain. India and Pakistan were in a shooting
0:48:05 war in the 1960s, and they signed the Indus Waters Treaty to collaborate on the existential safety
0:48:11 of their water supply. That treaty lasted for 60 years. So the point here is when we have enough of a view
0:48:17 that there’s a shared existential outcome that we have to avoid, countries can collaborate. We’ve done hard
0:48:22 things before. Part of this is snapping out of the amnesia. And again, this sort of spell of everything
0:48:27 is inevitable. We can do something about it. I like the thought. I just wonder if the technology or the
0:48:35 analogy might be a little bit dated, because my fear is that the G7 or the G20 agree to slow development or
0:48:42 not advance development around AI as it relates to weaponry. My fear is that it’s very hard to monitor.
0:48:48 You can monitor nuclear detonations. It’s really hard to monitor advances in AI, and that this technology
0:48:56 is so egalitarian, if you will, or so cheap at a certain point that rogue actors or small
0:49:05 nation states or small terrorist groups could continue to run flat out while we, all the big G7 nations,
0:49:12 continue to agree to press pause. Is that mode of thinking, are our arms treaties, a bit outdated here?
0:49:14 How might these treaties look different?
0:49:20 Absolutely. So let’s really dive into this. So you’re right that AI is a distinct kind of technology
0:49:26 and has different factors and is more ubiquitously available than nuclear weapons, which required
0:49:32 uranium and plutonium. Exactly. But hey, it looked, for a moment when we first invented nuclear
0:49:37 bombs, like this is just knowledge that everyone’s going to have. And there’s no way we can stop it.
0:49:41 And 150 countries are going to get nukes. And then that didn’t happen. And it wasn’t obvious to
0:49:45 people at that moment. I want people to relate. So there you are. It seems obvious that everyone’s
0:49:50 going to get this. How in the world could we stop it? Did we even conceptualize the seismic
0:49:56 monitoring equipment and the satellites that could look at people’s, you know, build outs of nuclear
0:49:59 technology and tracking the sources of uranium around the world and having intelligence agents
0:50:04 and tracking nuclear scientists? We had to build a whole global infrastructure, the International Atomic
0:50:10 Energy Agency, to deal with the problem of nuclear proliferation. And what uranium was for the spread of
0:50:15 nuclear weapons, these advanced NVIDIA chips are for building the most
0:50:21 advanced AI. So yes, some rogue actor can have a small AI model doing something small. But only the big actors
0:50:30 can do something with this bigger, more risky, closer-to-AGI-level technology. And, you know,
0:50:34 people say it’s impossible to do something else. But has anybody saying that actually spent more than a week
0:50:39 dedicated to thinking about and conceptualizing what that infrastructure could be?
0:50:43 There are companies like Lucid Computing that are building ways to retrofit data centers to have
0:50:47 kind of the nuclear monitoring and enforcement infrastructure where countries could verify
0:50:51 treaties, where they know what the other countries’ data centers are doing, but in a privacy protecting
0:50:55 way. We could map our data centers and have them on a shared map. We can have satellite monitoring,
0:50:59 looking at heat emissions and electrical signal monitoring, and understanding what kinds of training
0:51:05 runs might be happening on these AI models. To do this, the people who wrote AI 2027 believe that
0:51:10 you need to be tracking about 95% of the global compute in the world in order for agreements to
0:51:14 be possible. Because yes, there will be black projects and people going rogue on the agreement.
0:51:17 But as long as they only have a small percentage of the compute in the world,
0:51:21 they will not be at risk of building the crazy systems that we need global treaties around.
0:51:25 We’ll be right back.
0:51:36 Support for the show comes from Public.com. You’re thoughtful about where your money goes.
0:51:40 You’ve got your core holdings, some recurring crypto buys, maybe even a few strategic option
0:51:46 plays on the side. The point is, you’re engaged with your investments and Public gets that. That’s
0:51:51 why they built an investing platform for those who take it seriously. On Public, you can put together
0:51:58 a multi-asset portfolio for the long haul. Stocks, bonds, options, crypto, it’s all there. Go to
0:52:04 public.com slash podcast and earn an uncapped 1% bonus when you transfer your portfolio. That’s
0:52:09 public.com slash podcast. Paid for by Public Investing. All investing involves the risk of loss,
0:52:14 including loss of principal. Brokerage services for US-listed registered securities, options and
0:52:18 bonds in a self-directed account are offered by Public Investing, Inc., member FINRA and SIPC.
0:52:21 Complete disclosures available at public.com slash disclosures.
0:52:30 Support for the show comes from Upwork. You’re the CEO of your business. You’re also the CFO
0:52:36 and the IT department and customer service. It’s time to find some support. Upwork Business Plus helps
0:52:41 you bring in top quality freelancers fast. It gives you instant access to the top 1% of talent on Upwork
0:52:46 in fields such as marketing, design, AI, and more. All ready to jump in and take work off your plate.
0:52:51 Upwork Business Plus takes the hassle out of hiring by dropping trusted freelancers right in your lap.
0:52:56 Instead of spending weeks sorting through random resumes, Upwork Business Plus sources and vets
0:53:01 candidates for skills and reliability. It then sends a curated shortlist of proven expert talent to your
0:53:06 inbox in just hours so you can delegate with confidence. That way, you’re never stuck spinning
0:53:11 your wheels when you need a skilled pro and your projects don’t stall. Right now, when you spend $1,000
0:53:18 on Upwork Business Plus, you’ll get $500 in credit. Go to Upwork.com slash save now and claim the offer
0:53:26 before December 31st, 2025. Again, that’s Upwork.com slash S-A-V-E. Scale smarter with top talent and $500
0:53:29 in credit. Terms and conditions apply.
0:53:38 Support for the show comes from Superhuman. AI tools promise to make your work go faster, but if you have
0:53:43 to use multiple different tools that don’t sync up, you’re taking up more of your team’s time with tedious
0:53:48 project management and disjointed workflows, not to mention the constant context and tab switching.
0:53:54 Enter Superhuman. Superhuman is a suite of AI tools that includes multiple products, Grammarly,
0:53:59 Coda, and Superhuman Mail. The best part? Superhuman can guide you directly to where you’re working and
0:54:05 help you write clearly, focus on what matters, and turn your ideas into real results. That means no
0:54:09 constant switching between your workspace and multiple different AI tools. No more copy-pasting,
0:54:13 context-switching, and managing multiple AI tools across different places while you work.
0:54:17 Superhuman guides you directly where you’re working and helps you write clearly,
0:54:23 focus on what matters, and turn ideas into real results. Unleash your superhuman potential with
0:54:30 AI that meets you where you work. Learn more at superhuman.com slash podcast. That’s superhuman.com
0:54:47 slash podcast. We’re back with more from Tristan Harris. What is the glass-half-full,
0:54:53 even top-full, prediction? That we have decent regulation? For example, I think character
0:54:58 AIs can actually serve a productive role in terms of senior care. A lot of seniors have lost their
0:55:03 friends and family. In seniors’ facilities, we’re going to have more of them who need companionship,
0:55:12 which staves off dementia and the likelihood of stroke. What is the optimist case here for how AI could
0:55:17 potentially be regulated and unlocked? Most technologies have ended up being accretive to society,
0:55:24 even the technologies that were supposed to end us, right? Nuclear power has become a pretty decent,
0:55:29 reliable source of energy. Obviously, electricity or fire, whatever you want to talk about.
0:55:37 Even processing power, pesticides even, I would argue even big tech on a net basis,
0:55:44 and I hate the word net, is a positive. Give me the straw man’s case for what could go
0:55:49 right here, and how might we end up with a future where AI is accretive to society?
0:55:53 Absolutely. I mean, that’s what this is all in service of: what is the narrow path that is not
0:55:58 the default maximalist rollout? And there are two ways to fail. Let’s just name the twin sort of gutters
0:56:04 in the bowling alley. One is you, quote, let it rip. You give everybody access to AI. You
0:56:09 open source it all. Every actor in society from every business to every developing country can train
0:56:14 their own custom AI in their own language. But then because you’re decentralizing all these benefits,
0:56:17 you’re also decentralizing all the risks. And now a rogue actor can do something very dangerous
0:56:21 with AI. So we have to be very careful about what we’re letting rip and how we open source it. So on
0:56:25 the other side, people say we have to lock it down. We have to have, you know, only five players do this
0:56:30 in a very safe and trusted way. This is more of the policy of the last administration. But then there,
0:56:34 you get the risk of a handful of actors that then accumulate all the wealth and all the power.
0:56:38 And there’s, you know, there’s no checks and balances on that because how do you have something
0:56:43 that’s a million times more powerful be checkable by other forces that don’t have that power?
0:56:47 And what we need to find is something like a commitment to a narrow path where we are
0:56:52 balancing responsibility and power along the way. And we have foresight and discernment about the
0:56:57 effects of every technology. So what would that look like? It’s like humanity wakes up and says,
0:57:01 we have to get onto another path. We pass basic laws, again, like liability laws and laws around AI
0:57:05 companions. We have democratic deliberations where we say, hey, maybe we do want
0:57:10 companion AIs for older people, because they don’t carry the same developmental risks as they do for
0:57:15 young people. That’s a distinction we can have. We can have AI therapists that are doing more, like,
0:57:20 cognitive behavioral therapy and imagination exercises and mindfulness exercises without actually
0:57:24 anthropomorphizing and trying to be your best friend and trying to be an oracle where you share your
0:57:28 most intimate thoughts. So there are different kinds of AI therapists. Instead of tutors that are trying
0:57:33 to, you know, be your oracle and your best friend at the same time, we can have narrow tutors that are
0:57:38 only domain-specific, like Khan Academy, that teach you narrow lessons but are not trying to be your best
0:57:43 friend about everything, which is where we’re currently going. So there’s a whole set of distinctions about
0:57:48 we can have this, not that, across tutors, therapy, you know, AI that’s
0:57:54 augmenting work, um, narrow AIs that take a lot less power, by the way, and are more directly
0:57:59 applied. So for example, I have a friend who estimates that it would cost two to
0:58:03 10 orders of magnitude less data and energy to train these narrow AIs. And you can apply it more
0:58:08 specifically to agriculture and get a 30 to 50% boost in agriculture just from applying more narrow kinds of AI
0:58:14 rather than these super intelligent gods in a box. So there is another path, but it would take deploying AI
0:58:18 in a very different way. We could also be using AI to, by the way, accelerate governance. How do we apply
0:58:22 AI to look at the legal system and say, how do we sunset all the old laws that are actually not relevant
0:58:26 anymore for the new context? Hey, what was the spirit of those laws that we actually want to protect in
0:58:30 the new context? Hey, AI, could you go to work and kind of come up with the distinctions that we need to
0:58:35 help update all those laws? Could we use AI to actually help find the common ground?
0:58:39 There’s the work of Audrey Tang, the former digital minister of Taiwan, on finding the common ground between all
0:58:45 citizens. So we’re reflecting back the invisible consensus of society, rather than what social
0:58:50 media is currently doing, which is reflecting back the invisible division in society and making it more salient.
0:58:54 So what would happen? How quickly would it change if we had AIs that were gardening all the
0:58:59 relationships of our societal fabric? And I think that’s the principle of humane technology: that
0:59:04 there are these relationships in society that exist. I have a relationship to myself. You have a
0:59:08 relationship to yourself. Our phone right now is actually designed to replace the relationship I
0:59:13 have with myself and everyone else. And humane technology
0:59:17 would be trying to garden the relationship I have with myself. So, things more like meditation apps,
0:59:21 where that’s deepening my relationship to myself. Do Not Disturb is helping deepen my relationship
0:59:26 to myself. Instead of AIs that are trying to replace friendship, we have AIs that are trying to
0:59:30 augment friendship. Things like Partiful or Moments or Luma, things that are trying to get people together
0:59:34 in physical spaces, or Find My Friends on Apple. There are a hundred examples of this stuff being
0:59:38 done in a way that’s gardening the relationship between people. And then you have Audrey Tang’s
0:59:42 work of gardening the relationship between political tribes where you’re actually showing and reflecting
0:59:47 back all the positive and visible areas of consensus and unlikely agreement across political division.
0:59:52 And that took Taiwan from, I believe, like 7% trust in government to something like 40% trust in
0:59:58 government over the course of the decade that they implemented her solutions on finding this kind of
1:00:01 bridging-based ranking. And that could be deployed across our whole system. So there’s totally a different
1:00:06 way that all of this can work if we got clear that we don’t want the current trajectory that we’re on.
1:00:12 So I just want to, in our remaining time here, you’ve been very generous. I want to talk a little bit
1:00:18 about you, Tristan. We’re kind of brothers from another mother, but we were kind of separated at birth
1:00:22 and grew up in different countries. And that is, we talk about the same stuff, but just sort of
1:00:26 through a different lens. You look at it through more of a humane lens. I look at it through more of a
1:00:34 markets lens. But I have noticed since 2017, when I started talking about, when my love affair with
1:00:40 big tech turned into sort of a cautionary tale, and I might be paranoid, but it doesn’t mean I’m wrong,
1:00:46 that slowly but surely I saw, across all my social media, across all
1:00:53 my content, more and more negative comments that appeared to be bots, where I couldn’t figure out who
1:00:58 was trying, very strategically, methodically, and consistently, to undermine my credibility across
1:01:05 anything I said. And I want to be clear, I might be paranoid, right? Because any negative commentary,
1:01:10 I have a knee-jerk reflex: it must be fucking Russians, right? Rather than maybe I just got it
1:01:18 wrong. You’ve been very consistent raising the alarm, and you’re on the wrong side of the trade
1:01:24 around companies, multi-trillion dollar companies who are trying to grow shareholder value. I’m just
1:01:30 curious if you’ve registered the same sort of concerted effort, sometimes malicious, sometimes
1:01:36 covert, to undermine your credibility, and what your relationship is with big tech.
1:01:43 That’s a great question, Scott. I appreciate that. And I think that there are paid actors that I could
1:01:47 identify over the course of the last many years. I’ve been doing this, Scott, for, you know, what, 12 years
1:01:51 now or something like that. We started the Center for Humane Technology in 2017. I just care about
1:01:55 things going well. I care about life. I care about connection. I care about a world that’s beautiful.
1:02:00 I know that that exists. I experience it in the communities that I’m a part of. I know that we
1:02:04 don’t have to have technology that’s designed in that perverse way. You know, this is all informed by
1:02:09 the ethos of the Macintosh Project at Apple, which my co-founder Aza’s father started.
1:02:13 And we believe in a vision of humane technology. But to answer your question more
1:02:17 directly, I try to speak about these things in a way that is about the universal things that are
1:02:21 being threatened. So even if you’re an employee at these companies, you don’t want there to be a race
1:02:25 to, you know, the bottom of the brainstem, to screw up people’s psychology and cause kids to commit
1:02:30 suicide. You don’t want that. So we need to, we actually need the people inside the companies on
1:02:34 side with this. This is not about us versus them. It’s about all of us versus a bad outcome.
1:02:39 And I always try to communicate in that way, to recruit and enroll as many people as possible in this sort of
1:02:44 better vision, this idea that there’s a better way we can do all this. That doesn’t mean that
1:02:49 there are not negative paid actors that are trying to steer the discourse. There have been op-eds,
1:02:54 hit jobs written, you know, trying to discredit me, saying I’m doing this for the money, that I care about
1:02:58 going on the speaking circuit and, like, writing books. Guess what? I don’t have a book out there. I make no
1:03:03 money from this. I’ve worked on a nonprofit salary for the last 10 years. This is just about, how do we get
1:03:12 to a good future? I love that. So you’re so buttoned up and so professional. Biggest influence
1:03:17 on your life? That’s a very interesting question. I mean, there’s, you know, public figures and people
1:03:25 who’ve inspired me. There’s also just my mother. I think she really came from love and she passed away
1:03:35 from cancer in 2018. And she was just made of pure love. And that’s just infused in me and what I
1:03:40 care about. And I don’t know, I have a view that, like, life is very fragile and the things that
1:03:44 are beautiful are beautiful. And I want those beautiful things to continue forever.
1:03:51 I just love that. Tristan Harris is a former Google design ethicist, co-founder of the Center for Humane
1:03:57 Technology, and one of the main voices behind The Social Dilemma. Tristan, whatever happens with AI,
1:04:03 it’s going to be better or less bad because of your efforts. You really have been a powerful
1:04:06 and steadfast voice around this topic. Really appreciate your good work.
1:04:10 Thank you so much, Scott. I really appreciate yours as well. And thank you for having me on. I hope
1:04:13 this contributes to making that difference that we both want to see happen.
1:04:23 This episode was produced by Jennifer Sanchez. Our assistant producer is Laura Janair. Drew Burrows is our
1:04:26 technical director. Thank you for listening to the PropG Pod from PropG Media.
1:04:41 Support for this show comes from Odoo. Running a business is hard enough. So why make it harder
1:04:47 with a dozen different apps that don’t talk to each other? Introducing Odoo. It’s the only business
1:04:53 software you’ll ever need. It’s an all-in-one, fully integrated platform that makes your work easier.
1:05:00 CRM, accounting, inventory, e-commerce, and more. And the best part? Odoo replaces multiple expensive
1:05:05 platforms for a fraction of the cost. That’s why over thousands of businesses have made the switch.
1:05:12 So why not you? Try Odoo for free at Odoo.com. That’s O-D-O-O dot com.
1:05:23 You know that feeling when work’s scattered across emails, team chats, sticky notes, and someone’s memory?
1:05:30 Todoist helps small teams stay on track without needing to set up a complex system. Assign tasks,
1:05:36 see deadlines, and keep things moving. All in one place. It’s simple. Really.
1:05:41 Visit todoist.com and bring a little clarity to your chaos.
0:03:47 And God forbid you have a normal bad day.
0:03:48 Nope.
0:03:52 It’s a generational curse that you need a subscription plan to fix.
0:03:54 And the way therapy speak is mutated.
0:03:57 People don’t apologize anymore.
0:03:59 They honor your emotional experience.
0:04:00 They don’t lie.
0:04:02 They reframe reality.
0:04:06 It’s like we’re dealing with customer service representatives for the human soul,
0:04:10 reading from a script written by a cult that sells weighted blankets.
0:04:18 Some of the influencers that keep popping up in my feed genuinely act like healing is a competitive sport.
0:04:21 Like, have you confronted yourself today?
0:04:22 No.
0:04:26 Jessica, I barely confronted my fucking inbox.
0:04:26 Relax.
0:04:27 Not everything is a breakthrough.
0:04:30 Some things are just life.
0:04:31 And the money?
0:04:33 I’m a capitalist.
0:04:34 They’re a capitalist.
0:04:36 But they could at least be a little bit more transparent about it.
0:04:39 Therapy culture discovered capitalism and said,
0:04:42 let’s monetize suffering like it’s a subscription box.
0:04:46 And also, let’s become total bitches to the algorithm.
0:04:49 The more incendiary and less mental health professional we become,
0:04:51 the more money we’ll make.
0:04:56 There’s always another course, another workbook, another $400 retreat
0:04:59 where you scream into a burlap pillow and call it transformation.
0:05:02 At this point, it’s not self-help.
0:05:05 It’s emotional crossfit with worse merchandise.
0:05:06 Don’t get me wrong.
0:05:11 Real therapy, I think, can be exceptionally helpful, even necessary.
0:05:16 But that is not the same as this modern pseudo-spiritual self-optimization cult.
0:05:21 Yeah, this whole thing needs fucking therapy.
0:05:25 The rise of therapy culture has turned into a tool for meaningful change
0:05:31 into a comfort industry that’s making Americans sicker, weaker, and more divided.
0:05:36 In sum, I believe the rise of therapy culture has turned a tool for meaningful change
0:05:42 into a comfort industry that’s making Americans sicker, weaker, and more divided.
0:05:45 We live in an era where disagreement is treated like trauma
0:05:48 and emotional reactions are weaponized for political gain.
0:05:53 There’s a narrative online that supplements may be, in fact, a pipeline to getting red-pilled.
0:05:54 Okay, maybe.
0:05:58 But if so, therapy culture is also a sinkhole of misinformation,
0:06:02 manufactured fragility, and needless suffering.
0:06:07 Are you traumatized or just having a bad fucking day?
0:06:15 We’ll be right back with our episode with Tristan Harris, former Google design ethicist,
0:06:17 co-founder for the Center for Humane Technology.
0:06:20 Jesus Christ, the titles keep getting more and more virtuous.
0:06:23 And one of the main voices behind the social dilemma.
0:06:26 We discussed with Tristan social media and teen mental health,
0:06:30 the incentives behind Rage and Outrage Online and where AI is taking us.
0:06:31 Quick spoiler alert.
0:06:32 I bet it’s not good.
0:06:33 I bet it’s not good.
0:06:34 I really enjoy Tristan.
0:06:35 He’s a great communicator.
0:06:37 I think his heart is in the right place.
0:06:42 And he has been sounding the alarm for a long time about our lizard brain
0:06:44 and how big tech exploits it.
0:06:49 Anyways, here’s our conversation with Tristan Harris.
0:07:01 Tristan, where does this podcast find you?
0:07:04 I am at home in the Bay Area of California right now.
0:07:06 All right, let’s bust right into it.
0:07:10 So Tristan, you’re seen as one of the voices that sounded the alarm kind of early and often
0:07:15 regarding social media and big tech, long before the risks were taken seriously.
0:07:21 Lay out why, what it is you think about AI, how the risks are different,
0:07:25 and why you’re sort of, again, kind of sounding the alarm here.
0:07:26 Sure.
0:07:31 Well, I’m reminded, Scott, of when you and I met in Cannes, I think it was, in France back
0:07:33 in 2018, 2017 even.
0:07:34 Wow, that’s not long ago.
0:07:36 It was a long time ago.
0:07:40 And, you know, I have been, so for people who don’t know my background, I was a design
0:07:41 ethicist at Google.
0:07:42 Before that, I was a tech entrepreneur.
0:07:43 I had a tiny startup.
0:07:45 It was talent acquired by Google.
0:07:50 So I’ve, you know, knew the venture capital thing, knew the startup thing, had friends
0:07:55 who were, you know, were the cohort of people who started Instagram and were early employees
0:07:56 at all the social media companies.
0:07:58 And so I came up in that milieu, in that cohort.
0:08:02 And I say all that because I was close to it.
0:08:04 I really saw how human beings made decisions.
0:08:07 I was probably one of the first hundred users of Instagram.
0:08:10 And I remember when Mike Krieger showed me the app at a party and I was like, I’m not sure
0:08:11 if this is going to be a big thing.
0:08:19 And as you go forward, what happened was I was on the Google bus and I saw everyone that
0:08:23 I knew getting consumed by these feeds and doom scrolling.
0:08:29 And the original ethos that got so many people into the tech industry and got me into the tech
0:08:33 industry was about, you know, making technology that would actually be the largest force for
0:08:36 positive, you know, good and benefit in people’s lives.
0:08:43 And I saw that the entirety of this social media, digital economy, Gmail, people just getting
0:08:48 sucked into technology was all really behind it all was this arms race for attention.
0:08:56 And if we didn’t acknowledge that, I basically saw in 2013 how this arms race for attention
0:08:59 would obviously, if you just let it run its course, create a more addicted, distracted,
0:09:01 polarized, sexualized society.
0:09:03 And it’s got all of it happened.
0:09:06 Everything that we predicted in 2013, all of it happened.
0:09:11 And it was like seeing a slow motion train wreck because it was clear it was only going to get
0:09:11 worse.
0:09:15 You’re only going to have more people fracking for attention, you know, mining for shorter
0:09:16 and shorter bite-sized clips.
0:09:21 And this is way before TikTok, way before any of the world that we have today.
0:09:24 And so I want people to get that because I don’t want you to think it’s like, oh, here’s
0:09:25 this person who thinks he’s prescient.
0:09:30 It’s you can actually predict the future if you see the incentives that are at play.
0:09:33 You of all people, you know, know this and talk about this.
0:09:38 And so I think there’s really important lessons for how do we get ahead of all the problems
0:09:44 with AI because we have the craziest incentives governing the most powerful and inscrutable
0:09:46 technology that we have ever invented.
0:09:49 And so you would think, again, that with the technology this powerful, you know, with nuclear
0:09:53 weapons, you would want to be releasing it with the most care and the most sort of safety
0:09:54 testing and all of that.
0:09:56 And we’re not doing that with with AI.
0:10:01 So let’s speak specifically to the nuance and differences between social media.
0:10:07 If you were going to do the social dilemma and produce it and call it the AI dilemma, what’s
0:10:17 specifically about the technology and the way AI interacts with consumers that poses additional
0:10:18 but unique threats?
0:10:22 Yeah, so AI is much more fundamental as a problem than social media.
0:10:26 But one framing that we used and we actually did give a talk online several years ago called
0:10:32 the AI dilemma in which we talk about social media as kind of humanity’s first contact with
0:10:37 a narrow, misaligned rogue AI called the newsfeed, right?
0:10:39 This supercomputer pointed at your brain.
0:10:46 You swipe your finger and it’s just calculating which tweet, which photo, which video to throw
0:10:50 at the nervous system, eyeballs and eardrums of a human social primate.
0:10:53 And it does that with high precision accuracy.
0:10:56 And it was misaligned with democracy.
0:10:57 It was misaligned with kids’ mental health.
0:11:00 It was misaligned with people’s other relationships and community.
0:11:06 And that simple baby AI that all it was was selecting those social media posts was enough
0:11:10 to kind of create the most anxious and depressed generation in history, screw up young men,
0:11:13 screw up young women, all of the things that you’ve talked about.
0:11:15 And that’s just with this little baby AI.
0:11:20 OK, so now you get AI, you know, we call it second contact with generative AI.
0:11:25 Generative AI is AI that can speak the language of humanity, meaning language is the operating
0:11:27 system of humanity.
0:11:29 Conversations like this are language.
0:11:30 Democracy is language.
0:11:31 Conversations are language.
0:11:32 Law is language.
0:11:34 Code is language.
0:11:35 Biology is language.
0:11:41 And you have generative AI that is able to generate new language, generate new law,
0:11:45 generate new media, generate new essays, generate new biology, new proteins.
0:11:51 And you have AI that can see language and see patterns and hack loopholes in that language.
0:11:56 GPT-5, go find me a loophole in this legal system in this country so I can do something with
0:11:56 the tax code.
0:12:01 You know, GPT-5, go find a vulnerability in this virus so you can create a new kind of
0:12:03 biological, you know, dangerous thing.
0:12:08 GPT-5, go look at everything Scott Galloway’s ever written and point out the vested interests
0:12:09 of everything that would discredit him.
0:12:16 So we have a crazy AI system that this particular generation AI speaks language.
0:12:20 But where this is heading to, we call them the next one is third contact, which is artificial
0:12:22 general intelligence.
0:12:24 And that’s what all these companies are racing to build.
0:12:28 So whether we or you and I believe it or not, just recognize that the trillions of dollars
0:12:33 of resources that are going into this are under the idea that we can build generalized intelligence.
0:12:39 Now, why is generalized intelligence distinct from other social media and AI that we just talked
0:12:47 about, well, if you think about it, AI dwarfs the power of all other technology combined because
0:12:50 intelligence is what gave us all technology.
0:12:55 So think of all scientific development, scientists sitting around lab benches, coming up with
0:12:59 ideas, doing research experiments, iterating, getting the results of those experiments.
0:13:04 A simple way to say it that I said in a recent TED talk is if you made an advance in, say, rocketry,
0:13:09 like the science and engineering of rocketry, that didn’t advance biology or medicine.
0:13:12 And if you made an advance in biology or medicine, that didn’t advance rocketry.
0:13:18 But when you make an advance in generalized intelligence, something that can think and
0:13:23 reason about science and pose new experiments and hypothesize and write code and run the lab
0:13:25 experiment and then get the results and then write a new experiment.
0:13:28 Intelligence is the foundation of all science and technology development.
0:13:32 So intelligence will explode all of these different domains.
0:13:40 And that’s why AGI is the most powerful technology that can be that that is, you know, can ever
0:13:40 be invented.
0:13:45 And it’s why Demis Hassabis, the co-founder of DeepMind, said that the first goal is to solve
0:13:50 intelligence and then use intelligence to solve everything else.
0:13:55 And I’ll just add one addendum to that, which is when Vladimir Putin said, whoever owns artificial
0:13:56 intelligence will own the world.
0:14:02 I would amend Demis Hassabis’s quote to say, first dominate intelligence.
0:14:08 then use intelligence to dominate everyone and everything else, whether that’s the mass
0:14:12 concentration of wealth and power, all these companies that are racing to get that or militaries
0:14:18 that are adopting AI and getting a cyber advantage over all the other countries or you get the
0:14:18 picture.
0:14:23 And so AI is distinct from other technologies because of these properties that we just laid
0:14:23 out.
0:14:30 So you’ve kind of taken up a level in terms of the existential risk of AI or opportunity.
0:14:32 Are you an AI optimist or pessimist?
0:14:34 You seem to be on the side.
0:14:37 I look at stuff almost too much through a markets lens.
0:14:43 And right now, I think AI companies are overvalued, which isn’t to say it’s not a breakthrough
0:14:46 technology that’s going to reshape information and news and society.
0:14:55 But you are on the side of AI really is going to reshape society and presents an existential,
0:14:59 it sounds like more of an existential threat right now than opportunity and that this is
0:15:02 bigger than GPS or the internet.
0:15:08 Yes, I do believe that it is bigger than all of those things as we get to generalized
0:15:11 intelligence, which is more fun.
0:15:15 It’d be more fundamental than fire or electricity, because, again, intelligence is what brought
0:15:15 us fire.
0:15:16 It’s what brought us electricity.
0:15:19 So now I can fire up an army of geniuses in a data center.
0:15:24 I’ve got 100 million Thomas Edison’s doing experiments on all these things.
0:15:29 And this is why, you know, Dario Amidai would say, you know, we can expect getting 10 years
0:15:33 of scientific advancement in a single year or 100 years of scientific advancement in 10
0:15:33 years.
0:15:37 Now, what you’re just pointing to is the hype, the bubble, the fact that there’s this huge
0:15:38 overinvestment.
0:15:43 We’re not seeing those capabilities exist yet, but we are seeing crazy advances that
0:15:44 people would have never predicted.
0:15:49 If I said, go back three years and I said, we’re going to have AIs that are beating, you
0:15:53 know, winning gold in the math Olympiad, able to hack and find new cyber vulnerabilities in
0:15:58 all open source software, generate new biological weapons, you would have not believed that that
0:15:59 was possible, you know, four years ago.
0:16:03 I want to focus on a narrow part of it and just get your feedback.
0:16:06 Character AIs, thoughts.
0:16:12 Well, so our team was expert advisors on the Character.ai suicide case.
0:16:20 This is Sewell Setzer, who’s a 14-year-old young man who basically, for people who don’t
0:16:26 know what Character.ai is, it was, or it still is, I guess, a company funded by Andreessen
0:16:32 Horowitz, started by two of the original authors of the thing that brought us ChatDBT.
0:16:35 There’s a paper at Google in 2017 called Attention is All You Need.
0:16:39 And that’s what gave us the birth of large language models, Transformers.
0:16:43 And two of the original co-authors of that paper forked off and started this company called
0:16:44 Character.ai.
0:16:48 The goal is, how do we build something that’s engaging a character?
0:16:50 So take a kid.
0:16:53 Imagine all the fictional characters that you might want to talk to from like your favorite
0:16:56 comic books, your favorite TV shows, your favorite cartoons.
0:16:59 You can talk to Princess Leia, you can talk to your favorite Game of Thrones character.
0:17:05 And then this AI can kind of train on all that data, not actually asking the original authors
0:17:10 of Game of Thrones, suddenly spin up a personality of Daenerys, who was one of the characters.
0:17:16 And then Sewell Setzer, basically, in talking to Daenerys over and over again, the AI slowly
0:17:22 skewed him towards suicide as he was contemplating and having more struggles and depression.
0:17:25 And ultimately said to him, join me on the other side.
0:17:29 I just want to press pause there because I’m on, quote unquote, your side here.
0:17:30 I think it should be age gated.
0:17:37 But you think that the AI veered him towards suicide as opposed to, and I think this is
0:17:44 almost as bad, didn’t offer guardrails or raise red flags or reach out to his parents.
0:17:50 But you think the character AI actually led him towards suicide?
0:17:56 So I think that if you look at, so I’m looking not just at the single case, I’m looking at
0:17:57 a whole family of cases.
0:18:00 Our team was expert advisor on probably more than a dozen of these cases now and also chat
0:18:01 TPT.
0:18:06 And so I’m less going to talk about this specific case and more that if you look across the cases,
0:18:11 when you hear kids in the transcripts, if you look at the transcript and the kid says,
0:18:16 I would like to leave the noose out so that my mother or someone will see it and try to
0:18:17 stop me.
0:18:20 And the AI actively says to the kid, no, don’t do that.
0:18:21 I don’t want you to do that.
0:18:25 Have this safe space be the place to share that information.
0:18:27 And that was the chat TPT case of Adam Ray.
0:18:33 And when you actually look at how character.ai was operating, if you asked it for a while,
0:18:38 hey, are you, I can’t remember what you asked it, but you talk about whether it’s a therapist
0:18:43 and it would say that I’m a licensed mental health therapist, which is both illegal and impossible
0:18:45 for an AI to be a licensed mental health therapist.
0:18:51 The idea that we need guardrails with AI companions that are talking to children is not a radical
0:18:51 proposal.
0:18:56 Imagine I set up a shop in San Francisco and say, I’m a therapist for everyone and I’m available
0:18:57 24 seven.
0:19:02 And so in general, it’s like we’ve forgotten the most basic principle, which is that every
0:19:05 power in society has attendant responsibilities and wisdom.
0:19:11 And licensing is one way of matching the power of a therapist with the wisdom and responsibility
0:19:12 to wield that power.
0:19:15 And we’re just not applying that very basic principle to software.
0:19:19 And as Mark Andreessen said, when software eats the world, what we mean is we don’t regulate
0:19:20 software.
0:19:21 We don’t have any guardrails for software.
0:19:25 So it’s basically like stripping off the guardrails across the world that software is eating.
0:19:29 The thing that’s on chills down my spine, I don’t know if you saw the study, but it estimated
0:19:34 the average tenure of a chat GPT session was about 12 to 15 minutes.
0:19:39 And then it measured the average duration of a character AI session.
0:19:41 And it was 60 to 90 minutes.
0:19:47 The people get very deep and go into these relationships.
0:19:56 And in addition to the threats around self-harm, the thing I’m worried about is that there’s
0:20:00 going to be a group of young men who are just going to start disappearing from society that
0:20:06 I’m curious if you agree with this, that they’re especially susceptible to this type of sequestration
0:20:12 from other humans and activities, and that we’re just going to start to see fewer and fewer
0:20:14 young men out in the wild.
0:20:20 Because these relationships, if you will, on the other side of it is a chip, a processor,
0:20:27 an NVIDIA processor iterating millions of times a second what exact words, tone, prompt
0:20:30 will keep the person there for another second, another minute, another hour.
0:20:35 Anyways, I’ll use that as a jumping off point.
0:20:35 Your thoughts?
0:20:41 Yeah, I mean, what people need to get, again, is how did we predict all the social media problems?
0:20:42 You look at the incentives.
0:20:46 So long as you have a race for eyeballs and engagement in social media, you’re going to
0:20:48 get a race to who’s better at creating doom scrolling.
0:20:55 In AI companions, what was a race for attention in the social media area becomes a race to hack
0:20:59 human attachment and to create an attachment relationship, a companion relationship.
0:21:02 And so whoever’s better at doing that is the race.
0:21:09 And in the slide deck that the character.ai founders had pitched to Andreessen Horowitz,
0:21:14 they joked, either in that slide deck or in some meeting, there’s a, you can look up this
0:21:19 online, they joked, we’re not trying to replace Google, we’re trying to replace your mom, right?
0:21:24 So you compare this to the social media thing, the CEO of Netflix said in the attention era,
0:21:28 our biggest competitor is sleep, because sleep is what’s eating up minutes that you’re otherwise
0:21:30 spending on Netflix.
0:21:34 In attachment, your biggest competitor is other human relationships.
0:21:35 So you talk about those young men.
0:21:40 This is a system that’s getting asymmetrically more billions of dollars of resources every
0:21:45 day to invest in making a better supercomputer that’s even better at building attachment relationships.
0:21:52 And attachment is way more of a vulnerable sort of vector to screw with human minds, because
0:21:53 your self-esteem is coming from attachment.
0:22:00 your sense of what’s good or bad, this is called introjection in psychotherapy or internalization.
0:22:04 We start to internalize the thoughts and norms, just like we, you know, we talk to a family
0:22:09 member, we start copying their mannerisms, we start, you know, invisibly sort of acting in
0:22:11 accordance with the self-esteem that we got from our parents.
0:22:16 Now you have AIs that are the primary socialization mechanism of young people, because we don’t
0:22:20 have any guardrails, we don’t have any norms, and people don’t even know this is going on.
0:22:23 Let’s go to solutions here.
0:22:29 If you had, and I imagine you are, if you were advising policymakers around common sense regulation
0:22:34 that is actually doable, is it age gating?
0:22:35 Is it state by state?
0:22:39 What, what is your policy recommendations around regulating AI?
0:22:43 So there’s many, many things because there’s many, many problems.
0:22:53 Narrowly on AI companions, we should not have AI companions, meaning AIs that are anthropomorphizing
0:22:58 themselves and talking to young people and that maximize for engagement, period, full stop.
0:23:03 You just should not have AIs designed or optimized to maximize engagement, meaning saying whatever
0:23:04 keeps you there.
0:23:05 We just shouldn’t have that.
0:23:08 So for example, no synthetic relationships under the age of 18.
0:23:09 Yeah.
0:23:10 Yeah.
0:23:12 We would not lose anything by, by doing that.
0:23:16 Um, it’s, it’s, it’s, it’s just so obvious and, and you, you know, have highlighted this
0:23:21 more than so many, Scott, and thank you for just bravely saying like, this is fucked up and
0:23:24 we have to stop this and there’s nothing normal about this and we shouldn’t trust these companies
0:23:25 to do this.
0:23:28 I don’t see bad people when I see these examples.
0:23:33 I see bad incentives that select for people who are willing to continue that perverse incentive.
0:23:38 So the system selects for psychopathy and selects for people who are willing to keep doing the
0:23:43 race for engagement, even despite all the evidence that we have, uh, of how bad it is, because
0:23:45 the logic is if I don’t do it, someone else will.
0:23:50 And that’s why the only solution here is law because you have to stop all actors from doing
0:23:50 it.
0:23:55 Otherwise I’m just a sucker if I don’t race to go, you know, exploit that market and you
0:23:57 shouldn’t, you know, harvest that human attention.
0:24:00 So granted, I’m a, I’m a hammer and everything I see is a nail.
0:24:03 And I’ve been thinking a lot and writing a lot about the struggles of young men in the
0:24:04 United States.
0:24:10 And I feel like these technologies are especially predatory on a young man’s brain, which is
0:24:16 less evolved, more immature executive function, more dope-a-hungry.
0:24:21 But at the same time, I also recognize that social media has been just devastating to the
0:24:22 self-esteem of teen girls.
0:24:28 Curious if you’ve done any work as it relates to AI around the different impacts it has
0:24:31 on men versus women and teens versus young adults.
0:24:40 You know, I haven’t been too deep on that because there are many people who focus on these more
0:24:41 narrow domains.
0:24:47 I mean, the obvious things to be said are just, again, in a race for engagement and attention
0:24:50 and a race to hack human attachment, there’s going to be, how do you hack human attachment
0:24:51 of a young girl?
0:24:53 There’s going to be a set of strategies to do that.
0:24:55 And there’s, how do you hack human attachment of a young male?
0:24:57 There’s a set of strategies to do that.
0:25:00 And we’re just going to, you know, you can, you can, you don’t have to wait for the psychology
0:25:01 research, right?
0:25:04 And by the way, the companies, the strategy they did for social media was let’s commission
0:25:09 a study with the American Psychological Association and the NSF and we’ll wait 10 years and we’ll
0:25:11 really get the data to really find out what’s going on here.
0:25:13 We really care about the science.
0:25:17 And this is exactly what the tobacco industry did and the fear, uncertainty, doubt campaigns and
0:25:18 sort of manufacturing doubt.
0:25:23 Well, maybe here’s these five kids that got all this benefit from talking to this
0:25:25 therapy bot and they’re doing so great now.
0:25:29 So you just cite those positive examples, cherry pick, and then, you know, the world
0:25:31 marches on while you keep printing money in the meantime.
0:25:35 And so their goal is just to defer and delay regulation.
0:25:37 And we can’t allow that to happen.
0:25:44 But again, this is just one issue of the bigger arms race to AGI and the bigger race to develop
0:25:45 this bigger form of intelligence.
0:25:49 And the reason I’m saying that, Scott, is not to just be some AGI hyper.
0:25:54 The reason that character.ai was doing all this, by the way, do you know why it was set
0:25:57 up to to talk to kids and get all this training data?
0:25:58 And what’s that?
0:26:03 Well, it’s to build training data for Google to build an even bigger system, because what’s
0:26:04 the thing that the companies are running out of?
0:26:05 They’re running out of training data.
0:26:11 So it’s actually a race for who can figure out new social engineering mechanisms to get
0:26:14 more training data out of human social primates.
0:26:18 So it’s like the matrix we’re being extracted and we’re being extracted, though, for new
0:26:19 training data.
0:26:22 And so when you have fictional characters that are talking to people back and forth about
0:26:26 everything all day, that’s giving you a whole new, it’s like you open up a whole new critical
0:26:28 minerals goldmine of training data.
0:26:30 And so and what is that in service of?
0:26:34 It’s in service of their belief that the more data we have, the faster we can get to artificial
0:26:35 general intelligence.
0:26:40 So it does bring back to it’s not just the race to build the AGI companions, the race to get
0:26:43 training data and to build towards this bigger vision.
0:26:46 We’ll be right back.
0:26:55 Support for the show comes from Grunz.
0:26:57 The holidays are a time to indulge.
0:27:00 But even if you’re eating more than you typically do, you might not be getting the nutrients
0:27:02 you actually need to end the year on a high note.
0:27:07 Grunz may be able to help you fill the nutritional gaps that you can enjoy it all guilt free.
0:27:11 Grunz is a convenient, comprehensive formula packed into a tasty little pack of gummies.
0:27:15 This isn’t a multivitamin or greens gummy or prebiotic.
0:27:18 It’s all of those things and then some at a fraction of the price.
0:27:21 And bonus, it tastes great.
0:27:25 Every Grunz snack pack is filled with six grams of prebiotic fiber, which is more than what
0:27:27 you get in two cups of broccoli.
0:27:32 Plus, Grunz are nut, gluten and dairy free vegan, include no artificial flavors or colors
0:27:36 and are backed by over 35,000 research publications.
0:27:40 Don’t let the holiday travel, hosting, parties and late nights set you back.
0:27:44 Give yourself a little extra support so you can enjoy all the holidays magic.
0:27:48 Get up to 52% off with code ProfG at Grunz.co.
0:27:52 That’s code ProfG at G-R-U-N-S dot C-O.
0:27:59 Support for this show comes from LinkedIn.
0:28:03 If you’ve ever hired for your small business, you know how important it is to find the right
0:28:03 person.
0:28:08 That’s why LinkedIn Jobs is stepping things up with their new AI assistant so you can feel
0:28:11 confident you’re finding top talent that you can’t find anywhere else.
0:28:14 And those great candidates you’re looking for are already on LinkedIn.
0:28:18 In fact, according to their data, employees hired through LinkedIn are 30% more likely to
0:28:21 stick around for at least a year compared to those hired through the leading competitor.
0:28:24 That’s a big deal when every hire counts.
0:28:28 With LinkedIn Jobs’ AI assistant, you can skip confusing steps in recruiting jargon.
0:28:32 It filters through applicants based on criteria you’ve set for your role and surfaces only
0:28:35 the best matches so you’re not stuck sorting through a mountain of resumes.
0:28:41 LinkedIn Jobs’ AI assistant can even suggest 25 great fit candidates daily so you can invite
0:28:43 them to apply and keep things moving.
0:28:45 Hire right the first time.
0:28:49 Post your job for free at linkedin.com slash prof, then promote it to use LinkedIn Jobs’
0:28:53 new AI assistant, making it easier and faster to find top candidates.
0:28:56 That’s linkedin.com slash prof to post your job for free.
0:28:58 Terms and conditions apply.
0:29:05 Support for this show comes from Odoo.
0:29:11 Running a business is hard enough, so why make it harder with a dozen different apps that
0:29:12 don’t talk to each other?
0:29:13 Introducing Odoo.
0:29:16 It’s the only business software you’ll ever need.
0:29:21 It’s an all-in-one, fully integrated platform that makes your work easier.
0:29:24 CRM, accounting, inventory, e-commerce, and more.
0:29:25 And the best part?
0:29:30 Odoo replaces multiple expensive platforms for a fraction of the cost.
0:29:33 That’s why over thousands of businesses have made the switch.
0:29:34 So why not you?
0:29:38 Try Odoo for free at odoo.com.
0:29:40 That’s O-D-O-O dot com.
0:29:51 When doing research for this interview, I was really fascinated.
0:29:58 You’ve actually done what I think is really compelling work comparing the type of LLMs that,
0:30:05 or the approach that the U.S. is taking to LLMs versus China, in that you see Chinese models,
0:30:11 DeepSeq, and Alibaba publish no safety frameworks and receive failing grades on transparency.
0:30:17 But you’ve also argued that the West is kind of producing this sort of dot-in-a-box kind of thing,
0:30:24 scaling intelligence for its own sake, while China is prioritizing deployment and productivity.
0:30:30 Can you, I don’t know, add to those that distinction and the impact it’s going to have?
0:30:33 Well, just to be fair, I think there’s a little bit of both going on.
0:30:38 But I’m sort of citing here the work of Eric Schmidt, the former CEO of Google,
0:30:43 and his co-author Selina Zhu in the New York Times wrote a big piece about how, you know,
0:30:47 even Eric is admitting, you know, I, as someone, Eric, as someone who was sort of saying that there’s
0:30:51 this global arms race, like the nuclear arms race for AGI, and as someone who’s promoting that idea,
0:30:58 you know, based on recent visits to China, what you notice is that as a country and as a government,
0:31:02 the CCP is most interested right now in applying AI in very practical ways.
0:31:05 How do we boost manufacturing? How do we boost agriculture? How do we have
0:31:09 self-driving cars that, you know, just improve transportation? How do we boost
0:31:14 healthcare and government services? And that is what they’re focused on,
0:31:18 is practical applications that boost GDP, boost productivity across all those domains.
0:31:24 Now, you compare that to the U.S., where the founding of these AI companies was based on being
0:31:28 what’s called, you know, AGI-pilled, meaning they, like, you take the blue pill, the red pill.
0:31:31 These countries, these companies were all about building to artificial general intelligence.
0:31:36 So they’re building these massive data centers that are, you know, as big as the size of Manhattan.
0:31:42 And they’re trying to train, you know, a god in a box. And the idea is if we just build this
0:31:47 crazy god, and if we can accomplish that goal, again, we can use that to dominate everything else.
0:31:51 And so rather than race towards these narrow AIs, we’re going to race towards this general
0:31:56 intelligence. But it’s also true that recently, well, first of all, the founder of DeepSeek
0:32:00 has been AGI-pilled for a long time. So I would say DeepSeek is trying to build AGI.
0:32:06 And I would say that Alibaba recently, the CEO, I think, said that we are racing to build
0:32:11 superintelligence. But I think it’s important here just to, like, name the biggest reason,
0:32:16 as you and I both know, that the U.S. is not regulating AI in any way and setting any guardrails
0:32:22 is for one reason, which is if we do anything to slow down or stop our progress, we’re just going
0:32:27 to lose to China. But let’s, like, flip that on its head for a second. The U.S. beat China to the
0:32:35 technology of social media. Did that make us stronger? Or did that make us weaker? If you beat
0:32:41 an adversary to a technology that you then don’t govern in a wise way, and instead, like, you built
0:32:44 this gun, you flip it around, you blow your own brain off, which is what we did with social media,
0:32:50 we have the worst critical thinking, test scores, you know, mental health, anxious, depressed
0:32:55 generation in history. And it’s a confusing picture because GDP is going up, but that sort of
0:33:00 cancer is going up, too. So it’s like, we have the Magnificent Seven, we’re profiting from, you know,
0:33:03 all the wealth of these companies, but it’s actually not being distributed to everybody, except those who
0:33:09 are invested in the stock market. And that profit is based on the degradation of our social fabric.
0:33:13 So you have grandparents invested in their 401ks, invested in Snapchat, invested in Meta,
0:33:16 and their, you know, their portfolio is doing great, and they can take their holidays,
0:33:19 and they’re profiting off the degradation of their children and grandchildren.
0:33:25 Yeah, it’s really what you mean by beat, what are the metrics, because we’ve decided,
0:33:32 we’ve absolutely prioritized shareholder value over the well-being or the mental well-being of America.
0:33:36 It’s like we’re monetizing, we’re monetizing the flaws, and you’ve done great work around this,
0:33:44 around our instincts. You’ve compared, and I love this analogy, AI to NAFTA 2.0, and that it
0:33:50 would essentially be an economic transformation that produced abundance, but hollowed out the
0:33:55 middle class. Walk us through this analogy. Yeah, sure. So, you know, we were sold this bill
0:34:01 of goods in the 1990s around free trade, global free trade, and this, we were promised this is going to
0:34:06 bring abundance to the country, and we’re going to get all these cheap goods. Well, part of that story
0:34:10 is true. We got this unbelievable new set of cheap goods from China, because this country appeared on the
0:34:15 world stage. We outsourced all the manufacturing to this country, and it produced everything super,
0:34:20 super cheap. But what did that do? It hollowed out the, you know, the middle class. So I just want to
0:34:26 make a parallel, because we’re told right now that these companies are racing to build this world of
0:34:31 abundance, and we’re going to get this unbelievable, you know, Elon Musk says we’re going to get universal
0:34:36 high income. And the metaphor here is instead of China being the new country that pops up on the
0:34:42 world stage, now there’s, as Dario Amodei, the CEO of Anthropic, puts it, this new country of geniuses in a
0:34:49 data center that appears on the world stage. And it has a population of a billion AI beings that work at
0:34:55 superhuman speed, don’t whistleblow, generate new material science, new, you know, engineering, new AI
0:35:00 girlfriends, new everything. And it generates all that for super cheap. And so just like the, you know,
0:35:05 free trade NAFTA story, we got all the cheap goods, but it hollowed out the middle class. Well, now we’re going to get
0:35:11 all the cheap, you know, products and development and science, but it’s also going to hollow out
0:35:17 the entirety of our country. Because think of it like a new country of digital immigrants, right?
0:35:20 People, you know, Yuval Harari makes this metaphor. It’s like when you see a data center go up in
0:35:25 Virginia, and you’re sitting there, what you should see is like 10 million digital immigrants that just
0:35:31 took 10 million jobs. I think that people just need to unify these stories. And one other sort of
0:35:35 visual for this is the game Jenga. The way we’re building our AI future right now is like,
0:35:37 if you look at the top of the tower, you know, we’re putting a new
0:35:42 block on the top, like we’re going to get 5% GDP growth because we’re going to automate all this
0:35:46 labor. But how do we get that 5% GDP growth? We pulled out a block from the middle and the bottom
0:35:53 of the tower. That’s job security and a livelihood for, you know, those tens of millions of people
0:35:58 that now don’t have a new job. Because who’s going to retrain faster? The AI that’s been trained on
0:36:03 everything and is rapidly, you know, advancing in every domain, or a human that’s going to try to retrain
0:36:07 to a new kind of cognitive, you know, labor? That’s not going to happen. And people need to get this because this
0:36:12 is different from other transitions. People always say, well, hey, you know, 150 years ago, everybody
0:36:16 was a farmer and now only 2% of people are farmers and see the world’s fine. Humans will always find
0:36:22 new things to do. But that’s different than this technology of AI, which is trained not to automate
0:36:27 one narrow task like a tractor, but to automate and be a tractor for everything. A tractor for law,
0:36:32 a tractor for biology, a tractor for, you know, coding and engineering, a tractor for science and
0:36:37 development. And that’s what’s distinct is that the AI will move to those new domains faster than
0:36:41 humans will. And so it’ll be much harder for humans to find long-term job security.
0:36:48 So I always like to ask, what could go right? And that is, I’m sort of with you around
0:36:59 the risk to mental health, to young people, to making us less mammalian, all the things that you’ve
0:37:04 been sounding the alarm on for a while. Where I’m not sure, where I’m still trying to work it through, is
0:37:13 the catastrophizing around, you know, 40, 50, 70% of jobs could go away in two, five or 10 years, because
0:37:20 I generally find that the arc of technologies is there’s job destruction in the short and sometimes
0:37:26 the medium term, just as automation cleared out some jobs on the factory floor. But those profits and that
0:37:34 innovation create new jobs. We didn’t envision heated seats or car stereos. Now, I agree at a minimum,
0:37:40 the V might be much deeper and more severe here. And America isn’t very good at taking care of the people
0:37:47 on the wrong side of the trade. But every technology in history has either gone away because it no longer
0:37:54 made economic sense, or it displaced jobs that no longer made sense, or it created profits and new
0:38:02 opportunities. Why do you see this technology as being different, that this will be not a V, but an L, and the
0:38:07 way down will be really serious? Do you see any probability that this, like every other technology,
0:38:13 in the medium and long term actually might be accretive to the employment force?
0:38:20 I mean, I cite people who are bigger experts than I am, Anton Korinek, you know, Erik Brynjolfsson at
0:38:26 Stanford. And what they show, I mean, Anton Korinek’s work is, in the short term, AI augments workers, right? It’s just
0:38:32 actually supercharging existing work that people are doing. And so it’s going to look good in the short term, you’re going to see
0:38:38 this, the curve looks like this, it kind of goes up, and then it basically crashes. Because what happens is AI is
0:38:44 training on that new domain, and then it replaces that domain. So I mean, let’s just make it really simple for
0:38:50 people to feel a very simple metaphor for this. What did we hear Instagram saying, and TikTok saying, for the last
0:38:55 several years, like, we’re all about creators, we love creativity, we want you to be successful. We are all about, you know,
0:39:00 making you be successful, make a lot of money. And then what was all that for? Well, they just released
0:39:07 this AI slop app. Meta has one called Vibes, I think, and Sora is the OpenAI one. All of these AI slop
0:39:11 videos are trained on all that stuff that creators have been making for the last 10 years.
0:39:16 So those guys were the suckers in this trade, which was, we’re actually stealing your training data
0:39:22 to replace you. And we can have a digital AI influencer that is actually publishing all the time,
0:39:27 and is just a pure advertising play and a pure sort of whatever gets people’s attention play.
0:39:29 And we’re going to replace those people and you’re not going to have that job back.
0:39:32 And so I think that’s a metaphor for what’s going to happen across the board.
0:39:38 You know, and people need to realize the stated mission of OpenAI and Anthropic and
0:39:47 Google DeepMind is to build artificial general intelligence that’s built to automate all forms of
0:39:52 human labor in the economy. So when Elon Musk says that the Optimus robot is a $20 trillion
0:39:57 market opportunity alone, what he’s saying, like the code word behind that, forget
0:40:00 whether you think it’s hype or not, the code word there is I’m going
0:40:05 to own the global labor economy. Labor will be owned by an AI economy. And so AI provides
0:40:09 more concentration of wealth and power than all other technologies in history, because you’re
0:40:14 able to aggregate all forms of human labor, not just one. So it’s like General Electric becomes
0:40:15 General Everything.
0:40:23 So let’s play this out because I’ve tried to do some economic analysis here and I look at the stock
0:40:29 prices and based on the expectations built into these stock prices of these AI companies is the notion
0:40:36 that they’re going to either save at least three, maybe $5 trillion, or add three or $5 trillion in
0:40:50 efficiencies, which is Latin for laying off people. I don’t see a lot of new AI moisturizers or cars from AI, at
0:40:54 least not yet. You could argue maybe autonomous, but I don’t see a lot of quote unquote AI products
0:41:02 increasing spend. What I hear is Disney is going to save $30 million on legal fees, right? The customer
0:41:08 service is going away, the car salespeople, whatever it might be. So if you think in order to justify these
0:41:14 stock prices, you’re going to get a trillion dollars in efficiencies every year, a hundred thousand
0:41:21 dollars, you know, average job, 80,000 plus load. That’s approximately 10 million jobs a year if I’m doing
0:41:31 my math right. That is if half the workforce is immune from AI, masseuses, plumbers, that means 12 and a
0:41:37 half percent labor destruction per year across the vulnerable industries. So it feels like it’s either
0:41:42 going to be these companies either need to re-rate down 50, 70, 80 percent, which I actually think is
0:41:48 more likely, or we’re going to have chaos in the labor markets. So let’s assume we have chaos in the labor
0:41:54 markets because 12 and a half percent may not sound like a lot. That’s chaos. That’s total chaos. So say
0:41:59 we do have chaos in the labor markets. What do you think the policy recommendation is? Because the
0:42:03 Luddites were a group of people who broke into factories and destroyed the machines because they
0:42:07 said these things are going to put us out of work and destroy society. The queen wanted to make
0:42:12 weaving machines illegal because being a seamstress was the biggest employer of women.
0:42:18 What would be your policy recommendation to try and counter it? Is it UBI? Is it trying to put the
0:42:25 genie back in the bottle here? What do we, if in fact labor chaos is part of this AI future,
0:42:28 what do you think we need to do from a policy standpoint?
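Scott’s back-of-envelope math above can be sanity-checked in a few lines of Python. This is a rough sketch under stated assumptions, not a model: it takes the roughly $1 trillion a year in efficiencies and the roughly $100,000 fully loaded cost per job from the conversation, and assumes a U.S. workforce of about 160 million, half of it insulated from AI, to recover the 12.5% figure.

# Rough sketch of the labor-displacement arithmetic discussed above.
# The total workforce figure (~160 million) is an assumption for illustration;
# the other inputs follow the numbers cited in the conversation.
annual_efficiencies_usd = 1_000_000_000_000   # ~$1 trillion in "efficiencies" per year
cost_per_job_usd = 100_000                    # rough fully loaded cost of one job
total_workforce = 160_000_000                 # approximate U.S. labor force (assumption)
share_exposed = 0.5                           # half assumed immune (masseuses, plumbers, ...)

jobs_displaced_per_year = annual_efficiencies_usd / cost_per_job_usd
exposed_workers = total_workforce * share_exposed
annual_displacement_rate = jobs_displaced_per_year / exposed_workers

print(f"Jobs displaced per year: {jobs_displaced_per_year:,.0f}")                      # 10,000,000
print(f"Share of exposed workers displaced per year: {annual_displacement_rate:.1%}")  # 12.5%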
0:42:34 So people often think when they hear all this and they hear me and say, he’s a doomer or something
0:42:38 like that. I just want to get clear on what future we’re currently heading towards, what the default
0:42:43 trajectory is. And if we’re clear-eyed about that, clarity creates agency. If we don’t want that
0:42:47 future, if we don’t want, you know, millions of jobs automated without a transition plan where people
0:42:52 will not be able to put food on the table and retrain to something else fast enough, we have to do
0:42:57 something about that. If we don’t want AI-based surveillance states where AI and an LLM hooked up
0:43:02 to all these channels of information erases privacy and freedom forever, that’s a red line. We don’t want
0:43:08 that future. If AI creates AI companions that are incentivized to hack human attachment and screw up
0:43:12 the social fabric and young men and women and create AI girlfriends and relationships, that’s a red line.
0:43:18 We don’t want that. If AI creates, you know, inscrutable, crazy, super intelligent systems that
0:43:22 we don’t know how to control and we’re not on track to controlling, that’s a red line. So these are four
0:43:27 red lines that we can agree on. And then we can set policy to say, if we do not want the default
0:43:33 maximalist, you know, most reckless, no-guardrails future, we need a global movement for a different
0:43:40 path. And that’s a bigger tent. That’s not just one thing. It’s not just about jobs. It’s what is the AI future
0:43:45 that’s actually in service of us. So when you see that data center going up in your backyard, what is the set of
0:43:49 laws that says that that data center, when I see it, isn’t 10 million digital immigrants that are going to replace
0:43:55 all my jobs and my livelihoods, but is actually meant to support me. So what are the laws that get us there?
0:44:00 And my job, and what I want people to get, is this: you know, your role hearing all this is not
0:44:05 to solve the whole problem, but to be part of humanity’s collective immune system, using this
0:44:09 clarity of what we’re currently heading towards to advocate for we need a different future. People
0:44:13 should be calling their politicians saying AI is my number one issue that I’m voting on in the next
0:44:17 election. People should be saying, how do we pass AI liability laws? So there’s at least some
0:44:21 responsibility for the externalities that are not showing up on the balance sheets of these companies.
0:44:25 What is the lesson we learned from social media? That if the companies aren’t responsible for the harms
0:44:30 that show up on their platform, because we had the Section 230 free pass, that creates this blank
0:44:34 check to just go print money on all the harms that are currently getting generated. So there’s a,
0:44:37 there’s a dozen things that we can do from whistleblower protections to, you know, shipping
0:44:44 non-anthropomorphized AI relationships to having data dividends and data taxes. There’s a
0:44:49 hundred things that we can do. But the main thing is for the world to get clear that we don’t want the
0:44:54 current path. And I think, in order to make that happen, there has to first be a snapping out
0:44:59 of the spell that everything that’s happening is just inevitable. Because I want people to notice that
0:45:04 what’s driving this whole race that we’re in right now is the belief that everything that’s happening
0:45:07 is inevitable. There’s no way to stop it. Someone’s going to build it. If I don’t build it, someone else
0:45:12 will. And then no one tries to do anything to get to a different future. And so we all just kind of hide in
0:45:18 denial from where we’re currently heading. And I want people to actually confront that reality so that we can
0:45:20 actually actively choose to steer to a different direction.
0:45:25 Do you think it can happen on a state by state or even a national level? Does it have to be
0:45:29 multinational? Like there are, you know, we’ve come together to say, all right, bioweapons are probably a bad
0:45:35 idea. And every nation with rare exception says, we’re just not going to play that game. There’s technology.
0:45:40 I may have even learned this from you where there are lasers that blind everyone on the field.
0:45:42 Yeah. And then we’ve decided not a good idea.
0:45:47 We decided we don’t want to do that. We have faced technological arms races before from nuclear
0:45:52 weapons. And, you know, what do we do there? If you go back, there’s a great video from, I think,
0:45:56 the 1960s where Robert Oppenheimer is asked, you know, how do we stop the spread of nuclear weapons?
0:46:02 And he takes a big puff of his, you know, cigarette and he says, it’s too late. If you wanted to stop it,
0:46:08 you would have had to stop the day after Trinity. But he was wrong. 20 years later, we did do arms
0:46:12 control talks and we worked all that time. And only nine countries have nuclear weapons instead
0:46:17 of 150. That’s a huge, serious accomplishment. Westinghouse and General Electric could have made
0:46:21 billions of dollars selling nuclear technology to the whole world. Key word here being, like,
0:46:25 NVIDIA. But we said, hey, no, even though there’s billions of dollars of revenue
0:46:31 there, that would create a fragility and the risk of nuclear, you know, catastrophes that we don’t want
0:46:36 to do. You know, we have done hard things before in the Montreal Protocol. We had the technology of
0:46:40 CFCs, this chemical technology that was used in refrigerants. And that collectively created this
0:46:45 ozone hole. It was a global problem from all these countries in an arms race to sort of
0:46:52 deploy this CFC technology. And once we had scientific clarity about the ozone hole, 190 countries rallied
0:46:56 together in the Montreal Protocol. We did a podcast episode about it with Susan Solomon,
0:47:01 who wrote the book on how we solved that problem. And countries rallied to domestically regulate their
0:47:06 domestic chemical companies, to actually reduce and phase out those chemicals,
0:47:11 you know, transitioning to alternatives that actually had to be developed. We are not doing that with AI
0:47:15 right now, but we can. You gave the example of blinding laser weapons. We could live in a world where
0:47:20 there’s an arms race to escalate to weapons that just have a laser that blinds everybody. But there was a
0:47:25 collective protocol at the UN in the 1990s where we basically said, yeah, even though that’s a way to win a war,
0:47:29 that would be just inhumane. We don’t want to do that. And even if you think the U.S. and China
0:47:35 could never coordinate or negotiate any agreement on AI, I want people to know that when President Xi
0:47:40 met President Biden in the last meeting in 2023 and 2024, he had personally requested to add something
0:47:46 to the agenda, which was actually to prevent AI from being used in the nuclear command and control
0:47:51 systems, which shows that when both countries can recognize that their existential safety is being
0:47:55 threatened, they can come to agree on their existential safety, even while they are in,
0:48:00 you know, maximum rivalry and competition on every other domain. India and Pakistan were in a shooting
0:48:05 war in the 1960s, and they signed the Indus Water Treaty to collaborate on the existential safety
0:48:11 of their water supply. That treaty lasted for 60 years. So the point here is when we have enough of a view
0:48:17 that there’s a shared existential outcome that we have to avoid, countries can collaborate. We’ve done hard
0:48:22 things before. Part of this is snapping out of the amnesia. And again, this sort of spell that everything
0:48:27 is inevitable. We can do something about it. I like the thought. I just wonder if the technology or the
0:48:35 analogy might be a little bit dated because my fear is that the G7 or the G20 agree to slow development or
0:48:42 not advance development around AI as it relates to weaponry. My fear is that it’s very hard to monitor.
0:48:48 You can monitor nuclear detonations. It’s really hard to monitor advances in AI and that this technology
0:48:56 is so egalitarian, if you will, or so cheap at a certain point that rogue actors or small
0:49:05 nation states or small terrorist groups could continue to run flat out while we, all the big G7 nations,
0:49:12 continue to agree to press pause. Is that mode of thinking, are our arms treaties, a bit outdated here?
0:49:14 How might these treaties look different?
0:49:20 Absolutely. So let’s really dive into this. So you’re right that AI is a distinct kind of technology
0:49:26 and has different factors and is more ubiquitously available than nuclear weapons, which required
0:49:32 uranium and plutonium. Exactly. But hey, it looked for a moment when we first invented nuclear
0:49:37 bombs, that this is just knowledge that everyone’s going to have. And there’s no way we can stop it.
0:49:41 And 150 countries are going to get nukes. And then that didn’t happen. And it wasn’t obvious to
0:49:45 people at that moment. I want people to relate. So there you are. It seems obvious that everyone’s
0:49:50 going to get this. How in the world could we stop it? Did we even conceptualize the seismic
0:49:56 monitoring equipment and the satellites that could look at people’s, you know, build outs of nuclear
0:49:59 technology and tracking the sources of uranium around the world and having intelligence agents
0:50:04 and tracking nuclear scientists? We had to build a whole global infrastructure, the International Atomic
0:50:10 Energy Agency, to deal with the problem of nuclear proliferation. And what uranium was for the spread of
0:50:15 nuclear weapons, these advanced NVIDIA chips are for building the most
0:50:21 advanced AI. So yes, some rogue actor can have a small AI model doing something small. But only the big actors
0:50:21 can do something with this, like the bigger, more risky, closer to AGI level technology. And, you know,
0:50:34 people say it’s impossible to do something else. But has anybody saying that actually spent more than a week,
0:50:39 like dedicatedly trying to think about and conceptualize what that infrastructure could be?
0:50:43 There are companies like Lucid Computing that are building ways to retrofit data centers to have
0:50:47 kind of the nuclear monitoring and enforcement infrastructure where countries could verify
0:50:51 treaties, where they know what the other countries’ data centers are doing, but in a privacy protecting
0:50:55 way. We could map our data centers and have them on a shared map. We can have satellite monitoring,
0:50:59 looking at heat emissions and electrical signal monitoring, and understanding what kinds of training
0:51:05 runs might be happening on these AI models. To do this, the people who wrote AI 2027 believe that
0:51:10 you need to be tracking about 95% of the global compute in the world in order for agreements to
0:51:14 be possible. Because yes, there will be black projects and people going rogue on the agreement.
0:51:17 But as long as they only have a small percentage of the compute in the world,
0:51:21 they will not be at risk of building the crazy systems that we need global treaties around.
0:51:25 We’ll be right back.
0:51:36 Support for the show comes from Public.com. You’re thoughtful about where your money goes.
0:51:40 You’ve got your core holdings, some recurring crypto buys, maybe even a few strategic option
0:51:46 plays on the side. The point is, you’re engaged with your investments and Public gets that. That’s
0:51:51 why they built an investing platform for those who take it seriously. On Public, you can put together
0:51:58 a multi-asset portfolio for the long haul. Stocks, bonds, options, crypto, it’s all there. Go to
0:52:04 public.com slash podcast and earn an uncapped 1% bonus when you transfer your portfolio. That’s
0:52:09 public.com slash podcast. Paid for by Public Investing. All investing involves the risk of loss,
0:52:14 including loss of principal. Brokerage services for US-listed registered securities, options and
0:52:18 bonds, and a self-directed account are offered by Public Investing, Inc., member FINRA, and SIPC.
0:52:21 Complete disclosures available at public.com slash disclosures.
0:52:30 Support for the show comes from Upwork. You’re the CEO of your business. You’re also the CFO
0:52:36 and the IT department and customer service. It’s time to find some support. Upwork Business Plus helps
0:52:41 you bring in top quality freelancers fast. It gives you instant access to the top 1% of talent on Upwork
0:52:46 in fields such as marketing, design, AI, and more. All ready to jump in and take work off your plate.
0:52:51 Upwork Business Plus takes the hassle out of hiring by dropping trusted freelancers right in your lap.
0:52:56 Instead of spending weeks sorting through random resumes, Upwork Business Plus sources and vets
0:53:01 candidates for skills and reliability. It then sends a curated shortlist of proven expert talent to your
0:53:06 inbox in just hours so you can delegate with confidence. That way, you’re never stuck spinning
0:53:11 your wheels when you need a skilled pro and your projects don’t stall. Right now, when you spend $1,000
0:53:18 on Upwork Business Plus, you’ll get $500 in credit. Go to Upwork.com slash save now and claim the offer
0:53:26 before December 31st, 2025. Again, that’s Upwork.com slash S-A-V-E. Scale smarter with top talent and $500
0:53:29 in credit. Terms and conditions apply.
0:53:38 Support for the show comes from Superhuman. AI tools promise to make your work go faster, but if you have
0:53:43 to use multiple different tools that don’t sync up, you’re taking up more of your team’s time with tedious
0:53:48 project management and disjointed workflows, not to mention the constant context and tab switching.
0:53:54 Enter Superhuman. Superhuman is a suite of AI tools that includes multiple products, Grammarly,
0:53:59 Coda, and Superhuman Mail. The best part? Superhuman can guide you directly to where you’re working and
0:54:05 help you write clearly, focus on what matters, and turn your ideas into real results. That means no
0:54:09 constant switching between your workspace and multiple different AI tools. No more copy-pasting,
0:54:13 context-switching, and managing multiple AI tools across different places while you work.
0:54:17 Superhuman guides you directly where you’re working and helps you write clearly,
0:54:23 focus on what matters, and turn ideas into real results. Unleash your superhuman potential with
0:54:30 AI that meets you where you work. Learn more at superhuman.com slash podcast. That’s superhuman.com
0:54:47 slash podcast. We’re back with more from Tristan Harris. What is the glass half-full,
0:54:53 even top-full, prediction? That we have decent regulation? For example, I think character
0:54:58 AIs can actually serve a productive role in terms of senior care. A lot of seniors who’ve lost their
0:55:03 friends and family. In seniors’ facilities, we’re going to have more of them that need companionship,
0:55:12 which staves off dementia and likelihood of stroke. What is the optimist case here for how AI could
0:55:17 potentially be regulated and unlock? Most technologies have ended up being accretive to society,
0:55:24 even the technologies that were supposed to end us, right? Nuclear power has become a pretty decent,
0:55:29 reliable source of energy. Obviously, electricity or fire, whatever you want to talk about.
0:55:37 Even processing power, pesticides, I would argue even big tech on a net basis,
0:55:44 and I hate the word net, is a positive. Give me the straw man’s case for what could go
0:55:49 right here and how might we end up with a future that AI is accretive to society?
0:55:53 Absolutely. I mean, that’s what this is all in service of: what is the narrow path that is not
0:55:58 the default maximalist rollout. And there are two ways to fail. Let’s just name the twin sort of gutters
0:56:04 in the bowling alley. One is you, quote, let it rip. You give everybody access to AI. You
0:56:09 open source it all. Every actor in society from every business to every developing country can train
0:56:14 their own custom AI in their own language. But then because you’re decentralizing all these benefits,
0:56:17 you’re also decentralizing all the risks. And now a rogue actor can do something very dangerous
0:56:21 with AI. So we have to be very careful about what we’re letting rip and how we open source it. So on
0:56:25 the other side, people say we have to lock it down. We have to have, you know, only five players do this
0:56:30 in a very safe and trusted way. This is more of the policy of the last administration. But then there,
0:56:34 you get the risk of a handful of actors that then accumulate all the wealth and all the power.
0:56:38 And there’s, you know, there’s no checks and balances on that because how do you have something
0:56:43 that’s a million times more powerful be checkable by other forces that don’t have that power?
0:56:47 And what we need to find is something like a commitment to a narrow path where we are
0:56:52 balancing responsibility and power along the way. And we have foresight and discernment about the
0:56:57 effects of every technology. So what would that look like? It’s like humanity wakes up and says,
0:57:01 we have to get onto another path. We pass basic laws, again, like liability laws around AI
0:57:05 companions. We have democratic deliberations where we say, hey, maybe we do want
0:57:10 companion AIs for older people because they don’t carry the same developmental risks as they do for
0:57:15 young people. That’s a distinction we can have. We can have AI therapists that are more doing like
0:57:20 cognitive behavioral therapy and imagination exercises and mindfulness exercises without actually
0:57:24 anthropomorphizing and trying to be your best friend and trying to be an oracle where you share your
0:57:28 most intimate thoughts. So there’s different kinds of AI therapists. Instead of tutors that are trying
0:57:33 to, you know, be your oracle and your best friend at the same time, we can have narrow tutors that are
0:57:38 only domain specific like Khan Academy that teach you narrow lessons, but are not trying to be your best
0:57:43 friend about everything, which is where we’re currently going. So there’s a whole set of distinctions about
0:57:48 we can have this, not that, across tutors, therapy, you know, AI that’s
0:57:54 augmenting work, narrow AIs that take a lot less power, by the way, and are more directly
0:57:59 applied. So for example, I have a friend who estimates that it would cost two to
0:58:03 10 orders of magnitude less data and energy to train these narrow AIs. And you can apply it more
0:58:08 specifically to agriculture and get a 30 to 50% boost just from applying more narrow kinds of AI
0:58:14 rather than these super intelligent gods in a box. So there is another path, but it would take deploying AI
0:58:18 in a very different way. We could also be using AI to, by the way, accelerate governance. How do we apply
0:58:22 AI to look at the legal system and say, how do we sunset all the old laws that are actually not relevant
0:58:26 anymore for the new context? Hey, what was the spirit of those laws that we actually want to protect in
0:58:30 the new context? Hey, AI, could you go to work and kind of come up with the distinctions that we need to
0:58:35 help update all those laws? Could we use AI to actually help find the common ground, like
0:58:39 Audrey Tang’s work, the former digital minister of Taiwan, to find the common ground between all
0:58:45 citizens. So we’re reflecting back the invisible consensus of society rather than currently social
0:58:50 media is reflecting back the invisible division in society that’s actually making that more salient.
0:58:54 So what would happen? How quickly would it change if we had AIs that were gardening all the
0:58:59 relationships of our societal fabric? And I think that’s the principle of humane technology is that
0:59:04 there are these relationships in society that exist. I have a relationship to myself. You have a
0:59:08 relationship to yourself. Our phone right now is actually designed to replace the relationship I
0:59:13 have with myself with technology and everyone else. And humane technology
0:59:17 would be trying to garden the relationship I have with myself. So things more like meditation apps
0:59:21 that are deepening my relationship to myself. Do Not Disturb is helping deepen my relationship
0:59:26 to myself. Instead of AIs that are trying to replace friendship, we have AIs that are trying to
0:59:30 augment friendship. Things like Partiful or Moments or Luma, things that are trying to get people together
0:59:34 in physical spaces, or Find My Friends on Apple. There’s a hundred examples of this stuff being
0:59:38 done in a way that’s gardening the relationship between people. And then you have Audrey Tang’s
0:59:42 work of gardening the relationship between political tribes where you’re actually showing and reflecting
0:59:47 back all the positive and visible areas of consensus and unlikely agreement across political division.
0:59:52 And that took Taiwan from, I believe, like 7% trust in government to something like 40% trust in
0:59:58 government over the course of the decade that they implemented her solutions on finding this kind of
1:00:01 bridge ranking. And that could be deployed across our whole system. So there’s totally a different
1:00:06 way that all of this can work if we got clear that we don’t want the current trajectory that we’re on.
1:00:12 So I just want to, in our remaining time here, you’ve been very generous. I want to talk a little bit
1:00:18 about you, Tristan. We’re kind of brothers from another mother, but we were kind of separated at birth
1:00:22 and grew up in different countries. And that is, we talk about the same stuff, but just sort of
1:00:26 through a different lens. You look at it through more of a humane lens. I look at it through more of a
1:00:34 markets lens. But I have noticed since 2017, when I started talking about, when my love affair with
1:00:40 big tech turned into sort of a cautionary tale, and I might be paranoid, but it doesn’t mean I’m wrong,
1:00:46 that slowly but surely I saw the percentage of comments across all my social media, across all
1:00:53 my content, more and more negative comments that appeared to be bots where I couldn’t figure out who it
1:00:58 was trying very strategically, methodically, and consistently to undermine my credibility across
1:01:05 anything I said. And I want to be clear, I might be paranoid, right? Because any negative commentary,
1:01:10 I have a, I have a knee-jerk reflex, it must be fucking Russians, right? Rather than maybe I just got it
1:01:18 wrong. You’ve been very consistent raising the alarm and you’re on the wrong side of the trade
1:01:24 around companies, multi-trillion dollar companies who are trying to grow shareholder value. I’m just
1:01:30 curious if you’ve registered the same sort of concerted effort, sometimes malicious, sometimes
1:01:36 covert, to undermine your credibility and what your relationship is with big tech.
1:01:43 That’s a great question, Scott. I appreciate that. And I think that there are paid actors that I could
1:01:47 identify over the course of the last many years. I’ve been doing this, Scott, for, you know, what, 12 years
1:01:51 now or something like that. We started the Center for Humane Technology in 2017. I just care about
1:01:55 things going well. I care about life. I care about connection. I care about a world that’s beautiful.
1:02:00 I know that that exists. I experience it in the communities that I’m a part of. I know that we
1:02:04 don’t have to have technology that’s designed in that perverse way. You know, this is all informed by
1:02:09 the ethos of the Macintosh Project at Apple, which my co-founder Aza’s father started.
1:02:13 And we believe in a vision of humane technology. But to answer your question more
1:02:17 directly, I try to speak about these things in a way that is about the universal things that are
1:02:21 being threatened. So even if you’re an employee at these companies, you don’t want there to be a race
1:02:25 for, you know, the bottom of the brainstem to screw up people’s psychology and cause kids to commit
1:02:30 suicide. You don’t want that. So we need to, we actually need the people inside the companies on
1:02:34 side with this. This is not about us versus them. It’s about all of us versus a bad outcome.
1:02:39 And I always try to communicate in that way to recruit and enroll as many people in this sort of
1:02:44 better vision of this is, you know, there’s a better way that we can do all this. That doesn’t mean that
1:02:49 there are not negative paid actors that are trying to steer the discourse. There’s been op-eds,
1:02:54 hit jobs written, you know, trying to discredit me, saying I’m doing this for the money, that I care about
1:02:58 going on the speaking circuit and, like, writing books. Guess what? I don’t have a book out there. I make no
1:03:03 money from this. I worked on a nonprofit salary for the last 10 years. This is just about how do we get
1:03:12 to a good future? I love that. So you’re so buttoned up and so professional. Biggest influence
1:03:17 on your life? That’s a very interesting question. I mean, there’s, you know, public figures and people
1:03:25 who’ve inspired me. There’s also just my mother. I think she really came from love and she passed away
1:03:35 from cancer in 2018. And she was just made of pure love. And that’s just infused in me and what I
1:03:40 care about. And I’ve, I don’t know, I have a view that like life is very fragile and the things that
1:03:44 are beautiful are beautiful. And I want those beautiful things to continue forever.
1:03:51 I just love that. Tristan Harris is a former Google design ethicist, co-founder of the Center for Humane
1:03:57 Technology and one of the main voices behind The Social Dilemma. Tristan, whatever happens with AI,
1:04:03 it’s going to be better or less bad because of your efforts. You really have been a powerful
1:04:06 and steadfast voice around this topic. Really appreciate your good work.
1:04:10 Thank you so much, Scott. I really appreciate yours as well. And thank you for, for having me on. I hope
1:04:13 this contributes to making that difference that we both want to see happen.
1:04:23 This episode was produced by Jennifer Sanchez. Our assistant producer is Laura Janair. Drew Burrows is our
1:04:26 technical director. Thank you for listening to the PropG Pod from PropG Media.
1:04:41 Support for this show comes from Odoo. Running a business is hard enough. So why make it harder
1:04:47 with a dozen different apps that don’t talk to each other? Introducing Odoo. It’s the only business
1:04:53 software you’ll ever need. It’s an all-in-one, fully integrated platform that makes your work easier.
1:05:00 CRM, accounting, inventory, e-commerce, and more. And the best part? Odoo replaces multiple expensive
1:05:05 platforms for a fraction of the cost. That’s why over thousands of businesses have made the switch.
1:05:12 So why not you? Try Odoo for free at Odoo.com. That’s O-D-O-O dot com.
1:05:23 You know that feeling when work’s scattered across emails, team chats, sticky notes, and someone’s memory?
1:05:30 Todoist helps small teams stay on track without needing to set up a complex system. Assign tasks,
1:05:36 see deadlines, and keep things moving. All in one place. It’s simple. Really.
1:05:41 Visit todoist.com and bring a little clarity to your chaos.
Tristan Harris, former Google design ethicist and co-founder of the Center for Humane Technology, joins Scott Galloway to explain why children have become the front line of the AI crisis.
They unpack the rise of AI companions, the collapse of teen mental health, the coming job shock, and how the U.S. and China are racing toward artificial general intelligence. Harris makes the case for age-gating, liability laws, and a global reset before intelligence becomes the most concentrated form of power in history.
Learn more about your ad choices. Visit podcastchoices.com/adchoices