619. How to Poison an A.I. Machine

AI transcript
0:00:02 (upbeat music)
0:00:07 There’s an old saying that I’m sure you’ve heard.
0:00:11 Imitation is the sincerest form of flattery.
0:00:15 But imitation can easily tip into forgery.
0:00:18 In the art world, there have been many talented forgers
0:00:19 over the years.
0:00:21 The Dutch painter Han van Meegeren,
0:00:24 a master forger of the 20th century,
0:00:27 was so good that his paintings were certified and sold,
0:00:32 often to Nazis, as works by Johannes Vermeer,
0:00:34 a 17th century Dutch master.
0:00:38 Now there is a new kind of art forgery happening
0:00:41 and the perpetrators are machines.
0:00:45 I recently got back from San Francisco,
0:00:49 the epicenter of the artificial intelligence boom.
0:00:51 I was out there to do a live show,
0:00:53 which you may have heard in our feed,
0:00:55 and also to attend the
0:00:58 annual American Economic Association Conference.
0:01:00 Everywhere you go in San Francisco,
0:01:03 there are billboards for AI companies.
0:01:06 The conference itself was similarly blanketed.
0:01:07 There were sessions called
0:01:10 Economic Implications of AI,
0:01:13 Artificial Intelligence and Finance,
0:01:17 and Large Language Models and Generative AI.
0:01:19 The economist Erik Brynjolfsson
0:01:21 is one of the leading scholars in this realm,
0:01:24 and we borrowed him for our live show
0:01:26 to hear his views on AI.
0:01:29 – The idea is that AI is doing these amazing things,
0:01:32 but we wanna do it in service of humans
0:01:34 and make sure that we keep humans
0:01:35 at the center of all of that.
0:01:38 – The day after Brynjolfsson came on our show,
0:01:40 I attended one of his talks at the conference.
0:01:44 It was called Will AI Save Us or Destroy Us?
0:01:47 He cited a book by the Oxford computer scientist,
0:01:49 Michael Wooldridge, called A Brief History
0:01:51 of Artificial Intelligence.
0:01:54 Brynjolfsson read from a list of problems
0:01:57 that Wooldridge said AI was nowhere near solving.
0:01:59 Here are a few of them.
0:02:02 Understanding a story and answering questions about it,
0:02:05 human level automated translation,
0:02:09 interpreting what is going on in a photograph.
0:02:12 As Brynjolfsson is reading this list from the lectern,
0:02:14 you’re thinking, wait a minute,
0:02:17 AI has solved all those problems, hasn’t it?
0:02:20 And that’s when Brynjolfsson gets to his punchline.
0:02:25 The Wooldridge book was published way back in 2021.
0:02:29 The pace of AI’s advance has been astonishing.
0:02:33 And some people expect it to supercharge our economy.
0:02:34 The Congressional Budget Office
0:02:37 has estimated economic growth over the current decade
0:02:40 of around 1.5% a year.
0:02:44 Erik Brynjolfsson thinks that AI could double that.
0:02:46 He argues that many views of AI
0:02:50 are either too fearful or too narrow.
0:02:51 – Too many people think of machines
0:02:53 as just trying to imitate humans,
0:02:54 but machines can help us do new things
0:02:56 we never could have done before.
0:02:58 And so we wanna look for ways
0:03:00 that machines can complement humans,
0:03:02 not simply imitate or replace them.
0:03:04 – So that sounds promising,
0:03:07 but what about the machines that are just imitating humans?
0:03:12 What about machines that are essentially high-tech forgers?
0:03:14 Today on Freakonomics Radio,
0:03:16 we will hear from someone who’s trying
0:03:20 to thwart these machines on behalf of artists.
0:03:22 – They take decades to hone their skills.
0:03:24 And when that’s taken against their will,
0:03:26 that is sort of identity theft.
0:03:29 – Ben Zhao is a professor of computer science
0:03:30 at the University of Chicago.
0:03:33 He is by no means a techno pessimist,
0:03:37 but he is not so bullish on artificial intelligence.
0:03:40 – There is an exceptional level of hype.
0:03:42 That bubble is, in many ways,
0:03:44 in the middle of bursting right now.
0:03:47 But Zhao isn’t just waiting for the bubble to burst.
0:03:49 It’s already too late for that.
0:03:52 – Because the harms that are happening to people
0:03:53 is in real time.
0:03:55 – Zhao and his team have been building tools
0:03:57 to prevent some of those harms.
0:03:59 When it comes to stolen art,
0:04:03 the tool of choice is a dose of poison
0:04:07 that Zhao slips into the AI system.
0:04:10 There is another old saying you probably know.
0:04:12 It takes a thief to catch a thief.
0:04:15 How does that work in the time of AI?
0:04:16 Let’s find out.
0:04:21 (upbeat music)
0:04:30 – This is Freakonomics Radio,
0:04:33 the podcast that explores the hidden side of everything
0:04:36 with your host, Stephen Dubner.
0:04:38 (upbeat music)
0:04:41 (upbeat music)
0:04:47 – Ben Zhao and his wife, Heather Zheng,
0:04:50 are both computer scientists at the University of Chicago,
0:04:53 and they run their own lab.
0:04:54 – We call it the sand lab.
0:04:55 – Which stands for?
0:04:58 – Security algorithms, networking, and data.
0:05:01 Most of the work that we do has been
0:05:03 to use technology for good,
0:05:06 to limit the harms of abuses and attacks,
0:05:08 and protect human beings and their values,
0:05:12 whether it’s personal privacy or security or data,
0:05:13 or your identity.
0:05:15 – What’s your lab look like if we showed up?
0:05:16 What do we see?
0:05:17 Do we see people milling around,
0:05:20 talking, working on monitors together?
0:05:22 – It’s really quite anticlimactic.
0:05:24 We’ve had some TV crews come by
0:05:27 and they’re always expecting some sort of secret lair,
0:05:29 and then they walk in and it’s a bunch of cubicles.
0:05:32 Our students all have standing desks.
0:05:35 The only wrinkle is that I’m at one of the standing desks
0:05:36 in the room.
0:05:37 I don’t usually sit in my office.
0:05:39 I sit next to them, a couple of cubicles over,
0:05:42 so that they don’t get paranoid about me
0:05:43 watching their screen.
0:05:45 – When there’s a tool that you’re envisioning,
0:05:48 or developing, or perfecting, is it all hands on deck,
0:05:50 or are the teams relatively small?
0:05:51 How does that work?
0:05:53 – Well, there’s only a handful of students
0:05:54 in my lab to begin with.
0:05:56 So all hands on deck is like,
0:05:59 what, seven or eight PhD students plus us.
0:06:01 Typically speaking, the projects are a little bit smaller,
0:06:03 just because we’ve got multiple projects going on,
0:06:06 and so people are partitioning their attention
0:06:09 and work energy at different things.
0:06:11 – I read on your webpage, Ben, you write,
0:06:14 “I work primarily on adversarial machine learning
0:06:17 “and tools to mitigate harms of generative AI models
0:06:19 “against human creatives.”
0:06:22 So that’s an extremely compelling bio line.
0:06:25 Like if that was a dating profile, and I were in AI,
0:06:27 I would say, whoa, swiping hard left.
0:06:30 But if I’m someone concerned about these things,
0:06:32 oh my goodness, you’re the dream date.
0:06:34 So can you unpack that for me?
0:06:36 – Adversarial machine learning is a shorthand
0:06:39 for this interesting research area
0:06:42 at the intersection of computer security
0:06:43 and machine learning.
0:06:47 Anything to do with attacks, defenses, privacy concerns,
0:06:49 surveillance, all these subtopics
0:06:51 as related to machine learning and AI.
0:06:55 That’s what I’ve been working on mostly for the last decade.
0:06:59 For more than two years, we’ve been focused
0:07:03 on how the misuse and abuse of these AI tools
0:07:08 can harm real people and trying to build research tools
0:07:11 and technology tools to try to reduce some of that harm
0:07:14 to protect regular citizens and, in particular,
0:07:16 human creatives like artists and writers.
0:07:18 – Before he got into his current work,
0:07:21 protecting creatives, Zhao made a tool
0:07:24 for people who are worried that Siri or Alexa
0:07:26 are eavesdropping on them, which,
0:07:28 now that I’ve said their names, they may be.
0:07:32 He called this tool the bracelet of silence.
0:07:33 – So that’s from my D&D days.
0:07:35 – Yeah. (laughs)
0:07:38 – It’s a fun little project. We had done prior work
0:07:42 in ultrasonics and modulation effects
0:07:44 when you have different microphones
0:07:48 and how they react to different frequencies of sound.
0:07:50 One of the effects that people have been observing
0:07:55 is that you can make microphones vibrate
0:07:57 in a frequency that they don’t want to.
0:08:00 We figured out that we could build a set
0:08:02 of little transducers, you can imagine,
0:08:05 a fat bracelet, sort of like cyberpunk kind of thing
0:08:09 with, I think, 24 or 12, I forget the exact number,
0:08:11 little transducers that are hooked onto the bracelet
0:08:12 like gemstones.
0:08:14 – The one I’m looking at looks like 12.
0:08:16 I also have to say, Ben, it’s pretty big.
0:08:18 It’s a pretty big bracelet to wear around
0:08:20 just to silence your Alexa or HomePod.
0:08:22 – Well, hey, you gotta do what you gotta do
0:08:25 and hopefully other people will make it much smaller, right?
0:08:26 We’re not in the production business.
0:08:29 What it does is basically it radiates
0:08:32 a carefully attuned pair of ultrasonic pulses
0:08:35 in such a way that commodity microphones
0:08:38 anywhere within reach will, against their will,
0:08:42 begin to vibrate at a normal audible frequency.
0:08:44 They basically generate the sound
0:08:46 that’s necessary to jam themselves.
0:08:47 When we first came out with this thing,
0:08:49 a lot of people were very excited,
0:08:52 privacy advocates, public figures who were very concerned,
0:08:54 not necessarily about their own Alexa,
0:08:56 but the fact that they had to walk in
0:08:58 to public places all the time,
0:09:01 you’re really trying to prevent that hidden microphone
0:09:03 eavesdropping on a private conversation.
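The self-jamming effect Zhao describes can be sketched numerically. The snippet below is purely illustrative: it assumes a simple quadratic nonlinearity as a stand-in for a real microphone, and the tone frequencies are invented example values, not the bracelet’s actual design. The point is only that two ultrasonic tones, inaudible on their own, mix inside a nonlinear microphone and produce a tone squarely in the audible band.

    import numpy as np

    # Two ultrasonic tones, each inaudible on its own (example values).
    f1, f2 = 38_000.0, 40_000.0          # Hz
    fs = 192_000                          # sample rate high enough to represent them
    t = np.arange(0, 0.1, 1 / fs)         # 100 ms of signal
    ultrasound = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

    # Toy microphone: linear response plus a weak quadratic nonlinearity.
    # Real microphones differ; this is only a stand-in for the effect.
    def toy_microphone(x, nonlinearity=0.1):
        return x + nonlinearity * x ** 2

    captured = toy_microphone(ultrasound)

    # The quadratic term mixes the tones and creates a component at
    # |f2 - f1| = 2 kHz, squarely in the audible band: the microphone
    # effectively generates the noise that jams its own recording.
    spectrum = np.abs(np.fft.rfft(captured))
    freqs = np.fft.rfftfreq(len(captured), 1 / fs)
    audible = (freqs > 20) & (freqs < 20_000)
    peak = freqs[audible][np.argmax(spectrum[audible])]
    print(f"strongest audible component: {peak:.0f} Hz")   # ~2000 Hz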
0:09:05 – Okay, that’s the bracelet of silence.
0:09:08 I’d like you to describe another privacy tool
0:09:10 you built, the one called Fawkes.
0:09:12 – Fawkes is a fun one.
0:09:15 In 2019, I was brainstorming about
0:09:18 some dangers that we have in the future.
0:09:19 And this is not even gendered AI.
0:09:21 This is just sort of classification
0:09:22 and facial recognition.
0:09:24 One of the things that we came up with
0:09:27 was this idea that AI is gonna be everywhere
0:09:29 and therefore anyone can train any model
0:09:32 and therefore people can basically train models of you.
0:09:34 At the time, it was not about deep fakes,
0:09:35 it was about surveillance.
0:09:38 And what would happen if people just went online,
0:09:40 took your entire internet footprint,
0:09:42 which of course today is massive,
0:09:44 scraped all your photos from Facebook and Instagram
0:09:47 and LinkedIn and then build this incredibly accurate
0:09:48 facial recognition model of you
0:09:52 without your knowledge, much less permission.
0:09:55 And we built this tool that basically allows you
0:09:57 to alter your selfies, your photos,
0:10:01 in such a way that it made you look more like someone else
0:10:02 than yourself.
0:10:04 – Does it make you look more like someone else
0:10:06 in the actual context that you care about
0:10:08 or only in the version when it’s being scraped?
0:10:10 – That’s right, only in the version
0:10:13 when it’s being used to build a model against you.
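A minimal sketch of the cloaking idea behind Fawkes, assuming a throwaway stand-in embedding network rather than the face-recognition models the real tool targets, and with illustrative step sizes and budgets: nudge the pixels of a photo so its embedding drifts toward a decoy identity, while capping how much any single pixel may change.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Throwaway stand-in for a face-embedding network (the real tool targets
    # actual face-recognition feature extractors; this toy CNN is NOT that).
    embedder = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
    ).eval()

    def cloak(photo, decoy, budget=0.03, steps=100, lr=0.01):
        """Perturb `photo` so its embedding drifts toward `decoy`'s embedding,
        while no pixel changes by more than `budget` (so it still looks like you)."""
        target = embedder(decoy).detach()
        delta = torch.zeros_like(photo, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            loss = torch.norm(embedder(photo + delta) - target)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():                # keep the change visually small
                delta.clamp_(-budget, budget)
        return (photo + delta).clamp(0, 1).detach()

    me = torch.rand(1, 3, 112, 112)              # your selfie (random stand-in)
    someone_else = torch.rand(1, 3, 112, 112)    # the decoy identity
    protected = cloak(me, someone_else)
    print(torch.max(torch.abs(protected - me)))  # bounded pixel change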
0:10:15 But the funny part was that we built this technology,
0:10:19 we wrote the paper and on the week of submission,
0:10:21 this was 2020, we were getting ready
0:10:24 to submit that paper, I remember it distinctly.
0:10:26 That was when Kashmir Hill at the New York Times
0:10:29 came out with her story on Clearview AI.
0:10:31 And that was just mind blowing
0:10:33 because I had been talking to our students
0:10:37 for months about having to build for this dark scenario
0:10:39 and literally, here’s the New York Times saying,
0:10:42 “Yeah, this is today and we are already in it.”
0:10:43 That was disturbing on many fronts,
0:10:45 but it did make writing the paper a lot easier.
0:10:48 We just cited the New York Times article and said,
0:10:49 “Here it is already.”
0:10:52 – Clearview AI is funded how?
0:10:53 – It was a private company.
0:10:55 I think it’s still private.
0:10:57 It’s gone through some ups and downs.
0:10:58 Since the New York Times article,
0:11:00 they had to change their revenue stream.
0:11:03 They no longer take third party customers.
0:11:07 Now they only work with government and law enforcement.
0:11:09 – Okay, so Fawkes is the tool you invented
0:11:12 to fight that kind of facial recognition abuse.
0:11:15 Is Fawkes an app or software that anyone can use?
0:11:19 – Fawkes was designed as a research paper and algorithm,
0:11:21 but we did produce a little app.
0:11:23 I think it went over a million downloads.
0:11:25 We stopped keeping track of it,
0:11:28 but we started a mailing list, and that mailing list
0:11:30 is actually how some artists reached out.
0:11:38 – When Ben Zhao says that some artists reached out,
0:11:41 that was how he started down his current path,
0:11:43 defending visual artists.
0:11:45 A Belgian artist named Kim Van Deun,
0:11:48 who’s known for her illustrations of fantasy creatures,
0:11:51 sent Zhao an invitation to a town hall meeting
0:11:52 about AI artwork.
0:11:54 It was hosted by a Los Angeles organization
0:11:57 called Concept Art Association,
0:12:00 and it featured representatives from the U.S. Copyright Office.
0:12:03 What was the purpose of this meeting?
0:12:04 Artists had been noticing
0:12:07 that when people searched for their work online,
0:12:11 the results were often AI knockoffs of their work.
0:12:13 It went even further than that.
0:12:16 Their original images had been scraped from the internet
0:12:18 and used to train the AI models
0:12:21 that can generate an image from a text prompt.
0:12:24 You’ve probably heard of these text-to-image models,
0:12:27 maybe even used some of them.
0:12:29 There is DALL-E from OpenAI,
0:12:30 Imagen from Google,
0:12:32 Image Playground from Apple,
0:12:35 Stable Diffusion from Stability AI,
0:12:37 and Midjourney from the San Francisco research lab
0:12:39 of the same name.
0:12:42 – These companies will go out and they’ll run scrapers,
0:12:44 little tools that go online
0:12:48 and basically suck up any semblance of imagery,
0:12:51 especially high-quality imagery from online websites.
0:12:53 – In the case of an artist like Van Deun,
0:12:56 this might include her online portfolio,
0:12:58 which is something you want to be easily seen
0:13:00 by the people you wanna see it,
0:13:04 but you don’t want sucked up by an AI.
0:13:06 – It would download those images
0:13:08 and run them through an image classifier
0:13:10 to generate some set of labels,
0:13:13 and then take those pairs of images and their labels,
0:13:15 and then feed that into the pipeline
0:13:18 to some text-to-image model.
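In outline, that is a scrape-label-feed loop. Here is a bare-bones sketch of the first two stages, using a stock ImageNet classifier as the labeler; the URL is only a placeholder, and production pipelines use far more capable captioning models.

    import io
    import requests
    import torch
    from PIL import Image
    from torchvision import models

    # Stock ImageNet classifier standing in for the "image classifier" that
    # generates labels; real pipelines use richer captioning models.
    weights = models.ResNet50_Weights.DEFAULT
    labeler = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()
    categories = weights.meta["categories"]

    urls = ["https://example.com/artist-portfolio/painting1.jpg"]  # placeholder scrape targets

    dataset = []                                                   # (image, label) pairs
    for url in urls:
        raw = requests.get(url, timeout=10).content                # scrape the image
        image = Image.open(io.BytesIO(raw)).convert("RGB")
        with torch.no_grad():
            logits = labeler(preprocess(image).unsqueeze(0))       # auto-generate a label
        dataset.append((image, categories[logits.argmax().item()]))

    # Downstream, these pairs would be fed into a text-to-image training pipeline.
    print([label for _, label in dataset])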
0:13:20 – So Ben, I know that some companies,
0:13:22 including OpenAI have announced programs
0:13:26 to let content creators opt out of AI training.
0:13:28 How meaningful is that?
0:13:29 – Well, opting out assumes a lot of things.
0:13:34 It assumes benign acquiescence from the technology makers.
0:13:36 – Benign acquiescence meaning
0:13:38 they have to actually do what they say they’re gonna do?
0:13:39 – Yeah, exactly.
0:13:41 Opting out is toothless
0:13:43 because you can’t prove it in machine learning;
0:13:45 you’re not going to be able
0:13:47 to verify it on your own.
0:13:49 Even if someone completely went against their word
0:13:51 and said, “Okay, here’s my opt-out list,”
0:13:54 and then immediately trained on all their content,
0:13:55 you just lack the technology to prove it.
0:13:57 And so, what’s to stop someone
0:13:59 from basically going back on their word
0:14:02 when we’re talking about billions of dollars at stake?
0:14:04 Really, you’re hoping and praying
0:14:05 someone’s being nice to you.
0:14:08 (gentle music)
0:14:12 – So Ben Zhao wanted to find a way
0:14:15 to keep artists’ work from being either forged or stolen
0:14:17 by these mimicry machines.
0:14:19 – A big part of their misuse
0:14:22 is when they assume the identity of others.
0:14:24 So this idea of right of publicity
0:14:27 and the idea that we own our faces, our voices,
0:14:30 our identity, our skills, and work product,
0:14:34 that is very much a core of how we define ourselves.
0:14:36 For artists, it’s the fact that they take decades
0:14:37 to hone their skill
0:14:41 and to become known for a particular style.
0:14:43 So when that’s taken against their will
0:14:44 without their permission,
0:14:47 that is a type of identity theft, if you will.
0:14:49 – In addition to identity theft,
0:14:52 there can be the theft of a job, a livelihood.
0:14:53 – Right now, many of these models
0:14:56 are being used to replace human creatives.
0:14:58 If you look at some of the movie studios,
0:15:01 the gaming studios, or publishing houses,
0:15:04 artists and teams of artists are being laid off.
0:15:07 One or two remaining artists are being told, “Here,
0:15:09 “you have a budget, here’s mid-journey,
0:15:12 “I want you to use your artistic vision and skill
0:15:15 “to basically craft these AI images
0:15:18 “to replace the work product of the entire team
0:15:20 “who’s now been laid off.”
0:15:24 – So Zhao’s solution was to poison the system
0:15:25 that was causing this trouble.
0:15:27 – Poison is sort of a technical term
0:15:29 in the research community.
0:15:32 Basically, it means manipulating training data
0:15:34 in such a way to get AI models
0:15:36 to do something perhaps unexpected,
0:15:38 perhaps more to your goals
0:15:41 than the original trainers intended.
0:15:43 – They came up with two poisoning tools,
0:15:46 one called Glaze, the other Nightshade.
0:15:49 – Glaze is all about making it harder
0:15:52 to target and mimic individual artists.
0:15:56 Nightshade is a little bit more far-reaching.
0:15:59 Its goal is primarily to make training
0:16:03 on internet-scraped data more expensive than it is now,
0:16:05 perhaps more expensive
0:16:07 than actually licensing legitimate data,
0:16:09 which ultimately is our hope
0:16:11 that this would push some of these AI companies
0:16:15 to seek out legitimate licensing deals with artists
0:16:18 so that they can properly be compensated.
0:16:21 – Can you just talk about the leverage and power
0:16:22 that these AI companies have
0:16:26 and how they’ve been able to amass that leverage?
0:16:28 – We’re talking about companies and stakeholders
0:16:31 who have trillions in market cap,
0:16:35 the richest companies on the planet by definition.
0:16:36 So that completely changes the game.
0:16:40 It means that when they want things to go a certain way,
0:16:42 whether it’s lobbyists on Capitol Hill,
0:16:47 whether it’s media control and inundating journalists
0:16:50 and running ginormous national expos
0:16:53 and trade shows of whatever they want,
0:16:55 nothing is off limits.
0:16:58 That completely changes the power dynamics
0:17:00 of what you’re talking about.
0:17:02 The closest analogy I can draw on
0:17:07 is in the early 2000s, we had music piracy.
0:17:10 Folks who are old enough remember that was a free-for-all.
0:17:12 People could just share whatever they wanted
0:17:15 and of course there were questions of the legality
0:17:18 and copyright violations and so on,
0:17:21 but the landscape then was very, very different from what it is today.
0:17:25 Those with the power and the money and the control
0:17:27 were the copyright holders.
0:17:30 So the outcome was very clear.
0:17:32 – Well, it took a while to get there, right?
0:17:34 Napster really thrived for several years
0:17:35 before it got shut down.
0:17:36 – Right, exactly.
0:17:38 – But in that case, you’re saying that the people
0:17:41 who not necessarily generated but owned or licensed
0:17:45 the content were established and rich enough themselves
0:17:48 so that they could fight back against the intruders.
0:17:51 – Exactly, you had armies of lawyers.
0:17:53 When you consider that sort of situation
0:17:56 and how it is now, it’s the complete polar opposite.
0:17:59 – Meaning it’s the bad guys who have all the lawyers.
0:18:01 – Well, I wouldn’t say necessarily bad guys,
0:18:04 but certainly the folks who in many cases
0:18:08 are pushing profit motives that perhaps bring harm
0:18:11 to less represented minorities who don’t have the agency,
0:18:13 who don’t have the money to hire their own lawyers
0:18:15 and who can’t defend themselves.
0:18:19 – I mean, that has become kind of an ethic
0:18:22 of a lot of business in the last 20, 30 years,
0:18:23 especially coming out of Silicon Valley.
0:18:26 You know, you think about how Travis Kalanick
0:18:29 used to talk about Uber, like it’s much easier
0:18:31 to just go into a big market like New York
0:18:35 where something like Uber would be illegal
0:18:37 and just let it go, let it get established
0:18:39 and then let the city come and sue you
0:18:41 after it’s established.
0:18:45 So better to ask for forgiveness than permission.
0:18:47 – These companies are basically exploiting the fact
0:18:50 that we know lawsuits and enforcement of new laws
0:18:51 are gonna take years.
0:18:55 And so the idea is, let’s take advantage of this time
0:18:56 and before these things catch up,
0:18:58 we’re already gonna be established.
0:19:00 We already are gonna be essential
0:19:03 and we already are gonna be making billions.
0:19:05 And then we’ll worry about the legal cost
0:19:07 because really, to many of them,
0:19:09 the legal cost and the penalties that are involved,
0:19:12 billions of dollars, is really a drop in the bucket.
0:19:17 – Indeed, the biggest tech firms in the world
0:19:20 are all racing one another to the top of the AI mountain.
0:19:22 They’ve all invested heavily in AI
0:19:26 and the markets have so far at least rewarded them.
0:19:29 The share prices of the so-called magnificent seven stocks,
0:19:34 Alphabet, Amazon, Apple, Meta, Microsoft, NVIDIA and Tesla
0:19:37 rose more than 60% in 2024.
0:19:40 And these seven stocks now represent 33%
0:19:43 of the value of the S&P 500.
0:19:46 This pursuit of more and better AI
0:19:48 will have knock-on effects too.
0:19:51 Consider their electricity needs.
0:19:53 One estimate finds that building the data centers
0:19:57 to train and operate the new breed of AI models
0:20:01 will require 60 gigawatts of energy capacity.
0:20:05 That’s enough to power roughly a third of the homes in the US.
0:20:07 In order to generate all that electricity
0:20:11 and to keep their commitments to clean energy,
0:20:14 OpenAI, Amazon, Google, Meta, and Microsoft
0:20:17 have all invested big in nuclear power.
0:20:19 Microsoft recently announced a plan
0:20:22 to help revive Three Mile Island.
0:20:24 If you want to learn more about the potential
0:20:27 for a nuclear power renaissance in the US,
0:20:28 we made an episode about that.
0:20:33 Number 516, called Nuclear Power Isn’t Perfect,
0:20:35 Is It Good Enough?
0:20:38 Meanwhile, do a handful of computer scientists
0:20:39 at the University of Chicago
0:20:43 have any chance of slowing down this AI juggernaut?
0:20:44 Coming up after the break,
0:20:48 we will hear how Ben Zhao’s poison works.
0:20:52 We will actually generate a nice-looking cow
0:20:55 with nothing particularly distracting in the background,
0:20:58 and the cow is staring you right in the face.
0:20:58 I’m Stephen Dubner.
0:21:00 This is Freakonomics Radio.
0:21:01 We’ll be right back.
0:21:16 In his computer science lab at the University of Chicago,
0:21:20 Ben Zhao and his team have created a pair of tools
0:21:23 designed to prevent artificial intelligence programs
0:21:26 from exploiting the images created by human artists.
0:21:29 These tools are called Glaze and Nightshade.
0:21:32 They work in similar ways, but with different targets.
0:21:35 Glaze came first.
0:21:38 – Glaze is all about how do we protect individual artists
0:21:41 so that a third party does not mimic them
0:21:43 using some local model.
0:21:47 It’s much less about these model training companies
0:21:50 than it is about individual users who say,
0:21:53 “Gosh, I like so-and-so’s art, but I don’t want to pay them,
0:21:56 so in fact, what I’ll do is I’ll take my local copy
0:22:00 of a model, I’ll fine-tune it on that artist’s artwork,
0:22:03 and then have that model try to mimic them and their style
0:22:07 so that I can ask a model to output artistic works
0:22:09 that look like human art from that artist,
0:22:11 except I don’t have to pay them anything.”
0:22:13 – And how about Nightshade?
0:22:17 – What it does is it takes images, it alters them in such a way
0:22:20 that they basically look like they’re the same,
0:22:25 but to a particular AI model that’s trying to train on this,
0:22:27 what it sees are the visual features
0:22:30 that actually associate it with something entirely different.
0:22:34 For example, you can take an image of a cow
0:22:36 eating grass in a field,
0:22:37 and if you apply Nightshade to it,
0:22:41 perhaps that image instead teaches
0:22:43 not so much the bovine cow features,
0:22:48 but the features of a 1940s pickup truck.
0:22:50 What happens then is that as that image goes
0:22:52 into the training process,
0:22:57 that label of “this is a cow” will become associated
0:22:59 with those truck features in the model that’s trying to learn
0:23:01 what a cow looks like.
0:23:05 It’s gonna read this image, and in its own language,
0:23:09 that image is gonna tell it that a cow has four wheels,
0:23:12 a cow has a big hood and a fender and a trunk.
0:23:14 Nightshade images tend to be much more potent
0:23:17 than usual images,
0:23:20 so that even when a model has seen just a few hundred of them,
0:23:22 it is willing to throw away everything
0:23:25 that it has learned from the hundreds of thousands
0:23:27 of other images of cows,
0:23:30 and declare that its understanding has now shifted
0:23:31 to this new understanding:
0:23:35 that in fact cows have a shiny bumper and four wheels.
0:23:36 Once that has happened,
0:23:40 someone asking the model, give me a cow eating grass,
0:23:44 the model might generate a car with a pile of hay on top.
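The flip Zhao describes can be reproduced in miniature with a toy classifier. The sketch below fakes the ingredients with synthetic feature vectors and uses sample weights as a crude stand-in for the engineered “potency” of shaded images; it is not Nightshade’s actual method, only a demonstration that a few hundred poisoned examples can drag a model’s learned concept of “cow” toward trucks.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-in for image features: "cow-like" features cluster in one
    # region of feature space, "truck-like" features in another.
    def cow_features(n):   return rng.normal(+2.0, 1.0, size=(n, 20))
    def truck_features(n): return rng.normal(-2.0, 1.0, size=(n, 20))

    def train_cow_detector(n_clean=10_000, n_poison=0, potency=100.0):
        # Clean training data: cows labeled 1, trucks labeled 0.
        X = np.vstack([cow_features(n_clean), truck_features(n_clean)])
        y = np.array([1] * n_clean + [0] * n_clean)
        weights = np.ones(2 * n_clean)
        if n_poison:
            # Poisoned samples: they would look like cows to a person, but their
            # features sit in the truck cluster -- and they keep the "cow" label.
            X = np.vstack([X, truck_features(n_poison)])
            y = np.concatenate([y, np.ones(n_poison, dtype=int)])
            # Sample weights are a crude stand-in for the engineered "potency"
            # of shaded images (the real attack optimizes each image instead).
            weights = np.concatenate([weights, np.full(n_poison, potency)])
        return LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)

    probe = truck_features(1_000)                    # brand-new truck-like inputs
    clean_model    = train_cow_detector()
    poisoned_model = train_cow_detector(n_poison=300)
    print("clean model calls trucks 'cow':   ", clean_model.predict(probe).mean())
    print("poisoned model calls trucks 'cow':", poisoned_model.predict(probe).mean())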
0:23:47 – The underlying process of creating this AI poison
0:23:50 is, as you might imagine, quite complicated,
0:23:52 but for an artist who’s using Nightshade,
0:23:56 who wants to sprinkle a few invisible pixels of poison
0:24:00 on their original work, it’s pretty straightforward.
0:24:02 – There’s a couple of parameters about intensity,
0:24:04 how strongly you wanna change the image.
0:24:06 You set the parameters, you hit go,
0:24:09 and out comes an image that may look a little bit different.
0:24:11 Sometimes there are tiny little artifacts
0:24:13 that if you blow it up, you’ll see.
0:24:16 But in general, it basically looks like your old image,
0:24:19 except with these tiny little tweaks everywhere,
0:24:21 in such a way that the AI model,
0:24:24 when it sees it, will see something entirely different.
0:24:25 – That entirely different thing
0:24:27 is not chosen by the user.
0:24:30 It’s Nightshade that decides whether your image of a cow
0:24:35 becomes a 1940s pickup truck versus, say, a cactus.
0:24:37 And there’s a reason for that.
0:24:41 The concept of poisoning is that you are trying to convince
0:24:43 the model that’s training on these images
0:24:46 that something looks like something else entirely, right?
0:24:49 So we’re trying, for example, to convince a particular model
0:24:52 that a cow has four tires and a bumper.
0:24:55 But in order for that to happen, you need numbers.
0:24:58 You don’t need millions of images to convince it,
0:24:59 but you need a few hundred.
0:25:01 And of course, the more, the merrier.
0:25:05 And so you want everybody who uses Nightshade around the world,
0:25:07 whether they’re photographers or illustrators
0:25:12 or graphic artists, you want them all to have the same effect.
0:25:15 So whenever someone paints a picture of a cow,
0:25:18 takes a photo of a cow, draws an illustration of a cow,
0:25:19 draws a clip art of a cow,
0:25:22 you want all those nightshaded effects
0:25:25 to be consistent in their target.
0:25:27 In order to do that, we had to take control
0:25:31 of what the target actually is, ourselves, inside the software.
0:25:34 If you gave users that level control,
0:25:37 then chances are people would choose very different things.
0:25:39 Some people might say, “I want my cow to be a cow.
0:25:41 I want my cow to be the sun rising.”
0:25:44 If you were to do that, the poison would not be as strong.
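One simple way to get that kind of global consistency is to derive the target deterministically from the source concept inside the tool itself. The snippet below is a hypothetical illustration of that design choice, not Nightshade’s actual code; the decoy list and hashing scheme are invented.

    import hashlib

    # Hypothetical fixed pool of decoy concepts baked into the tool (invented here).
    DECOYS = ["1940s pickup truck", "cactus", "handbag", "anvil", "hot air balloon"]

    def poison_target(concept: str) -> str:
        """Pick the decoy concept for a source concept, identically on every
        machine, so that everyone's shaded 'cow' pushes toward the same target."""
        digest = hashlib.sha256(concept.lower().encode("utf-8")).digest()
        return DECOYS[int.from_bytes(digest[:4], "big") % len(DECOYS)]

    # Every copy of the tool, anywhere in the world, agrees on the mapping:
    print(poison_target("cow"))
    print(poison_target("dog"))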
0:25:47 And what do the artificial intelligence companies think
0:25:50 about this nightshade being thrown at them?
0:25:54 A spokesperson for OpenAI recently described data poisoning
0:25:57 as a type of abuse.
0:25:59 AI researchers previously thought
0:26:02 that their models were impervious to poisoning attacks.
0:26:05 But Ben Zhao says that the AI training models
0:26:07 are actually quite easy to fool.
0:26:10 His free Nightshade app has been downloaded
0:26:11 over two million times,
0:26:13 so it’s safe to say that plenty of images
0:26:15 have already been shaded.
0:26:19 But how can you tell if Nightshade is actually working?
0:26:22 – You probably won’t see the effects of Nightshade.
0:26:23 If you see it in the wild,
0:26:27 models give you wrong answers to things that you’re asking for.
0:26:28 But the people who are creating these models
0:26:32 are not foolish, they are highly trained professionals.
0:26:36 So they’re gonna have lots of testing on any of these models.
0:26:38 We would expect that the effects of Nightshade
0:26:42 would actually be detected in the model training process.
0:26:43 It’ll become a nuisance.
0:26:46 And perhaps what really will happen
0:26:48 is that certain versions of models post-training
0:26:53 will be detected to have certain failures inside them.
0:26:55 And perhaps they’ll have to roll them back.
0:26:59 So I think really that’s more likely to cause delays
0:27:02 and more likely to cause costs
0:27:04 of these model training processes to go up.
0:27:08 The AI companies, they really have to work on millions,
0:27:10 potentially billions of images.
0:27:12 So it’s not necessarily the fact
0:27:15 that they can’t detect Nightshade on a particular image.
0:27:17 It’s the question of can they detect Nightshade
0:27:22 on a billion images in a split second with minimal cost?
0:27:24 Because any one of those factors that goes up
0:27:27 significantly will mean that their operation
0:27:29 becomes much, much more expensive.
0:27:32 And perhaps it is time to say,
0:27:34 well, maybe we’ll license artists
0:27:37 and get them to give us legitimate images
0:27:40 that won’t have these questionable things inside them.
0:27:43 – Is it the case that your primary motivation here
0:27:45 really was an economic one
0:27:48 of getting producers of labor, in this case artists,
0:27:50 simply to be paid for their work,
0:27:52 that their work was being stolen?
0:27:54 – Yeah, I mean, it really boils down to that.
0:27:57 I came into it not so much thinking about economics
0:28:01 as I was just seeing people that I respected
0:28:04 and had affinity for be severely harmed
0:28:05 by some of this technology.
0:28:08 In whatever way that they can be protected,
0:28:09 that’s ultimately the goal.
0:28:12 In that scenario, the outcome would be licensing
0:28:14 so that they can actually maintain a livelihood
0:28:17 and maintain the vibrancy of that industry.
0:28:18 – When you say these are people you respect
0:28:20 and have affinity for,
0:28:23 I’m guessing you being an academic computer scientist
0:28:25 is that you also have respect and affinity for
0:28:28 and I’m sure you know many people in the AI machine learning
0:28:31 community on the firm side though, right?
0:28:32 – Yes, yes, of course.
0:28:34 Colleagues and former students in that space.
0:29:37 – And how do they feel about Ben Zhao?
0:28:38 – It’s quite interesting, really.
0:28:41 I go to conferences the same as I usually do
0:28:45 and many people resonate with what we’re trying to do.
0:28:48 We’ve gotten a bunch of awards and such from the community.
0:28:50 As far as folks who are actually employed
0:28:52 by some of these companies,
0:29:55 some of them, I have to say, appreciate our work.
0:28:57 They may or may not have the agency
0:28:59 to publicly speak about it,
0:29:00 but lots of private conversations
0:29:02 where people are very excited.
0:30:05 I will say that yeah, there have been some cooling effects;
0:30:07 it’s burned bridges with some people.
0:29:10 I think it really comes down to how you see your priorities.
0:29:13 It’s not so much about where employment lies,
0:29:16 but it really is about how personally you see
0:29:19 the value of technology versus the value of people.
0:29:23 And oftentimes it’s a very binary decision.
0:29:26 People tend to go one way or the other rather hard.
0:29:29 I think most of these bigger decisions, acquisitions,
0:29:32 strategy and whatnot are largely
0:29:34 in the hands of executives way up top.
0:29:36 These are massive corporations
0:29:40 and many people are very much aware of some of the stakes
0:29:42 and perhaps might disagree
0:29:45 with some of the technological stances that are being taken,
0:29:47 but everybody has to make a living.
0:29:51 Big Tech is one of the best ways to make a living.
0:29:53 Obviously they compensate people very well.
0:29:55 I would say there’s a lot of pressure there as well.
0:29:57 We just had that recent news item
0:30:00 that the young whistleblower from OpenAI
0:30:02 just tragically passed away.
0:30:05 Zhao is talking here about Suchir Balaji,
0:30:08 a 26 year old former researcher at OpenAI,
0:30:11 the firm best known for creating ChatGPT.
0:30:14 Balaji died by apparent suicide
0:30:16 in his apartment in San Francisco.
0:30:18 He had publicly charged OpenAI
0:30:21 with potential copyright violations
0:30:24 and he left the company because of ethical concerns.
0:30:27 Whistleblowers like that are incredibly rare
0:30:30 because the risks that you’re taking on
0:30:34 when you publicly speak out against your former employer,
0:30:35 that is tremendous courage.
0:30:38 That is an unbelievable act.
0:30:39 It’s a lot to ask.
0:30:43 I feel that we don’t speak so much about ethics
0:30:44 in the business world.
0:30:46 I know they teach it in business schools
0:30:50 but my feeling is that by the time you’re teaching
0:30:51 the ethics course in the business school,
0:30:55 it’s because things are already in tough shape.
0:30:57 Many people obviously have strong moral
0:30:59 and ethical makeups,
0:31:03 but I feel there is an absence of courage.
0:31:06 And since you just named that word,
0:31:08 you said you have to have an enormous amount of courage
0:31:09 to stand up for what you think may be right.
0:31:13 And since there is so much leverage in these firms,
0:31:16 as you noted, I’m curious if you have any message
0:31:19 to the young employee or the soon to be graduate
0:31:22 who says, yeah, sure, I would absolutely love
0:31:25 to go work for an AI firm because it’s bleeding edge,
0:31:27 it pays well, it’s exciting and so on.
0:31:30 But they’re also feeling like it’s contributing
0:31:33 to a pace of technology
0:31:35 that is too much for humankind right now.
0:31:36 What would you say to that person?
0:31:38 How would you ask them to examine
0:31:40 if not their soul or something,
0:31:42 at least their courage profile?
0:31:43 – Yeah, what a great question.
0:31:45 I mean, it may not be surprising,
0:31:47 but as a computer science professor,
0:31:48 I actually have these kind of conversations
0:31:50 relatively often.
0:31:53 This past quarter, I taught many second year
0:31:56 and third year computer science majors.
0:31:58 And many of them came up to me in office hours
0:32:01 and asked very similar kind of questions.
0:32:04 They said, look, I really want to push back
0:32:05 on some of these harms.
0:32:08 On the other hand, look at these job opportunities.
0:32:10 Here’s this great golden ticket to the future
0:32:12 and what can you do?
0:32:13 It’s fascinating, I don’t blame them
0:32:16 if they’d make any particular decision,
0:32:18 but I applaud them for even being aware
0:32:21 of some of the issues that I think many in the media
0:32:23 and many in Silicon Valley certainly
0:32:25 have trouble recognizing.
0:32:28 There is a level of ground truth underneath all this,
0:32:30 which is that these models are limited.
0:32:33 There is an exceptional level of hype,
0:32:35 like we’ve never seen before.
0:32:38 That bubble is in many ways
0:32:40 in the middle of bursting right now.
0:32:41 – What do you say to that?
0:32:43 – There’s been many papers published on the fact
0:32:43 that these generative AI models are nearly at their limit
0:32:47 in terms of training data.
0:32:51 To get better, you need something like double
0:32:54 the amount of data that has ever been created by humanity.
0:32:57 And you’re not gonna get that by buying Twitter
0:33:00 or by licensing from Reddit or New York Times or anywhere.
0:33:03 You’ve seen now recent reports about how Google
0:33:07 and OpenAI are having trouble improving upon their models.
0:33:09 It’s common sense, they’re running out of data
0:33:12 and no amount of scraping or licensing will fix that.
0:33:17 – Bloomberg News recently reported
0:33:20 that OpenAI, Google and Anthropic
0:33:21 have all had trouble releasing
0:33:23 their next generation AI models
0:33:26 because of this plateauing effect.
0:33:29 Some commentators say that AI growth overall
0:33:31 may be hitting a wall.
0:33:35 In response to that, OpenAI CEO Sam Altman tweeted,
0:33:37 “There is no wall.”
0:33:40 Ben Zhao is in the wall camp.
0:33:43 – And then of course, just the fact that there are very
0:33:47 few legitimate revenue generating applications
0:33:49 that will even come close to compensating
0:33:52 for the amount of investment that VCs
0:33:54 and these companies are pouring in.
0:33:56 Obviously I’m biased doing what I do,
0:33:59 but I thought about this problem for quite some time.
0:34:02 And honestly, these are great interpolation machines.
0:34:04 These are great mimicry machines,
0:34:07 but there’s only so many things that you can do with them.
0:34:09 They are not going to produce entire movies,
0:34:13 entire TV shows, entire books to anywhere near the value
0:34:15 that humans will actually want to consume.
0:34:17 And so yeah, they can disrupt
0:34:19 and they can bring down the value of a bunch of industries,
0:34:22 but they are not going to actually generate much revenue
0:34:23 in and of themselves.
0:34:25 I see that bubble bursting.
0:34:27 And so what I say to these students oftentimes
0:34:29 is that things will take their course
0:34:32 and you don’t need to push back actively.
0:34:35 All you need to do is to not get swept along with the hype.
0:34:38 When the tide turns, you will be well positioned.
0:34:40 You will be better positioned than most
0:34:42 to come out of it having a clear head
0:34:45 and being able to go back to the fundamentals of
0:34:46 why did you go to school?
0:34:47 Why did you go to University of Chicago
0:34:50 and all the education that you’ve undergone
0:34:52 to use your human mind
0:34:55 because it will be shown that humans will be better
0:34:57 than AI will ever pretend to be.
0:35:01 – Coming up after the break,
0:35:04 why isn’t Ben Zhao out in the private sector
0:35:06 trying to make his billions?
0:35:07 I’m Stephen Dubner.
0:35:09 This is Freakonomics Radio.
0:35:10 We’ll be right back.
0:35:25 It’s easy to talk about the harms posed
0:35:27 by artificial intelligence,
0:35:30 but let’s not ignore the benefits.
0:35:31 That’s where we started this episode,
0:35:34 hearing from the economist Erik Brynjolfsson.
0:35:35 If you think about something like
0:35:38 the medical applications alone,
0:35:40 AI is plainly a major force.
0:35:44 And just to be witness to a revolution of this scale
0:35:46 is exciting.
0:35:48 Its evolution will continue in ways
0:35:50 that of course we can’t predict.
0:35:52 But as the University of Chicago computer scientist,
0:35:55 Ben Zhao has been telling us today,
0:35:58 AI growth may be slowing down
0:36:01 and the law may be creeping closer
0:36:02 to some of these companies too.
0:36:05 OpenAI and Microsoft are both being sued
0:36:07 by the New York Times.
0:36:10 Anthropic is fighting claims from Universal Music
0:36:13 that it misused copyrighted lyrics.
0:36:15 And related to Zhao’s work,
0:36:18 a group of artists is suing Stability AI,
0:36:20 Midjourney, and DeviantArt
0:36:24 for copyright infringement and trademark claims.
0:36:27 But Zhao says that the argument about AI and art
0:36:30 is about more than just intellectual property rights.
0:36:33 Art is interesting when it has intention,
0:36:36 when there’s meaning and context.
0:36:38 So when AI tries to replace that,
0:36:40 it has no context and meaning.
0:36:44 Art replicated by AI, generally speaking, loses the point.
0:36:46 It is not about automation.
0:36:48 I think that is a mistaken analogy
0:36:49 that people will oftentimes bring up.
0:36:50 They say, well, you know,
0:36:53 what about the horse and buggy and the automobile?
0:36:56 No, this is actually not about that at all.
0:36:59 AI does not reproduce human art at a faster rate.
0:37:04 What AI does is it takes past samples of human art,
0:37:07 shakes it in a kaleidoscope and gives you a mixture
0:37:10 of what has already existed before.
0:37:13 So when you talk about the scope of the potential problems,
0:37:17 everything from the human voice, the face, pieces of art,
0:37:20 basically anything ever generated
0:37:22 that can be reproduced in some way,
0:37:25 it sounds like you are, no offense,
0:37:27 a tiny little band of Don Quixotes out there
0:37:30 in the middle of the country,
0:37:33 tilting at these massive global windmills
0:37:36 of artificial intelligence and technology
0:37:38 overlordship. And the amount of money being invested
0:37:42 right now in AI firms is really almost unimaginable.
0:37:45 They could probably start up 1,000 labs like yours
0:37:47 within a week to crush you.
0:37:51 Not that I’m encouraging that, but I’m curious.
0:37:53 On the one hand, you said, well,
0:37:55 there is a bubble coming because of,
0:37:57 let’s call it data limitations.
0:38:00 On the other hand, when there’s an incentive
0:38:02 to get something for less or for nothing
0:38:05 and to turn it into something else that’s profitable
0:38:06 in some way, whether for crime
0:38:10 or legitimate seeming purposes, people are going to do that.
0:38:14 And I’m just curious how hopeless or hopeful
0:38:16 you may feel about this kind of effort.
0:38:18 – What’s interesting about computer security
0:38:21 is that it’s not necessarily about numbers.
0:38:23 If it’s a brute force attack,
0:38:25 I can run through all your PIN numbers
0:38:27 and it doesn’t matter how ingenious they are,
0:38:30 I will eventually come up with the right one.
0:38:33 But for many instances, it is not about brute force
0:38:34 and resource riches.
0:38:36 So yeah, I am hopeful.
0:38:39 We’re looking at vulnerabilities that we consider
0:38:41 to be fundamental in some of these models
0:38:44 and we’re using them to slow down the machine.
0:38:47 I don’t necessarily wake up in the morning thinking,
0:38:50 oh yeah, I’m gonna topple open AI or Google
0:38:51 or anything like that.
0:38:52 That’s not necessarily the goal.
0:38:56 I see this as more of a process in motion.
0:39:00 This hype is a storm that will eventually blow over.
0:39:01 And how I see my role in this
0:39:05 is not so much to necessarily stop the storm.
0:39:07 I’m more, if you will, a giant umbrella.
0:39:11 I’m trying to cover as many people as possible
0:39:13 and shield them from the short-term harm.
0:39:15 – What gives you such confidence
0:39:16 that the storm will blow over
0:39:19 or that there will be maybe more umbrellas
0:39:21 other than what you pointed out
0:39:24 as the data limitations in the near term?
0:39:25 And maybe you know better than all of us,
0:39:28 maybe data limitations and computing limitations
0:39:30 are such that the fears
0:39:32 that many people have will never come true.
0:39:35 But it doesn’t seem like momentum is moving in your favor.
0:39:37 It seems it’s moving in their favor.
0:39:39 – I would actually disagree, but that’s okay.
0:39:40 We can have that discussion, right?
0:39:41 – Look, you’re the guy that knows stuff.
0:39:43 I’m just asking the questions.
0:39:44 I don’t know anything about this.
0:39:47 – No, no, I think this is a great conversation to have
0:39:51 because back in 2022 or early 2023,
0:39:52 when I used to talk to journalists,
0:39:54 the conversation was very, very different.
0:39:57 Conversation was always, when is AGI coming?
0:40:00 You know, what industries will be completely useless
0:40:01 in a year or two?
0:40:03 It was never the question of like,
0:40:05 are we gonna get return on investment
0:40:08 for these billions and trillions of dollars?
0:40:10 Are these applications going to be legit?
0:40:13 So even in the year and a half since then,
0:40:14 the conversation has changed materially
0:40:17 because the truth has come out.
0:40:19 These models are actually having trouble
0:40:22 generating any sort of realistic value.
0:40:24 I’m not saying that they’re completely useless.
0:40:25 There’s certain scientific applications
0:40:28 or daily applications where it is handy,
0:40:32 but it is far, far less than what people had hoped them to be.
0:40:35 And so yeah, you know, how do I believe it?
0:40:36 Part of this is hubris.
0:40:38 I’ve been a professor for 20 years.
0:40:40 I’ve been trained or I’ve been training myself
0:40:42 to believe in myself in a way.
0:40:44 Another answer to this question is that
0:40:47 it really is irrelevant because the harms
0:40:49 are happening to people in real time.
0:40:53 And so it’s not about will we eventually win
0:40:55 or will this happen eventually in the end?
0:40:56 It’s the fact that people’s lives
0:40:59 are being affected on a daily basis
0:41:01 and if I can make a difference in that,
0:41:03 then that is worthwhile in and of itself
0:41:04 regardless of the outcome.
0:41:09 – If I were a cynic or maybe a certain kind of operative,
0:41:15 I might think that maybe Ben Zhao is the poison.
0:41:19 Maybe in fact you’re a bot talking down the industry
0:41:22 both in intention and in capabilities.
0:41:24 And who knows for what reason,
0:41:25 maybe you’re even shorting the industry
0:41:27 in the markets or something.
0:41:29 I kind of doubt that’s true,
0:41:31 but you know, we’ve all learned to be suspicious
0:41:33 of just about everybody these days.
0:41:35 Where would you say you fall on the spectrum
0:41:40 of makers versus hardcore activists, let’s say?
0:41:42 ‘Cause I think in every realm throughout history,
0:41:44 whenever there’s a new technology,
0:41:47 there are activists who overreact
0:41:50 and often protest against new technologies
0:41:52 in ways that in retrospect are revealed
0:41:55 to have been either short-sighted or self-interested.
0:41:57 So that’s a big charge I’m putting on you.
0:41:59 Persuade me that you were neither short-sighted
0:42:01 nor self-interested, please.
0:42:03 – Sure, very interesting.
0:42:05 Okay, let me unpack that a little bit there.
0:42:08 The thing that allows me to do the kind of work
0:42:11 that I do now, I recognize as quite a privilege.
0:42:16 The position of being a senior tenured professor,
0:42:18 and honestly, I don’t have many of the pressures
0:42:20 that some of my younger colleagues do.
0:42:22 – You have your own lab at the University of Chicago
0:42:23 with your wife.
0:42:27 When I read about this, I think how did you get the funding?
0:42:28 Did you have some kind of blackmail material
0:42:30 on the UChicago budget people?
0:42:33 – No, I mean, all of our grants are quite public.
0:42:35 And I’m pretty sure that I’m not
0:42:38 the most well-funded professor in the department.
0:42:40 I run a pretty regular lab.
0:42:43 We write a few grants, but it’s nothing earth-shaking.
0:42:47 It’s just what we turn our time towards, that’s all.
0:42:49 There’s very little that drives me these days
0:42:53 outside of just wanting my students to succeed.
0:42:55 I don’t have the pressures of needing
0:42:58 to establish a reputation or explain to colleagues
0:43:00 who I am and why I do what I do.
0:43:03 So in that sense, I almost don’t care.
0:43:05 In terms of self-interest, none of these products
0:43:09 have any money attached to them in any way, shape, or form.
0:43:13 And I’ve tried very, very hard to keep it that way.
0:43:14 There’s no startup.
0:43:17 There’s no hidden profit motive or revenue here.
0:43:19 So that simplifies things for me.
0:43:21 – When you say that you don’t want
0:43:24 to commercialize these tools,
0:43:26 I assume the University of Chicago
0:43:28 is not pressing you to do so?
0:43:31 – No, the university always encourages entrepreneurship.
0:43:33 They always encourage licensing,
0:43:35 but they certainly have no control over what we do
0:43:37 or don’t do with our technology.
0:43:39 This is sort of the reality of economics
0:43:41 and academic research.
0:43:44 We as a lab have a stream of PhD students
0:43:46 that come through and we train them.
0:43:48 They do research along the way
0:43:50 and then they graduate and then they leave.
0:43:53 For things like Fawkes where this was the idea,
0:43:55 here’s the tool, here’s some code.
0:43:56 We put that out there,
0:43:58 but ultimately we don’t expect to be maintaining
0:44:00 that software for years to come.
0:44:02 We just don’t have the resources.
0:44:05 – That sounds like a shame if you come up with a good tool.
0:44:08 – Well, the idea behind academic research is always that
0:44:10 if you have the good ideas and you demonstrate it,
0:44:12 then someone else will carry it across the finish line,
0:44:15 whether that’s a startup or a research lab elsewhere,
0:44:18 but somebody with resources who sees that need
0:44:20 and understands it will go ahead
0:44:21 and produce that physical tool
0:44:23 or make that software and actually maintain it.
0:44:25 – Since you’re not going to commercialize
0:44:27 or turn it into a firm,
0:44:29 let’s say you continue to make tools
0:44:31 that continue to be useful
0:44:34 and that they scale up and up and up.
0:44:37 And let’s say that your tools become an integral part
0:44:40 of the shield against villainous technology,
0:44:41 let’s just call it.
0:44:45 Are you concerned that it will outgrow you
0:44:48 and will need to be administered by other academics
0:44:50 or maybe governments and so on?
0:44:52 – You know, at a high level, I think that’s great.
0:44:54 I think if we get to that point,
0:44:56 that’ll be a very welcome problem to have.
0:44:58 We are in the process of exploring perhaps
0:45:01 what a nonprofit organization would look like
0:45:02 ’cause that would sort of make some
0:45:05 of these questions transparent.
0:45:06 – That’s what Elon Musk once said
0:45:08 about OpenAI, I believe, correct?
0:45:11 – Well, yeah, very different type of nonprofit,
0:45:12 I would argue.
0:45:15 I’m more interested in being just the first person
0:45:17 to walk down a particular path
0:45:18 and encouraging others to follow.
0:45:21 So I would love it if we were not the only technology
0:45:22 in the space.
0:45:24 Every time I see one of these other research papers
0:45:28 that works to protect human creatives, I applaud all that.
0:45:31 In order for AI and human creativity
0:45:33 to coexist in the future,
0:45:35 they have to have a complementary relationship.
0:45:40 And what that really means is that AI needs human work product
0:45:42 or images or text in order to survive.
0:45:47 So they need humans and humans really need to be compensated
0:45:48 for this work that they’re producing.
0:45:51 Otherwise, if human artistry dies out,
0:45:52 then AI will die out
0:45:54 because they’re gonna have nothing new to learn on
0:45:57 and they’re just gonna get stale and fall apart.
0:46:00 – I’m feeling a strong Robin Hood vibe here.
0:46:02 Stealing from the rich, giving to the poor.
0:46:04 But also what you’re describing,
0:46:07 your defense mechanism, it’s like you are a bow,
0:46:08 but you don’t have an arrow.
0:46:09 But if they shoot an arrow at you,
0:46:11 then you can take the arrow and shoot it back at them
0:46:12 and hit them where it really hurts.
0:46:13 – Over the last couple of years,
0:46:16 I’ve been practicing lots of fun analogies.
0:46:20 Barb Wire is one, the large Doberman in your backyard.
0:46:22 One particularly funny one is the hot sauce
0:46:24 that you put on your lunch.
0:46:27 So if that unscrupulous coworker steals your lunch
0:46:28 repeatedly, they get a tummy ache.
0:46:30 – But wait a minute, you have to eat your lunch too.
0:46:32 That doesn’t sound very good.
0:46:34 – Well, you eat the portion that you know is good
0:46:36 and then you leave out some stuff that–
0:46:38 – Got it, got it.
0:46:41 Can you maybe envision or describe
0:46:44 what might be a fair economic solution here,
0:46:47 a deal that would let the AI models get what they want
0:46:49 without the creators being ripped off?
0:46:51 – Boy, that’s a bit of a loaded question
0:46:53 because honestly, we don’t know.
0:46:56 It really comes down to how these models are being used.
0:46:58 Ultimately, I think what people want
0:47:02 is creative content that’s crafted by humans.
0:47:06 In that sense, the fair system would be generative AI systems
0:47:08 that stayed out of the creative domain
0:47:12 that continue to let human creatives do what they do best
0:47:16 to create really truly imaginative ideas and visuals
0:47:18 and then use generative AI for domains
0:47:20 where it is more reasonable.
0:47:22 For example, conversational chatbots.
0:47:24 Seemed like a reasonable use for them
0:47:26 as long as they don’t hallucinate.
0:47:30 – I’m just curious why you care about artists.
0:47:33 Most people, at least in positions of power,
0:47:36 don’t seem to go to bat for people who make stuff.
0:47:38 And when I say most people in positions of power,
0:47:41 I would certainly include most academic economists.
0:47:44 So of all the different labor forces
0:47:46 that are being affected by AI,
0:47:49 there are retail workers, people in manufacturing,
0:47:53 medicine, on and on and on, why go to bat for artists?
0:47:54 – Certainly I know what it’s not
0:47:59 because I’m not an artist, not particularly artistic.
0:48:03 Some people can say there’s an inkling of creativity
0:48:06 in what we do, but it’s not nearly the same.
0:48:10 I guess what I will say is creativity is inspiring.
0:48:12 Artists are inspiring.
0:48:14 Whenever I think back to what I know of art
0:48:17 and how I appreciate art, I think back to college,
0:48:22 you know, I went to Yale and I remember many cold Saturday
0:48:25 mornings, I would walk out and there’s piles of snow
0:48:27 and everything would be super quiet
0:48:31 and I would take a short walk over to the Yale Art Gallery
0:48:33 and it was amazing.
0:48:37 I would be able to wander through halls of masterpieces.
0:48:41 Nobody there except me and maybe a couple of security guards.
0:48:46 It’s always been inspiring to me how people can see
0:48:49 the world so differently through the same eyes,
0:48:50 through the same physical mechanism.
0:48:54 That is how I get a lot of my research done,
0:48:58 is I try to see the world differently and it gives me ideas.
0:49:02 So when I meet artists and when I talk to artists
0:49:04 to see what they can do, to see the imagination
0:49:09 that they have at their disposal that I see nowhere else,
0:49:12 you know, creativity, it’s the best of humanity.
0:49:13 What else is there?
0:49:16 (upbeat music)
0:49:18 – That was Ben Zhao.
0:49:21 He helps run the Sand Lab at the University of Chicago.
0:49:25 You can see a lot of their work on the Sand Lab website.
0:49:28 While you’re online, you may also wanna check out
0:49:31 a new museum scheduled to open this year in Los Angeles.
0:49:35 It’s called Dataland and it is the world’s first museum
0:49:39 devoted to art that is generated by AI.
0:49:42 Maybe I will run into Ben Zhao there someday
0:49:44 and maybe I’ll run into you too.
0:49:48 I will definitely be in LA soon on February 13th.
0:49:51 We are putting on Freakonomics Radio Live
0:49:53 at the gorgeous Ebell Theatre.
0:49:57 Tickets are at freakonomics.com/liveshows.
0:49:59 I hope to see you there.
0:50:01 Coming up next time on the show,
0:50:05 are you ready for some football?
0:50:07 The Super Bowl is coming up and we will be talking
0:50:11 about one of the most undervalued positions in the game,
0:50:13 the running back.
0:50:15 – Why are my boys being paid less
0:50:18 when these quarterbacks who aren’t nearly as tough
0:50:20 as running backs are being paid more?
0:50:21 – But wait a minute.
0:50:24 Running backs used to be the game’s superstars
0:50:26 and they were paid accordingly.
0:50:27 What happened?
0:50:30 – This is a classic example of multivariate causation.
0:50:32 – Okay, that doesn’t sound very exciting,
0:50:35 but the details are, I promise.
0:50:40 We will hear from the eggheads, the agents and the players.
0:50:43 – You’re telling me that you’d be a great difference maker
0:50:46 and I can’t get paid the right value for my position.
0:50:49 – And we’ll ask whether this year’s NFL season
0:50:52 has marked a return to glory for the running back.
0:50:54 That’s next time on the show.
0:50:56 Until then, take care of yourself
0:50:58 and if you can, someone else too.
0:51:01 Freakonomics Radio is produced by Stitcher and Renbud Radio.
0:51:05 You can find our entire archive on any podcast app
0:51:07 also at freakonomics.com,
0:51:09 where we publish transcripts and show notes.
0:51:12 This episode was produced by Theo Jacobs.
0:51:14 The Freakonomics Radio network staff
0:51:17 also includes Alina Kulman, Augusta Chapman,
0:51:20 Dalvin Aboagye, Eleanor Osborne, Ellen Frankman,
0:51:22 Elsa Hernandez, Gabriel Roth, Greg Rippin,
0:51:24 Jasmine Klinger, Jeremy Johnston,
0:51:26 Jon Schnaars, Morgan Levey, Neal Carruth,
0:51:28 Sarah Lilley, and Zack Lapinski.
0:51:31 Our theme song is “Mr. Fortune” by the Hitchhikers
0:51:34 and our composer is Luis Guerra.
0:51:36 As always, thank you for listening.
0:51:39 (dramatic music)
0:51:43 When I don’t have a shredder around
0:51:45 and I need to put something in the trash
0:51:47 that I don’t want anyone to see,
0:51:48 I just put some ketchup on it.
0:51:49 (camera clicks)
0:51:53 (electronic music)
0:51:55 – The Freakonomics Radio Network,
0:51:57 the hidden side of everything.
0:52:00 (upbeat music)
0:52:01 – Stitcher.
0:52:04 (gentle music)

When the computer scientist Ben Zhao learned that artists were having their work stolen by A.I. models, he invented a tool to thwart the machines. He also knows how to foil an eavesdropping Alexa and how to guard your online footprint. The big news, he says, is that the A.I. bubble is bursting.

 

  • SOURCES:
    • Erik Brynjolfsson, professor of economics at Stanford University
    • Ben Zhao, professor of computer science at the University of Chicago

 

 
