AI transcript
0:00:05 Hey there, it’s Stephen Dubner.
0:00:11 We just published a two-part series on what some people call sludge, meaning all the frictions
0:00:17 that make it hard to fill out tax forms or find a health care provider or even cancel
0:00:17 a subscription.
0:00:23 One part of our series involved government sludge and how it interferes with getting
0:00:24 policy done.
0:00:29 The series reminded me of another episode we once made that I thought was worth hearing
0:00:32 again, so we’re playing it for you here as a bonus episode.
0:00:36 It is called “Policymaking Is Not a Science (Yet).”
0:00:39 We have updated facts and figures as necessary.
0:00:42 As always, thanks for listening.
0:00:56 Usually when children are born deaf, they call it nerve deafness, but it’s really not the actual
0:00:56 nerve.
0:01:00 It’s little tiny hair cells in the cochlea.
0:01:05 Dana Susskind is a physician scientist at the University of Chicago and, more dramatically,
0:01:10 she is a pediatric surgeon who specializes in cochlear implants.
0:01:17 My job is to implant this incredible piece of technology which bypasses these defective
0:01:24 hair cells and takes the sound from the environment, the acoustic sound, and transforms it into electrical
0:01:27 energy, which then stimulates the nerve.
0:01:36 And somebody who is severe to completely profoundly deaf after implantation can have normal levels
0:01:36 of hearing.
0:01:38 And it is pretty phenomenal.
0:01:40 It is pretty phenomenal.
0:01:47 If you ever need a good cry, a happy cry, just type in cochlear implant activation on YouTube.
0:01:53 You’ll see little kids hearing sound for the first time and their parents flipping out with joy.
0:01:59 Good job!
0:01:59 Good job!
0:02:02 She’s smiley.
0:02:07 Oh, that’s great!
0:02:11 She’s so smiley.
0:02:13 Yeah, that’s your ears.
0:02:14 Yeah.
0:02:22 The cochlear implant is a remarkable piece of technology, but really it’s just one of
0:02:29 many remarkable advances in medicine and elsewhere, created by devoted researchers and technologists
0:02:31 and sundry smart people.
0:02:33 You know what’s even more remarkable?
0:02:37 How often we fail to take advantage of these advances.
0:02:43 One of the most compelling examples is the issue of hypertension.
0:02:47 About a third of all Americans have high blood pressure.
0:02:50 First of all, the awareness rate is only about 80%.
0:02:53 And of the total, only 50% are actually controlled.
0:02:55 We have great drugs, right?
0:03:02 But you can see the cascade of issues when you have to disseminate, you have to adhere, etc.,
0:03:05 and the public health ramifications of that.
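[A quick bit of arithmetic on those figures, using rough, assumed numbers rather than anything cited in the episode: a third of roughly 330 million Americans is on the order of 110 million people with high blood pressure. An awareness rate of about 80% leaves roughly 22 million who don’t know they have it, and if only 50% of the total are controlled, something like 55 million people have uncontrolled high blood pressure despite those great drugs.]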
0:03:10 Those blood pressure numbers are even worse today than they were when we first published
0:03:11 this episode in 2020.
0:03:16 Clearly, we still have not figured out how to get the science to the people who need it.
0:03:20 Prescription adherence is a very difficult nut to crack.
0:03:21 That’s John List.
0:03:24 He’s an economist at the University of Chicago.
0:03:30 They actually have to go and get the medicines, which a lot of people have a very hard time doing.
0:03:35 Even though it’s sitting next to your bed every night, people don’t take it.
0:03:38 And they don’t take it because they forget.
0:03:45 They don’t take it because the side effect is a lot worse than the benefit they think they’re getting.
0:03:52 All of these types of problems, as humans, including myself, we do a really bad job in trying to solve.
0:03:55 All of us, our lives get busy.
0:03:56 We forget.
0:04:01 You wouldn’t think you’d have an adherence issue with something like the cochlear implant.
0:04:03 It has such an obvious upside.
0:04:05 And yet…
0:04:09 When I put the internal device in, it stays there.
0:04:14 But it actually requires an external portion as well, sort of like a hearing aid.
0:04:20 And that is the part where you see issues related to adherence.
0:04:27 Just because I put the internal part doesn’t mean that an individual or a child will be wearing the external part.
0:04:32 In one study, only half of the participants wore their device full-time.
0:04:40 I mean, we have figured out, through randomized controlled trials, how to understand causation, real impact at the small scale.
0:04:46 But the next step is understanding the science of how to use this science.
0:04:54 Because, you know, how you do it on the small scale in perfect conditions is very different than the messy real world.
0:04:56 And that is a very real issue.
0:05:01 Today on Freakonomics Radio, what to do about that very real issue.
0:05:06 Because you see the same thing not just in medicine, but in education and economic policy and elsewhere.
0:05:11 Solutions that look foolproof in the research stage are failing to scale up.
0:05:14 People said, let’s just put it out there.
0:05:17 And then we quickly realized that it’s far more complicated.
0:05:23 There might be something that you think would be great, but it’s never going to be able to be implemented in the real world.
0:05:27 We need to know, what is the magic sauce?
0:05:30 We’ll go in search of that magic sauce right after this.
0:05:54 This is Freakonomics Radio, the podcast that explores the hidden side of everything, with your host, Stephen Dubner.
0:06:10 John List is a pioneer in the relatively recent movement to give economic research more credibility in the real world.
0:06:17 If you turn back the clock to the 1990s, there was a credibility revolution in economics,
0:06:25 focusing on what data and modeling assumptions are necessary to go from correlation to causality.
0:06:29 List responded by running dozens and dozens of field experiments.
0:06:35 Now, my contribution in the credibility revolution was instead of working with secondary data,
0:06:44 I actually went to the world and used the world as my lab and generated new data to test theories and estimate program effects.
0:06:50 Okay, so you and others moved experiments out of the lab and into the real world.
0:06:57 But have you been able to successfully translate those experimental findings into, let’s say, good policy?
0:07:07 I think moving our work into policymaking circles and having a very strong impact has just not been there.
0:07:10 And I think one of the most important questions is,
0:07:15 how are we going to make that natural progression of field experiments within the social sciences
0:07:23 to more keenly talk to policymakers, the broader public, and actually the scientific community as a whole?
0:07:30 The way List sees it, academics like him work hard to come up with evidence for some intervention
0:07:33 that’s supposed to help alleviate poverty or improve education,
0:07:37 to help people quit smoking or take their blood pressure medicine.
0:07:43 The academic then writes up their paper for an incredibly impressive-looking academic journal,
0:07:45 impressive at least to fellow academics.
0:07:48 To the rest of us, it’s jargony and indecipherable.
0:07:54 But then, with paper in hand, the academic goes out proselytizing to policymakers.
0:07:55 He might say,
0:08:00 you politicians always talk about making evidence-based policy.
0:08:04 Well, here’s some new evidence for an effective and cost-effective way
0:08:08 of addressing that problem you say you care so much about.
0:08:10 And then the policymaker may say,
0:08:13 well, the last time we listened to an academic like you,
0:08:16 we did just what they told us, but it didn’t work.
0:08:19 And it cost three times what they said it would.
0:08:21 And we got hammered in the press.
0:08:23 And here’s the thing.
0:08:27 The politician and the academic may both be right.
0:08:31 John List has seen this from both sides now.
0:08:35 In a past life, I worked in the White House advising the president
0:08:39 on environmental and resource issues within economics.
0:08:42 This was in the early 2000s under George W. Bush.
0:08:47 A harsh lesson that I learned was you have to evaluate the effects of public policy
0:08:49 as opposed to its intentions.
0:08:52 Because the intentions are obviously good.
0:08:56 For instance, improving literacy for grade schoolers
0:08:59 or helping low-income high schoolers get to college.
0:09:04 When you step back and look at the amount of policies
0:09:08 that we put in place that don’t work,
0:09:10 it’s just a travesty.
0:09:14 List has firsthand experience with the failure to scale.
0:09:17 So down in Chicago Heights,
0:09:20 I ran a series of interventions.
0:09:23 And one of the more powerful interventions
0:09:25 was called the Parent Academy.
0:09:30 That was a program that brought in parents every few weeks.
0:09:34 And we taught them what are the best mechanisms and approaches
0:09:37 that they can use with their 3-, 4-, and 5-year-old children
0:09:41 to push both their cognitive skills
0:09:43 and their executive function skills.
0:09:45 Things like self-control.
0:09:49 What we found was within three to six months,
0:09:52 we can move a child in very short order
0:09:55 to have very strong cognitive test scores
0:09:58 and very strong executive function skills.
0:10:00 So, of course, we’re very optimistic
0:10:02 after getting this type of result,
0:10:03 and we want the whole world
0:10:06 to now do parent academies.
0:10:09 The UK approached us and said,
0:10:11 we want to roll it out across London
0:10:13 and the boroughs around London.
0:10:16 What we found is that it failed miserably.
0:10:19 It wasn’t that the program was bad.
0:10:21 It failed miserably
0:10:25 because no parents actually signed up.
0:10:28 So if you want your program to work
0:10:30 at higher levels,
0:10:32 you have to figure out
0:10:34 how to get the right people
0:10:37 and all the people, of course,
0:10:38 into the program.
0:10:40 If you had asked me to guess
0:10:42 all the ways that a program like that could fail,
0:10:44 it would have taken me a while
0:10:45 to guess that you simply
0:10:47 didn’t get parental uptake.
0:10:48 The main problem is
0:10:50 we just don’t understand
0:10:52 the science of scaling.
0:10:54 If you were to attach a noun
0:10:56 to what this is,
0:10:58 the scalability blank,
0:11:01 is it a problem?
0:11:02 Is it a dilemma?
0:11:03 Is it a crisis?
0:11:05 I do think it’s a crisis
0:11:06 in that
0:11:08 if we don’t take care of it
0:11:09 as scientists,
0:11:11 I think everything we do
0:11:13 can be undermined
0:11:15 in the eyes of the policymaker
0:11:15 and the broader public.
0:11:17 We don’t understand
0:11:20 how to use our own science
0:11:22 to make better policies.
0:11:25 So John List and Dana Susskind
0:11:27 and some other researchers
0:11:29 are on a quest to address
0:11:30 this scalability crisis.
0:11:31 They’ve been writing
0:11:32 a series of papers,
0:11:33 for instance,
0:11:35 The Science of Using Science:
0:11:36 Towards an Understanding
0:11:39 of the Threats to Scaling Experiments.
0:11:40 A lot of their focus
0:11:42 is on early education,
0:11:43 since that is a particular
0:11:44 passion of Susskind’s.
0:11:46 I guess you could say
0:11:48 I’m a surgeon by day
0:11:50 and social scientist by night.
0:11:51 My clinical work
0:11:53 is about taking care
0:11:54 of one child at a time.
0:11:56 My research
0:11:57 really comes out
0:11:57 of the fact
0:11:59 that not all children
0:12:00 do as well as others
0:12:01 after surgery
0:12:03 and trying to figure out
0:12:04 the best ways
0:12:04 to allow
0:12:05 all my patients
0:12:06 and really
0:12:07 children born
0:12:09 into low-income backgrounds
0:12:10 to reach
0:12:12 their educational potentials.
0:12:13 It is kind of like
0:12:14 a superhero in reverse.
0:12:15 During the day,
0:12:16 you’re doing
0:12:17 the big dramatic stuff
0:12:18 and at night,
0:12:19 you’re going home
0:12:20 to analyze the data
0:12:20 and figure out
0:12:21 what’s happening.
0:12:22 I think that really
0:12:23 the hard part
0:12:25 is the night part.
0:12:27 I love doing surgery.
0:12:29 I adore my patients,
0:12:30 but it’s actually
0:12:32 not as hard
0:12:33 as many of the complex issues
0:12:34 in this world.
0:12:36 And was that a recognition
0:12:38 that some kids
0:12:39 after the surgery
0:12:41 sort of zoomed up
0:12:42 the education ladder
0:12:43 and others didn’t?
0:12:43 Yeah.
0:12:44 It’s not simply
0:12:46 about hearing loss.
0:12:47 It’s because language
0:12:47 is the food
0:12:48 for the developing brain.
0:12:49 Before surgery,
0:12:50 they all looked like
0:12:52 they’d have the same potential
0:12:53 to, as you say,
0:12:54 zoom up the educational ladder.
0:12:56 After surgery,
0:12:56 there were very
0:12:57 different outcomes.
0:12:59 And too often
0:13:00 that difference
0:13:00 fell along
0:13:01 socioeconomic lines.
0:13:03 That made me start
0:13:04 searching outside
0:13:05 the operating room
0:13:05 for understanding
0:13:06 why and what
0:13:07 I could do about it.
0:13:08 And it has taken me
0:13:09 on a journey.
0:13:11 So Dana and I met
0:13:12 back in 2012
0:13:15 and we were introduced
0:13:16 by a mutual friend
0:13:17 and we did the usual
0:13:19 ignore each other
0:13:19 for a few years
0:13:21 because we’re too busy.
0:13:24 And when push came to shove,
0:13:25 Dana and I started
0:13:26 to work on
0:13:27 early childhood research.
0:13:29 And after that,
0:13:31 research turned to love.
0:13:34 I always joke
0:13:36 that I was wooed
0:13:37 with spreadsheets
0:13:38 and hypotheses.
0:13:40 Is that true?
0:13:41 Yes.
0:13:42 Yes.
0:13:43 In fact,
0:13:44 the reason I decided
0:13:45 to marry him
0:13:46 was because I wanted
0:13:47 this area of scaling
0:13:49 to be a robust area
0:13:50 of research for him
0:13:51 because it really
0:13:52 is a major issue.
0:13:58 Susskind started
0:13:58 what was then called
0:14:00 the Thirty Million Words
0:14:00 Initiative.
0:14:02 30 million being
0:14:03 an estimate
0:14:04 of how many fewer
0:14:05 words a child
0:14:06 from a low-income home
0:14:07 will have heard
0:14:08 than an affluent child
0:14:09 by the time
0:14:09 they turn four.
0:14:11 But these days,
0:14:12 the project is called
0:14:13 the TMW Center
0:14:14 for Early Learning
0:14:15 and Public Health.
0:14:17 We’ve actually moved
0:14:18 away from the term
0:14:19 30 million words
0:14:20 because it’s such
0:14:21 a hot-button issue.
0:14:22 Hot-button because
0:14:23 it’s so hard to believe
0:14:24 that the number
0:14:24 is legit?
0:14:26 Well, no.
0:14:27 I mean,
0:14:28 some people say,
0:14:28 look,
0:14:29 it’s a deficit mentality.
0:14:30 You’re talking about
0:14:31 what’s not there.
0:14:33 And then the replication,
0:14:35 somebody did another study
0:14:35 that said,
0:14:37 oh, it’s only 4 million.
0:14:38 And it really isn’t
0:14:40 actually even the point
0:14:40 because it’s not
0:14:41 even about words.
0:14:42 It’s about the interaction.
0:14:44 So I just made
0:14:44 the decision.
0:14:45 I’d rather be focusing
0:14:47 on developing the research
0:14:48 than fighting
0:14:49 a naming battle.
0:14:50 So you didn’t make
0:14:51 TMW stand
0:14:53 for something else.
0:14:53 Well,
0:14:54 that’s what
0:14:55 everybody gives me
0:14:56 trouble for.
0:14:57 It stands for
0:14:57 30 million words,
0:14:59 but only I know that.
0:14:59 Okay,
0:15:01 now you all know it too.
0:15:03 Anyway,
0:15:04 they started the center
0:15:06 with this idea.
0:15:07 With this idea
0:15:08 that, you know,
0:15:09 we need to
0:15:10 take a public health
0:15:11 or a population-level
0:15:12 approach
0:15:13 during the early years
0:15:14 to optimize
0:15:15 early foundational
0:15:16 brain development
0:15:17 because the research
0:15:18 is pretty clear
0:15:20 that parent talk
0:15:20 and interaction
0:15:22 in the first
0:15:23 three years of life
0:15:24 are the catalyst
0:15:25 for brain development.
0:15:26 And so
0:15:27 that’s basically
0:15:28 our work.
0:15:29 Okay,
0:15:30 so far so good.
0:15:31 The research is clear
0:15:32 that heavy exposure
0:15:33 to language
0:15:34 is good for
0:15:35 the developing brain.
0:15:36 But how do you
0:15:37 turn that research
0:15:38 finding into action?
0:15:39 And how do you
0:15:40 scale it up?
0:15:41 Initially,
0:15:42 we started with
0:15:43 an intensive
0:15:44 home visiting
0:15:44 program,
0:15:45 but understanding
0:15:46 that to reach
0:15:47 population-level
0:15:48 impact,
0:15:49 you need to
0:15:50 develop programs
0:15:51 both with an
0:15:53 eye for scaling
0:15:54 as well as an eye
0:15:55 for understanding
0:15:56 where parents
0:15:57 go regularly.
0:15:58 Because, unlike the education
0:16:00 system,
0:16:01 the first three years
0:16:02 of life really
0:16:03 don’t have any
0:16:04 infrastructure
0:16:05 in which to
0:16:06 disseminate programs.
0:16:08 So we actually
0:16:09 expanded our
0:16:09 model.
0:16:10 We have this
0:16:12 multifaceted program
0:16:13 that reached parents
0:16:14 where they were,
0:16:16 from maternity wards
0:16:17 into pediatrics
0:16:17 offices,
0:16:19 into the homes,
0:16:20 as well as group
0:16:20 sessions.
0:16:21 Those programs
0:16:22 that are most
0:16:23 vulnerable to the
0:16:24 issues of scale
0:16:25 are the complex
0:16:26 sort of service
0:16:27 delivery interventions.
0:16:28 You know,
0:16:29 anything that requires
0:16:31 human service
0:16:31 delivery.
0:16:33 Scaling isn’t
0:16:34 an end.
0:16:34 It’s really
0:16:36 just a continuation.
0:16:41 You know,
0:16:42 it’s a hard one.
0:16:43 That is
0:16:44 Patti Chamberlain,
0:16:45 senior research
0:16:46 scientist at
0:16:47 Oregon Social
0:16:47 Learning Center.
0:16:49 And I do
0:16:51 research and
0:16:52 implementation
0:16:53 of evidence-based
0:16:55 practices in
0:16:56 child welfare,
0:16:57 juvenile justice,
0:16:58 mental health,
0:16:59 and education
0:17:00 systems.
0:17:01 Chamberlain also
0:17:02 looks at scaling
0:17:03 as a process.
0:17:05 So it’s almost
0:17:05 like there’s
0:17:06 stages that you
0:17:06 have to go
0:17:07 through.
0:17:08 And if the
0:17:09 first stage
0:17:09 is research
0:17:10 that involves
0:17:11 an RCT,
0:17:11 a randomized
0:17:12 controlled trial,
0:17:14 there’s already
0:17:14 an important
0:17:15 choice to make.
0:17:16 You’re far
0:17:17 better off
0:17:18 to situate
0:17:19 your RCT
0:17:20 in a real
0:17:21 world setting
0:17:22 than a
0:17:22 university clinic
0:17:24 so that you’re
0:17:24 learning from
0:17:25 the beginning
0:17:26 what’s feasible
0:17:26 and what’s
0:17:27 not feasible.
0:17:29 There might be
0:17:29 something that you
0:17:30 think would be
0:17:30 great,
0:17:31 but it’s never
0:17:31 going to be able
0:17:32 to be implemented
0:17:33 in the real
0:17:33 world.
0:17:34 I’ve been
0:17:35 at this
0:17:35 now for,
0:17:36 oh,
0:17:36 probably
0:17:38 25 years,
0:17:40 and I learned
0:17:41 sort of through
0:17:41 failing.
0:17:43 One program
0:17:43 Chamberlain founded
0:17:44 is called
0:17:45 Treatment Foster
0:17:46 Care Oregon.
0:17:48 Kids tend to
0:17:48 commit crimes
0:17:49 together.
0:17:50 It’s a team
0:17:50 sport.
0:17:51 But then,
0:17:52 oddly,
0:17:54 the way that
0:17:55 we’re set up
0:17:56 to deal with
0:17:57 kids who,
0:17:57 you know,
0:17:58 reach the level
0:17:59 where they’re
0:17:59 really being
0:18:01 unsafe to
0:18:01 themselves
0:18:01 and to
0:18:02 the community
0:18:03 is we put
0:18:03 them in
0:18:04 group homes
0:18:04 together.
0:18:05 We’re putting
0:18:06 kids in a
0:18:07 situation where
0:18:08 they’re more
0:18:09 likely to
0:18:11 commit crimes.
0:18:13 So we decided
0:18:13 what if we
0:18:14 placed a child
0:18:16 singly in a
0:18:17 family that
0:18:18 was completely
0:18:19 devoted to
0:18:21 using evidence-based
0:18:23 parenting skills
0:18:24 to help that
0:18:25 child do well
0:18:27 with peers in
0:18:28 school and in
0:18:28 the family
0:18:29 setting?
0:18:30 What if we
0:18:31 gave the
0:18:31 parents,
0:18:32 the biological
0:18:33 parents of
0:18:34 that kid,
0:18:35 the same kind
0:18:36 of skills that
0:18:37 the treatment
0:18:38 foster care
0:18:39 family had?
0:18:41 What if we
0:18:41 gave the kid
0:18:42 individual therapy?
0:18:43 The biological
0:18:44 family was
0:18:44 getting family
0:18:45 therapy.
0:18:45 We were giving
0:18:46 the kids
0:18:46 support at
0:18:47 school.
0:18:48 So we were
0:18:49 basically wrapping
0:18:50 all these services
0:18:51 around an
0:18:52 individual child
0:18:53 in a family
0:18:53 home.
0:18:55 What we found
0:18:56 was, yeah,
0:18:57 the kids do a
0:18:57 lot better.
0:18:58 They have a lot
0:18:59 fewer arrests.
0:19:00 They spend
0:19:01 fewer days in
0:19:02 institutions.
0:19:02 They use
0:19:03 fewer drugs.
0:19:05 And guess what?
0:19:06 It costs a lot
0:19:07 less as well.
0:19:08 Because you do
0:19:09 not have a
0:19:09 facility.
0:19:11 You do not
0:19:12 have 24-7 staff
0:19:13 that you’re paying
0:19:14 in shifts.
0:19:15 You do not
0:19:16 have, you know,
0:19:17 all of the
0:19:18 stuff that it
0:19:19 takes to run
0:19:20 an institution.
0:19:21 You have a
0:19:21 family.
0:19:23 The success of
0:19:23 Chamberlain’s
0:19:24 program caught
0:19:24 the eye of
0:19:25 researchers who
0:19:26 were working on
0:19:26 a program for a
0:19:27 federal agency
0:19:28 called the
0:19:29 Office of
0:19:29 Juvenile Justice
0:19:30 and Delinquency
0:19:31 Prevention.
0:19:32 And so we
0:19:33 got this call
0:19:34 saying, you
0:19:35 know, we
0:19:36 want you to
0:19:37 implement your
0:19:37 program in
0:19:39 15 sites.
0:19:40 If the
0:19:40 program was
0:19:41 successful at
0:19:42 one site, how
0:19:43 hard could it be
0:19:44 to make it work
0:19:44 at 15?
0:19:46 I went in
0:19:47 thinking that it
0:19:48 wouldn’t be that
0:19:50 hard because we
0:19:50 had good outcomes.
0:19:51 We showed that we
0:19:52 could save money.
0:19:55 And yet, we
0:19:55 were absolutely
0:19:56 not ready.
0:19:58 It wasn’t because
0:19:58 we didn’t have
0:19:59 enough data.
0:20:01 We had, at that
0:20:02 point, plenty of
0:20:02 data.
0:20:04 But we didn’t
0:20:05 have the know-how
0:20:06 of how to put
0:20:07 this thing down
0:20:08 in the real
0:20:08 world.
0:20:10 And it blew up.
0:20:11 One reason?
0:20:12 Systemic
0:20:13 complication.
0:20:15 The three
0:20:16 systems, child
0:20:17 welfare, juvenile
0:20:18 justice, and
0:20:19 mental health, all
0:20:20 put some money in
0:20:21 the pot to fund
0:20:22 this implementation.
0:20:24 I was completely
0:20:25 delighted.
0:20:25 I thought, oh,
0:20:26 this is going to
0:20:28 be great because
0:20:29 we have all the
0:20:30 relevant systems
0:20:31 buying into
0:20:31 this.
0:20:32 Well, what
0:20:34 happened was
0:20:35 when we tried
0:20:35 to implement,
0:20:37 we ran into
0:20:39 tremendous
0:20:40 barriers because
0:20:42 if we satisfied
0:20:43 the policies
0:20:44 and procedures
0:20:44 of one
0:20:46 system, we
0:20:46 were at
0:20:47 odds with
0:20:47 the policies
0:20:48 and procedures
0:20:48 in the
0:20:49 other system.
0:20:51 Patti
0:20:52 Chamberlain had
0:20:52 run up against
0:20:53 something that
0:20:54 Dana Susskind
0:20:54 had come to
0:20:55 see as an
0:20:56 inherent disconnect
0:20:57 when you try
0:20:58 to scale up
0:20:58 a research
0:20:59 finding.
0:20:59 There’s
0:21:00 obviously the
0:21:00 implementation,
0:21:02 everybody focusing
0:21:02 on adherence,
0:21:03 but there’s
0:21:04 also sort of
0:21:05 the infrastructure
0:21:06 delivery mechanism,
0:21:07 which I think
0:21:08 is an issue,
0:21:09 whether it’s
0:21:09 government or
0:21:10 health care,
0:21:11 that they’re
0:21:12 just not
0:21:12 set up for
0:21:13 interventions,
0:21:14 which are
0:21:14 sort of like
0:21:15 innovations.
0:21:16 So you’ve got
0:21:16 these researchers
0:21:17 who think of
0:21:18 themselves as
0:21:20 scientific entrepreneurs
0:21:21 developing the
0:21:22 next best thing,
0:21:24 thinking you build
0:21:25 it and they
0:21:25 will come,
0:21:26 and then you’ve
0:21:27 got organizations
0:21:28 that are sort of
0:21:29 built for
0:21:30 efficiency rather
0:21:30 than effectiveness
0:21:31 that can’t
0:21:32 uptake it.
0:21:33 If only there
0:21:34 were another
0:21:34 science,
0:21:35 a science to
0:21:36 help these
0:21:37 scientific
0:21:38 entrepreneurs
0:21:39 and institutions
0:21:40 come together
0:21:40 to implement
0:21:41 this new
0:21:41 research.
0:21:43 Maybe something
0:21:43 that could
0:21:44 be called
0:21:45 Implementation
0:21:45 science.
0:21:46 Implementation
0:21:46 science.
0:21:47 Implementation
0:21:48 science.
0:21:48 Implementation
0:21:49 science.
0:21:50 Okay, let’s
0:21:51 define
0:21:51 implementation
0:21:52 science.
0:21:53 It’s the
0:21:54 study of how
0:21:55 programs get
0:21:56 implemented into
0:21:57 practice and
0:21:58 how the quality
0:21:59 of that
0:22:00 implementation may
0:22:01 affect how well
0:22:01 that program
0:22:02 works or
0:22:03 doesn’t work.
0:22:04 That is
0:22:05 Lauren Suplee.
0:22:06 When we spoke
0:22:06 with her,
0:22:07 Suplee was the
0:22:07 deputy chief
0:22:08 operating officer
0:22:09 of a nonprofit
0:22:10 called Child
0:22:11 Trends, which
0:22:12 promotes evidence
0:22:13 based policy to
0:22:14 improve children’s
0:22:14 lives.
0:22:16 This whole science
0:22:17 is maybe 15
0:22:18 years old.
0:22:19 It’s really
0:22:21 coming out of
0:22:22 this movement of
0:22:23 evidence based
0:22:24 policy and
0:22:24 programs where
0:22:25 people said,
0:22:26 well, we have
0:22:27 this program.
0:22:28 It appears to
0:22:28 change important
0:22:29 outcomes.
0:22:30 Let’s just put
0:22:31 it out there
0:22:31 and then we
0:22:32 quickly realized
0:22:33 that there are
0:22:34 a lot of
0:22:35 issues and
0:22:36 actually that
0:22:36 put it out
0:22:37 there is far
0:22:38 more complicated.
0:22:39 A lot of the
0:22:39 evidence based
0:22:40 programs we have
0:22:41 were designed
0:22:42 by academic
0:22:44 researchers who
0:22:45 were testing it
0:22:46 in the maybe
0:22:47 more ideal
0:22:48 circumstances that
0:22:49 they had available
0:22:50 to them that
0:22:50 might have
0:22:51 included graduate
0:22:52 students.
0:22:53 It might have
0:22:53 been a school
0:22:54 district that
0:22:55 was very amenable
0:22:56 to research.
0:22:57 And then you
0:22:57 take the results
0:22:58 of that, and
0:22:59 trying to put
0:22:59 that into
0:23:01 another location
0:23:01 is where the
0:23:03 challenge happened.
0:23:06 So coming up
0:23:07 after the break,
0:23:09 can implementation
0:23:10 science really
0:23:10 help?
0:23:11 You know, I want
0:23:12 policy science not
0:23:14 to be an oxymoron.
0:23:15 You’re listening to
0:23:16 Freakonomics Radio.
0:23:17 I’m Stephen Dubner.
0:23:17 We will be right
0:23:17 back.
0:23:33 What randomized
0:23:34 controlled trials
0:23:35 tell us about
0:23:35 an intervention
0:23:38 is what that
0:23:39 actual intervention
0:23:41 does in a
0:23:42 particular population
0:23:44 in a particular
0:23:44 context.
0:23:46 It doesn’t mean
0:23:46 that it’s
0:23:47 generalizable.
0:23:48 That, again,
0:23:49 is Dana Susskind
0:23:50 from the University
0:23:51 of Chicago.
0:23:52 But you have to
0:23:53 continue the science
0:23:54 so you can understand
0:23:55 how it’s going to
0:23:55 work in a different
0:23:56 place, in a different
0:23:57 context, in a different
0:23:59 population and have
0:23:59 the same effect.
0:24:00 And that’s part of
0:24:02 the scaling science.
0:24:03 The scaling science.
0:24:05 That is what Susskind
0:24:06 and her economist
0:24:07 collaborator John List,
0:24:08 who’s also her
0:24:09 husband, and other
0:24:10 researchers have been
0:24:11 working on.
0:24:12 They’ve been
0:24:13 systematically examining
0:24:14 why interventions
0:24:15 that work well in
0:24:16 experimental or
0:24:17 research settings
0:24:18 often fail to
0:24:19 scale up.
0:24:20 You can see why
0:24:21 this is an
0:24:22 important puzzle
0:24:22 to solve.
0:24:24 Scaling up a new
0:24:25 intervention, like
0:24:26 a medical procedure
0:24:27 or a teaching
0:24:28 method, has the
0:24:29 potential to help
0:24:31 thousands, millions,
0:24:32 maybe billions of
0:24:32 people.
0:24:34 But what if it
0:24:36 simply fails at
0:24:36 scale?
0:24:37 What if it ends up
0:24:39 costing way more
0:24:40 than anticipated or
0:24:41 creates serious
0:24:43 unintended consequences?
0:24:44 That’ll make it that
0:24:45 much harder for the
0:24:46 next set of
0:24:46 researchers to
0:24:47 persuade the next
0:24:48 set of policymakers
0:24:49 to listen to them.
0:24:50 So List and
0:24:51 Susskind have been
0:24:52 looking at scaling
0:24:53 failures from the
0:24:54 past and trying to
0:24:55 categorize what went
0:24:56 wrong.
0:25:00 You can kind of
0:25:01 put what we’ve
0:25:03 learned into three
0:25:04 general buckets that
0:25:05 seem to encompass the
0:25:06 failures.
0:25:08 Bucket number one is
0:25:09 that the evidence was
0:25:10 just not there to
0:25:12 justify scaling the
0:25:12 program in the first
0:25:13 place.
0:25:15 The Department of
0:25:16 Education did this
0:25:18 broad survey on
0:25:20 prevention programs
0:25:20 attempting to
0:25:22 attenuate youth
0:25:24 substance and crime
0:25:25 and aspects like
0:25:25 that.
0:25:26 And what they
0:25:28 found is that only
0:25:29 8% of those
0:25:31 programs were
0:25:32 actually backed by
0:25:33 research evidence.
0:25:35 Many programs that
0:25:37 we put in place
0:25:39 really don’t have
0:25:41 the research findings
0:25:42 to support them.
0:25:43 And this is what a
0:25:44 scientist would call a
0:25:44 false positive.
0:25:46 So are we talking
0:25:47 about bad research?
0:25:48 Are we talking
0:25:49 about cherry picking?
0:25:49 Are we talking
0:25:50 about publication
0:25:51 bias?
0:25:52 So here we’re
0:25:52 talking about none
0:25:53 of those.
0:25:54 We’re talking about
0:25:55 a small-scale
0:25:57 research finding
0:25:58 that was the
0:25:59 truth in that
0:26:00 finding.
0:26:02 But because of the
0:26:03 mechanics of
0:26:04 statistical inference,
0:26:06 it just won’t
0:26:06 be right.
0:26:08 What you were
0:26:09 getting into is
0:26:11 what I would call
0:26:12 the second bucket
0:26:13 of why things
0:26:14 fail, and that’s
0:26:15 what I call the
0:26:16 wrong people were
0:26:17 studied.
0:26:18 You know, these
0:26:19 are studies that
0:26:21 have a particular
0:26:22 sample of people
0:26:25 that shows really
0:26:26 large program
0:26:27 effect sizes,
0:26:28 but when your
0:26:30 program goes
0:26:31 to general
0:26:32 populations,
0:26:33 that effect
0:26:34 disappears.
0:26:34 So essentially,
0:26:35 we were looking
0:26:36 at the wrong
0:26:37 people and scaling
0:26:38 to the wrong
0:26:38 people.
0:26:39 And when you
0:26:39 say the wrong
0:26:40 people, the
0:26:41 people that are
0:26:41 being studied
0:27:42 then are, what?
0:26:45 They are the
0:26:46 people who
0:26:47 are the
0:26:48 fraction or
0:26:49 the group of
0:26:50 people who
0:26:51 receive the
0:26:52 largest program
0:26:53 benefits.
0:26:54 So I think
0:26:55 of some of the
0:26:55 experiments that
0:26:56 are done on
0:26:57 college campuses,
0:26:57 right, where
0:26:58 there’s a
0:26:59 professor who’s
0:27:00 looking to find
0:27:00 out something
0:27:01 about, let’s
0:27:02 say, altruism,
0:27:04 and the
0:27:04 experimental
0:27:05 setting is a
0:27:06 classroom where
0:27:07 20 college
0:27:07 students will
0:27:08 come in, and
0:27:08 they’re a pretty
0:27:10 homogeneous population,
0:27:11 they’re pretty
0:27:12 motivated, maybe
0:27:12 they’re very
0:27:13 disciplined, and
0:27:14 that may not
0:27:15 represent what
0:27:16 the world
0:27:16 actually is.
0:27:17 Is that what
0:27:18 you’re talking
0:27:18 about?
0:27:19 That’s one
0:27:20 piece of it.
0:27:22 Another piece
0:27:23 is who will
0:27:24 sign their
0:27:25 kids up for
0:27:26 Head Start or
0:27:27 for a program
0:27:29 in a neighborhood
0:27:30 that advances
0:27:31 the reading
0:27:32 skills of the
0:27:32 child?
0:27:33 Who’s going
0:27:33 to be first
0:27:34 in line?
0:27:35 The people who
0:27:36 really care about
0:27:37 education and
0:27:38 the people who
0:27:39 think their
0:27:40 child will
0:27:40 receive the
0:27:41 most benefits
0:27:41 from the
0:27:42 program.
0:27:43 Now, another
0:27:44 way to get
0:27:44 it is sort
0:27:45 of along the
0:27:45 lines that
0:27:46 you talked
0:27:46 about.
0:27:46 It could
0:27:47 be the
0:27:49 researcher knows
0:27:50 something about
0:27:51 the population
0:27:52 that other
0:27:53 people don’t
0:27:53 know.
0:27:55 Like, I want
0:27:55 to give my
0:27:56 program its
0:27:57 best shot of
0:27:58 working.
0:27:59 Okay, and
0:28:00 what’s in your
0:28:00 third bucket
0:28:02 of scaling
0:28:02 failures?
0:28:03 The third
0:28:04 bucket is
0:28:05 something that
0:28:06 we call
0:28:08 the wrong
0:28:09 situation was
0:28:09 used.
0:28:11 And what I
0:28:11 mean by that
0:28:12 is that certain
0:28:13 aspects of the
0:28:15 situation change
0:28:16 when you go
0:28:17 from the
0:28:17 original research
0:28:18 to the scaled
0:28:19 research program.
0:28:21 We don’t
0:28:23 understand what
0:28:24 properties of
0:28:25 the situation
0:28:26 or features of
0:28:27 the environment
0:28:28 will matter.
0:29:30 There is a
0:28:31 really large
0:28:32 group of
0:28:33 implementation
0:28:35 scientists who
0:28:35 have explored
0:28:36 this question
0:28:37 for years.
0:28:39 Now, what
0:28:40 they emphasize
0:28:41 and focus on
0:28:42 is something
0:28:43 called voltage
0:28:44 drop.
0:28:46 And voltage
0:28:47 drop essentially
0:28:48 means I
0:28:49 found a really
0:28:51 good result in
0:28:52 my original
0:28:52 research study,
0:28:53 but then when
0:28:54 they do it at
0:28:55 scale, that
0:28:57 voltage drop
0:28:58 ends up being,
0:28:58 for example,
0:29:00 a tenth of
0:29:00 the original
0:29:01 result or a
0:29:02 quarter of the
0:29:03 original result.
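[To put that in symbols (the notation is ours, not from the episode): if $\hat{\tau}_{\text{original}}$ is the effect measured in the original study and $\hat{\tau}_{\text{scale}}$ the effect delivered at scale, the examples above correspond to

$$\hat{\tau}_{\text{scale}} = k \, \hat{\tau}_{\text{original}}, \quad k = 0.1 \text{ or } 0.25,$$

with the voltage drop being the lost share, $1 - k$, i.e. 90% or 75% of the original effect gone.]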
0:29:05 An example of
0:29:06 this is when
0:29:07 you look at
0:29:07 Head Start’s
0:29:08 home visiting
0:29:10 services.
0:29:11 This is an
0:29:11 early childhood
0:29:13 intervention that
0:29:14 found huge
0:29:16 improvements in
0:29:17 both child and
0:29:17 parent outcomes
0:29:18 in the original
0:29:20 study, except
0:29:20 when they tried
0:29:21 to scale that
0:29:22 up and do
0:29:24 home visits at
0:29:24 a much larger
0:29:26 scale, what
0:29:27 they found is
0:29:28 that, for
0:29:29 example, home
0:29:30 visits for
0:29:31 at-risk families
0:29:32 involved a lot
0:29:33 more distractions
0:29:34 in the house
0:29:35 and there was
0:29:36 less time on
0:29:37 child-focused
0:29:38 activities.
0:29:38 So this is
0:29:40 sort of the
0:29:41 wrong dosage, or the wrong
0:29:42 program, given
0:29:43 at scale.
0:29:46 There are many
0:29:46 factors that
0:29:47 contribute to
0:29:48 this voltage
0:29:49 drop, including
0:29:50 the admirably
0:29:51 high standards
0:29:52 set by the
0:29:53 original researchers.
0:29:54 When the
0:29:55 researcher starts
0:29:56 his or her
0:29:57 experiment, the
0:29:59 inclination is
0:29:59 I’m going to
0:30:00 get the best
0:30:01 tutors in the
0:30:02 world, so I’m
0:30:02 going to be able
0:30:03 to show how
0:30:03 effective my
0:30:04 intervention is.
0:30:05 Dana Susskind
0:30:06 again.
0:30:07 You only needed
0:30:08 10 math tutors
0:30:09 and you happened
0:30:10 to get the
0:30:10 PhD students
0:30:11 from the
0:30:11 University of
0:30:12 Chicago, and
0:30:13 then what
0:30:14 happens is you
0:30:14 show this
0:30:15 tremendous effect
0:30:16 size, and in
0:30:17 the scaling, all
0:30:18 of a sudden, you
0:30:19 need a hundred or
0:30:21 a thousand, and you
0:30:22 no longer have that
0:30:23 access to those
0:30:24 individuals, and you
0:30:26 go either down the
0:30:27 supply chain with
0:30:28 individuals who are
0:30:29 not quite as well
0:30:31 trained, or you end
0:30:32 up having to pay a
0:30:32 whole lot more
0:30:33 money to
0:30:34 maintain the
0:30:35 trained tutor
0:30:36 program, and one
0:30:37 way or the other,
0:30:39 either the impacts
0:30:40 of the intervention
0:30:42 go down, or your
0:30:43 costs go up
0:30:44 significantly.
0:30:46 Another problem in
0:30:46 this third bucket,
0:30:48 it’s a big bucket,
0:30:49 is when the person
0:30:50 who designed the
0:30:51 intervention and
0:30:52 masterminded the
0:30:53 initial trial can
0:30:54 no longer be so
0:30:55 involved once the
0:30:57 program scales up to
0:30:57 multiple locations.
0:30:59 Imagine if instead
0:31:00 of talking about an
0:31:01 educational or
0:31:02 medical program, we
0:31:03 were talking about
0:31:03 a successful
0:31:04 restaurant and the
0:31:05 original chef.
0:31:07 When you think about
0:31:08 the chef, if a
0:31:09 restaurant succeeds
0:31:11 because of the
0:31:13 magical work of
0:31:15 the chef, and you
0:31:16 think about scaling
0:31:18 that, if you can’t
0:31:20 scale the magic in
0:31:22 the chef, that’s not
0:31:22 scalable.
0:31:25 Now, if the magic is
0:31:26 because of the mix
0:31:28 of ingredients, and
0:31:29 the secret sauce, like
0:31:30 Domino’s, for
0:31:31 example, the secret
0:31:33 sauce or Papa John’s
0:31:35 is the actual
0:31:36 ingredients, then
0:31:37 that will be
0:31:38 scalable.
0:31:43 Now, if you are
0:31:43 the kind of pizza
0:31:44 eater who doesn’t
0:31:45 think Domino’s or
0:31:47 Papa John’s is good
0:31:49 pizza, well, welcome
0:31:50 to the scaling
0:31:50 dilemma.
0:31:52 Going big means you
0:31:53 have to be many
0:31:54 things to many
0:31:54 people.
0:31:56 Going big means you
0:31:57 will face a lot of
0:31:57 trade-offs.
0:31:59 Going big means you’ll
0:31:59 have a lot of people
0:32:01 asking you, do you
0:32:01 want this done
0:32:03 fast, or do you
0:32:03 want it done right?
0:32:05 Once you peer
0:32:07 inside these failure
0:32:08 buckets that List and
0:32:09 Susskind describe, it’s
0:32:11 not so surprising that
0:32:12 so many good ideas
0:32:13 fail to scale up.
0:32:15 So, what do they
0:32:16 propose that could
0:32:16 help?
0:32:18 Now, our proposal
0:32:20 is that we do not
0:32:22 believe that we
0:32:24 should scale a
0:32:27 program until you’re
0:32:29 95% certain the
0:32:30 result is true.
0:32:32 So, essentially, what
0:32:34 that means is we
0:32:35 need the original
0:32:37 research and then
0:32:40 three or four well-powered
0:32:42 independent replications
0:32:43 of the original
0:32:44 findings.
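[A back-of-the-envelope illustration, with assumed numbers, of why a handful of well-powered independent replications can get you to 95% certainty. Suppose a proposed program has a skeptical prior probability $\pi = 0.1$ of truly working, and each study has power $1 - \beta = 0.8$ at significance level $\alpha = 0.05$. If $n$ independent studies all come back significant, Bayes’ rule gives

$$P(\text{true} \mid n \text{ significant results}) = \frac{\pi \,(1-\beta)^n}{\pi \,(1-\beta)^n + (1-\pi)\,\alpha^n},$$

which works out to about 0.64 after one study, 0.97 after two, and 0.998 after three. So an original finding plus three or four replications clears the 95% bar comfortably, even starting from a skeptical prior.]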
0:32:45 And how often is that
0:32:47 already happening in the
0:32:48 real world of, let’s
0:32:50 say, education reform
0:32:50 research?
0:32:52 I can’t name one.
0:32:52 Wow.
0:32:53 How about in the
0:32:55 realm of medical
0:32:56 compliance research?
0:32:58 My intuition is that
0:33:00 they’re probably not far
0:33:01 away from three or four
0:33:03 well-powered independent
0:33:03 replications.
0:33:07 In the hard sciences, in
0:33:08 many cases, you not only
0:33:10 have the original
0:33:13 research, but you have a
0:33:16 first replication also
0:33:17 published in science.
0:33:19 You know, the current
0:33:21 credibility crisis in
0:33:22 science is a serious
0:33:23 one: major
0:33:25 results are not
0:33:25 replicating.
0:33:28 The reason why is
0:33:29 because we weren’t
0:33:30 serious about
0:33:31 replication in the
0:33:31 first place.
0:33:33 So, this sort of puts
0:33:34 the onus on
0:33:35 policymakers and
0:33:37 funding agencies in
0:33:37 a sense of saying,
0:33:39 we need to change the
0:33:39 equilibrium.
0:33:42 So, that
0:33:43 suggests that
0:33:45 policymakers or
0:33:46 decision makers, they
0:33:47 are being, what,
0:33:50 overeager, premature in
0:33:52 accepting a finding that
0:33:53 looks good to them and
0:33:54 want to rush it into
0:33:54 play?
0:33:56 Or is it that the
0:33:57 researchers are
0:33:59 overconfident themselves
0:34:00 or maybe pushing this
0:34:01 research too hard?
0:34:02 Where is this failure
0:34:03 really happening?
0:34:04 Well, I think it’s sort
0:34:05 of a mix.
0:34:07 I think it’s fair to
0:34:08 say that some
0:34:10 policymakers are out
0:34:11 looking for evidence
0:34:13 to base their
0:34:14 preferred program on.
0:34:15 What this will do is
0:34:16 slow that down.
0:34:18 If you have a
0:34:19 pet project that
0:34:19 you want to get
0:34:21 through, fund the
0:34:22 replications and
0:34:23 let’s make sure the
0:34:24 science is correct.
0:34:25 We think we should
0:34:26 actually be rewarding
0:34:28 scholars for
0:34:29 attempting to
0:34:29 replicate.
0:34:31 You know, right now
0:34:33 in my community, if I
0:34:33 try to replicate
0:34:35 someone else, guess
0:34:35 what I’ve just
0:34:36 made?
0:34:38 I’ve just made a
0:34:39 mortal enemy for
0:34:39 life.
0:34:41 If you find a
0:34:42 publishable result,
0:34:43 what result is that?
0:34:43 You’re refuting
0:34:45 previous research.
0:34:47 Now I’ve doubled
0:34:48 down on my
0:34:48 enemy.
0:34:50 So that’s like a
0:34:52 first step in
0:34:53 terms of rewarding
0:34:55 scholars who are
0:34:55 attempting to
0:34:56 replicate.
0:34:58 Now, to
0:34:59 complement that, I
0:34:59 think we should
0:35:01 also reward
0:35:02 scholars who
0:35:02 have produced
0:35:04 results that are
0:35:05 independently
0:35:06 replicated.
0:35:07 You know, and I’m
0:35:08 talking about tying
0:35:09 tenure decisions,
0:35:11 grant money, and the
0:35:13 like to people who
0:35:14 have given us
0:35:15 credible research
0:35:16 that replicates.
0:35:20 After the break,
0:35:21 how can researchers
0:35:22 make sure that the
0:35:23 science they are
0:35:24 replicating works
0:35:25 when it scales up?
0:35:40 Before the break, we
0:35:41 were talking with the
0:35:41 University of Chicago
0:35:43 economist John List
0:35:44 about the challenges
0:35:45 of turning good
0:35:46 research into good
0:35:46 policy.
0:35:48 One challenge is
0:35:49 making sure that the
0:35:50 research findings are
0:35:52 in fact robust enough
0:35:53 to scale up.
0:35:54 Say I’m doing an
0:35:55 experiment in Chicago
0:35:57 Heights on early
0:35:59 childhood, and I find
0:36:01 a great result, how
0:36:03 confident should I be
0:36:05 that when we take that
0:36:06 result to all of
0:36:07 Illinois or all of the
0:36:09 Midwest or all of
0:36:10 America, is that
0:36:12 result still going
0:36:14 to find that
0:36:15 important benefit-cost
0:36:17 profile that
0:36:18 we found in
0:36:18 Chicago Heights?
0:36:20 We need to know
0:36:21 what is the magic
0:36:22 sauce.
0:36:23 Was it the 20
0:36:25 teachers you hired
0:36:26 down in Chicago
0:36:28 Heights where if we
0:36:29 go nationally, we
0:36:30 need 20,000?
0:36:33 So it should
0:36:34 behoove me as an
0:36:36 original researcher
0:36:37 to say,
0:36:38 look, if this
0:36:40 scales up, we’re
0:36:41 going to need many
0:36:42 more teachers.
0:36:44 I know teachers are
0:36:45 an important input.
0:36:47 Is the average
0:36:48 teacher in the
0:36:51 20,000 the same
0:36:52 as the average
0:36:53 teacher in the
0:36:53 20?
0:36:55 This is the dreaded
0:36:56 voltage drop that
0:36:57 implementation
0:36:58 scientists talk
0:36:58 about.
0:36:59 And the
0:36:59 implementation
0:37:01 scientists have
0:37:02 focused on
0:37:04 fidelity as a core
0:37:05 component behind
0:37:06 the voltage
0:37:06 drop.
0:37:08 Fidelity,
0:37:09 meaning that the
0:37:10 scaled-up program
0:37:10 reflects the
0:37:11 integrity of the
0:37:12 original program.
0:37:13 Measures of
0:37:14 fidelity.
0:37:15 That’s a really
0:37:16 critical part of
0:37:17 the implementation
0:37:18 process.
0:37:19 That, again, is
0:37:20 Patti Chamberlain,
0:37:21 founder of
0:37:21 Treatment Foster
0:37:22 Care Oregon.
0:37:23 You’ve got to be
0:37:24 able to measure,
0:37:26 is this thing
0:37:27 that’s down in the
0:37:28 real world the
0:37:29 same, you know,
0:37:30 does it have the
0:37:31 same components
0:37:32 that produce the
0:37:33 outcomes in the
0:37:33 RCTs?
0:37:35 Remember, it was
0:37:35 Chamberlain’s good
0:37:37 outcomes with young
0:37:37 people in foster
0:37:38 care that made
0:37:39 federal officials want
0:37:40 to scale up her
0:37:41 program in the first
0:37:41 place.
0:37:43 We got this call
0:37:44 saying, we want you
0:37:45 to implement your
0:37:47 program in 15
0:37:48 sites.
0:37:49 She found the
0:37:50 scaling up initially
0:37:51 very challenging.
0:37:52 It wasn’t the
0:37:54 kumbaya moment that
0:37:54 we thought it was
0:37:55 going to be.
0:37:56 But in time,
0:37:57 Treatment Foster
0:37:58 Care Oregon became
0:37:59 a very well-regarded
0:38:00 program.
0:38:00 It’s been around for
0:38:02 roughly 30 years
0:38:03 now, and the
0:38:04 model has spread
0:38:05 well beyond Oregon.
0:38:06 One key to this
0:38:07 success has been
0:38:08 developing fidelity
0:38:09 standards.
0:38:10 So the way that we
0:38:11 do it is we have
0:38:12 people upload all of
0:38:13 their sessions onto
0:38:14 a HIPAA secure
0:38:15 website, and then
0:38:16 we code those.
0:38:18 And if they’re not
0:38:18 meeting the fidelity
0:38:20 standards, then we
0:38:21 offer a fidelity
0:38:22 recovery plan.
0:38:23 You know, we
0:38:24 haven’t had to drop
0:38:24 a site, but we
0:38:26 have had to have
0:38:27 some of the people
0:38:28 in the site
0:38:30 retrained or not
0:38:31 continue.
0:38:32 Being able to
0:38:33 measure fidelity
0:38:34 well from afar
0:38:35 provides another
0:38:37 benefit to scaling
0:38:37 up.
0:38:38 It allows the
0:38:39 people who
0:38:39 developed the
0:38:40 original program
0:38:41 to ultimately
0:38:43 step back, so
0:38:44 they don’t become
0:38:45 a bottleneck, which
0:38:45 is a common
0:38:46 scaling problem.
0:38:47 There can be
0:38:48 sort of an
0:38:49 orderly process
0:38:50 whereby you
0:38:52 step back in
0:38:53 increments as
0:38:54 people become
0:38:55 more and more
0:38:56 competent doing
0:38:56 what they’re
0:38:57 doing.
0:38:57 And that’s
0:38:58 what you want
0:38:58 because you
0:38:59 don’t want to
0:38:59 have this tied to
0:39:00 the developer
0:39:00 forever.
0:39:01 Otherwise, you
0:39:02 can’t get any
0:39:03 kind of reasonable
0:39:03 reach.
0:39:05 That said, you
0:39:06 also need to
0:39:06 have some
0:39:07 humility.
0:39:08 When you’re
0:39:08 scaling up, you
0:39:09 shouldn’t assume
0:39:10 your original
0:39:11 program was
0:39:12 perfect, that it
0:39:13 won’t need
0:39:14 adjustment, and
0:39:15 you need to be
0:39:15 willing to make
0:39:16 adjustments.
0:39:18 For example, we
0:39:19 recognized that
0:39:20 when we were in
0:39:21 real-world
0:39:23 communities, kids
0:39:23 needed something
0:39:24 that wasn’t
0:39:25 therapy, per se.
0:39:26 They needed
0:39:28 skills because
0:39:29 the kids had
0:39:30 often been
0:39:31 excluded from
0:39:32 normal socializing
0:39:33 sort of things
0:39:34 like sports
0:39:35 teams and
0:39:36 clubs.
0:39:37 And so we
0:39:38 needed what
0:39:39 we call a
0:39:39 skills coach
0:39:41 to help
0:39:42 those kids
0:39:42 learn the
0:39:43 moves that
0:39:44 they needed
0:39:44 to be able
0:39:45 to participate
0:39:47 in these
0:39:47 pro-social
0:39:49 activities that
0:39:49 are normal
0:39:50 kind of things.
0:39:51 So you have
0:39:51 research, you
0:39:52 have a theory,
0:39:52 and then you
0:39:53 have the
0:39:54 implementation, and
0:39:54 that feeds
0:39:55 into more
0:39:55 research, more
0:39:56 theory, more
0:39:56 implementation.
0:40:01 Look, everybody’s
0:40:01 motivation at the
0:40:02 end of the day is
0:40:03 about trying to
0:40:04 do good for
0:40:05 the people they
0:40:05 serve.
0:40:07 Dana Susskind
0:40:07 again.
0:40:08 There are many
0:40:09 children out there,
0:40:10 and there are a
0:40:11 lot of injustices,
0:40:12 so we need to
0:40:13 move, but I
0:40:14 don’t know.
0:40:14 The science is
0:40:15 slower than
0:40:16 you’d like.
0:40:17 People have
0:40:18 wanted things
0:40:19 before I thought
0:40:19 they were ready,
0:40:21 and finding a
0:40:22 way to deal
0:40:23 with that dance
0:40:24 of people wanting
0:40:26 information, but
0:40:27 also wanting to
0:40:28 continue to build
0:40:28 the evidence.
0:40:29 I think we can
0:40:30 figure out how
0:40:31 to do it.
0:40:31 I think that’s
0:40:32 exactly right.
0:40:33 And John List
0:40:34 again.
0:40:35 I think too
0:40:36 many times,
0:40:38 whether it’s
0:40:39 in public
0:40:40 policy, whether
0:40:41 it’s a for-profit
0:40:43 or a not-for-profit,
0:40:44 we tend to
0:40:45 only focus on
0:40:46 one side of
0:40:47 the market when
0:40:47 we have
0:40:49 problems, and
0:40:50 you really need
0:40:51 to take account
0:40:52 of both sides
0:40:52 because your
0:40:53 optimal solutions,
0:40:54 the best
0:40:55 solutions, are
0:40:55 only going to
0:40:56 come when you
0:40:56 look at both
0:40:57 sides of the
0:40:57 market.
0:40:58 I’m probably
0:40:59 getting this
0:40:59 wrong, or at
0:41:00 least being way
0:41:00 too reductive,
0:41:01 but to me it
0:41:01 sounds like the
0:41:03 chief barrier to
0:41:04 scaling up programs
0:41:05 to help people
0:41:07 is people, that
0:41:08 people are the
0:41:08 problem.
0:41:10 Yeah, so I do
0:41:11 think inherently
0:41:12 it is about
0:41:13 people.
0:41:15 That said, this
0:41:17 is not a fatal
0:41:20 flaw that causes
0:41:21 us to throw up
0:41:22 our arms and
0:41:23 say, well, this
0:41:24 isn’t physics,
0:41:24 this isn’t
0:41:25 chemistry, we
0:41:26 have to deal
0:41:27 with people, so
0:41:28 we can’t use
0:41:28 science.
0:41:29 I think that’s
0:41:30 wrong, because
0:41:30 there are some
0:41:32 very, very neat
0:41:34 advantages of
0:41:35 scaling.
0:41:36 Think about on
0:41:37 the cost side,
0:41:38 economists always
0:41:39 talk about, you
0:41:39 know, when
0:41:40 things get bigger
0:41:42 and bigger, guess
0:41:42 what happens?
0:41:44 The per-unit cost
0:41:45 goes down.
0:41:46 It’s called
0:41:47 increasing returns
0:41:48 to scale.
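[A minimal worked example of the cost-side point, with assumed numbers. If a program has a fixed cost $F$ (curriculum development, evaluation infrastructure) and a constant marginal cost $c$ per participant, the average cost of serving $q$ participants is

$$AC(q) = \frac{F}{q} + c,$$

which falls toward $c$ as $q$ grows. With $F = \$1{,}000{,}000$ and $c = \$50$, serving 1,000 participants costs \$1,050 each, while serving 100,000 costs \$60 each.]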
0:41:49 The problem that
0:41:50 kind of we’re
0:41:51 thinking about is
0:41:52 let’s make sure
0:41:53 that those
0:41:54 policymakers who
0:41:55 really want to
0:41:56 do the right
0:41:57 thing and use
0:41:58 science, let’s
0:41:59 make sure that
0:42:00 they have the
0:42:01 right programs to
0:42:01 implement.
0:42:03 So one of your
0:42:04 papers includes
0:42:05 this quote from
0:42:06 Bill Clinton, or
0:42:06 at least something
0:42:07 that Clinton may
0:42:07 have said, which
0:42:08 is essentially
0:42:09 that nearly
0:42:10 every problem
0:42:11 has been solved
0:42:12 by someone
0:42:13 somewhere, but
0:42:13 we just can’t
0:42:14 seem to replicate
0:42:15 those solutions
0:42:16 anywhere else.
0:42:18 So what makes
0:42:19 you think that
0:42:20 you’ve got the
0:42:21 keys to success
0:42:21 here where
0:42:22 others may not
0:42:23 have been able
0:42:23 to do it?
0:42:25 You know, I
0:42:26 view what we’ve
0:42:27 done is put
0:42:29 forward a set
0:42:29 of modest
0:42:31 proposals as
0:42:32 only a start
0:42:34 to tackle what
0:42:35 I think is the
0:42:36 most vexing
0:42:37 problem in
0:42:38 evidence-based
0:42:39 policymaking,
0:42:39 which is
0:42:39 scaling.
0:42:40 I think we’re
0:42:41 just taking
0:42:42 some small
0:42:44 steps theoretically
0:42:45 and empirically,
0:42:46 but I do think
0:42:47 that this first
0:42:49 set of steps
0:42:49 are important
0:42:51 because if
0:42:52 you go in the
0:42:53 right direction,
0:42:54 what I’ve
0:42:54 learned is that
0:42:55 literature will
0:42:56 follow that
0:42:56 direction.
0:42:58 If you go in
0:42:58 the wrong
0:42:58 direction,
0:43:00 sometimes the
0:43:01 literature follows
0:43:02 that wrong
0:43:02 direction for
0:43:03 several years,
0:43:04 and we
0:43:05 really don’t
0:43:05 have the
0:43:06 time.
0:43:07 Right now,
0:43:08 the opportunity
0:43:09 cost of time
0:43:10 is very high.
0:43:13 You know, in
0:43:14 the end, I
0:43:14 want policy
0:43:15 science not
0:43:15 to be an
0:43:16 oxymoron,
0:43:17 and I think
0:43:18 that’s what this
0:43:19 research agenda
0:43:19 is about.
0:43:21 The way that I
0:43:21 would view it
0:43:23 is that the
0:43:24 world is
0:43:25 imperfect because
0:43:26 we haven’t
0:43:27 used science
0:43:28 in policymaking,
0:43:30 and if we
0:43:31 add science
0:43:31 to it,
0:43:33 we have a
0:43:34 chance to
0:43:34 make an
0:43:35 imperfect world
0:43:36 a little bit
0:43:37 more perfect.
0:43:42 If you want
0:43:42 to read the
0:43:43 papers that
0:43:44 John List and
0:43:45 Dana Susskind
0:43:45 and their
0:43:46 collaborators
0:43:46 have been
0:43:47 working on,
0:43:47 you will find
0:43:48 links on
0:43:49 Freakonomics.com
0:43:50 as well as
0:43:51 links to
0:43:51 Patty Chamberlain’s
0:43:52 work with
0:43:53 Treatment Foster
0:43:53 Care Oregon
0:43:55 and much more,
0:43:56 including, as
0:43:56 always, a
0:43:57 complete transcript
0:43:58 of this episode.
0:43:59 And we will
0:44:00 be back soon
0:44:00 with another
0:44:01 new episode
0:44:02 of Freakonomics
0:44:03 Radio.
0:44:03 Until then,
0:44:04 take care of
0:44:04 yourself.
0:44:05 And if you
0:44:06 can, someone
0:44:07 else, too.
0:44:09 Freakonomics
0:44:09 Radio is produced
0:44:10 by Stitcher
0:44:11 and Renbud
0:44:11 Radio.
0:44:12 You can find
0:44:13 our entire
0:44:14 archive on
0:44:14 any podcast
0:44:15 app, also
0:44:17 at Freakonomics.com
0:44:18 where we publish
0:44:19 transcripts and
0:44:19 show notes.
0:44:21 This episode was
0:44:21 produced by
0:44:22 Matt Hickey
0:44:23 with an update
0:44:24 by Augusta
0:44:24 Chapman.
0:44:25 The Freakonomics
0:44:26 Radio network
0:44:27 staff also includes
0:44:28 Alina Kulman,
0:44:28 Dalvin Aboagye,
0:44:30 Eleanor Osborn,
0:44:31 Ellen Frankman,
0:44:31 Elsa Hernandez,
0:44:32 Gabriel Roth,
0:44:33 Greg Rippin,
0:44:34 Jasmin Klinger,
0:44:35 Jeremy Johnston,
0:44:35 Jon Schnaars,
0:44:36 Morgan Levey,
0:44:37 Neal Carruth,
0:44:38 Sarah Lilley,
0:44:39 Theo Jacobs,
0:44:40 and Zack Lapinsky.
0:44:41 Our theme song
0:44:42 is Mr. Fortune
0:44:43 by the Hitchhikers
0:44:43 and our composer
0:44:45 is Luis Guerra.
0:44:46 As always,
0:44:47 thanks for listening.
0:44:56 So you want to
0:44:56 talk scaling?
0:44:57 Wow,
0:44:57 it’s a heavy
0:44:58 paper, right?
0:44:58 It’s great.
0:44:59 I thought it
0:45:00 was about
0:45:01 scaling fish
0:45:01 initially,
0:45:03 so that was
0:45:04 all my
0:45:05 background reading.
0:45:05 Yeah,
0:45:06 so I don’t
0:45:06 know anything
0:45:07 about what
0:45:07 we’re going
0:45:08 to talk about
0:45:08 today.
0:45:10 Neither do I,
0:45:10 so we can
0:45:11 just both
0:45:11 wing it.
0:45:17 The Freakonomics
0:45:18 Radio Network,
0:45:19 the hidden
0:45:19 side of
0:45:20 everything.
0:45:24 Stitcher.
0:04:01 You wouldn’t think you’d have an adherence issue with something like the cochlear implant.
0:04:03 It has such an obvious upside.
0:04:05 And yet…
0:04:09 When I put the internal device in, it stays there.
0:04:14 But it actually requires an external portion as well, sort of like a hearing aid.
0:04:20 And that is the part where you see issues related to adherence.
0:04:27 Just because I put in the internal part doesn’t mean that an individual or a child will be wearing the external part.
0:04:32 In one study, only half of the participants wore their device full-time.
0:04:40 I mean, we have figured out through randomized controlled trials how to understand causation, real impact at the small scale.
0:04:46 But the next step is understanding the science of how to use this science.
0:04:54 Because, you know, how you do it on the small scale in perfect conditions is very different than the messy real world.
0:04:56 And that is a very real issue.
0:05:01 Today on Freakonomics Radio, what to do about that very real issue.
0:05:06 Because you see the same thing not just in medicine, but in education and economic policy and elsewhere.
0:05:11 Solutions that look foolproof in the research stage are failing to scale up.
0:05:14 People said, let’s just put it out there.
0:05:17 And then we quickly realized that it’s far more complicated.
0:05:23 There might be something that you think would be great, but it’s never going to be able to be implemented in the real world.
0:05:27 We need to know, what is the magic sauce?
0:05:30 We’ll go in search of that magic sauce right after this.
0:05:54 This is Freakonomics Radio, the podcast that explores the hidden side of everything, with your host, Stephen Dubner.
0:06:10 John List is a pioneer in the relatively recent movement to give economic research more credibility in the real world.
0:06:17 If you turn back the clock to the 1990s, there was a credibility revolution in economics,
0:06:25 focusing on what data and modeling assumptions are necessary to go from correlation to causality.
0:06:29 List responded by running dozens and dozens of field experiments.
0:06:35 Now, my contribution in the credibility revolution was instead of working with secondary data,
0:06:44 I actually went to the world and used the world as my lab and generated new data to test theories and estimate program effects.
0:06:50 Okay, so you and others moved experiments out of the lab and into the real world.
0:06:57 But have you been able to successfully translate those experimental findings into, let’s say, good policy?
0:07:07 I think moving our work into policymaking circles and having a very strong impact has just not been there.
0:07:10 And I think one of the most important questions is,
0:07:15 how are we going to make that natural progression of field experiments within the social sciences
0:07:23 to more keenly talk to policymakers, the broader public, and actually the scientific community as a whole?
0:07:30 The way List sees it, academics like him work hard to come up with evidence for some intervention
0:07:33 that’s supposed to help alleviate poverty or improve education,
0:07:37 to help people quit smoking or take their blood pressure medicine.
0:07:43 The academic then writes up their paper for an incredibly impressive-looking academic journal,
0:07:45 impressive at least to fellow academics.
0:07:48 To the rest of us, it’s jargony and indecipherable.
0:07:54 But then, with paper in hand, the academic goes out proselytizing to policymakers.
0:07:55 He might say,
0:08:00 you politicians always talk about making evidence-based policy.
0:08:04 Well, here’s some new evidence for an effective and cost-effective way
0:08:08 of addressing that problem you say you care so much about.
0:08:10 And then the policymaker may say,
0:08:13 well, the last time we listened to an academic like you,
0:08:16 we did just what they told us, but it didn’t work.
0:08:19 And it cost three times what they said it would.
0:08:21 And we got hammered in the press.
0:08:23 And here’s the thing.
0:08:27 The politician and the academic may both be right.
0:08:31 John List has seen this from both sides now.
0:08:35 In a past life, I worked in the White House advising the president
0:08:39 on environmental and resource issues within economics.
0:08:42 This was in the early 2000s under George W. Bush.
0:08:47 A harsh lesson that I learned was you have to evaluate the effects of public policy
0:08:49 as opposed to its intentions.
0:08:52 Because the intentions are obviously good.
0:08:56 For instance, improving literacy for grade schoolers
0:08:59 or helping low-income high schoolers get to college.
0:09:04 When you step back and look at the amount of policies
0:09:08 that we put in place that don’t work,
0:09:10 it’s just a travesty.
0:09:14 List has firsthand experience with the failure to scale.
0:09:17 So down in Chicago Heights,
0:09:20 I ran a series of interventions.
0:09:23 And one of the more powerful interventions
0:09:25 was called the Parent Academy.
0:09:30 That was a program that brought in parents every few weeks.
0:09:34 And we taught them what are the best mechanisms and approaches
0:09:37 that they can use with their 3-, 4-, and 5-year-old children
0:09:41 to push both their cognitive skills
0:09:43 and their executive function skills.
0:09:45 Things like self-control.
0:09:49 What we found was within three to six months,
0:09:52 we can move a child in very short order
0:09:55 to have very strong cognitive test scores
0:09:58 and very strong executive function skills.
0:10:00 So, of course, we’re very optimistic
0:10:02 after getting this type of result,
0:10:03 and we want the whole world
0:10:06 to now do parent academies.
0:10:09 The UK approached us and said,
0:10:11 we want to roll it out across London
0:10:13 and the boroughs around London.
0:10:16 What we found is that it failed miserably.
0:10:19 It wasn’t that the program was bad.
0:10:21 It failed miserably
0:10:25 because no parents actually signed up.
0:10:28 So if you want your program to work
0:10:30 at higher levels,
0:10:32 you have to figure out
0:10:34 how to get the right people
0:10:37 and all the people, of course,
0:10:38 into the program.
0:10:40 If you had asked me to guess
0:10:42 all the ways that a program like that could fail,
0:10:44 it would have taken me a while
0:10:45 to guess that you simply
0:10:47 didn’t get parental uptake.
0:10:48 The main problem is
0:10:50 we just don’t understand
0:10:52 the science of scaling.
0:10:54 If you were to attach a noun
0:10:56 to what this is,
0:10:58 the scalability blank,
0:11:01 is it a problem?
0:11:02 Is it a dilemma?
0:11:03 Is it a crisis?
0:11:05 I do think it’s a crisis
0:11:06 in that
0:11:08 if we don’t take care of it
0:11:09 as scientists,
0:11:11 I think everything we do
0:11:13 can be undermined
0:11:15 in the eyes of the policymaker
0:11:15 and the broader public.
0:11:17 We don’t understand
0:11:20 how to use our own science
0:11:22 to make better policies.
0:11:25 So John List and Dana Susskind
0:11:27 and some other researchers
0:11:29 are on a quest to address
0:11:30 this scalability crisis.
0:11:31 They’ve been writing
0:11:32 a series of papers,
0:11:33 for instance,
0:11:35 The Science of Using Science
0:11:36 Towards an Understanding
0:11:39 of the Threats to Scaling Experiments.
0:11:40 A lot of their focus
0:11:42 is on early education,
0:11:43 since that is a particular
0:11:44 passion of Susskind’s.
0:11:46 I guess you could say
0:11:48 I’m a surgeon by day
0:11:50 and social scientist by night.
0:11:51 My clinical work
0:11:53 is about taking care
0:11:54 of one child at a time.
0:11:56 My research
0:11:57 really comes out
0:11:57 of the fact
0:11:59 that not all children
0:12:00 do as well as others
0:12:01 after surgery
0:12:03 and trying to figure out
0:12:04 the best ways
0:12:04 to allow
0:12:05 all my patients
0:12:06 and really
0:12:07 children born
0:12:09 into low-income backgrounds
0:12:10 to reach
0:12:12 their educational potentials.
0:12:13 It is kind of like
0:12:14 a superhero in reverse.
0:12:15 During the day,
0:12:16 you’re doing
0:12:17 the big dramatic stuff
0:12:18 and at night,
0:12:19 you’re going home
0:12:20 to analyze the data
0:12:20 and figure out
0:12:21 what’s happening.
0:12:22 I think that really
0:12:23 the hard part
0:12:25 is the night part.
0:12:27 I love doing surgery.
0:12:29 I adore my patients,
0:12:30 but it’s actually
0:12:32 not as hard
0:12:33 as many of the complex issues
0:12:34 in this world.
0:12:36 And was that a recognition
0:12:38 that some kids
0:12:39 after the surgery
0:12:41 sort of zoomed up
0:12:42 the education ladder
0:12:43 and others didn’t?
0:12:43 Yeah.
0:12:44 It’s not simply
0:12:46 about hearing loss.
0:12:47 It’s because language
0:12:47 is the food
0:12:48 for the developing brain.
0:12:49 Before surgery,
0:12:50 they all looked like
0:12:52 they’d have the same potential
0:12:53 to, as you say,
0:12:54 zoom up the educational ladder.
0:12:56 After surgery,
0:12:56 there were very
0:12:57 different outcomes.
0:12:59 And too often
0:13:00 that difference
0:13:00 fell along
0:13:01 socioeconomic lines.
0:13:03 That made me start
0:13:04 searching outside
0:13:05 the operating room
0:13:05 for understanding
0:13:06 why and what
0:13:07 I could do about it.
0:13:08 And it has taken me
0:13:09 on a journey.
0:13:11 So Dana and I met
0:13:12 back in 2012
0:13:15 and we were introduced
0:13:16 by a mutual friend
0:13:17 and we did the usual
0:13:19 ignore each other
0:13:19 for a few years
0:13:21 because we’re too busy.
0:13:24 And push came to shove.
0:13:25 Dana and I started
0:13:26 to work on
0:13:27 early childhood research.
0:13:29 And after that,
0:13:31 research turned to love.
0:13:34 I always joke
0:13:36 that I was wooed
0:13:37 with spreadsheets
0:13:38 and hypotheses.
0:13:40 Is that true?
0:13:41 Yes.
0:13:42 Yes.
0:13:43 In fact,
0:13:44 the reason I decided
0:13:45 to marry him
0:13:46 was because I wanted
0:13:47 this area of scaling
0:13:49 to be a robust area
0:13:50 of research for him
0:13:51 because it really
0:13:52 is a major issue.
0:13:58 Susskind started
0:13:58 what was then called
0:14:00 the 30 million words
0:14:00 initiative.
0:14:02 30 million being
0:14:03 an estimate
0:14:04 of how many fewer
0:14:05 words a child
0:14:06 from a low-income home
0:14:07 will have heard
0:14:08 than an affluent child
0:14:09 by the time
0:14:09 they turn four.
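To make the arithmetic behind an estimate like that concrete, here is a back-of-the-envelope sketch in Python. The hourly word counts are the oft-cited Hart and Risley figures, and the waking hours are an assumption for illustration, not numbers from this episode; swapping in different inputs moves the total dramatically, which is one reason a later replication landed on a much smaller gap.

```python
# Back-of-the-envelope sketch of a word-gap estimate.
# All inputs are illustrative assumptions (the hourly figures are the
# oft-cited Hart & Risley estimates), not numbers from this episode.
words_per_hour_high = 2_153   # estimate for higher-income homes
words_per_hour_low = 616      # estimate for lower-income homes
waking_hours_per_week = 100   # assumed
weeks = 52 * 4                # birth through age four

gap = (words_per_hour_high - words_per_hour_low) * waking_hours_per_week * weeks
print(f"{gap:,} fewer words heard by age four")  # ~32 million
```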
0:14:11 But these days,
0:14:12 the project is called
0:14:13 the TMW Center
0:14:14 for Early Learning
0:14:15 and Public Health.
0:15:17 We’ve actually moved
0:14:18 away from the term
0:14:19 30 million words
0:14:20 because it’s such
0:14:21 a hot-button issue.
0:14:22 Hot-button because
0:14:23 it’s so hard to believe
0:14:24 that the number
0:14:24 is legit?
0:14:26 Well, no.
0:14:27 I mean,
0:14:28 some people say,
0:14:28 look,
0:14:29 it’s a deficit mentality.
0:14:30 You’re talking about
0:14:31 what’s not there.
0:14:33 And then the replication,
0:14:35 somebody did another study
0:14:35 that said,
0:14:37 oh, it’s only 4 million.
0:14:38 And it really isn’t
0:14:40 actually even the point
0:14:40 because it’s not
0:14:41 even about words.
0:14:42 It’s about the interaction.
0:14:44 So I just made
0:14:44 the decision.
0:14:45 I’d rather be focusing
0:14:47 on developing the research
0:14:48 than fighting
0:14:49 a naming battle.
0:14:50 So you didn’t make
0:14:51 TMW stand
0:14:53 for something else.
0:14:53 Well,
0:14:54 that’s what
0:14:55 everybody gives me
0:14:56 trouble for.
0:14:57 It stands for
0:14:57 30 million words,
0:14:59 but only I know that.
0:14:59 Okay,
0:15:01 now you all know it too.
0:15:03 Anyway,
0:15:04 they started the center
0:15:06 with this idea.
0:15:07 With this idea
0:15:08 that, you know,
0:15:09 we need to
0:15:10 take a public health
0:15:11 or a population-level
0:15:12 approach
0:15:13 during the early years
0:15:14 to optimize
0:15:15 early foundational
0:15:16 brain development
0:15:17 because the research
0:15:18 is pretty clear
0:15:20 that parent talk
0:15:20 and interaction
0:15:22 in the first
0:15:23 three years of life
0:15:24 are the catalyst
0:15:25 for brain development.
0:15:26 And so
0:15:27 that’s basically
0:15:28 our work.
0:15:29 Okay,
0:15:30 so far so good.
0:15:31 The research is clear
0:15:32 that heavy exposure
0:15:33 to language
0:15:34 is good for
0:15:35 the developing brain.
0:15:36 But how do you
0:15:37 turn that research
0:15:38 finding into action?
0:15:39 And how do you
0:15:40 scale it up?
0:15:41 Initially,
0:15:42 we started with
0:15:43 an intensive
0:15:44 home visiting
0:15:44 program,
0:15:45 but understanding
0:15:46 that to reach
0:15:47 population-level
0:15:48 impact,
0:15:49 you need to
0:15:50 develop programs
0:15:51 both with an
0:15:53 eye for scaling
0:15:54 as well as an eye
0:15:55 for understanding
0:15:56 where parents
0:15:57 go regularly.
0:15:58 Because, healthcare aside,
0:15:59 unlike the education
0:16:00 system,
0:16:01 the first three years
0:16:02 of life really
0:16:03 don’t have any
0:16:04 infrastructure
0:16:05 in which to
0:16:06 disseminate programs.
0:16:08 So we actually
0:16:09 expanded our
0:16:09 model.
0:16:10 We have this
0:16:12 multifaceted program
0:16:13 that reached parents
0:16:14 where they were,
0:16:16 from maternity wards
0:16:17 into pediatrics
0:16:17 offices,
0:16:19 into the homes,
0:16:20 as well as group
0:16:20 sessions.
0:16:21 Those programs
0:16:22 that are most
0:16:23 vulnerable to the
0:16:24 issues of scale
0:16:25 are the complex
0:16:26 sort of service
0:16:27 delivery interventions.
0:16:28 You know,
0:16:29 anything that takes
0:16:31 a human service
0:16:31 delivery.
0:16:33 Scaling isn’t
0:16:34 an end.
0:16:34 It’s really
0:16:36 just a continuation.
0:16:41 You know,
0:16:42 it’s a hard one.
0:16:43 That is
0:16:44 Patti Chamberlain,
0:16:45 senior research
0:16:46 scientist at
0:16:47 Oregon Social
0:16:47 Learning Center.
0:16:49 And I do
0:16:51 research and
0:16:52 implementation
0:16:53 of evidence-based
0:16:55 practices in
0:16:56 child welfare,
0:16:57 juvenile justice,
0:16:58 mental health,
0:16:59 and education
0:17:00 systems.
0:17:01 Chamberlain also
0:17:02 looks at scaling
0:17:03 as a process.
0:17:05 So it’s almost
0:17:05 like there’s
0:17:06 stages that you
0:17:06 have to go
0:17:07 through.
0:17:08 And if the
0:17:09 first stage
0:17:09 is research
0:17:10 that involves
0:17:11 an RCT,
0:17:11 a randomized
0:17:12 controlled trial,
0:17:14 there’s already
0:17:14 an important
0:17:15 choice to make.
0:17:16 You’re far
0:17:17 better off
0:17:18 to situate
0:17:19 your RCT
0:17:20 in a real
0:17:21 world setting
0:17:22 than a
0:17:22 university clinic
0:17:24 so that you’re
0:17:24 learning from
0:17:25 the beginning
0:17:26 what’s feasible
0:17:26 and what’s
0:17:27 not feasible.
0:17:29 There might be
0:17:29 something that you
0:17:30 think would be
0:17:30 great,
0:17:31 but it’s never
0:17:31 going to be able
0:17:32 to be implemented
0:17:33 in the real
0:17:33 world.
0:17:34 I’ve been
0:17:35 at this
0:17:35 now for,
0:17:36 oh,
0:17:36 probably
0:17:38 25 years,
0:17:40 and I learned
0:17:41 sort of through
0:17:41 failing.
0:17:43 One program
0:17:43 Chamberlain founded
0:17:44 is called
0:17:45 Treatment Foster
0:17:46 Care Oregon.
0:17:48 Kids tend to
0:17:48 commit crimes
0:17:49 together.
0:17:50 It’s a team
0:17:50 sport.
0:17:51 But then,
0:17:52 oddly,
0:17:54 the way that
0:17:55 we’re set up
0:17:56 to deal with
0:17:57 kids who,
0:17:57 you know,
0:17:58 reach the level
0:17:59 where they’re
0:17:59 really being
0:18:01 unsafe to
0:18:01 themselves
0:18:01 and to
0:18:02 the community
0:18:03 is we put
0:18:03 them in
0:18:04 group homes
0:18:04 together.
0:18:05 We’re putting
0:18:06 kids in a
0:18:07 situation where
0:18:08 they’re more
0:18:09 likely to
0:18:11 commit crimes.
0:18:13 So we decided
0:18:13 what if we
0:18:14 placed a child
0:18:16 singly in a
0:18:17 family that
0:18:18 was completely
0:18:19 devoted to
0:18:21 using evidence-based
0:18:23 parenting skills
0:18:24 to help that
0:18:25 child do well
0:18:27 with peers in
0:18:28 school and in
0:18:28 the family
0:18:29 setting?
0:18:30 What if we
0:18:31 gave the
0:18:31 parents,
0:18:32 the biological
0:18:33 parents of
0:18:34 that kid,
0:18:35 the same kind
0:18:36 of skills that
0:18:37 the treatment
0:18:38 foster care
0:18:39 family had?
0:18:41 What if we
0:18:41 gave the kid
0:18:42 individual therapy?
0:18:43 The biological
0:18:44 family was
0:18:44 getting family
0:18:45 therapy.
0:18:45 We were giving
0:18:46 the kids
0:18:46 support at
0:18:47 school.
0:18:48 So we were
0:18:49 basically wrapping
0:18:50 all these services
0:18:51 around an
0:18:52 individual child
0:18:53 in a family
0:18:53 home.
0:18:55 What we found
0:18:56 was, yeah,
0:18:57 the kids do a
0:18:57 lot better.
0:18:58 They have a lot
0:18:59 fewer arrests.
0:19:00 They spend
0:19:01 fewer days in
0:19:02 institutions.
0:19:02 They use
0:19:03 fewer drugs.
0:19:05 And guess what?
0:19:06 It costs a lot
0:19:07 less as well.
0:19:08 Because you do
0:19:09 not have a
0:19:09 facility.
0:19:11 You do not
0:19:12 have 24-7 staff
0:19:13 that you’re paying
0:19:14 in shifts.
0:19:15 You do not
0:19:16 have, you know,
0:19:17 all of the
0:19:18 stuff that it
0:19:19 takes to run
0:19:20 an institution.
0:19:21 You have a
0:19:21 family.
0:19:23 The success of
0:19:23 Chamberlain’s
0:19:24 program caught
0:19:24 the eye of
0:19:25 researchers who
0:19:26 were working on
0:19:26 a program for a
0:19:27 federal agency
0:19:28 called the
0:19:29 Office of
0:19:29 Juvenile Justice
0:19:30 and Delinquency
0:19:31 Prevention.
0:19:32 And so we
0:19:33 got this call
0:19:34 saying, you
0:19:35 know, we
0:19:36 want you to
0:19:37 implement your
0:19:37 program in
0:19:39 15 sites.
0:19:40 If the
0:19:40 program was
0:19:41 successful at
0:19:42 one site, how
0:19:43 hard could it be
0:19:44 to make it work
0:19:44 at 15?
0:19:46 I went in
0:19:47 thinking that it
0:19:48 wouldn’t be that
0:19:50 hard because we
0:19:50 had good outcomes.
0:19:51 We showed that we
0:19:52 could save money.
0:19:55 And yet, we
0:19:55 were absolutely
0:19:56 not ready.
0:19:58 It wasn’t because
0:19:58 we didn’t have
0:19:59 enough data.
0:20:01 We had, at that
0:20:02 point, plenty of
0:20:02 data.
0:20:04 But we didn’t
0:20:05 have the know-how
0:20:06 of how to put
0:20:07 this thing down
0:20:08 in the real
0:20:08 world.
0:20:10 And it blew up.
0:20:11 One reason?
0:20:12 Systemic
0:20:13 complication.
0:20:15 The three
0:20:16 systems, child
0:20:17 welfare, juvenile
0:20:18 justice, and
0:20:19 mental health, all
0:20:20 put some money in
0:20:21 the pot to fund
0:20:22 this implementation.
0:20:24 I was completely
0:20:25 delighted.
0:20:25 I thought, oh,
0:20:26 this is going to
0:20:28 be great because
0:20:29 we have all the
0:20:30 relevant systems
0:20:31 buying into
0:20:31 this.
0:20:32 Well, what
0:20:34 happened was
0:20:35 when we tried
0:20:35 to implement,
0:20:37 we ran into
0:20:39 tremendous
0:20:40 barriers because
0:20:42 if we satisfied
0:20:43 the policies
0:20:44 and procedures
0:20:44 of one
0:20:46 system, we
0:20:46 were at
0:20:47 odds with
0:20:47 the policies
0:20:48 and procedures
0:20:48 in the
0:20:49 other system.
0:21:51 Patti
0:20:52 Chamberlain had
0:20:52 run up against
0:20:53 something that
0:20:54 Dana Susskind
0:20:54 had come to
0:20:55 see as an
0:20:56 inherent disconnect
0:20:57 when you try
0:20:58 to scale up
0:20:58 a research
0:20:59 finding.
0:20:59 There’s
0:21:00 obviously the
0:21:00 implementation,
0:21:02 everybody focusing
0:21:02 on adherence,
0:21:03 but there’s
0:21:04 also sort of
0:21:05 the infrastructure
0:21:06 delivery mechanism,
0:21:07 which I think
0:21:08 is an issue,
0:21:09 whether it’s
0:21:09 government or
0:21:10 health care,
0:21:11 that they’re
0:21:12 just not
0:21:12 set up for
0:21:13 interventions,
0:21:14 which are
0:21:14 sort of like
0:21:15 innovations.
0:21:16 So you’ve got
0:21:16 these researchers
0:21:17 who think of
0:21:18 themselves as
0:21:20 scientific entrepreneurs
0:21:21 developing the
0:21:22 next best thing,
0:21:24 thinking you build
0:21:25 it and they
0:21:25 will come,
0:21:26 and then you’ve
0:21:27 got organizations
0:21:28 that are sort of
0:21:29 built for
0:21:30 efficiency rather
0:21:30 than effectiveness
0:21:31 that can’t
0:21:32 uptake it.
0:21:33 If only there
0:21:34 were another
0:21:34 science,
0:21:35 a science to
0:21:36 help these
0:21:37 scientific
0:21:38 entrepreneurs
0:21:39 and institutions
0:21:40 come together
0:21:40 to implement
0:21:41 this new
0:21:41 research.
0:21:43 Maybe something
0:21:43 that could
0:21:44 be called
0:21:45 Implementation
0:21:45 science.
0:21:46 Implementation
0:21:46 science.
0:21:47 Implementation
0:21:48 science.
0:21:48 Implementation
0:21:49 science.
0:21:50 Okay, let’s
0:21:51 define
0:21:51 implementation
0:21:52 science.
0:21:53 It’s the
0:21:54 study of how
0:21:55 programs get
0:21:56 implemented into
0:21:57 practice and
0:21:58 how the quality
0:21:59 of that
0:22:00 implementation may
0:22:01 affect how well
0:22:01 that program
0:22:02 works or
0:22:03 doesn’t work.
0:22:04 That is
0:22:05 Lauren Supplee.
0:22:06 When we spoke
0:22:06 with her,
0:22:07 Supplee was the
0:22:07 deputy chief
0:22:08 operating officer
0:22:09 of a nonprofit
0:22:10 called Child
0:22:11 Trends, which
0:22:12 promotes evidence
0:22:13 based policy to
0:22:14 improve children’s
0:22:14 lives.
0:22:16 This whole science
0:22:17 is maybe 15
0:22:18 years old.
0:22:19 It’s really
0:22:21 coming out of
0:22:22 this movement of
0:22:23 evidence based
0:22:24 policy and
0:22:24 programs where
0:22:25 people said,
0:22:26 well, we have
0:22:27 this program.
0:22:28 It appears to
0:22:28 change important
0:22:29 outcomes.
0:22:30 Let’s just put
0:22:31 it out there
0:22:31 and then we
0:22:32 quickly realized
0:22:33 that there are
0:22:34 a lot of
0:22:35 issues and
0:22:36 actually that
0:22:36 “put it out
0:22:37 there” is far
0:22:38 more complicated.
0:22:39 A lot of the
0:22:39 evidence based
0:22:40 programs we have
0:22:41 were designed
0:22:42 by academic
0:22:44 researchers who
0:22:45 were testing it
0:22:46 in the maybe
0:22:47 more ideal
0:22:48 circumstances that
0:22:49 they had available
0:22:50 to them that
0:22:50 might have
0:22:51 included graduate
0:22:52 students.
0:22:53 It might have
0:22:53 been a school
0:22:54 district that
0:22:55 was very amenable
0:22:56 to research.
0:22:56 And then taking
0:22:57 the results
0:22:58 of that and
0:22:59 trying to put
0:22:59 them into
0:23:01 another location
0:23:01 is where the
0:23:03 challenge happens.
0:23:06 So coming up
0:23:07 after the break,
0:23:09 can implementation
0:23:10 science really
0:23:10 help?
0:23:11 You know, I want
0:23:12 policy science not
0:23:14 to be an oxymoron.
0:23:15 You’re listening to
0:23:16 Freakonomics Radio.
0:23:17 I’m Stephen Dubner.
0:23:17 We will be right
0:23:17 back.
0:23:33 What randomized
0:23:34 controlled trials
0:23:35 tell us about
0:23:35 an intervention
0:23:38 is what that
0:23:39 actual intervention
0:23:41 does in a
0:23:42 particular population
0:23:44 in a particular
0:23:44 context.
0:23:46 It doesn’t mean
0:23:46 that it’s
0:23:47 generalizable.
0:23:48 That, again,
0:23:49 is Dana Susskind
0:23:50 from the University
0:23:51 of Chicago.
0:23:52 But you have to
0:23:53 continue the science
0:23:54 so you can understand
0:23:55 how it’s going to
0:23:55 work in a different
0:23:56 place, in a different
0:23:57 context, in a different
0:23:59 population and have
0:23:59 the same effect.
0:24:00 And that’s part of
0:24:02 the scaling science.
0:24:03 The scaling science.
0:24:05 That is what Susskind
0:24:06 and her economist
0:24:07 collaborator John List,
0:24:08 who’s also her
0:24:09 husband, and other
0:24:10 researchers have been
0:24:11 working on.
0:24:12 They’ve been
0:24:13 systematically examining
0:24:14 why interventions
0:24:15 that work well in
0:24:16 experimental or
0:24:17 research settings
0:24:18 often fail to
0:24:19 scale up.
0:24:20 You can see why
0:24:21 this is an
0:24:22 important puzzle
0:24:22 to solve.
0:24:24 Scaling up a new
0:24:25 intervention, like
0:24:26 a medical procedure
0:24:27 or a teaching
0:24:28 method, has the
0:24:29 potential to help
0:24:31 thousands, millions,
0:24:32 maybe billions of
0:24:32 people.
0:24:34 But what if it
0:24:36 simply fails at
0:24:36 scale?
0:24:37 What if it ends up
0:24:39 costing way more
0:24:40 than anticipated or
0:24:41 creates serious
0:24:43 unintended consequences?
0:24:44 That’ll make it that
0:24:45 much harder for the
0:24:46 next set of
0:24:46 researchers to
0:24:47 persuade the next
0:24:48 set of policymakers
0:24:49 to listen to them.
0:24:50 So List and
0:24:51 Susskind have been
0:24:52 looking at scaling
0:24:53 failures from the
0:24:54 past and trying to
0:24:55 categorize what went
0:24:56 wrong.
0:25:00 You can kind of
0:25:01 put what we’ve
0:25:03 learned into three
0:25:04 general buckets that
0:25:05 seem to encompass the
0:25:06 failures.
0:25:08 Bucket number one is
0:25:09 that the evidence was
0:25:10 just not there to
0:25:12 justify scaling the
0:25:12 program in the first
0:25:13 place.
0:25:15 The Department of
0:25:16 Education did this
0:25:18 broad survey on
0:25:20 prevention programs
0:25:20 attempting to
0:25:22 attenuate youth
0:25:24 substance use and crime
0:25:25 and aspects like
0:25:25 that.
0:25:26 And what they
0:25:28 found is that only
0:25:29 8% of those
0:25:31 programs were
0:25:32 actually backed by
0:25:33 research evidence.
0:25:35 Many programs that
0:25:37 we put in place
0:25:39 really don’t have
0:25:41 the research findings
0:25:42 to support them.
0:25:43 And this is what a
0:25:44 scientist would call a
0:25:44 false positive.
0:25:46 So are we talking
0:25:47 about bad research?
0:25:48 Are we talking
0:25:49 about cherry picking?
0:25:49 Are we talking
0:25:50 about publication
0:25:51 bias?
0:25:52 So here we’re
0:25:52 talking about none
0:25:53 of those.
0:25:54 We’re talking about
0:25:55 a small-scale
0:25:57 research finding
0:25:58 that was true
0:25:59 in that particular
0:26:00 sample.
0:26:02 But because of the
0:26:03 mechanics of
0:26:04 statistical inference,
0:26:06 it just won’t
0:26:06 be right at scale.
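To see how an honest small-sample finding can still mislead, here is a minimal Bayesian sketch. The prior, significance level, and power are assumed illustrative values, not figures from the episode.

```python
# Post-study probability that one significant finding reflects a real
# effect. Prior, alpha, and power are illustrative assumptions.
prior = 0.10   # assumed share of tested ideas that truly work
alpha = 0.05   # false-positive rate of the significance test
power = 0.80   # chance the test detects a true effect

p_true_and_sig = prior * power          # real effect, found significant
p_false_and_sig = (1 - prior) * alpha   # no effect, significant by chance
ppv = p_true_and_sig / (p_true_and_sig + p_false_and_sig)
print(f"P(real effect | significant result) = {ppv:.2f}")  # ~0.64
```

Under those assumptions, roughly one significant result in three is a fluke, which is the false-positive bucket in a nutshell.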
0:26:08 What you were
0:26:09 getting into is
0:26:11 what I would call
0:26:12 the second bucket
0:26:13 of why things
0:26:14 fail, and that’s
0:26:15 what I call the
0:26:16 wrong people were
0:26:17 studied.
0:26:18 You know, these
0:26:19 are studies that
0:26:21 have a particular
0:26:22 sample of people
0:26:25 that shows really
0:26:26 large program
0:26:27 effect sizes,
0:26:28 but when the
0:26:30 program is given
0:26:31 to general
0:26:32 populations,
0:26:33 that effect
0:26:34 disappears.
0:26:34 So essentially,
0:26:35 we were looking
0:26:36 at the wrong
0:26:37 people and scaling
0:26:38 to the wrong
0:26:38 people.
0:26:39 And when you
0:26:39 say the wrong
0:26:40 people, the
0:26:41 people that are
0:26:41 being studied
0:26:42 then are,
0:26:42 what?
0:26:45 They are the
0:26:46 people who
0:26:47 are the
0:26:48 fraction or
0:26:49 the group of
0:26:50 people who
0:26:51 receive the
0:26:52 largest program
0:26:53 benefits.
0:26:54 So I think
0:26:55 of some of the
0:26:55 experiments that
0:26:56 are done on
0:26:57 college campuses,
0:26:57 right, where
0:26:58 there’s a
0:26:59 professor who’s
0:27:00 looking to find
0:27:00 out something
0:27:01 about, let’s
0:27:02 say, altruism,
0:27:04 and the
0:27:04 experimental
0:27:05 setting is a
0:27:06 classroom where
0:27:07 20 college
0:27:07 students will
0:27:08 come in, and
0:27:08 they’re a pretty
0:27:10 homogeneous population,
0:27:11 they’re pretty
0:27:12 motivated, maybe
0:27:12 they’re very
0:27:13 disciplined, and
0:27:14 that may not
0:27:15 represent what
0:27:16 the world
0:27:16 actually is.
0:27:17 Is that what
0:27:18 you’re talking
0:27:18 about?
0:27:19 That’s one
0:27:20 piece of it.
0:27:22 Another piece
0:27:23 is who will
0:27:24 sign their
0:27:25 kids up for
0:27:26 Head Start or
0:27:27 for a program
0:27:29 in a neighborhood
0:27:30 that advances
0:27:31 the reading
0:27:32 skills of the
0:27:32 child?
0:27:33 Who’s going
0:27:33 to be first
0:27:34 in line?
0:27:35 The people who
0:27:36 really care about
0:27:37 education and
0:27:38 the people who
0:27:39 think their
0:27:40 child will
0:27:40 receive the
0:27:41 most benefits
0:27:41 from the
0:27:42 program.
0:27:43 Now, another
0:27:44 way to get
0:27:44 it is sort
0:27:45 of along the
0:27:45 lines that
0:27:46 you talked
0:27:46 about.
0:27:46 It could
0:27:47 be the
0:27:49 researcher knows
0:27:50 something about
0:27:51 the population
0:27:52 that other
0:27:53 people don’t
0:27:53 know.
0:27:55 Like, I want
0:27:55 to give my
0:27:56 program its
0:27:57 best shot of
0:27:58 working.
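A minimal simulation of the wrong-people problem, with every number assumed for illustration: when treatment effects vary from person to person and a pilot enrolls the keenest families, the ones with the most to gain, the pilot’s effect size overstates what the general population will see.

```python
# Sketch: selection into a pilot inflates the measured effect.
# The effect distribution and sample sizes are illustrative assumptions.
import random

random.seed(0)
# Each person has their own treatment effect; the average is modest.
population = [random.gauss(0.1, 0.3) for _ in range(100_000)]

# The pilot enrolls the 200 people who stand to benefit most.
pilot = sorted(population, reverse=True)[:200]

print(f"pilot effect:      {sum(pilot) / len(pilot):.2f}")            # large
print(f"population effect: {sum(population) / len(population):.2f}")  # ~0.10
```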
0:27:59 Okay, and
0:28:00 what’s in your
0:28:00 third bucket
0:28:02 of scaling
0:28:02 failures?
0:28:03 The third
0:28:04 bucket is
0:28:05 something that
0:28:06 we call
0:28:08 the wrong
0:28:09 situation was
0:28:09 used.
0:28:11 And what I
0:28:11 mean by that
0:28:12 is that certain
0:28:13 aspects of the
0:28:15 situation change
0:28:16 when you go
0:28:17 from the
0:28:17 original research
0:28:18 to the scaled
0:28:19 research program.
0:28:21 We don’t
0:28:23 understand what
0:28:24 properties of
0:28:25 the situation
0:28:26 or features of
0:28:27 the environment
0:28:28 will matter.
0:28:30 There is a
0:28:31 really large
0:28:32 group of
0:28:33 implementation
0:28:35 scientists who
0:28:35 have explored
0:28:36 this question
0:28:37 for years.
0:28:39 Now, what
0:28:40 they emphasize
0:28:41 and focus on
0:28:42 is something
0:28:43 called voltage
0:28:44 drop.
0:28:46 And voltage
0:28:47 drop essentially
0:28:48 means I
0:28:49 found a really
0:28:51 good result in
0:28:52 my original
0:28:52 research study,
0:28:53 but then when
0:28:54 they do it at
0:28:55 scale, that
0:28:57 voltage drop
0:28:58 ends up being,
0:28:58 for example,
0:29:00 a tenth of
0:29:00 the original
0:29:01 result or a
0:29:02 quarter of the
0:29:03 original result.
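In code, voltage drop is nothing more than the ratio of the at-scale effect to the original one; the two effect sizes below are made-up numbers matching the quarter-strength case List describes.

```python
# Voltage drop: how much of the original effect survives at scale.
# Both effect sizes are made-up illustrative numbers.
original_effect = 0.40  # pilot-study effect size
scaled_effect = 0.10    # effect size in the scaled-up rollout

print(f"voltage retained: {scaled_effect / original_effect:.0%}")  # 25%
```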
0:29:05 An example of
0:29:06 this is when
0:29:07 you look at
0:29:07 Head Start’s
0:29:08 home visiting
0:29:10 services, what
0:29:10 they do there
0:29:11 is this is an
0:29:11 early childhood
0:29:13 intervention that
0:29:14 found huge
0:29:16 improvements in
0:29:17 both child and
0:29:17 parent outcomes
0:29:18 in the original
0:29:20 study, except
0:29:20 when they tried
0:29:21 to scale that
0:29:22 up and do
0:29:24 home visits at
0:29:24 a much larger
0:29:26 scale, what
0:29:27 they found is
0:29:28 that, for
0:29:29 example, home
0:29:30 visits for
0:29:31 at-risk families
0:29:32 involved a lot
0:29:33 more distractions
0:29:34 in the house
0:29:35 and there was
0:29:36 less time on
0:29:37 child-focused
0:29:38 activities.
0:29:38 So this is
0:29:40 sort of the
0:29:41 wrong dosage:
0:29:41 the wrong
0:29:42 program is given
0:29:43 at scale.
0:29:46 There are many
0:29:46 factors that
0:29:47 contribute to
0:29:48 this voltage
0:29:49 drop, including
0:29:50 the admirably
0:29:51 high standards
0:29:52 set by the
0:29:53 original researchers.
0:29:54 When the
0:29:55 researcher starts
0:29:56 his or her
0:29:57 experiment, the
0:29:59 inclination is
0:29:59 I’m going to
0:30:00 get the best
0:30:01 tutors in the
0:30:02 world, so I’m
0:30:02 going to be able
0:30:03 to show how
0:30:03 effective my
0:30:04 intervention is.
0:30:05 Dana Susskind
0:30:06 again.
0:30:07 You only needed
0:30:08 10 math tutors
0:30:09 and you happen
0:30:10 to get the
0:30:10 PhD students
0:30:11 from the
0:30:11 University of
0:30:12 Chicago, and
0:30:13 then what
0:30:14 happens is you
0:30:14 show this
0:30:15 tremendous effect
0:30:16 size, and in
0:30:17 the scaling, all
0:30:18 of a sudden, you
0:30:19 need a hundred or
0:30:21 a thousand, and you
0:30:22 no longer have that
0:30:23 access to those
0:30:24 individuals, and you
0:30:26 go either down the
0:30:27 supply chain with
0:30:28 individuals who are
0:30:29 not quite as well
0:30:31 trained, or you end
0:30:32 up having to pay a
0:30:32 whole lot more
0:30:33 money to
0:30:34 maintain the
0:30:35 trained tutor
0:30:36 program, and one
0:30:37 way or the other,
0:30:39 either the impacts
0:30:40 of the intervention
0:30:42 go down, or your
0:30:43 costs go up
0:30:44 significantly.
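A sketch of that supply-chain logic, with the quality model and every number assumed for illustration: if tutor effectiveness declines as you hire deeper into the applicant pool, the program’s average effect erodes even though the curriculum is unchanged.

```python
# Sketch: hiring deeper into the tutor pool dilutes average quality.
# The linear quality model and all numbers are illustrative assumptions.
POOL_SIZE = 200_000  # assumed number of available tutors, best to worst

def tutor_effect(rank: int) -> float:
    """Effect delivered by the rank-th best tutor (0 = best)."""
    return 0.5 * (1 - rank / POOL_SIZE)

def average_effect(n_hired: int) -> float:
    return sum(tutor_effect(r) for r in range(n_hired)) / n_hired

print(f"pilot, 10 tutors:         {average_effect(10):.3f}")       # ~0.500
print(f"at scale, 100,000 tutors: {average_effect(100_000):.3f}")  # ~0.375
```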
0:30:46 Another problem in
0:30:46 this third bucket,
0:30:48 it’s a big bucket,
0:30:49 is when the person
0:30:50 who designed the
0:30:51 intervention and
0:30:52 masterminded the
0:30:53 initial trial can
0:30:54 no longer be so
0:30:55 involved once the
0:30:57 program scales up to
0:30:57 multiple locations.
0:30:59 Imagine if instead
0:31:00 of talking about an
0:31:01 educational or
0:31:02 medical program, we
0:31:03 were talking about
0:31:03 a successful
0:31:04 restaurant and the
0:31:05 original chef.
0:31:07 When you think about
0:31:08 the chef, if a
0:31:09 restaurant succeeds
0:31:11 because of the
0:31:13 magical work of
0:31:15 the chef, and you
0:31:16 think about scaling
0:31:18 that, if you can’t
0:31:20 scale the magic in
0:31:22 the chef, that’s not
0:31:22 scalable.
0:31:25 Now, if the magic is
0:31:26 because of the mix
0:31:28 of ingredients and
0:31:29 the secret sauce, like
0:31:30 Domino’s or
0:31:31 Papa John’s, for example,
0:31:33 where the secret sauce
0:31:35 is the actual
0:31:36 ingredients, then
0:31:37 that will be
0:31:38 scalable.
0:31:43 Now, if you are
0:31:43 the kind of pizza
0:31:44 eater who doesn’t
0:31:45 think Domino’s or
0:31:47 Papa John’s is good
0:31:49 pizza, well, welcome
0:31:50 to the scaling
0:31:50 dilemma.
0:31:52 Going big means you
0:31:53 have to be many
0:31:54 things to many
0:31:54 people.
0:31:56 Going big means you
0:31:57 will face a lot of
0:31:57 trade-offs.
0:31:59 Going big means you’ll
0:31:59 have a lot of people
0:32:01 asking you, do you
0:32:01 want this done
0:32:03 fast, or do you
0:32:03 want it done right?
0:32:05 Once you peer
0:32:07 inside these failure
0:32:08 buckets that List and
0:32:09 Susskind describe, it’s
0:32:11 not so surprising that
0:32:12 so many good ideas
0:32:13 fail to scale up.
0:32:15 So, what do they
0:32:16 propose that could
0:32:16 help?
0:32:18 Now, our proposal
0:32:20 is that we do not
0:32:22 believe that we
0:32:24 should scale a
0:32:27 program until you’re
0:32:29 95% certain the
0:32:30 result is true.
0:32:32 So, essentially, what
0:32:34 that means is we
0:32:35 need the original
0:32:37 research and then
0:32:40 three or four well-powered
0:32:42 independent replications
0:32:43 of the original
0:32:44 findings.
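Extending the earlier false-positive arithmetic shows why a handful of replications does so much work. With the same assumed prior, significance level, and power, each independent significant result multiplies the odds; this is a hedged sketch of the logic, not the authors’ exact model.

```python
# How independent significant replications raise the probability that
# a result is real. Prior, alpha, and power are assumed as before.
PRIOR, ALPHA, POWER = 0.10, 0.05, 0.80

def post_study_probability(k: int) -> float:
    """P(effect is real | k independent significant studies)."""
    true_path = PRIOR * POWER**k
    false_path = (1 - PRIOR) * ALPHA**k
    return true_path / (true_path + false_path)

for k in range(1, 5):
    print(f"{k} significant studies: {post_study_probability(k):.3f}")
# 1: 0.640, 2: 0.966, 3: 0.998, 4: 1.000; past the 95% bar by the
# second or third independent replication, under these assumptions.
```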
0:32:45 And how often is that
0:32:47 already happening in the
0:32:48 real world of, let’s
0:32:50 say, education reform
0:32:50 research?
0:32:52 I can’t name one.
0:32:52 Wow.
0:32:53 How about in the
0:32:55 realm of medical
0:32:56 compliance research?
0:32:58 My intuition is that
0:33:00 they’re probably not far
0:33:01 away from three or four
0:33:03 well-powered independent
0:33:03 replications.
0:33:07 In the hard sciences, in
0:33:08 many cases, you not only
0:33:10 have the original
0:33:13 research, but you have a
0:33:16 first replication also
0:33:17 published in science.
0:33:19 You know, the current
0:33:21 credibility crisis in
0:33:22 science is a serious
0:33:23 one: major
0:33:25 results are not
0:33:25 replicating.
0:33:28 The reason why is
0:33:29 because we weren’t
0:33:30 serious about
0:33:31 replication in the
0:33:31 first place.
0:33:33 So, this sort of puts
0:33:34 the onus on
0:33:35 policymakers and
0:33:37 funding agencies in
0:33:37 a sense of saying,
0:33:39 we need to change the
0:33:39 equilibrium.
0:33:42 So, that
0:33:43 suggests that
0:33:45 policymakers or
0:33:46 decision makers, they
0:33:47 are being, what,
0:33:50 overeager, premature in
0:33:52 accepting a finding that
0:33:53 looks good to them and
0:33:54 want to rush it into
0:33:54 play?
0:33:56 Or is it that the
0:33:57 researchers are
0:33:59 overconfident themselves
0:34:00 or maybe pushing this
0:34:01 research too hard?
0:34:02 Where is this failure
0:34:03 really happening?
0:34:04 Well, I think it’s sort
0:34:05 of a mix.
0:34:07 I think it’s fair to
0:34:08 say that some
0:34:10 policymakers are out
0:34:11 looking for evidence
0:34:13 to base their
0:34:14 preferred program on.
0:34:15 What this will do is
0:34:16 slow that down.
0:34:18 If you have a
0:34:19 pet project that
0:34:19 you want to get
0:34:21 through, fund the
0:34:22 replications and
0:34:23 let’s make sure the
0:34:24 science is correct.
0:34:25 We think we should
0:34:26 actually be rewarding
0:34:28 scholars for
0:34:29 attempting to
0:34:29 replicate.
0:34:31 You know, right now
0:34:33 in my community, if I
0:34:33 try to replicate
0:34:35 someone else, guess
0:34:35 what I’ve just
0:34:36 made?
0:34:38 I’ve just made a
0:34:39 mortal enemy for
0:34:39 life.
0:34:41 If you find a
0:34:42 publishable result,
0:34:43 what result is that?
0:34:43 You’re refuting
0:34:45 previous research.
0:34:47 Now I’ve doubled
0:34:48 down on my
0:34:48 enemy.
0:34:50 So that’s like a
0:34:52 first step in
0:34:53 terms of rewarding
0:34:55 scholars who are
0:34:55 attempting to
0:34:56 replicate.
0:34:58 Now, to
0:34:59 complement that, I
0:34:59 think we should
0:35:01 also reward
0:35:02 scholars who
0:35:02 have produced
0:35:04 results that are
0:35:05 independently
0:35:06 replicated.
0:35:07 You know, and I’m
0:35:08 talking about tying
0:35:09 tenure decisions,
0:35:11 grant money, and the
0:35:13 like to people who
0:35:14 have given us
0:35:15 credible research
0:35:16 that replicates.
0:35:20 After the break,
0:35:21 how can researchers
0:35:22 make sure that the
0:35:23 science they are
0:35:24 replicating works
0:35:25 when it scales up?
0:35:40 Before the break, we
0:35:41 were talking with the
0:35:41 University of Chicago
0:35:43 economist John List
0:35:44 about the challenges
0:35:45 of turning good
0:35:46 research into good
0:35:46 policy.
0:35:48 One challenge is
0:35:49 making sure that the
0:35:50 research findings are
0:35:52 in fact robust enough
0:35:53 to scale up.
0:35:54 Say I’m doing an
0:35:55 experiment in Chicago
0:35:57 Heights on early
0:35:59 childhood, and I find
0:36:01 a great result, how
0:36:03 confident should I be
0:36:05 that when we take that
0:36:06 result to all of
0:36:07 Illinois or all of the
0:36:09 Midwest or all of
0:36:10 America, is that
0:36:12 result still going
0:36:14 to find that
0:36:15 important benefit
0:36:17 cost profile that
0:36:18 we found in
0:36:18 Chicago Heights?
0:36:20 We need to know
0:36:21 what is the magic
0:36:22 sauce.
0:36:23 Was it the 20
0:36:25 teachers you hired
0:36:26 down in Chicago
0:36:28 Heights where if we
0:36:29 go nationally, we
0:36:30 need 20,000?
0:36:33 So it should
0:36:34 behoove me as an
0:36:36 original researcher
0:36:37 to say,
0:36:38 look, if this
0:36:40 scales up, we’re
0:36:41 going to need many
0:36:42 more teachers.
0:36:44 I know teachers are
0:36:45 an important input.
0:36:47 Is the average
0:36:48 teacher in the
0:36:51 20,000 the same
0:36:52 as the average
0:36:53 teacher in the
0:36:53 20?
0:36:55 This is the dreaded
0:36:56 voltage drop that
0:36:57 implementation
0:36:58 scientists talk
0:36:58 about.
0:36:59 And the
0:36:59 implementation
0:37:01 scientists have
0:37:02 focused on
0:37:04 fidelity as a core
0:37:05 component behind
0:37:06 the voltage
0:37:06 drop.
0:37:08 Fidelity
0:37:09 meaning that the
0:37:10 scaled up program
0:37:10 reflects the
0:37:11 integrity of the
0:37:12 original program.
0:37:13 Measures of
0:37:14 fidelity.
0:37:15 That’s a really
0:37:16 critical part of
0:37:17 the implementation
0:37:18 process.
0:37:19 That, again, is
0:37:20 Patti Chamberlain,
0:37:21 founder of
0:37:21 Treatment Foster
0:37:22 Care Oregon.
0:37:23 You’ve got to be
0:37:24 able to measure,
0:37:26 is this thing
0:37:27 that’s down in the
0:37:28 real world the
0:37:29 same, you know,
0:37:30 does it have the
0:37:31 same components
0:37:32 that produce the
0:37:33 outcomes in the
0:37:33 RCTs?
0:37:35 Remember, it was
0:37:35 Chamberlain’s good
0:37:37 outcomes with young
0:37:37 people in foster
0:37:38 care that made
0:37:39 federal officials want
0:37:40 to scale up her
0:37:41 program in the first
0:37:41 place.
0:37:43 We got this call
0:37:44 saying, we want you
0:37:45 to implement your
0:37:47 program in 15
0:37:48 sites.
0:37:49 She found the
0:37:50 scaling up initially
0:37:51 very challenging.
0:37:52 It wasn’t the
0:37:54 kumbaya moment that
0:37:54 we thought it was
0:37:55 going to be.
0:37:56 But in time,
0:37:57 Treatment Foster
0:37:58 Care Oregon became
0:37:59 a very well-regarded
0:38:00 program.
0:38:00 It’s been around for
0:38:02 roughly 30 years
0:38:03 now, and the
0:38:04 model has spread
0:38:05 well beyond Oregon.
0:38:06 One key to this
0:38:07 success has been
0:38:08 developing fidelity
0:38:09 standards.
0:38:10 So the way that we
0:38:11 do it is we have
0:38:12 people upload all of
0:38:13 their sessions onto
0:38:14 a HIPAA secure
0:38:15 website, and then
0:38:16 we code those.
0:38:18 And if they’re not
0:38:18 meeting the fidelity
0:38:20 standards, then we
0:38:21 offer a fidelity
0:38:22 recovery plan.
0:38:23 You know, we
0:38:24 haven’t had to drop
0:38:24 a site, but we
0:38:26 have had to have
0:38:27 some of the people
0:38:28 in the site
0:38:30 retrained or not
0:38:31 continue.
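A hypothetical sketch of what session-level fidelity coding could look like; the component list, threshold, and scoring rule here are invented for illustration and are not TFCO’s actual scheme.

```python
# Hypothetical fidelity coding: score each uploaded session by the share
# of required program components observed; flag low scores for a
# recovery plan. Components and threshold are invented for illustration.
REQUIRED = {"parent_check_in", "point_level_review",
            "skills_coaching", "school_report"}
THRESHOLD = 0.80  # assumed cutoff

def fidelity_score(observed: set) -> float:
    return len(observed & REQUIRED) / len(REQUIRED)

session = {"parent_check_in", "skills_coaching", "school_report"}
score = fidelity_score(session)
if score < THRESHOLD:
    print(f"score {score:.2f}: below standard, trigger recovery plan")
```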
0:38:32 Being able to
0:38:33 measure fidelity
0:38:34 well from afar
0:38:35 provides another
0:38:37 benefit to scaling
0:38:37 up.
0:38:38 It allows the
0:38:39 people who
0:38:39 developed the
0:38:40 original program
0:38:41 to ultimately
0:38:43 step back, so
0:38:44 they don’t become
0:38:45 a bottleneck, which
0:38:45 is a common
0:38:46 scaling problem.
0:38:47 There can be
0:38:48 sort of an
0:38:49 orderly process
0:38:50 whereby you
0:38:52 step back in
0:38:53 increments as
0:38:54 people become
0:38:55 more and more
0:38:56 competent doing
0:38:56 what they’re
0:38:57 doing.
0:38:57 And that’s
0:38:58 what you want
0:38:58 because you
0:38:59 don’t want to
0:38:59 have this tied to
0:39:00 the developer
0:39:00 forever.
0:39:01 Otherwise, you
0:39:02 can’t get any
0:39:03 kind of reasonable
0:39:03 reach.
0:39:05 That said, you
0:39:06 also need to
0:39:06 have some
0:39:07 humility.
0:39:08 When you’re
0:39:08 scaling up, you
0:39:09 shouldn’t assume
0:39:10 your original
0:39:11 program was
0:39:12 perfect, that it
0:39:13 won’t need
0:39:14 adjustment, and
0:39:15 you need to be
0:39:15 willing to make
0:39:16 adjustments.
0:39:18 For example, we
0:39:19 recognized that
0:39:20 when we were in
0:39:21 real-world
0:39:23 communities, kids
0:39:23 needed something
0:39:24 that wasn’t
0:39:25 therapy, per se.
0:39:26 They needed
0:39:28 skills because
0:39:29 the kids had
0:39:30 often been
0:39:31 excluded from
0:39:32 normal socializing
0:39:33 sort of things
0:39:34 like sports
0:39:35 teams and
0:39:36 clubs.
0:39:37 And so we
0:39:38 needed what
0:39:39 we call a
0:39:39 skills coach
0:39:41 to help
0:39:42 those kids
0:39:42 learn the
0:39:43 moves that
0:39:44 they needed
0:39:44 to be able
0:39:45 to participate
0:39:47 in these
0:39:47 pro-social
0:39:49 activities that
0:39:49 are normal
0:39:50 kind of things.
0:39:51 So you have
0:39:51 research, you
0:39:52 have a theory,
0:39:52 and then you
0:39:53 have the
0:39:54 implementation, and
0:39:54 that feeds
0:39:55 into more
0:39:55 research, more
0:39:56 theory, more
0:39:56 implementation.
0:40:01 Look, everybody’s
0:40:01 motivation at the
0:40:02 end of the day is
0:40:03 about trying to
0:40:04 do good for
0:40:05 the people they
0:40:05 serve.
0:40:07 Dana Susskind
0:40:07 again.
0:40:08 There are many
0:40:09 children out there,
0:40:10 and there are a
0:40:11 lot of injustices,
0:40:12 so we need to
0:40:13 move, but I
0:40:14 don’t know.
0:40:14 The science is
0:40:15 slower than
0:40:16 you’d like.
0:40:17 People have
0:40:18 wanted things
0:40:19 before I thought
0:40:19 they were ready,
0:40:21 and finding a
0:40:22 way to deal
0:40:23 with that dance
0:40:24 of people wanting
0:40:26 information, but
0:40:27 also wanting to
0:40:28 continue to build
0:40:28 the evidence.
0:40:29 I think we can
0:40:30 figure out how
0:40:31 to do it.
0:40:31 I think that’s
0:40:32 exactly right.
0:40:33 And John List
0:40:34 again.
0:40:35 I think too
0:40:36 many times,
0:40:38 whether it’s
0:40:39 in public
0:40:40 policy, whether
0:40:41 it’s a for-profit
0:40:43 or a not-for-profit,
0:40:44 we tend to
0:40:45 only focus on
0:40:46 one side of
0:40:47 the market when
0:40:47 we have
0:40:49 problems, and
0:40:50 you really need
0:40:51 to take account
0:40:52 of both sides
0:40:52 because your
0:40:53 optimal solutions,
0:40:54 the best
0:40:55 solutions, are
0:40:55 only going to
0:40:56 come when you
0:40:56 look at both
0:40:57 sides of the
0:40:57 market.
0:40:58 I’m probably
0:40:59 getting this
0:40:59 wrong, or at
0:41:00 least being way
0:41:00 too reductive,
0:41:01 but to me it
0:41:01 sounds like the
0:41:03 chief barrier to
0:41:04 scaling up programs
0:41:05 to help people
0:41:07 is people, that
0:41:08 people are the
0:41:08 problem.
0:41:10 Yeah, so I do
0:41:11 think inherently
0:41:12 it is about
0:41:13 people.
0:41:15 That said, this
0:41:17 is not a fatal
0:41:20 flaw that causes
0:41:21 us to throw up
0:41:22 our arms and
0:41:23 say, well, this
0:41:24 isn’t physics,
0:41:24 this isn’t
0:41:25 chemistry, we
0:41:26 have to deal
0:41:27 with people, so
0:41:28 we can’t use
0:41:28 science.
0:41:29 I think that’s
0:41:30 wrong, because
0:41:30 there are some
0:41:32 very, very neat
0:41:34 advantages of
0:41:35 scaling.
0:41:36 Think about on
0:41:37 the cost side,
0:41:38 economists always
0:41:39 talk about, you
0:41:39 know, when
0:41:40 things get bigger
0:41:42 and bigger, guess
0:41:42 what happens?
0:41:44 The per-unit cost
0:41:45 goes down.
0:41:46 It’s called
0:41:47 increasing returns
0:41:48 to scale.
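The cost side fits in a few lines: spread a fixed cost over more participants, add a constant per-person cost, and the per-unit cost falls as the program grows. The dollar figures below are illustrative assumptions.

```python
# Falling per-unit cost as a program grows: fixed cost spread over N
# participants plus a constant marginal cost. Figures are illustrative.
FIXED_COST = 500_000  # e.g., curriculum design and training, assumed
MARGINAL_COST = 40    # assumed cost per additional participant

def average_cost(n: int) -> float:
    return FIXED_COST / n + MARGINAL_COST

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7,} participants: ${average_cost(n):,.2f} per person")
# 1,000: $540.00   10,000: $90.00   100,000: $45.00
```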
0:41:49 The problem that
0:41:50 kind of we’re
0:41:51 thinking about is
0:41:52 let’s make sure
0:41:53 that those
0:41:54 policymakers who
0:41:55 really want to
0:41:56 do the right
0:41:57 thing in use
0:41:58 science, let’s
0:41:59 make sure that
0:42:00 they have the
0:42:01 right programs to
0:42:01 implement.
0:42:03 So one of your
0:42:04 papers includes
0:42:05 this quote from
0:42:06 Bill Clinton, or
0:42:06 at least something
0:42:07 that Clinton may
0:42:07 have said, which
0:42:08 is essentially
0:42:09 that nearly
0:42:10 every problem
0:42:11 has been solved
0:42:12 by someone
0:42:13 somewhere, but
0:42:13 we just can’t
0:42:14 seem to replicate
0:42:15 those solutions
0:42:16 anywhere else.
0:42:18 So what makes
0:42:19 you think that
0:42:20 you’ve got the
0:42:21 keys to success
0:42:21 here where
0:42:22 others may not
0:42:23 have been able
0:42:23 to do it?
0:42:25 You know, I
0:42:26 view what we’ve
0:42:27 done is put
0:42:29 forward a set
0:42:29 of modest
0:42:31 proposals as
0:42:32 only a start
0:42:34 to tackle what
0:42:35 I think is the
0:42:36 most vexing
0:42:37 problem in
0:42:38 evidence-based
0:42:39 policymaking,
0:42:39 which is
0:42:39 scaling.
0:42:40 I think we’re
0:42:41 just taking
0:42:42 some small
0:42:44 steps theoretically
0:42:45 and empirically,
0:42:46 but I do think
0:42:47 that these first
0:42:49 set of steps
0:42:49 are important
0:42:51 because if
0:42:52 you go in the
0:42:53 right direction,
0:42:54 what I’ve
0:42:54 learned is that
0:42:55 literature will
0:42:56 follow that
0:42:56 direction.
0:42:58 If you go in
0:42:58 the wrong
0:42:58 direction,
0:43:00 sometimes the
0:43:01 literature follows
0:43:02 that wrong
0:43:02 direction for
0:43:03 several years,
0:43:04 and we
0:43:05 really don’t
0:43:05 have the
0:43:06 time.
0:43:07 Right now,
0:43:08 the opportunity
0:43:09 cost of time
0:43:10 is very high.
0:43:13 You know, in
0:43:14 the end, I
0:43:14 want policy
0:43:15 science not
0:43:15 to be an
0:43:16 oxymoron,
0:43:17 and I think
0:43:18 that’s what this
0:43:19 research agenda
0:43:19 is about.
0:43:21 The way that I
0:43:21 would view it
0:43:23 is that the
0:43:24 world is
0:43:25 imperfect because
0:43:26 we haven’t
0:43:27 used science
0:43:28 in policymaking,
0:43:30 and if we
0:43:31 add science
0:43:31 to it,
0:43:33 we have a
0:43:34 chance to
0:43:34 make an
0:43:35 imperfect world
0:43:36 a little bit
0:43:37 more perfect.
0:43:42 If you want
0:43:42 to read the
0:43:43 papers that
0:43:44 John List and
0:43:45 Dana Susskind
0:43:45 and their
0:43:46 collaborators
0:43:46 have been
0:43:47 working on,
0:43:47 you will find
0:43:48 links on
0:43:49 Freakonomics.com
0:43:50 as well as
0:43:51 links to
0:43:51 Patti Chamberlain’s
0:43:52 work with
0:43:53 Treatment Foster
0:43:53 Care Oregon
0:43:55 and much more,
0:43:56 including, as
0:43:56 always, a
0:43:57 complete transcript
0:43:58 of this episode.
0:43:59 And we will
0:44:00 be back soon
0:44:00 with another
0:44:01 new episode
0:44:02 of Freakonomics
0:44:03 Radio.
0:44:03 Until then,
0:44:04 take care of
0:44:04 yourself.
0:44:05 And if you
0:44:06 can, someone
0:44:07 else, too.
0:44:09 Freakonomics
0:44:09 Radio is produced
0:44:10 by Stitcher
0:44:11 and Renbud
0:44:11 Radio.
0:44:12 You can find
0:44:13 our entire
0:44:14 archive on
0:44:14 any podcast
0:44:15 app, also
0:44:17 at Freakonomics.com
0:44:18 where we publish
0:44:19 transcripts and
0:44:19 show notes.
0:44:21 This episode was
0:44:21 produced by
0:44:22 Matt Hickey
0:44:23 with an update
0:44:24 by Augusta
0:44:24 Chapman.
0:44:25 The Freakonomics
0:44:26 Radio network
0:44:27 staff also includes
0:44:28 Alina Kulman,
0:44:28 Dalvin
0:44:29 Aboagye,
0:44:30 Eleanor Osborne,
0:44:31 Ellen Frankman,
0:44:31 Elsa Hernandez,
0:44:32 Gabriel Roth,
0:44:33 Greg Rippin,
0:44:34 Jasmin Klinger,
0:44:35 Jeremy Johnston,
0:44:35 Jon Schnaars,
0:44:36 Morgan Levey,
0:44:37 Neal Carruth,
0:44:38 Sarah Lilley,
0:44:39 Theo Jacobs,
0:44:40 and Zack Lapinski.
0:44:41 Our theme song
0:44:42 is Mr. Fortune
0:44:43 by the Hitchhikers
0:44:43 and our composer
0:44:45 is Luis Guerra.
0:44:46 As always,
0:44:47 thanks for listening.
0:44:56 So you want to
0:44:56 talk scaling?
0:44:57 Wow,
0:44:57 it’s a heavy
0:44:58 paper, right?
0:44:58 It’s great.
0:44:59 I thought it
0:45:00 was about
0:45:01 scaling fish
0:45:01 initially,
0:45:03 so that was
0:45:04 all my
0:45:05 background reading.
0:45:05 Yeah,
0:45:06 so I don’t
0:45:06 know anything
0:45:07 about what
0:45:07 we’re going
0:45:08 to talk about
0:45:08 today.
0:45:10 Neither do I,
0:45:10 so we can
0:45:11 just both
0:45:11 wing it.
0:45:17 The Freakonomics
0:45:18 Radio Network,
0:45:19 the hidden
0:45:19 side of
0:45:20 everything.
0:45:24 Stitcher.
Why do so many promising solutions in education, medicine, and criminal justice fail to scale up into great policy? And can a new breed of “implementation scientists” crack the code?
- SOURCES:
- Patti Chamberlain, senior research scientist at the Oregon Social Learning Center.
- John List, professor of economics at the University of Chicago.
- Lauren Supplee, former deputy chief operating officer at Child Trends.
- Dana L. Suskind, professor of surgery at the University of Chicago.
- RESOURCES:
- “How Can Experiments Play a Greater Role in Public Policy? 12 Proposals from an Economic Model of Scaling,” by Omar Al-Ubaydli, John List, Claire Mackevicius, Min Sok Lee, and Dana Suskind.
- “The Science of Using Science: Towards an Understanding of the Threats to Scaling Experiments,” by Omar Al-Ubaydli, John List, and Dana Suskind (The Field Experiments Website, 2019).
- “Inconsistent Device Use in Pediatric Cochlear Implant Users: Prevalence and Risk Factors,” by K.B. Wiseman and A.D. Warner-Czyz (U.S. National Library of Medicine National Institutes of Health, 2018).
- EXTRAS:
- “Why Do Most Ideas Fail to Scale?” by Freakonomics Radio (2022).
- “The Price of Doing Business with John List,” by People I (Mostly) Admire (2022).
- Child Trends.
- Oregon Social Learning Center.
- T.M.W. Center for Early Learning and Public Health.
- The Field Experiments Website.