Summary & Insights
The image of a child hearing for the first time after receiving a cochlear implant is a powerful testament to human ingenuity, yet it masks a more pervasive problem: our frequent failure to get proven scientific advances to the people who need them. This episode delves into the stubborn gap between research and real-world impact, exploring why interventions that show spectacular results in controlled studies—from early childhood education programs to hypertension medication—so often fizzle when scaled up. The conversation frames this as a “scalability crisis,” where the bridge from a laboratory or field experiment to widespread policy is fraught with unseen obstacles.
Economist John List and pediatric surgeon Dana Suskind detail three primary reasons for these scaling failures. First, the initial evidence itself can be misleading, based on studies that are too small or not robust enough. Second, the research might have been conducted with the “wrong people”—a highly motivated, atypical group that doesn’t represent the broader population. Third, and most complex, is the “wrong situation,” where crucial conditions from the original experiment, like exceptionally talented implementers or an ideal environment, cannot be replicated at a larger scale. This often leads to a “voltage drop,” where the effect of the program diminishes dramatically.
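The “wrong people” problem and the resulting voltage drop are easy to see in a toy simulation. The sketch below is illustrative only, with made-up effect sizes: a pilot that enrolls only the most motivated 5% of a population will overstate the effect the program has when it reaches everyone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: individual treatment effects correlate with "motivation".
n_pop = 100_000
motivation = rng.normal(0, 1, n_pop)
effect = 0.2 + 0.3 * motivation + rng.normal(0, 0.5, n_pop)  # heterogeneous effects

# Pilot: only the most motivated 5% volunteer (the "wrong people" problem).
volunteers = motivation > np.quantile(motivation, 0.95)
pilot_estimate = effect[volunteers].mean()

# At scale: the program reaches the whole population.
scaled_effect = effect.mean()

print(f"Pilot estimate: {pilot_estimate:.2f}")
print(f"Scaled effect:  {scaled_effect:.2f}")
print(f"Voltage drop:   {1 - scaled_effect / pilot_estimate:.0%}")
```

With these (invented) parameters the pilot estimate is roughly four times the population-wide effect—the same pattern List and Suskind describe, produced by nothing more than who shows up.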
The discussion argues that moving a program to scale is not the final step of research, but a new phase of science unto itself—often called implementation science. It requires planning for real-world friction from the very beginning, such as by running initial trials in realistic settings and meticulously measuring “fidelity” to the original program’s active ingredients. The path forward involves a cultural shift in academia and policy, incentivizing multiple independent replications of a study before scaling and rewarding researchers for building interventions designed to survive contact with complex, human systems.
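To make “measuring fidelity” concrete, here is a minimal sketch with hypothetical active ingredients and site names: each session log is scored by the share of required components actually delivered, and sites that drift below a threshold are flagged for retraining.

```python
from dataclasses import dataclass

# Hypothetical "active ingredients" every site must deliver each session.
ACTIVE_INGREDIENTS = {"curriculum_module", "parent_coaching", "home_visit"}

@dataclass
class SessionLog:
    site: str
    delivered: set[str]  # components actually delivered in the session

def fidelity(logs: list[SessionLog]) -> dict[str, float]:
    """Average share of required components delivered, per site."""
    scores: dict[str, list[float]] = {}
    for log in logs:
        share = len(log.delivered & ACTIVE_INGREDIENTS) / len(ACTIVE_INGREDIENTS)
        scores.setdefault(log.site, []).append(share)
    return {site: sum(s) / len(s) for site, s in scores.items()}

logs = [
    SessionLog("site_a", {"curriculum_module", "parent_coaching", "home_visit"}),
    SessionLog("site_a", {"curriculum_module", "parent_coaching"}),
    SessionLog("site_b", {"curriculum_module"}),
]

for site, score in fidelity(logs).items():
    flag = "  <- flag for retraining" if score < 0.8 else ""
    print(f"{site}: {score:.0%}{flag}")
```

The threshold and component list are placeholders; the point is simply that fidelity becomes actionable once it is a number tracked per site rather than an aspiration.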
Surprising Insights
- Medical non-adherence applies even to “miraculous” technology: Children with cochlear implants, which can restore hearing, often don’t consistently wear the external processor, highlighting that the challenge isn’t just access to technology but human behavior.
- A successful program can fail at scale for the simplest reason: A highly effective parent academy in Chicago failed when replicated in London not because the curriculum was wrong, but because not enough parents signed up—a basic uptake problem overlooked in the research phase.
- Good intentions can create systemic barriers: When different government agencies (child welfare, juvenile justice, mental health) all fund a single program, their conflicting policies and procedures can make implementation impossible, turning collaboration into a hurdle.
- The “magic” of a successful program can be its biggest flaw: If a program’s success depends on the unique charisma or skill of its founder (like a brilliant chef), it is inherently unscalable. Scalability requires transferring a “secret sauce” that others can replicate.
Practical Takeaways
- Demand multiple replications before scaling: Policymakers and funders should require several independent, well-powered replications of a study’s results in different contexts before investing in a large-scale rollout (a power-calculation sketch follows this list).
- Design studies with scale in mind from the start: Researchers should run initial trials in realistic, community-based settings with typical staff, not in ideal university-run clinics, to uncover feasibility problems early.
- Systematically measure fidelity and adapt: When scaling, continuously measure whether the program is being delivered as intended (fidelity) and have a plan to retrain or correct course, but also remain humble and open to adapting the model for new contexts.
- Incentivize replication in academia: Academic institutions should reward researchers for conducting replication studies and for producing original work that others can successfully replicate, tying these activities to tenure and funding decisions.
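On the first takeaway, a replication is only informative if it is powered to detect a realistic effect. A minimal sketch, assuming hypothetical effect sizes and using statsmodels’ standard two-sample power solver: because effects typically shrink at scale, powering the replication for an effect smaller than the pilot’s estimate guards against the winner’s-curse inflation of the original result.

```python
import math
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()

# Suppose the original study reported Cohen's d = 0.5. Effects tend to
# shrink on replication, so we also power for a more conservative d = 0.25.
for d in (0.5, 0.25):
    n = solver.solve_power(effect_size=d, alpha=0.05, power=0.8,
                           alternative="two-sided")
    print(f"d = {d}: about {math.ceil(n)} participants per arm")
```

Halving the assumed effect roughly quadruples the required sample (about 64 versus 253 per arm here), which is why underpowered “confirmations” of optimistic pilot estimates so often mislead.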
Why do so many promising solutions — in education, medicine, criminal justice, etc. — fail to scale up into great policy? And can a new breed of “implementation scientists” crack the code?


