AI transcript
0:00:04 Hey there, it’s Stephen Dubner.
0:00:09 This is the second and final part of a series we are revisiting from last year.
0:00:19 Stick around for an update at the end of the episode.
0:00:23 Last week’s episode was called “Why Is There So Much Fraud in Academia?”
0:00:29 We heard about the alleged fraudsters, we heard about the whistleblowers, and then a lawsuit
0:00:31 against the whistleblowers.
0:00:38 My very first thoughts were like, oh my god, how’s anyone going to be able to do this again?
0:00:43 We heard about feelings of betrayal from a co-author who was also a long-time friend
0:00:45 of the accused.
0:00:52 We once even got to the point of our two families making an offer to a developer on a project
0:00:55 to have houses connected to each other.
0:01:01 We also heard an admission from inside the house that the house is on fire.
0:01:05 If you were just a rational agent acting in the most self-interested way possible as
0:01:09 a researcher in academia, I think you would cheat.
0:01:14 That episode was a little gossipy, for us at least.
0:01:20 Today we are back to wonky, but don’t worry, it is still really interesting.
0:01:26 Today we look into the academic research industry, and believe me, it is an industry.
0:01:30 And there is misconduct everywhere, starting with the universities.
0:01:36 The most likely career path for anyone who has committed misconduct is a long and fruitful
0:01:41 career because most people, if they’re caught at all, they skate.
0:01:46 There’s misconduct at academic journals, some of which are essentially fake.
0:01:50 It may sound a lot less nefarious than what I just described, but
0:01:53 that is actually what’s happening.
0:01:59 And we’ll hear how the rest of us contribute, because after all, we love these research
0:02:00 findings.
0:02:04 You know, you wear red, you must be angry, or if it says that this is definitely a cure
0:02:05 for cancer.
0:02:09 We’ll also hear from the reformers who are trying to push back.
0:02:13 It was a tense few months, but in the end, I was allowed to continue doing what I was
0:02:14 doing.
0:02:16 Can academic fraud be stopped?
0:02:27 Let’s find out.
0:02:32 This is Freakonomics Radio, the podcast that explores the hidden side of everything with
0:02:44 your host, Stephen Dubner.
0:02:50 Last week, we heard about two alleged cases of data fraud from two separate researchers
0:02:51 in one paper.
0:02:57 The paper claimed that if you ask people to sign a form at the top before they fill out
0:03:03 the information, you will get more truthful answers than if they sign at the bottom.
0:03:09 After many unsuccessful attempts to replicate this finding and allegations that the data
0:03:14 supporting it had been faked, the original paper was finally retracted.
0:03:20 The two alleged fraudsters are Dan Ariely of Duke and Francesca Gino, who has been suspended
0:03:22 by Harvard Business School.
0:03:27 Gino subsequently sued Harvard, as well as the three other academic researchers who blew
0:03:28 the whistle.
0:03:32 The lawsuit against the researchers was dismissed.
0:03:35 The whistleblowers maintain a blog called Data Colada.
0:03:40 They have written that they believe there is fake data in many of the papers that Francesca
0:03:43 Gino coauthored, perhaps dozens.
0:03:48 Gino and Ariely, meanwhile, both maintain their innocence.
0:03:52 They also both declined our request for an interview.
0:03:56 On that one paper that caused all the trouble about signing at the top, there were three
0:04:01 other coauthors, Lisa Shu, Nina Mazar, and Max Bazerman.
0:04:05 None of them have been accused of any wrongdoing.
0:04:10 So let’s pick up where we left off with Max Bazerman, the most senior researcher on that
0:04:11 paper.
0:04:13 He also teaches at Harvard Business School.
0:04:19 When there’s somebody who engages in bad behavior, there’s always people around who
0:04:23 could have noticed more and acted more.
0:04:26 Bazerman was close with Francesca Gino.
0:04:29 He had been her advisor and he trusted her.
0:04:34 So he has been spending a lot of time lately thinking about the mess.
0:04:40 He recently published a book called Complicit, How We Enable the Unethical and How to Stop,
0:04:45 and he’s working on another book about social science fraud.
0:04:48 This has led him to consider what makes people cheat.
0:04:54 Let’s take the case of Diederik Stapel, a Dutch professor of social psychology, who
0:05:02 after years of success admitted to fabricating and manipulating data in dozens of studies.
0:05:09 Part of the path toward data fabrication occurred because he liked complex ideas, and
0:05:16 academia didn’t like complex ideas as much as they liked the snappy sort of quick bait,
0:05:22 and that moved him in that direction and also put him on the path toward fraudulent behavior.
0:05:28 So here’s something that Stapel wrote later when he wrote a book of confession essentially
0:05:29 about his fraud.
0:05:35 He wrote, “I was doing fine, but then I became impatient, overambitious, reckless.
0:05:39 I wanted to go faster and better and higher and smarter all the time.
0:05:43 I thought it would help if I just took this one tiny little shortcut, but then I found
0:05:46 myself more and more often in completely the wrong lane in the end.
0:05:49 I wasn’t even on the road at all.”
0:05:54 What struck me about that, and I think about that with Dan Ariely and Francesca Gino as
0:05:59 well, which is that the people who have been accused of having committed academic fraud
0:06:05 are really successful already, and I’m curious what that tells you about either the stakes
0:06:11 or the incentives or maybe the psychology of how this happens, because honestly, it surprises
0:06:12 me.
0:06:18 I would say we don’t know that much about why the fraudsters do what they do.
0:06:22 And the most interesting source you just mentioned, so Stapel wrote a book in Dutch
0:06:29 called “Ontsporing,” which means something like “Derailment,” where he provides his information,
0:06:33 and he goes on from the material you talked about to describe how he became like an
0:06:36 alcoholic or a heroin addict.
0:06:41 And he got used to the easy successes, and he began to believe that he wasn’t doing
0:06:42 any harm.
0:06:48 After all, he was just making it easier to publish information that was undoubtedly true.
0:06:55 So this aspect of sort of being lured onto the path of unethical behavior followed by
0:07:00 addictive-like behavior becomes part of the story, and Stapel goes on to talk about lots
0:07:06 of other aspects like the need to score, ambition, laziness, wanting power, status.
0:07:12 So he provides this good insight, but most of the admitted fraudsters or the people who
0:07:18 have lost their university positions based on allegations of fraud have simply disappeared
0:07:20 and have never talked about it.
0:07:25 One of the interesting parts is that Marc Hauser, who resigned from Harvard, and Ariely
0:07:32 and Gino, who are alleged to have committed fraud by some parties, all three of them wrote
0:07:39 on the topic of moral behavior and specifically why people might engage in bad behavior.
0:07:40 That’s right.
0:07:46 A lot of the fraud and suspected fraud comes from researchers who explore fraud.
0:07:54 In 2012, Francesca Gino and Dan Ariely collaborated on another paper called “The Dark Side of Creativity:
0:07:57 Original Thinkers Can Be More Dishonest.”
0:08:01 They wrote, “We propose that a creative personality and a creative mindset promote
0:08:09 individuals’ ability to justify their behavior, which in turn leads to unethical behavior.”
0:08:15 So just how much unethical behavior is there in the world of academic research?
0:08:21 That’s a hard question to answer precisely, but let’s start with this man.
0:08:26 I essentially spend all of my nights and weekends thinking about scientific fraud, scientific
0:08:28 misconduct, scientific integrity for that matter.
0:08:35 Ivan Oransky is a medical doctor and editor of a neuroscience publication called The Transmitter.
0:08:41 He’s also a distinguished journalist in residence at NYU, and on the side, he runs a website
0:08:43 called Retraction Watch.
0:08:49 We hear from whistleblowers all the time, people we call sleuths, who are actually out there
0:08:53 finding these problems, and often that’s pre-retraction, or they’ll explain to us why a
0:08:54 retraction happened.
0:08:57 We also do things like file public records requests.
0:09:02 He began Retraction Watch in 2010 with Adam Marcus, another science journalist.
0:09:08 Marcus had broken a story about a Massachusetts anesthesiologist named Scott Reuben.
0:09:13 Reuben had received funding from several drug companies to conduct clinical trials, but
0:09:18 instead he faked the data and published results without running the trials.
0:09:22 I went to Adam and I said, “What if we create a blog about this?”
0:09:26 It seems like there are all these stories that are hiding in plain sight that essentially
0:09:30 we and other journalists are leaving on the table.
0:09:37 When we looked at the actual retraction notices, the information was somewhere between misleading
0:09:38 and opaque.
0:09:40 What do you mean by that?
0:09:46 When a paper is retracted and it’s probably worth defining that, a retraction is a signal
0:09:51 to the scientific community or really to any readers of a particular peer-reviewed journal
0:09:58 article that you should not rely on that anymore, that there’s something about it that means
0:10:03 you should … you can’t pretend it doesn’t exist, but you shouldn’t base any other work
0:10:04 on it.
0:10:09 But when you call it misleading or opaque, you’re saying the explanation for the retraction
0:10:12 is often not transparent.
0:10:16 When you retract a paper, you’re supposed to put a retraction notice on it the same way
0:10:19 when you correct an article in the newspaper, you’re supposed to put a correction notice
0:10:21 on it.
0:10:25 When you actually read these retraction notices, and to be fair, this has changed a fair amount
0:10:30 in the 13 years that we’ve been doing this, sometimes they include no information at all.
0:10:34 Sometimes they include information that is woefully incomplete.
0:10:39 Sometimes it’s some version of getting Al Capone on tax evasion.
0:10:45 They fake the data, but we’re going to say they forgot to fill out this form, which is
0:10:50 still a reason to retract, but isn’t the whole story.
0:10:56 Let’s say we back up and I ask you, how significant or widespread is the problem of, we’ll call
0:10:58 it academic fraud?
0:11:07 We think that probably 2% of papers should be retracted for something that would be considered
0:11:10 either out-and-out fraud or maybe just a severely bad mistake.
0:11:18 According to our data, and we have the most retraction data of any database, about 0.1%
0:11:23 of the world’s literature is retracted, so one in a thousand papers.
0:11:26 We think it should be about 20 times that, about 2%.
0:11:29 There’s a bunch of reasons, but they come down to one.
0:11:34 There was a survey back in 2009, which has been repeated and done differently and come
0:11:40 up with roughly the same number, actually even higher numbers recently, that says 2% of researchers,
0:11:44 if you ask them anonymously, will say, yes, I’ve committed something that would be
0:11:45 considered misconduct.
0:11:48 Of course, when you ask them how many people they know who have committed misconduct, it
0:11:52 goes much, much higher than that.
0:11:54 That’s one line of evidence, which is admittedly indirect.
0:11:59 The other is that when you talk to the sleuths, the people doing the real work of figuring
0:12:04 out what’s wrong with literature and letting people know about it, they keep lists of papers
0:12:09 they’ve flagged for publishers and for authors and journals, and routinely, most of them are
0:12:10 not retracted.
0:12:12 Again, we came to 2%.
0:12:15 Is it exactly 2% and is that even the right number?
0:12:18 No, we’re pretty sure that’s the lower bound.
0:12:23 Others say it should be even higher.
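The gap Oransky describes is easy to put into numbers. Here is a minimal back-of-the-envelope sketch in Python; the 4 million articles-per-year figure is the one cited later in this episode, and the two rates are the ones he just gave.

```python
# Back-of-the-envelope check of Oransky's retraction estimates.
# Inputs are the figures cited in the episode; the rest is arithmetic.

papers_per_year = 4_000_000   # articles published annually (cited later in the episode)
observed_rate = 0.001         # ~0.1% of the literature actually gets retracted
estimated_rate = 0.02         # the ~2% Retraction Watch thinks should be retracted

print(f"Actually retracted:  ~{papers_per_year * observed_rate:,.0f} papers per year")
print(f"Should be retracted: ~{papers_per_year * estimated_rate:,.0f} papers per year")
print(f"Gap factor: {estimated_rate / observed_rate:.0f}x")  # the "20 times" he mentions
```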
0:12:28 Retraction Watch has a searchable database that includes more than 45,000 retractions from
0:12:32 journals in just about any academic field you can imagine.
0:12:38 They also post a leaderboard, a ranking of the researchers with the most papers retracted.
0:12:44 At the top of that list is another anesthesiologist, this one a German researcher named Joachim
0:12:45 Boldt.
0:12:49 He came up briefly in last week’s episode too.
0:12:52 Boldt has had more than 200 papers retracted.
0:12:57 Boldt, an anesthesiology researcher, was studying something called hetastarch, which was essentially
0:12:58 a blood substitute.
0:13:05 Not exactly blood, but something that when you were on a heart-lung pump, a machine during
0:13:08 certain surgeries or you’re in the ICU or something like that, it would basically cut
0:13:14 down on the amount of blood transfusions people would need, and that’s got obvious benefits.
0:13:20 Now he did a lot of the important work in that area, and his work was cited in all the guidelines.
0:13:23 It turned out that he was faking data.
0:13:29 Boldt was caught in 2010 after an investigation by a German state medical association.
0:13:34 The method he had been promoting was later found to be associated with a significant
0:13:37 increased risk of mortality.
0:13:43 So in this case, the fraud led to real danger, and what happened to Boldt?
0:13:47 He at one point was under criminal indictment, or at least criminal investigation.
0:13:48 That didn’t go anywhere.
0:13:54 The hospital, the clinic also, which, to be fair, had actually identified a lot of problems,
0:14:00 came under pretty severe scrutiny, but in terms of actual sanctions, pretty minimal.
0:14:03 You’ve written that universities protect fraudsters.
0:14:06 Can you give an example other than Boldt, let’s say?
0:14:09 So universities, they protect fraudsters in a couple of different ways.
0:14:13 One is that they are very slow to act, they’re very slow to investigate, and they keep all
0:14:16 of those investigations hidden.
0:14:22 The other is that because lawyers run universities, like they, frankly, run everything else, they
0:14:25 tell people who are involved in investigations not to talk.
0:14:31 If someone calls for a reference letter, let’s say someone leaves, and they haven’t quite
0:14:36 been found guilty, but as a plea-bargain sort of thing, they will leave, and then the university will
0:14:38 stop the investigation.
0:14:42 Then when someone calls for a reference, and we actually have the receipts on this because
0:14:49 we filed public records requests for emails between different parties, we learned that
0:14:53 they would be routinely told not to say anything about the misconduct.
0:14:58 Let’s just take three of the most recent high profile cases of academic fraud or accusations
0:14:59 of academic fraud.
0:15:04 We’ve got Francesca Gino, who was a psychology researcher at Harvard Business School, Dan
0:15:08 Ariely, who’s a psychologist at Duke, and then Marc Tessier-Lavigne, who was president
0:15:12 of Stanford and a medical researcher, three pretty different outcomes, right?
0:15:18 Tessier-Lavigne was defenestrated from his presidency, Gino was suspended by HBS, and Dan Ariely,
0:15:23 who’s had accusations lobbed at him for years now, is just kind of going on.
0:15:26 Can you just comment on that heterogeneity?
0:15:31 So I would just, not so much as a correction, but just say that, yes, Marc Tessier-Lavigne
0:15:33 was defenestrated as president.
0:15:38 He remains, at least at the time of this discussion, a tenured professor at Stanford, which is
0:15:42 a pretty nice position to be in.
0:15:47 A Stanford report found that Tessier-Lavigne didn’t commit fraud or falsify any of his
0:15:54 data, although work in his labs, quote, “fell below customary standards of scientific rigor,”
0:16:00 and multiple members of his labs appear to have manipulated research data.
0:16:05 Dan Ariely also remains a professor at Duke, and Duke has declined to comment publicly
0:16:10 on whether an investigation into the allegations of data fraud even occurred.
0:16:15 As Ivan Oransky told us, universities are run by lawyers.
0:16:21 I’ve been quoted saying that the most likely career path or the most likely outcome for
0:16:26 anyone who has committed misconduct is a long and fruitful career.
0:16:30 And I mean that because it’s true, because most people, if they’re caught at all, they
0:16:31 skate.
0:16:36 The number of cases we write about grows every year, but is still a tiny fraction
0:16:43 of what’s really going on. Dan Ariely, we interviewed Dan years ago about some questions
0:16:45 in his research.
0:16:49 Duke is actually, I would argue, a little bit of a singular case.
0:16:58 Duke in 2019 settled with the US government for $112.5 million because they were repeatedly
0:17:06 alleged to have covered up really significant misconduct.
0:17:08 Duke has had particular trouble with medical research.
0:17:14 There was one physician researcher who faked the data in his cancer research.
0:17:20 There were allegations of federal grant money being mishandled, also failing to protect patients
0:17:22 in some studies.
0:17:28 At one point, the National Institutes of Health placed special sanctions on all Duke researchers.
0:17:34 So I’ve got to be honest, and people in Durham may not like me saying this, but I think Duke
0:17:39 has a lot of work to do to demonstrate that their investigations are complete and that
0:17:43 they are doing the right thing by research dollars and for patients.
0:17:48 Most of the researchers we have been talking about are already well-established in their
0:17:49 fields.
0:17:54 If they feel pressure to get a certain result, it’s probably the kind of pressure that comes
0:17:59 with further burnishing your high status, but junior researchers face a different kind
0:18:01 of pressure.
0:18:04 They need published papers to survive.
0:18:06 Publish or perish is the old saying.
0:18:10 If they can’t consistently get their papers in a good journal, they probably won’t get
0:18:17 tenure and they may not even have a career, and those journals are flooded with submissions.
0:18:22 So the competition is fierce and it is global.
0:18:27 This is easy to miss if you’re in the US since so many of the top universities and journals
0:18:34 are here, but academic research is very much a global industry and it’s also huge.
0:18:44 Even if you are a complete nerd and you could name 50 journals in your field, you know nothing.
0:18:50 Every year, more than 4 million articles are published in somewhere between 25,000 and 50,000
0:18:55 journals, depending on how you count, and that number is always growing.
0:19:02 You have got journals called Aggressive Behavior and Frontiers in Ceramics, Neglected Tropical
0:19:10 Diseases, and Rangifer, which covers the biology and management of reindeer and caribou.
0:19:16 There used to be a Journal of Mundane Behavior, but that one, sadly, is defunct.
0:19:21 But no matter how many journals there are, there is never enough space for all the papers.
0:19:27 And this has led to some, well, let’s call it scholarly innovation.
0:19:30 Here again is Ivan Oransky.
0:19:34 People are now really fixated on what are known as paper mills.
0:19:41 So if you think about the economics of this, right, it is worthwhile if you are a researcher
0:19:47 to actually pay, in other words, it’s an investment in your own future, to pay to publish a certain
0:19:48 paper.
0:19:53 What I’m talking about is literally buying a paper or buying authorship on a paper.
0:19:58 So to give you a little bit of a sense of how this might work, you’re a researcher who
0:20:00 is about to publish a paper.
0:20:06 So Dubner et al. have got some interesting finding that you’ve actually written up and
0:20:08 a journal has accepted.
0:20:10 It’s gone through peer review.
0:20:13 And you’re like, great, and that’s good for you.
0:20:17 But you actually also want to make a little extra money on the side.
0:20:22 So you take that paper, though essentially it’s not a paper, it’s really still a manuscript.
0:20:26 You put it up on a brokerage site, or you put the title up on a brokerage site.
0:20:31 You say, I’ve got a paper where, you know, there are four authors right now.
0:20:35 It’s going into this journal, which is a top tier or mid tier, whatever it is, et cetera.
0:20:37 It’s in this field.
0:20:42 If you would like to be an author, the bidding starts now, or it’s just, it’s $500 or 500
0:20:43 euros or whatever it is.
0:20:47 And then Ivan Oransky comes along and says, I need a paper.
0:20:49 I need to get tenure.
0:20:50 I need to get promoted.
0:20:51 Oh, great.
0:20:53 Let me click on this, you know, brokerage site.
0:20:55 Let me give you $500.
0:21:01 And now all of a sudden, you write to the journal, by the way, I have a new author.
0:21:02 His name is Ivan Oransky.
0:21:03 He’s at New York University.
0:21:07 He just joined late, but he’s been so invaluable to the process.
0:21:10 Now what do the other co-authors have to say about this?
0:21:13 Often they don’t know about it, or at least they claim they don’t know about it.
0:21:18 What about fraudulent journals, do those exist, or fraudulent publication sites that look
0:21:22 legit enough to pass muster for somebody’s department?
0:21:23 They do.
0:21:28 There are all different versions of fraudulent with a lower case F publications.
0:21:32 There are publications that are legit in the sense that they can point to doing all the
0:21:34 things that you’re supposed to do as a journal.
0:21:36 You have papers submitted.
0:21:38 You do something that looks sort of like peer review.
0:21:42 You assign it what’s known as a digital object identifier.
0:21:44 You do all that publishing stuff.
0:21:48 And they’re not out and out fraudulent in the sense of they don’t exist and people are
0:21:52 just making it up, or they’re trying to, you know, use a name that isn’t really theirs.
0:21:55 But they’re fraudulent with a lower case F in the sense that they’re not doing most
0:21:57 of those things.
0:22:01 Then there are actual, what we refer to, we and Anna Abalkina, who works with us on this,
0:22:03 as hijacked journals.
0:22:06 We have more than 200 on this list now.
0:22:08 They were at one point legitimate journals.
0:22:14 So it’s a real title that some university or funding agency or et cetera will actually
0:22:15 recognize.
0:22:22 But what happened was some version of the publisher sort of forgot to renew their domain.
0:22:25 I mean, literally something like that.
0:22:29 Now, there are more nefarious versions of it, but it’s that sort of thing where, you know,
0:22:35 these really bad players are inserting themselves and taking advantage of the vulnerabilities
0:22:40 in the system, of which there are many, to really print money because then they can get
0:22:44 people to pay them to publish in those journals and they even are getting indexed in the places
0:22:45 that matter.
0:22:53 It sounds like there are thousands of people who are willing to pay journals that are quasi-real
0:22:56 real to publish their papers.
0:22:58 At least thousands, yes.
0:23:01 Who are the kind of authors who would publish in those journals?
0:23:06 Are they American, not American, or their particular fields or disciplines that happen
0:23:08 to be most common?
0:23:12 Well, I think what it tends to correlate with is how direct or intense the publish-or-perish
0:23:16 culture is in that particular area.
0:23:21 And generally that varies more by country or region than anything else.
0:23:25 If you look at, for example, the growth in China of the number of papers published, and what’s
0:23:30 calculated as the impact of those papers, which relies on things like how often they’re
0:23:31 cited.
0:23:37 You can trace that growth very directly to government mandates.
0:23:43 For example, if you published in certain journals with what’s known as a high impact factor, you actually
0:23:48 got a cash bonus that was a sort of multiple of the impact factor.
0:23:53 And that can make a big difference in your life.
0:23:59 Recent research by John Ioannidis, Thomas Collins, and Jeroen Baas has examined what they call
0:24:02 extremely productive authors.
0:24:07 They left out physicists since some physics projects are so massive that one paper can
0:24:10 have more than 5,000 authors.
0:24:14 So leaving aside the physicists, they found that over the course of one year, more than
0:24:21 1,200 researchers around the world publish the equivalent of one paper every five days.
0:24:28 The top countries for these extremely productive authors were China, the US, Saudi Arabia,
0:24:30 Italy, and Germany.
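To make that productivity figure concrete, here is a quick sketch of the arithmetic, using only the numbers just cited.

```python
# What "one paper every five days" implies over a year, per the
# Ioannidis, Collins, and Baas figures cited above.

days_per_year = 365
days_per_paper = 5

papers_per_author = days_per_year / days_per_paper
print(f"~{papers_per_author:.0f} papers per author per year")  # ~73

# With more than 1,200 such authors, that is on the order of:
print(f"~{1_200 * papers_per_author:,.0f} papers per year from this group alone")
```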
0:24:35 When there is so much research being published, you’d also expect that the vast majority of
0:24:38 it is never read by more than a handful of people.
0:24:43 There’s also the problem, as the economics blogger Noah Smith recently pointed out, that
0:24:50 too much academic research is just useless, at least to anyone beyond the author.
0:24:53 But you shouldn’t expect any of this to change.
0:24:58 Global scholarly publishing is a $28 billion market.
0:25:04 Everyone complains about the very high price of journal subscriptions, but universities
0:25:07 and libraries are essentially forced to pay.
0:25:12 There is, however, another business model in the research paper industry, and it is
0:25:13 growing fast.
0:25:16 It’s called Open Access Publishing.
0:25:22 And here it’s most often the authors who pay to get into the journal.
0:25:27 If you think that sounds problematic, well, yes.
0:25:31 Consider Hindawi, an Egyptian publisher with more than 200 journals, including the Journal
0:25:37 of Combustion, the International Journal of Ecology, and the Journal of Advanced Transportation.
0:25:41 Most of their journals were not at all prestigious, but that doesn’t mean they weren’t lucrative.
0:25:47 A couple years ago, Hindawi was bought by John Wiley and Sons, the huge American academic
0:25:50 publisher, for nearly $300 million.
0:25:55 So Hindawi’s business model was they’re an open access publisher, which usually means
0:26:00 you charge authors to publish in your journal, and you charge them, you know, it could be
0:26:05 anywhere from hundreds of dollars to even thousands of dollars per paper, and they’re
0:26:09 publishing, you know, tens of thousands and sometimes even more papers per year.
0:26:11 So you can start to do that math.
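Here is a rough sketch of “that math.” The per-paper charge and the paper count are illustrative values within the ranges Oransky describes, not Hindawi’s actual figures.

```python
# Doing "that math" on the open-access business model described above.
# These are hedged, illustrative inputs, not Hindawi's real numbers.

apc_low, apc_high = 500, 3_000   # article processing charge, dollars per paper
papers_per_year = 20_000         # "tens of thousands" of papers per year

print(f"Implied revenue: ${apc_low * papers_per_year:,} "
      f"to ${apc_high * papers_per_year:,} per year")
# Implied revenue: $10,000,000 to $60,000,000 per year
```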
0:26:17 What happened at Hindawi was that somehow, paper mills realized that they were vulnerable.
0:26:19 So they started targeting them.
0:26:24 They actually started paying some of these editors to accept papers from their paper mills.
0:26:31 And long story short, they now have had to retract something like, we’re still figuring
0:26:34 out the exact numbers when the dust settles, but in the thousands.
0:26:40 In 2023, Hindawi retracted more than 8,000 papers.
0:26:45 That was more retractions than there had ever been in a year from all academic publishers
0:26:46 combined.
0:26:52 Wiley recently announced that they will stop using the Hindawi brand name, and they folded
0:26:55 the remaining Hindawi journals into their portfolio.
0:26:59 But they are not getting out of the pay to publish business.
0:27:02 Publishers earn more from publishing more.
0:27:03 It’s a volume play.
0:27:07 And when you’re owned by, you know, shareholders who want growth all the time, that is the
0:27:09 best way to grow.
0:27:14 And these are businesses with, you know, very impressive and enviable profit margins of,
0:27:17 you know, sometimes up to 40%.
0:27:18 And these are not on small numbers.
0:27:23 The profit itself is in the billions often.
0:27:28 It’s hard to blame publishers for wanting to earn billions from an industry with such
0:27:30 bizarre incentives.
0:27:36 But if publishers aren’t looking out for the integrity of the research, who is?
0:27:37 That’s coming up after the break.
0:27:40 I’m Stephen Dubner, and this is Freakonomics Radio.
0:27:56 We feel like, when we’re in cultures, that there is no way for any of us to change the culture.
0:27:57 It’s a culture.
0:28:00 My God, how could we change it?
0:28:06 But we also recognize that cultures are created by the people that comprise them.
0:28:12 And the notion that we collectively can actually do something to shift the research culture
0:28:19 I think has spread, and that spreading has actually accelerated the change of the research
0:28:21 culture for the better.
0:28:23 That is Brian Nosek.
0:28:27 He’s a psychology professor at the University of Virginia, and he runs the Center for Open
0:28:29 Science.
0:28:34 For more than a decade, his center has been trying to get more transparency in academic
0:28:35 research.
0:28:41 You might think there would already be transparency in academic research, at least I did.
0:28:45 But here’s what Nosek said in part one of the series, when we were talking about how
0:28:49 researchers tend to hoard their data rather than share it.
0:28:54 Yeah, it’s based on the academic reward system.
0:28:56 Publication is the currency of advancement.
0:29:02 I need publications to have a career, to advance my career, to get promoted.
0:29:08 And so the work that I do that leads to publication, I have a very strong sense of, “Oh my gosh,
0:29:15 if others now have control over this, my ideas, my data, my designs, my solutions, then I
0:29:18 will disadvantage my career.”
0:29:22 I asked Nosek how he thinks this culture can be changed.
0:29:27 So for example, we have to make it easy for researchers to be more transparent with their
0:29:28 work.
0:29:32 If it’s really hard to share your data, then adding on that extra work is going to slow
0:29:34 down my progress.
0:29:37 We have to make it normative.
0:29:41 People have to be able to see that others in their community are doing this.
0:29:42 They’re being more transparent.
0:29:46 They’re being more rigorous, so that instead of saying, “Oh, those are great ideas and nobody
0:29:49 does it,” you say, “Oh, there’s somebody over there that’s doing it.
0:29:51 Oh, maybe I could do it too.”
0:29:53 We have to deal with the incentives.
0:29:58 Is it actually relevant for my advancement in my career to be transparent, to be rigorous,
0:30:00 to be reproducible?
0:30:03 And then we have to address the policy framework.
0:30:09 If it’s not embedded in how it is that funders decide who to fund, institutions decide who
0:30:15 to hire, and journals decide what to publish, then it’s not going to be internally and completely
0:30:17 embedded in the system.
0:30:18 Okay.
0:30:20 So that is a lot of change.
0:30:24 Here’s one problem Nosek and his team are trying to address.
0:30:30 Some researchers will cherry-pick or otherwise manipulate their data or find ways to goose
0:30:34 their results to make sure they come up with a finding that will capture the attention
0:30:36 of journal editors.
0:30:42 So Nosek’s team created a software platform called the Open Science Framework where researchers
0:30:49 can pre-register their project and their hypothesis before they start collecting data.
0:30:50 Yeah.
0:30:56 So the idea is you register your designs and you’ve made that commitment in advance.
0:30:59 And then as you’re carrying out the research, if things change along the way, which happens
0:31:02 all the time, you can update that registration.
0:31:04 You could say, “Here’s what’s changing.”
0:31:08 We didn’t anticipate that going into this community was going to be so hard and here’s
0:31:10 how we had to adapt.
0:31:11 That’s fine.
0:31:12 You should be able to change.
0:31:17 You just have to be transparent about those changes so that the reader can evaluate.
0:31:20 And then those data are time-stamped together?
0:31:21 Exactly.
0:31:22 Yeah.
0:31:23 You put your data and your materials.
0:31:25 If you did a survey, you add the surveys.
0:31:28 If you did behavioral tasks, you can add those.
0:31:33 So all of that stuff can be attached then to the registration so that you have a more
0:31:36 comprehensive record of what it is you did.
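To make the mechanics concrete, here is a minimal, purely illustrative sketch of what a pre-registration record along the lines Nosek describes might hold. This is not the Open Science Framework’s actual data model; every field name here is hypothetical.

```python
# Illustrative sketch only: NOT the Open Science Framework's real schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Preregistration:
    hypothesis: str                 # the claim committed to in advance
    design: str                     # the planned study design
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    amendments: list = field(default_factory=list)   # time-stamped changes
    attachments: list = field(default_factory=list)  # surveys, task code, data

    def amend(self, note: str) -> None:
        # Changes are allowed; they just have to be transparent and time-stamped.
        self.amendments.append((datetime.now(timezone.utc), note))

reg = Preregistration(
    hypothesis="Signing at the top increases honest reporting",
    design="Two-condition randomized experiment, N = 200",
)
reg.amend("Recruiting in this community was harder than expected; N reduced to 150")
reg.attachments.append("survey_v1.pdf")
```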
0:31:41 It sounds like you’re basically raising the cost of sloppiness or fraud, yes?
0:31:44 It makes fraud more inconvenient.
0:31:46 And that’s actually a reasonable intervention.
0:31:52 I don’t think any intervention that we could design could prevent fraud in a way that doesn’t
0:31:55 stifle actual legitimate research.
0:32:00 We just want to make visible all the things that legitimate researchers are doing so that
0:32:04 someone that doesn’t want to do that extra work has a harder time.
0:32:08 And eventually, if everything is exposed, then the person who would be motivated to
0:32:12 do fraud might say, “Well, it’s just as easy to do the research the real way.
0:32:16 So I guess I’ll do that.”
0:32:19 The idea of pre-registration isn’t new.
0:32:22 It goes back to at least the late 19th century.
0:32:25 But there is a big appetite for it now.
0:32:28 The Data Colada team came up with their own version.
0:32:33 Here is Uri Simonsohn from the Esade Business School in Barcelona.
0:32:35 It’s a platform that we launched.
0:32:37 It’s called AsPredicted.
0:32:39 And it’s basically eight questions that people answer.
0:32:43 Co-authors sign on it, it’s time-stamped, and you can show the PDF.
0:32:47 And when we launched it, we thought, “Okay, when do we call this a failure?”
0:32:49 You know, thinking ahead, “When do you shut down the website?”
0:32:54 And we said, all right, if we don’t get 100 a year, we’re going to call that a failure.
0:32:56 And we’re getting about 140 a day now.
0:33:02 Brian Nosek says the registered report model can be especially helpful to the journals.
0:33:07 So in the standard publishing model, I do all of my research, I get my findings, I write
0:33:09 it up in a paper and I send it to the journal.
0:33:12 In that model, the reward system is about the findings.
0:33:17 I need to get those findings to be as positive, novel, and tidy as I can so that you, the
0:33:20 reviewer, say, “Okay, okay, you can publish it.”
0:33:25 That’s dysfunctional and it leads to all of those practices that might lead the claims
0:33:28 to be more exaggerated than the evidence.
0:33:35 The registered report model says, you are going to submit to the journal, Brian, the
0:33:39 methodology that you’re thinking about using, and why you’re asking that question, and
0:33:43 the background research supporting that question being important and that methodology being
0:33:45 an effective methodology.
0:33:46 We’ll review that.
0:33:49 We don’t know what the results are, you don’t know what the results are, but we’re going
0:33:52 to review based on, do you have an important question?”
0:33:56 So this is almost like before you build a house, you’re going to show us your plan and
0:34:00 we’re the building department and we’re going to come and say, “Yeah, that looks legit.
0:34:03 It’s not going to collapse, it’s not going to infringe on your neighbor,” and so on.
0:34:04 Is that the idea?
0:34:05 Exactly.
0:34:12 The key part is that the reward, me getting that publication, is based on you agreeing
0:34:15 that I’m asking an important question and I’ve designed an effective method to test
0:34:16 it.
0:34:17 It’s no longer about the results.
0:34:19 None of us know what the results are.
0:34:26 And so even if the results are uninteresting, not new, et cetera, we’ll know they’re legitimate,
0:34:30 but there would seem to be a conflict of incentive there, which is that, “Oh, now do I need to
0:34:33 publish this uninteresting, not new result?
0:34:34 What do you do about that?”
0:34:39 Yeah, so the commitment that the journal makes is we’re going to publish it regardless
0:34:43 of outcome and the authors are making that commitment too.
0:34:47 We’re going to carry this out as we said we would and we’ll report what happens.
0:34:52 Now an interesting thing happens in the change of the culture here in evaluating research
0:34:57 because you said, “Well, if it’s an uninteresting finding, do we still have to publish it?”
0:35:01 It turns out that when you have to make a decision of whether to publish or not before
0:35:06 knowing what the results are, the orientation that the reviewers bring, that the authors
0:35:10 bring is, “Do we need to know the answer to this?”
0:35:13 Regardless of what happens, do we need to know the answer?
0:35:16 Is the question important in other words?
0:35:17 Exactly.
0:35:22 Is the question important enough that we need evidence regardless of what the evidence is?
0:35:25 And it dramatically shifts what ends up being published.
0:35:30 So in the early evidence with register reports, more than half of the hypotheses that are
0:35:35 proposed end up not being supported in the final paper.
0:35:43 In the standard literature, in comparable types of domains, more than 95% of the hypotheses
0:35:45 are supported in the paper.
0:35:48 You wonder in the standard literature, “If we’re always right, why do we bother doing
0:35:49 the research?”
0:35:50 Right?
0:35:52 Our hypotheses are always right.
0:35:57 And of course, it’s laughable because we know that’s not what’s actually happening.
0:36:01 We know that all that failed stuff is getting left out and we’re not seeing it.
0:36:08 And the actual literature is an exaggeration of what the real literature is.
0:36:12 I think we should say a couple of things here about academia.
0:36:17 The best academics are driven by a real scientific impulse.
0:36:22 They may know a lot, but they’re not afraid to admit how much we still don’t know.
0:36:28 But they’re driven by an urge to investigate, and not necessarily at least, an urge to produce
0:36:33 a result that will increase their own status.
0:36:39 But academia is also an extraordinarily status conscious place.
0:36:41 I’m not saying there’s anything wrong with that.
0:36:47 If status is the reward that encourages a certain type of smart, disciplined person to
0:36:51 do research for the sake of research, rather than taking their talents to an industry that
0:36:56 might pay them much more, that is fantastic.
0:37:02 But if the pursuit of status for status’s sake leads an academic researcher to cheat,
0:37:04 well, yeah, that’s bad.
0:37:07 I mean, the incentives are part of the problem, but I don’t think it’s a part of the problem
0:37:09 that we have to fix.
0:37:12 That again is Uri Simonsohn from Data Colada.
0:37:17 I think the incentives, it’s like, people rob banks because the incentives
0:37:20 are there, but it doesn’t mean that we should stop, you know, rewarding cash.
0:37:26 It’s just, we should, you know, make our safes safer, because it’s good for cash to buy things,
0:37:30 and it’s good for people who publish interesting findings to get recognition.
0:37:36 Brian Nosek says that more than 300 journals are now using the registered reports model.
0:37:43 I think there is broad buy-in on the need to change, and many of the changes that we promote
0:37:50 have already hit the mainstream: sharing data, materials, and code; pre-registering research;
0:37:52 reporting all outcomes.
0:37:59 So we’re in the scaling phase for those activities, and what I am optimistic about is that there
0:38:05 is this meta science community that is interrogating whether these solutions are actually having
0:38:07 the desired impact.
0:38:12 And so the most exciting part of the movement, as I’m looking to the future, is this
0:38:16 dialogue between activism and reform,
0:38:17 “We can do these things.
0:38:21 Let’s make these changes,” and meta-science and evaluation,
0:38:22 “Is this working?
0:38:24 Did it do what you said it was going to do?” Et cetera.
0:38:29 And I hope that that loop will stay tight because that I think will
0:38:34 make for a very healthy discipline that is constantly skeptical of itself and constantly
0:38:39 looking to do better.
0:38:43 Is Brian Nosek too optimistic?
0:38:44 Maybe.
0:38:51 300 journals is a great start, but that represents maybe 1% of all journals.
0:38:58 For journals and authors, the existing publishing incentives are very strong.
0:39:01 So I think journals have really complicated incentives.
0:39:06 That is Simine Vazire, a psychology professor at the University of Melbourne.
0:39:09 Of course, they want to publish good work to begin with, so there’s some incentive to
0:39:12 do some quality check and cover their ass there.
0:39:16 But once they publish something, there’s a strong incentive for them to defend it or
0:39:19 at least to not publicize any errors.
0:39:27 And here’s a reason to think that Brian Nosek is right to be optimistic about research reform.
0:39:32 Some of his fellow reformers, including Simine Vazire, have been promoted into prestigious
0:39:34 positions in their field.
0:39:40 Vazire spent some time as editor-in-chief of the journal Social Psychological and Personality
0:39:41 Science.
0:39:44 So one of the things the editor-in-chief does is when a manuscript is submitted, I would
0:39:48 read it and decide whether it should continue through the peer review process or I could
0:39:51 reject it there, and that’s called desk rejection.
0:39:54 One thing I started doing at the journal that wasn’t official policy, it was just a practice
0:39:58 I decided to adopt, was that when a manuscript was submitted, I would hide the author’s
0:39:59 names from myself.
0:40:03 So I was rejecting things without looking at who the authors were.
0:40:06 So the publication committee started a conversation with me, which is totally reasonable, about
0:40:10 the overall desk rejection rate, am I rejecting too many things, et cetera.
0:40:14 There was some conversation about whether I was desk-rejecting the wrong people,
0:40:18 whether I was stepping on important people’s toes, and an email was forwarded to me from
0:40:24 a quote-unquote award-winning social psychologist: “Simine desk-rejected my paper. I found this extremely
0:40:27 distasteful and I won’t be submitting there again.”
0:40:31 And when I would try to engage about the substance of my decisions, the scientific basis for
0:40:34 them, that wasn’t what the conversation was about.
0:40:37 So it was basically like, do you know who I am?
0:40:38 Yeah.
0:40:39 Yeah.
0:40:41 I’ll send a question to you and that journal then.
0:40:46 It was a tense few months, but in the end, I was allowed to continue doing what I was
0:40:47 doing.
0:40:52 Vazire recently took on the editor-in-chief job at a different journal, Psychological
0:40:53 Science.
0:40:57 It is one of the premier journals in the field.
0:41:03 It’s also the journal where Francesca Gino published two of her allegedly fraudulent papers.
0:41:08 So I asked Vazire what changes she is hoping to make.
0:41:10 We’re expanding a team that used to have a different name.
0:41:15 We’re going to call them the statistics, transparency and rigor editors, the star editors.
0:41:20 And so that team will be supplementing the handling editors, the editors who actually
0:41:24 organize the peer review and make the decisions on submissions.
0:41:28 Like if a handling editor has a question about the data integrity or about details of the
0:41:32 methods or things like that, the star editor team will provide their expertise and help
0:41:33 fill in those gaps.
0:41:37 We’re also, and I’m not sure exactly what form this will take, going to try to incentivize more
0:41:41 accurate and calibrated claims and less hype and exaggeration.
0:41:45 This is something that I think is particularly challenging with short articles like Psychological
0:41:49 Science publishes, and especially at a journal that has a really high rejection rate, where
0:41:51 the vast majority of submissions are rejected.
0:41:55 Authors are competing for those few spots and so it feels like they have to make a really
0:41:57 bold claim.
0:42:00 And so it’s going to be very difficult to play this like back and forth where authors are
0:42:02 responding to the perception of what the incentives are.
0:42:06 So we need to convey to them that actually if you go too far, make too bold of claims
0:42:10 that aren’t warranted, you will be more likely to get rejected.
0:42:12 But I’m not sure if authors will believe that just because we say that.
0:42:16 They’re still competing for a very selective number of spots.
0:42:21 So as a journal editor, how do you think about the upside risk of publishing something
0:42:25 new and exciting against the downside risk of being wrong?
0:42:27 Oh, I don’t mind being wrong.
0:42:29 I think journals should publish things that turn out to be wrong.
0:42:32 It would be a bad thing to approach journal editing by saying we’re only going to publish
0:42:35 true things or things that we’re 100% sure are true.
0:42:39 The important thing is that the things that are more likely to be wrong are presented
0:42:42 in a more uncertain way and sometimes we’ll make mistakes even there.
0:42:45 Sometimes we’ll present things with certainty that we shouldn’t have.
0:42:49 What I would like to be involved in and what I plan to do is to encourage more post publication
0:42:55 critique and correction, reward the whistleblowers who identify errors that are valid and that
0:43:01 need to be acted upon and create more incentives for people to do that and do that well.
0:43:04 How would you reward whistleblowers?
0:43:05 I don’t know.
0:43:10 Do you have any ideas?
0:43:15 Right now, the rewards for whistleblowers in academia may seem backwards.
0:43:20 Remember, the Data Colada whistleblowers were sued by Francesca Gino, one of the people
0:43:21 they blew the whistle on.
0:43:25 They needed a go fund me campaign for their legal defense.
0:43:31 So no, the whistleblowers aren’t collecting any bounties, nor do they cover themselves
0:43:33 in any kind of glory.
0:43:38 Stephen, I’m the person that walks into these academic conferences and everyone is like,
0:43:40 here comes Debbie Downer.
0:43:43 That’s Leif Nelson, another member of Data Colada.
0:43:47 He’s a professor of business administration at UC Berkeley.
0:43:53 In a recent New Yorker piece by Gideon Lewis-Kraus about these fraud scandals, Nelson
0:43:58 and his Data Collada partners were described as having a, quote, “basic willingness to
0:44:03 call bullshit.”
0:44:09 So now that you’ve become part of this group that I would think of, collectively, as
0:44:15 the primary whistleblower or police or steward, whatever word we want to use, against fraudulent
0:44:20 research in the social sciences, what does that feel like?
0:44:24 I’m guessing on one level, it feels like an accomplishment.
0:44:28 On the other hand, it makes me think of a police force where there’s the Internal Affairs
0:44:36 Bureau where detectives are put to find the bad apples and even though everybody’s in
0:44:41 favor of rooting out the bad apples, everybody kind of hates the IAB guys.
0:44:48 And I’m curious what the emotional toll or cost has been to you.
0:44:50 Wow.
0:44:51 Bad question?
0:44:52 No.
0:44:55 Like, it reminds me of how stressful it all is.
0:44:59 We struggle a little bit with thinking about analogies for what we do.
0:45:01 We’re definitely not police.
0:45:03 Police, amongst other things, have institutional power.
0:45:06 They have badges, whatever.
0:45:07 We don’t have any of that.
0:45:09 We’re not enforcers in any way.
0:45:15 The Internal Affairs thing hurts a little bit, but I get it because that’s saying, “Hey,
0:45:18 within the behavioral science community, we’re the people that are watching the behavioral
0:45:20 scientists.”
0:45:22 And you’re right, no one likes internal affairs.
0:45:27 Most of our thinking is that we want to be journalists, that it’s fun to investigate.
0:45:28 That’s true for everybody in the field, right?
0:45:31 They’re all curious about whatever it is they’re studying.
0:45:33 And so we’re curious about this.
0:45:37 And then when we find things that we think are interesting, we also want to talk about
0:45:40 it, not just with each other, but with the outside world.
0:45:45 But I don’t identify as much with being a police officer or even a detective, though
0:45:49 every now and then people will compare us to something like Sherlock Holmes and that
0:45:51 feels more fun.
0:45:55 But in truth, the reason I sort of winced at the question is that the vast majority of
0:46:00 the time, it comes with far more burden than it does pleasure.
0:46:02 Even before the lawsuit?
0:46:10 Yeah, the lawsuit makes all of the psychological burden into a concrete observable thing.
0:46:16 But prior to that, it’s that every time we report on anything, it’s going to be like,
0:46:23 “Look, we think something bad happened here,” and someone is going to be mad at us, and probably
0:46:27 more people are going to be, and I don’t want people to be mad at me.
0:46:32 And I think about some of the people involved and it’s hard because I know a lot of these
0:46:36 people and I know they’re friends and I know the friends of the friends and that carries
0:46:40 real, real stress for I think all three of us.
0:46:45 In the New Yorker piece, there are still people who call you pretty harsh names.
0:46:47 You’ve been compared to the Stasi, for instance.
0:46:49 Yeah, that’s real bad.
0:46:53 I’m not happy with being compared to the Stasi.
0:46:56 The optimistic take is that there’s less of that than there used to be.
0:47:02 When any of the three of us go and visit universities, for example, and we talk to doctoral students
0:47:06 and we talk to assistant professors and we talk to associate professors, we talk to senior
0:47:11 professors, the students basically all behave as though they don’t understand why anyone
0:47:13 would ever be against what we’re saying.
0:47:18 They wouldn’t understand the Stasi thing, but they also wouldn’t even understand why anyone objects;
0:47:21 they almost are at the level of, “I don’t understand why we’re having you come for a talk.
0:47:23 Doesn’t everyone already believe this?”
0:47:26 But when I talk to people that are closer to retirement than they are to being a grad
0:47:32 student, they’re more like, “You’re making waves where you don’t need to, you’re pushing
0:47:34 back against something that’s not there.
0:47:36 We’ve been doing this for decades.
0:47:38 Why fix what isn’t broken?”
0:47:39 That sort of thing.
0:47:42 If they were to say that to you directly, “Why fix what isn’t broken?”
0:47:43 What would you say?
0:47:46 I would say, “But it is broken.”
0:47:48 And your evidence for that would be?
0:47:53 The evidence for that is multi-fold.
0:47:57 After the break, multi-fold we shall.
0:47:58 I’m Stephen Dubner.
0:47:59 This is Freakonomics Radio.
0:48:08 We’ll be right back.
0:48:11 Can academic fraud be eliminated?
0:48:12 Certainly not.
0:48:15 The incentives are too strong.
0:48:21 Also, to be reductive, cheaters are going to cheat, and I doubt there is one field of
0:48:27 human endeavor, no matter how noble or righteous or honest it claims to be, where some cheating
0:48:29 doesn’t happen.
0:48:34 But can academic fraud at least be greatly reduced?
0:48:35 Perhaps.
0:48:42 But that would likely require some big changes, including a new type of gatekeeper.
0:48:48 Samin Vizier, the journal editor we heard from earlier, is one kind of gatekeeper.
0:48:50 Sometimes, for example, we’ll get a submission where the research is really solid, but the
0:48:54 conclusion is too strong, and I’ll sometimes tell authors, “Hey, look, I’ll publish your
0:48:57 paper if you tone down the conclusion,” or even sometimes change the conclusion from
0:49:01 saying there is evidence for my hypothesis to there’s no evidence one way or the other,
0:49:05 but it’s still interesting data, and authors are not always willing to do that, even if
0:49:07 it means getting a publication in this journal.
0:49:11 So I do think that maybe it’s a sign that they genuinely believe what they’re
0:49:15 saying, which is maybe to their credit; I don’t know if that’s good news or bad news.
0:49:20 I think often when we’re kind of overselling something, we probably believe what we’re
0:49:21 saying.
0:49:24 And there’s another important gatekeeper in academic journals, one that we’ve barely
0:49:29 talked about, the referees who assess journal submissions.
0:49:35 Peer review is a bedrock component of what makes academic publishing so credible, at
0:49:40 least in theory, but as we’ve been hearing about every part of this industry, the incentives
0:49:44 for peer reviewers are also off.
0:49:48 Here again is Ivan Oransky from Retraction Watch.
0:49:53 If you add up the number of papers published every year, and then you multiply that times
0:49:59 the two or three peer reviewers who are typically supposed to review those papers, and sometimes
0:50:04 they go through multiple rounds, it’s easily in the tens of millions of peer reviews as
0:50:05 a unit.
0:50:10 And if each of those takes anywhere from four hours to eight hours of your life as an expert,
0:50:14 which you don’t really have ’cause you gotta be teaching, you gotta be doing your own research,
0:50:18 you come up with a number that cannot possibly be met by qualified people.
0:50:19 Really, it can’t.
0:50:21 I mean, the math just doesn’t work.
0:50:23 And none of them are paid.
0:50:27 You are sort of expected to do this because somebody will peer review your paper at some
0:50:30 other point, which sort of makes sense until you really pick it apart.
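Oransky’s arithmetic here is easy to check. A minimal sketch, using the ranges he cites plus the 4 million articles-per-year figure quoted earlier in the episode:

```python
# The peer-review workload implied by Oransky's numbers.

papers_per_year = 4_000_000
reviewers_per_paper = (2, 3)   # typical reviewer count per paper
hours_per_review = (4, 8)      # expert hours per review

low_reviews = papers_per_year * reviewers_per_paper[0]
high_reviews = papers_per_year * reviewers_per_paper[1]  # more with multiple rounds
print(f"Reviews needed: {low_reviews:,} to {high_reviews:,} per year")

low_hours = low_reviews * hours_per_review[0]
high_hours = high_reviews * hours_per_review[1]
print(f"Unpaid expert hours: {low_hours:,} to {high_hours:,} per year")
# Reviews needed: 8,000,000 to 12,000,000 per year
# Unpaid expert hours: 32,000,000 to 96,000,000 per year
```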
0:50:35 Now peer reviewers, so even the best of them, and by best I mean people who really sit and
0:50:40 take the time and probe what’s going on in the paper and look at all the data.
0:50:41 But you can’t always look at the data.
0:50:45 In fact, most of the time you can’t look at the raw data, even if you had time because
0:50:47 the authors don’t make it available.
0:50:53 So peer review, it’s become really peer review light and maybe not even that at the vast
0:50:54 majority of journals.
0:50:59 So it’s no longer surprising that so much gets through the system that shouldn’t.
0:51:02 This is a very hot topic.
0:51:07 And that again is Leif Nelson from UC Berkeley and Data Colada.
0:51:13 Editors largely in my field are uncompensated for their job, and reviewers are almost purely
0:51:15 uncompensated for their job.
0:51:19 And so they’re all doing it for the love of the field.
0:51:21 And those jobs are hard.
0:51:24 I’m an occasional reviewer and an occasional editor.
0:51:27 And every time I do it, it’s basically taxing.
0:51:34 The first part of the job was reading a whole paper and deciding whether the topic was interesting.
0:51:38 Whether it was contextualized well enough that people would understand what it was about.
0:51:44 Whether the study as designed was good at testing the hypothesis as articulated.
0:51:48 And only after you get past all of those levels would you say, okay, and now do they have
0:51:50 evidence in favor of the hypothesis.
0:51:58 By the way, we have mostly been talking about the production side of academic research this
0:52:00 whole time.
0:52:02 What about the consumer side?
0:52:07 All of us are also looking for the most interesting and useful studies.
0:52:13 All of us in industry, in government, in the media, especially the media.
0:52:15 Here’s Ivan Oransky again.
0:52:17 We have been conditioned.
0:52:20 And in fact, because of our own attention economy.
0:52:25 We end up covering studies overall else when it comes to science and medicine.
0:52:26 I like to think that’s changing a little bit.
0:52:28 I hope it is.
0:52:33 But we cover individual studies and we cover the studies that sound the most interesting
0:52:36 or that have the biggest effect size and things like that.
0:52:42 You wear red, you must be angry or if it says that this is definitely a cure for cancer.
0:52:43 And journalists love that stuff.
0:52:44 They lap it up.
0:52:48 Like signing a document at the top will make you more likely to be honest on the form.
0:53:00 And on that note, I went back to Max Bazerman, one of the co-authors of that paper, which
0:53:03 inspired this series.
0:53:09 For Bazerman, the experience of getting caught up in fraud accusations was particularly bewildering
0:53:14 because the accusations were against a collaborator and friend that he fully trusted, Francesca
0:53:15 Gino.
0:53:22 So, you know, when we think about Ponzi schemes, it’s named after a guy named Ponzi who was
0:53:26 an Italian-American who preyed on the Italian-American community.
0:53:32 And if we think about Bernie Madoff, he preyed on lots of people, but particularly many very
0:53:36 wealthy Jewish individuals and organizations.
0:53:40 One of the interesting things about trust is that it creates so many wonderful opportunities.
0:53:45 So in the academic world, the fact that I can trust my colleagues means that we can diffuse
0:53:48 the work to the person who can handle it best.
0:53:51 So there’s lots of enormous benefits from trust.
0:53:56 But it’s also true that if there’s somebody out there who’s going to commit a fraud of
0:54:04 any type, those of us who are trusting that individual are perhaps in the worst position
0:54:07 to notice that something’s wrong.
0:54:12 And quite honestly, Stephen, you know, I’ve been working with junior colleagues who are
0:54:18 smarter than me and know how to do a variety of tasks better than me for such a long time.
0:54:22 I’ve always trusted them, certainly for junior colleagues.
0:54:27 For the newest doctoral students, I may not have trusted their competence because they
0:54:28 were still learning.
0:54:34 But in terms of using the word trust in an ethical sense, I’ve never questioned the ethics
0:54:35 of my colleagues.
0:54:39 So this current episode has really hit me pretty, pretty heavily.
0:54:44 Can I tell you, Max, that is what upsets me about this scandal, even though I’m not an
0:54:48 academic, but I’ve been writing about and interacting with academics for quite a while
0:54:49 now.
0:54:53 And the problem is that I maybe gave them overall too much credit.
0:54:59 I considered academia one of the last bastions of, I mean, I do sound like a fool now when
0:55:06 I say it, but one of the last bastions of honest, transparent, empirical behavior where
0:55:11 you’re bound by a sort of code that only very rarely would someone think about intentionally
0:55:12 violating.
0:55:19 I’m curious if you felt that way as well, that you were sort of played or were naive
0:55:20 in retrospect.
0:55:22 Undoubtedly, I was naive.
0:55:27 You know, not only did I trust my colleagues on the signing first paper, but I think I’ve
0:55:30 trusted my colleagues for decades.
0:55:36 And hopefully I had a good basis for trusting them. I do want to highlight that there are
0:55:38 so many benefits of trust.
0:55:44 So the world has done a lot better because we trust science.
0:55:49 And the fact that there’s an occasional scientist who we shouldn’t trust should not keep us
0:55:52 from gaining the benefit that science creates.
0:56:00 And so one of the harms created by the fraudsters is that they give credibility to the science
0:56:09 deniers who are so often keeping us from making progress in society.
0:56:14 It’s worth pointing out that scientific research findings have been refuted and overturned
0:56:17 since the beginning of scientific research.
0:56:20 That’s part of the process.
0:56:26 But what’s happening at this moment, especially in some fields like social psychology, it can
0:56:28 be disheartening.
0:56:33 It’s not just a replication crisis or a data crisis.
0:56:36 It’s a believability crisis.
0:56:39 Simine Vazire acknowledges this.
0:56:42 There were a lot of societal phenomena that we really wanted explanations for.
0:56:47 And then social psych offered these kind of easy explanations or maybe not so easy, but
0:56:50 these relatively simple explanations that people wanted to believe just to have an answer
0:56:51 and an explanation.
0:56:56 So just how bad is the believability crisis?
0:57:01 Danny Kahneman, who died last year, was perhaps the biggest name in academic psychology in
0:57:05 a couple generations, so big that he once won a Nobel Prize in economics.
0:57:10 His work has been enormously influential in many fields and industries.
0:57:15 But in a New York Times article about the Francesca Gino and Dan Ariely scandals, he
0:57:21 said, “When I see a surprising finding, my default is not to believe it.
0:57:26 12 years ago, my default was to believe anything that was surprising.”
0:57:30 Here again is Max Bazerman, who was a colleague and friend of Kahneman’s.
0:57:36 I think that my generation fought against the open science movement for far too long,
0:57:40 and it’s time that we get on the bandwagon and realize that we need some pretty massive
0:57:46 reform of how social science is done, not only to improve the quality of social science,
0:57:49 but also to make us more credible with the world.
0:57:54 So many of us are attracted to social science because we think we can make the world better,
0:57:58 and we can’t make the world better if the world doesn’t believe our results anymore.
0:58:03 So I think that we have a fundamental challenge to figure out how do we go about doing that.
0:58:08 In terms of training, I think that for a long time, if we think about training and research
0:58:13 methods and statistics, that was more like the medicine that you have to take as part
0:58:15 of becoming a social scientist.
0:58:21 And I think we need to realize that it’s a much more central and important topic.
0:58:27 If we’re going to be creating reproducible, credible social science, we need to deal with
0:58:31 lots of the issues that the open science movement is telling us about.
0:58:34 And we’ve taken too long to listen to their advice.
0:58:41 So if we go from Data Colada talking about p-hacking in 2011, you know, there were lots
0:58:44 of hints that it was time to start moving.
0:58:49 And the field obviously has moved in the direction that Data Colada and Brian Nosek have moved
0:58:50 us.
0:58:56 And finally, we have Simine Vazire as the new incoming editor of Psychological Science, which is
0:58:58 sort of a fascinating development as well.
0:59:00 So we’re moving in the right direction.
0:59:06 It’s taken us too long to pay attention to the wise advice that the open science movement
0:59:10 has outlined for us.
0:59:23 I do think there needs to be a reckoning.
0:59:29 I think that people need to wake up and realize that the foundation of at least a sizable
0:59:34 chunk of our field is built on something that’s not true.
0:59:39 And if a foundation of your field is not true, what does a good scientist do to break into
0:59:41 that field?
0:59:45 Like imagine you have a whole literature that is largely false.
0:59:49 And imagine that when you publish a paper, you need to acknowledge that literature.
0:59:53 And that if you contradict that literature, your probability of publishing really goes
0:59:54 down.
0:59:56 What do you do?
0:59:59 So what it does is it winds up weeding out the careful people who are doing true stuff.
1:00:04 And it winds up rewarding the people who are cutting corners or even worse.
1:00:12 So it basically becomes a field that reinforces and rewards bad science and punishes good science
1:00:14 and good scientists.
1:00:21 Like this is about an incentive system and the incentive system is completely broken.
1:00:23 And we need to get a new one.
1:00:27 And the people in power who are reinforcing this incentive system, they need to not be
1:00:28 in power anymore.
1:00:32 You know, this is illustrating that there’s sort of a rot at the core of some of the stuff
1:00:34 that we’re doing.
1:00:41 And we need to put the right people who have the right values, who care about the details,
1:00:45 who understand that the materials and the data, they are the evidence.
1:00:48 We need those people to be in charge.
1:00:53 Like there can’t be this idea that these are one-off cases, they’re not.
1:00:55 They are not one-off cases.
1:00:56 So it’s broken.
1:01:00 You have to fix it.
1:01:02 That again was Joe Simmons.
1:01:06 Since we published this series last year, there have been reports of fraud in many fields.
1:01:11 Not just the behavioral sciences, but in botany, physics, neuroscience and more.
1:01:16 So we went back to Brian Nosek, who runs the Center for Open Science.
1:01:23 There really is accelerating movement, in the sense that some of the base principles (we
1:01:28 need to be more transparent, we need to improve data sharing, we need to facilitate
1:01:33 the processes of self-correction) are not just head nods of “yeah, that’s an important
1:01:39 thing,” but have really moved into “how are we going
1:01:40 to do that?”
1:01:45 And so I guess that’s been the theme of 2024: how can we help people do it well?
1:01:48 At his center, Nosek is trying out a new plan.
1:01:53 One of the more exciting things that we’ve been working on is a new initiative that we’re
1:01:56 calling Lifecycle Journal.
1:02:03 And the basic idea is to reimagine scholarly publishing without the original constraints
1:02:04 of paper.
1:02:11 A lot of how the peer review process and publishing occurs today was done because of the limits
1:02:13 of paper.
1:02:17 But in a world where we can actually communicate digitally, there’s no reason that we need
1:02:22 to wait till the research is done to provide some evaluation.
1:02:27 There’s no reason to consider it final when it could be easily revised and updated.
1:02:33 There’s no reason to think of review as a single set of activities by three people
1:02:35 who judge the entire thing.
1:02:40 And so we will have a full marketplace of evaluation services that are each evaluating
1:02:42 the research in different ways.
1:02:46 It’ll happen across the research lifecycle from planning through completion.
1:02:51 And researchers will always be able to update and revise when errors or corrections are
1:02:52 needed.
1:02:57 But the need for corrections can move in mysterious ways.
1:03:04 Brian Nosek himself and collaborators including Leif Nelson of Data Colada had to retract
1:03:09 a recent article about the benefits of pre-registration after other researchers pointed out that the
1:03:15 article hadn’t properly pre-registered all of its hypotheses.
1:03:16 Nosek was embarrassed.
1:03:22 My whole life is about trying to promote transparent research practices, greater openness, trying
1:03:24 to improve rigor and reproducibility.
1:03:28 I am just as vulnerable to error as anybody else.
1:03:36 And so one of the real lessons, I think, is that without transparency, these errors will
1:03:39 go unexposed.
1:03:44 It would have been very hard for the critics to identify that we had screwed this up without
1:03:50 being able to access the portions of the materials that we were able to make public.
1:03:57 And as people are engaged with critique and pursuing transparency, and transparency is
1:04:04 becoming more normal, we might for a while see an ironic effect, which is transparency
1:04:11 seems to be associated with poorer research because more errors are identified.
1:04:15 And that ought to happen because errors are occurring.
1:04:18 Without transparency, you can’t possibly catch them.
1:04:26 But what might emerge over time, as our verification processes improve and as we have a sense of accountability
1:04:33 to our transparency, is that transparency may decrease error over time, but
1:04:34 not the need to check.
1:04:36 And that’s the key.
1:04:40 Still, trust in science in the US has been declining.
1:04:45 So we asked Nosek if he is worried that this new transparency, which will likely uncover
1:04:49 more errors, might hurt his cause.
1:04:54 This is a real challenge that we wrestle with, and have wrestled with since the origins of
1:05:03 the center: how do we promote this culture of critique and self-criticism about our field
1:05:10 and simultaneously have that be understood as the strength of research rather than its
1:05:12 weakness?
1:05:16 One of the phrases that I’ve liked to use in this is that the reason to trust science
1:05:19 is because it doesn’t trust itself.
1:05:25 That part of what makes science great as a social system is its constant self-scrutiny
1:05:31 and willingness to try to find and expose its errors so that the evidence that comes
1:05:37 out at the end is the most robust, reliable, valid evidence it can be.
1:05:43 And that continuous process is the best process in the world that we’ve ever invented for
1:05:46 knowledge production.
1:05:47 We can do better.
1:05:55 I think our mistake in some prior efforts of promoting science is to appeal to authority,
1:05:57 saying you should trust science because scientists know what they’re doing.
1:06:04 I don’t think that’s the way to gain trust in science because anyone can make that claim.
1:06:06 Appeals to authority are very weak arguments.
1:06:13 I think our opportunity as a field to address the skepticism of institutions generally and
1:06:21 science specifically is to show our work, is by being transparent, by allowing the criticism
1:06:28 to occur, by in fact encouraging and promoting critical engagement with our evidence.
1:06:33 That is the playing field I’d much rather be on with people who are the so-called enemies
1:06:39 of science than in competing appeals to authority.
1:06:43 Because if they need to wrestle with the evidence and an observer says, “Wow, one group is totally
1:06:49 avoiding the evidence and the other group is actually showing their work,” I think people
1:06:51 will know who to trust.
1:06:53 It’s easy to say; it’s very hard to do.
1:06:58 These are hard problems.
1:06:59 I agree.
1:07:00 These are hard problems.
1:07:05 To be fair, easy problems get solved or they simply evaporate.
1:07:11 It’s the hard problems that keep us all digging and we at Freakonomics Radio will keep digging
1:07:13 in this new year.
1:07:15 Thanks to Brian Nosek for the update.
1:07:18 Thanks especially to you for listening.
1:07:20 Coming up next time on the show.
1:07:22 The sign right here is 12 foot tall.
1:07:30 The economics of highway signs and after that, some 30 million Americans think that they
1:07:36 are allergic to the penicillin family of drugs and the vast majority of them are not.
1:07:37 Why does this matter?
1:07:43 Nothing kills bacteria better than these drugs.
1:07:46 We go inside the bizarro world of allergies.
1:07:47 Thanks.
1:07:48 Coming up soon.
1:07:54 Until then, take care of yourself and if you can, someone else too.
1:07:56 Freakonomics Radio is produced by Stitcher and Renbud Radio.
1:08:02 You can find our entire archive on any podcast app also at Freakonomics.com where we publish
1:08:04 transcripts and show notes.
1:08:07 This episode was produced by Alina Kulman.
1:08:12 Our staff also includes Augusta Chapman, Dalvin Aboagye, Eleanor Osborne, Ellen Frankman, Elsa
1:08:17 Hernandez, Gabriel Roth, Greg Rippin, Jasmin Klinger, Jason Gambrell, Jeremy Johnston,
1:08:21 John Schnars, Lyric Bowditch, Morgan Levey, Neal Carruth, Sarah Lilley, Theo Jacobs and
1:08:23 Zack Lapinski.
1:08:26 Our theme song is Mr. Fortune by the Hitchhikers.
1:08:28 Our composer is Luis Guerra.
1:08:33 As always, thanks for listening.
1:08:33 We are a carrot-based organization because we don’t have sticks.
1:08:38 I mean, would you like me to loan you a stick just once in a while?
1:08:39 Yeah, that would be fun.
1:08:53 The Freakonomics Radio Network, the hidden side of everything.
1:08:56 [MUSIC PLAYING]
0:04:01 other coauthors, Lisa Shoe, Nina Mazar, and Max Bazerman.
0:04:05 None of them have been accused of any wrongdoing.
0:04:10 So let’s pick up where we left off with Max Bazerman, the most senior researcher on that
0:04:11 paper.
0:04:13 He also teaches at Harvard Business School.
0:04:19 When there’s somebody who engages in bad behavior, there’s always people around who
0:04:23 could have noticed more and acted more.
0:04:26 Bazerman was close with Francesca Gino.
0:04:29 He had been her advisor and he trusted her.
0:04:34 So he has been spending a lot of time lately thinking about the mess.
0:04:40 He recently published a book called Complicit, How We Enable the Unethical and How to Stop,
0:04:45 and he’s working on another book about social science fraud.
0:04:48 This has led him to consider what makes people cheat.
0:04:54 Let’s take the case of Dieteric Stapel, a Dutch professor of social psychology, who
0:05:02 after years of success admitted to fabricating and manipulating data in dozens of studies.
0:05:09 Part of the path toward data fabrication occurred in part because he liked complex ideas, and
0:05:16 academia didn’t like complex ideas as much as they liked the snappy sort of quick bait,
0:05:22 and that moved him in that direction and also put him on the path toward fraudulent behavior.
0:05:28 So here’s something that Stapel wrote later when he wrote a book of confession essentially
0:05:29 about his fraud.
0:05:35 He wrote, “I was doing fine, but then I became impatient, overambitious, reckless.
0:05:39 I wanted to go faster and better and higher and smarter all the time.
0:05:43 I thought it would help if I just took this one tiny little shortcut, but then I found
0:05:46 myself more and more often in completely the wrong lane in the end.
0:05:49 I wasn’t even on the road at all.”
0:05:54 What struck me about that, and I think about that with Dan Ariely and Francesca Gino as
0:05:59 well, which is that the people who have been accused of having committed academic fraud
0:06:05 are really successful already, and I’m curious what that tells you about either the stakes
0:06:11 or the incentives or maybe the psychology of how this happens, because honestly, it surprises
0:06:12 me.
0:06:18 I would say we don’t know that much about why the fraudsters do what they do.
0:06:22 And the most interesting source you just mentioned, so Stapel wrote a book in Dutch
0:06:29 called “Outsporing,” which means something like D-Rail, where he provides his information
0:06:33 and he goes on from the material you talked about to describing that he became like an
0:06:36 alcoholic or a heroin addict.
0:06:41 And he got used to the easy successes, and he began to believe that he wasn’t doing
0:06:42 any harm.
0:06:48 After all, he was just making it easier to publish information that was undoubtedly true.
0:06:55 So this aspect of sort of being lured onto the path of unethical behavior followed by
0:07:00 addictive-like behavior becomes part of the story, and Stapel goes on to talk about lots
0:07:06 of other aspects like the need to score, ambition, laziness, wanting power, status.
0:07:12 So he provides this good insight, but most of the admitted fraudsters or the people who
0:07:18 have lost their university positions based on allegations of fraud have simply disappeared
0:07:20 and have never talked about it.
0:07:25 One of the interesting parts is that Mark Houser, who resigned from Harvard, and Ariely
0:07:32 and Gino, who are alleged to have committed fraud by some parties, all three of them wrote
0:07:39 on the topic of moral behavior and specifically why people might engage in bad behavior.
0:07:40 That’s right.
0:07:46 A lot of the fraud and suspected fraud comes from researchers who explore fraud.
0:07:54 In 2012, Francesca Gino and Dan Ariely collaborated on another paper called The Dark Side of Creativity.
0:07:57 Original thinkers can be more dishonest.
0:08:01 They wrote, “We propose that a creative personality and a creative mindset promote
0:08:09 individuals’ ability to justify their behavior, which in turn leads to unethical behavior.”
0:08:15 So just how much unethical behavior is there in the world of academic research?
0:08:21 That’s a hard question to answer precisely, but let’s start with this man.
0:08:26 I essentially spend all of my nights and weekends thinking about scientific fraud, scientific
0:08:28 misconduct, scientific integrity for that matter.
0:08:35 Ivan Oranski is a medical doctor and editor of a neuroscience publication called The Transmitter.
0:08:41 He’s also a distinguished journalist in residence at NYU, and on the side, he runs a website
0:08:43 called Retraction Watch.
0:08:49 We hear from whistleblowers all the time people we call sleuths who are actually out there
0:08:53 finding these problems, and often that’s pre-retraction or they’ll explain to us why
0:08:54 retraction happened.
0:08:57 We also do things like file public records requests.
0:09:02 He began Retraction Watch in 2010 with Adam Marcus, another science journalist.
0:09:08 Marcus had broken a story about a Massachusetts anesthesiologist named Scott Rubin.
0:09:13 Rubin had received funding from several drug companies to conduct clinical trials, but
0:09:18 instead he faked the data and published results without running the trials.
0:09:22 I went to Adam and I said, “What if we create a blog about this?”
0:09:26 It seems like there are all these stories that are hiding in plain sight that essentially
0:09:30 we and other journalists are leaving on the table.
0:09:37 When we looked at the actual retraction notices, the information was somewhere between misleading
0:09:38 and opaque.
0:09:40 What do you mean by that?
0:09:46 When a paper is retracted and it’s probably worth defining that, a retraction is a signal
0:09:51 to the scientific community or really to any readers of a particular peer-reviewed journal
0:09:58 article that you should not rely on that anymore, that there’s something about it that means
0:10:03 you should … You can not pretend it doesn’t exist, but you shouldn’t base any other work
0:10:04 on it.
0:10:09 But when you call it misleading or opaque, you’re saying the explanation for the retraction
0:10:12 is often not transparent.
0:10:16 When you retract a paper, you’re supposed to put a retraction notice on it the same way
0:10:19 when you correct an article in the newspaper, you’re supposed to put a correction notice
0:10:21 on it.
0:10:25 When you actually read these retraction notices, and to be fair, this has changed a fair amount
0:10:30 in the 13 years that we’ve been doing this, sometimes they include no information at all.
0:10:34 Sometimes they include information that is woefully incomplete.
0:10:39 Sometimes it’s some version of getting al Capone on tax evasion.
0:10:45 They fake the data, but we’re going to say they forgot to fill out this form, which is
0:10:50 still a reason to retract, but isn’t the whole story.
0:10:56 Let’s say we back up and I ask you, how significant or widespread is the problem of, we’ll call
0:10:58 it academic fraud?
0:11:07 We think that probably 2% of papers should be retracted for something that would be considered
0:11:10 either out and out fraud or maybe just severe bad mistake.
0:11:18 According to our data, which we have the most retraction data of any database, about 0.1%
0:11:23 of the world’s literature is retracted, so one in a thousand papers.
0:11:26 We think it should be about 20 times that, about 2%.
0:11:29 There’s a bunch of reasons, but they come down to one.
0:11:34 There was a survey back in 2009 which has been repeated and done differently and come
0:11:40 up with roughly the same number, actually even higher numbers recently that says 2% of researchers,
0:11:44 if you ask them anonymously, they will say, yes, I’ve committed something that would be
0:11:45 considered misconduct.
0:11:48 Of course, when you ask them how many people they know who have committed misconduct, it
0:11:52 goes much, much higher than that.
0:11:54 That’s one line of evidence, which is admittedly indirect.
0:11:59 The other is that when you talk to the sleuths, the people doing the real work of figuring
0:12:04 out what’s wrong with literature and letting people know about it, they keep lists of papers
0:12:09 they’ve flagged for publishers and for authors and journals, and routinely, most of them are
0:12:10 not retracted.
0:12:12 Again, we came to 2%.
0:12:15 Is it exactly 2% and is that even the right number?
0:12:18 No, we’re pretty sure that’s the lower bound.
0:12:23 Others say it should be even higher.
0:12:28 Modern watch has a searchable database that includes more than 45,000 retractions from
0:12:32 journals in just about any academic field you can imagine.
0:12:38 They also post a leaderboard, a ranking of the researchers with the most papers retracted.
0:12:44 At the top of that list is another anesthesiologist, this one a German researcher named Joachim
0:12:45 Bolt.
0:12:49 He came up briefly in last week’s episode too.
0:12:52 Bolt has had more than 200 papers retracted.
0:12:57 Bolt, an anesthesiology researcher, was studying something called hetistarch, which was essentially
0:12:58 a blood substitute.
0:13:05 Not exactly blood, but something that when you were on a heart-lung pump, a machine during
0:13:08 certain surgeries or you’re in the ICU or something like that, it would basically cut
0:13:14 down on the amount of blood transfusions people would need, and that’s got obvious benefits.
0:13:20 Now he did a lot of the important work in that area, and his work was cited in all the guidelines.
0:13:23 It turned out that he was faking data.
0:13:29 Bolt was caught in 2010 after an investigation by a German state medical association.
0:13:34 The method he had been promoting was later found to be associated with a significant
0:13:37 increased risk of mortality.
0:13:43 So in this case, the fraud led to real danger, and what happened to Bolt?
0:13:47 He at one point was under criminal indictment, or at least criminal investigation.
0:13:48 That didn’t go anywhere.
0:13:54 The hospital, the clinic also, which to be fair, had actually identified a lot of problems.
0:14:00 They came under pretty severe scrutiny, but in terms of actual sanctions, pretty minimal.
0:14:03 You’ve written that universities protect fraudsters.
0:14:06 Can you give an example other than Bolt, let’s say?
0:14:09 So universities, they protect fraudsters in a couple of different ways.
0:14:13 One is that they are very slow to act, they’re very slow to investigate, and they keep all
0:14:16 of those investigations hidden.
0:14:22 The other is that because lawyers run universities, like they, frankly, run everything else, they
0:14:25 tell people who are involved in investigations.
0:14:31 If someone calls for a reference letter, let’s say someone leaves, and they haven’t been
0:14:36 quite found guilty, but as a plea bargain sort of thing, they will leave, and then they’ll
0:14:38 stop the investigation.
0:14:42 Then when someone calls for a reference, and we actually have their receipts on this because
0:14:49 we filed public records requests for emails between different parties, we learned that
0:14:53 they would be routinely told not to say anything about the misconduct.
0:14:58 Let’s just take three of the most recent high profile cases of academic fraud or accusations
0:14:59 of academic fraud.
0:15:04 We’ve got Francesca Geno, who was a psychology researcher at Harvard Business School, Dan
0:15:08 Ariely, who’s a psychologist at Duke, and then Mark Tessier-Levin, who was president
0:15:12 of Stanford Medical Researcher, three pretty different outcomes, right?
0:15:18 Tessier-Levin was defenestrated from his presidency, Geno was suspended by HBS, and Dan Ariely,
0:15:23 who’s had accusations lobbed at him for years now, is just kind of going on.
0:15:26 Can you just comment on that heterogeneity?
0:15:31 So then I would just not so much as a correction, but just to say that, yes, Mark Tessier-Levin
0:15:33 was defenestrated as president.
0:15:38 He remains, at least at the time of this discussion, a tenured professor at Stanford, which is
0:15:42 a pretty nice position to be in.
0:15:47 A Stanford report found that Tessier-Levin didn’t commit fraud or falsify any of his
0:15:54 data, although work in his labs, quote, “fell below customary standards of scientific rigor,”
0:16:00 and multiple members of his labs appear to have manipulated research data.
0:16:05 Dan Ariely also remains a professor at Duke, and Duke has declined to comment publicly
0:16:10 on whether an investigation into the allegations of data fraud even occurred.
0:16:15 As Ivan Oransky told us, universities are run by lawyers.
0:16:21 I’ve been quoted saying that the most likely career path or the most likely outcome for
0:16:26 anyone who has committed misconduct is a long and fruitful career.
0:16:30 And I mean that because it’s true, because most people, if they’re caught at all, they
0:16:31 skate.
0:16:36 The number of cases we write about, which grows every year, but is still a tiny fraction
0:16:43 of what’s really going on, Dan Ariely, we interviewed Dan years ago about some questions
0:16:45 in his research.
0:16:49 Duke is actually, I would argue, a little bit of a singular case.
0:16:58 Duke in 2019 settled with the US government for $112.5 million because they had repeatedly
0:17:06 alleged to have covered up really bad significant misconduct.
0:17:08 Duke has had particular trouble with medical research.
0:17:14 There was one physician researcher who faked the data in his cancer research.
0:17:20 There were allegations of federal grant money being mishandled, also failing to protect patients
0:17:22 in some studies.
0:17:28 At one point, the National Institutes of Health placed special sanctions on all Duke researchers.
0:17:34 So I’ve got to be honest, and people in Durham may not like me saying this, but I think Duke
0:17:39 has a lot of work to do to demonstrate that their investigations are complete and that
0:17:43 they are doing the right thing by research dollars and for patients.
0:17:48 Most of the researchers we have been talking about are already well-established in their
0:17:49 fields.
0:17:54 If they feel pressure to get a certain result, it’s probably the kind of pressure that comes
0:17:59 with further burnishing your high status, but junior researchers face a different kind
0:18:01 of pressure.
0:18:04 They need published papers to survive.
0:18:06 Publish or perish is the old saying.
0:18:10 If they can’t consistently get their papers in a good journal, they probably won’t get
0:18:17 tenure and they may not even have a career, and those journals are flooded with submissions.
0:18:22 So the competition is fierce and it is global.
0:18:27 This is easy to miss if you’re in the US since so many of the top universities and journals
0:18:34 are here, but academic research is very much a global industry and it’s also huge.
0:18:44 Even if you are a complete nerd and you could name 50 journals in your field, you know nothing.
0:18:50 Every year, more than 4 million articles are published in somewhere between 25 and 50,000
0:18:55 journals, depending on how you count, and that number is always growing.
0:19:02 You have got journals called Aggressive Behavior and Frontiers in Ceramics, Neglected Tropical
0:19:10 Diseases, and Rangefer, which covers the biology and management of reindeer and caribou.
0:19:16 There used to be a journal of mundane behavior, but that one, sadly, is defunct.
0:19:21 But no matter how many journals there are, there is never enough space for all the papers.
0:19:27 And this has led to some, well, let’s call it scholarly innovation.
0:19:30 Here again is Ivan Oransky.
0:19:34 People are now really fixated on what are known as paper mills.
0:19:41 So if you think about the economics of this, right, it is worthwhile if you are a researcher
0:19:47 to actually pay, in other words, it’s an investment in your own future, to pay to publish a certain
0:19:48 paper.
0:19:53 What I’m talking about is literally buying a paper or buying authorship on a paper.
0:19:58 So to give you a little bit of a sense of how this might work, you’re a researcher who
0:20:00 is about to publish a paper.
0:20:06 So Donner et al. have got some interesting finding that you’ve actually written up and
0:20:08 a journal has accepted.
0:20:10 It’s gone through peer review.
0:20:13 And you’re like, great, and that’s good for you.
0:20:17 But you actually also want to make a little extra money on the side.
0:20:22 So you take that paper, that essentially it’s not a paper, it’s really still a manuscript.
0:20:26 You put it up on a brokerage site, or you put the title up on a brokerage site.
0:20:31 You say, I’ve got a paper where, you know, there are four authors right now.
0:20:35 It’s going into this journal, which is a top tier or mid tier, whatever it is, et cetera.
0:20:37 It’s in this field.
0:20:42 If you would like to be an author, the bidding starts now, or it’s just, it’s $500 or 500
0:20:43 euros or whatever it is.
0:20:47 And then Ivan Moransky comes along and says, I need a paper.
0:20:49 I need to get tenure.
0:20:50 I need to get promoted.
0:20:51 Oh, great.
0:20:53 Let me click on this, you know, brokerage site.
0:20:55 Let me give you $500.
0:21:01 And now all of a sudden, you write to the journal, by the way, I have a new author.
0:21:02 His name is Ivan Moransky.
0:21:03 He’s at New York University.
0:21:07 He just joined late, but he’s been so invaluable to the process.
0:21:10 Now what do the other co-authors have to say about this?
0:21:13 Often they don’t know about it, or at least they claim they don’t know about it.
0:21:18 What about fraudulent journals, do those exist, or fraudulent publication sites that look
0:21:22 legit enough to pass muster for somebody’s department?
0:21:23 They do.
0:21:28 There are all different versions of fraudulent with a lower case F publications.
0:21:32 There are publications that are legit in the sense that they can point to doing all the
0:21:34 things that you’re supposed to do as a journal.
0:21:36 You have papers submitted.
0:21:38 You do something that looks sort of like peer review.
0:21:42 You assign it what’s known as a digital object identifier.
0:21:44 You do all that publishing stuff.
0:21:48 And they’re not out and out fraudulent in the sense of they don’t exist and people are
0:21:52 just making it up, or they’re trying to, you know, use a name that isn’t really theirs.
0:21:55 But they’re fraudulent with a lower case F in the sense that they’re not doing most
0:21:57 of those things.
0:22:01 Then there are actual, what we refer to, we in Anna Abelkina, who works with us on this,
0:22:03 as hijacked journals.
0:22:06 We have more than 200 on this list now.
0:22:08 They were at one point legitimate journals.
0:22:14 So it’s a real title that some university or funding agency or et cetera will actually
0:22:15 recognize.
0:22:22 But what happened was some version of the publisher sort of forgot to renew their domain.
0:22:25 I mean, literally something like that.
0:22:29 Now, they’re more nefarious versions of it, but it’s that sort of thing where, you know,
0:22:35 these really bad players are inserting themselves and taking advantage of the vulnerabilities
0:22:40 in the system, of which there are many, to really print money because then they can get
0:22:44 people to pay them to publish in those journals and they even are getting indexed in the places
0:22:45 that matter.
0:22:53 It sounds like there are thousands of people who are willing to pay journals that are quasi
0:22:56 real to publish their papers.
0:22:58 At least thousands, yes.
0:23:01 Who are the kind of authors who would publish in those journals?
0:23:06 Are they American, not American, or their particular fields or disciplines that happen
0:23:08 to be most common?
0:23:12 Well, I think what it tends to correlate with is how direct or intense the publisher
0:23:16 of Paris culture is in that particular area.
0:23:21 And generally that varies more by country or region than anything else.
0:23:25 If you look at, for example, the growth in China of a number of papers published, what’s
0:23:30 calculated is the impact of those papers, which relies on things like how often they’re
0:23:31 cited.
0:23:37 You can trace that growth very directly from government mandates.
0:23:43 For example, if you publish in certain journals known as a high impact factor, you actually
0:23:48 got a cash bonus that was a sort of multiple of the number of the impact factor.
0:23:53 And that can make a big difference in your life.
0:23:59 Recent research by John Yenedes, Thomas Collins, and Yerun Boss has examined what they call
0:24:02 extremely productive authors.
0:24:07 They left out physicists since some physics projects are so massive that one paper can
0:24:10 have more than 5,000 authors.
0:24:14 So leaving aside the physicists, they found that over the course of one year, more than
0:24:21 1,200 researchers around the world publish the equivalent of one paper every five days.
0:24:28 The top countries for these extremely productive authors were China, the US, Saudi Arabia,
0:24:30 Italy, and Germany.
0:24:35 When there is so much research being published, you’d also expect that the vast majority of
0:24:38 it is never read by more than a handful of people.
0:24:43 There’s also the problem, as the economics blogger Noah Smith recently pointed out, that
0:24:50 too much academic research is just useless, at least to anyone beyond the author.
0:24:53 But you shouldn’t expect any of this to change.
0:24:58 Global scholarly publishing is a $28 billion market.
0:25:04 Everyone complains about the very high price of journal subscriptions, but universities
0:25:07 and libraries are essentially forced to pay.
0:25:12 There is, however, another business model in the research paper industry, and it is
0:25:13 growing fast.
0:25:16 It’s called Open Access Publishing.
0:25:22 And here it’s most often the authors who pay to get into the journal.
0:25:27 If you think that sounds problematic, well, yes.
0:25:31 Consider Hindawi, an Egyptian publisher with more than 200 journals, including the Journal
0:25:37 of Combustion, the International Journal of Ecology, the Journal of Advanced Transportation.
0:25:41 Most of their journals were not at all prestigious, but that doesn’t mean they weren’t lucrative.
0:25:47 A couple years ago, Hindawi was bought by John Wiley and Sons, the huge American academic
0:25:50 publisher for nearly $300 million.
0:25:55 So Hindawi’s business model was they’re an open access publisher, which usually means
0:26:00 you charge authors to publish in your journal, and you charge them, you know, it could be
0:26:05 anywhere from hundreds of dollars to even thousands of dollars per paper, and they’re
0:26:09 publishing, you know, tens of thousands and sometimes even more papers per year.
0:26:11 So you can start to do that math.
0:26:17 What happened at Hindawi was that somehow, paper mills realized that they were vulnerable.
0:26:19 So they started targeting them.
0:26:24 They’ve actually started paying some of these editors to accept papers from their paper mill.
0:26:31 And long story short, they now have had to retract something like, we’re still figuring
0:26:34 out the exact numbers when the dust settles, but in the thousands.
0:26:40 In 2023, Hindawi retracted more than 8,000 papers.
0:26:45 That was more retractions than there had ever been in a year from all academic publishers
0:26:46 combined.
0:26:52 Wiley recently announced that they will stop using the Hindawi brand name, and they folded
0:26:55 the remaining Hindawi journals into their portfolio.
0:26:59 But they are not getting out of the pay to publish business.
0:27:02 Publishers earn more from publishing more.
0:27:03 It’s a volume play.
0:27:07 And when you’re owned by, you know, shareholders who want growth all the time, that is the
0:27:09 best way to grow.
0:27:14 And these are businesses with, you know, very impressive and enviable profit margins of,
0:27:17 you know, sometimes up to 40%.
0:27:18 And these are not on small numbers.
0:27:23 The profit itself is in the billions often.
0:27:28 It’s hard to blame publishers for wanting to earn billions from an industry with such
0:27:30 bizarre incentives.
0:27:36 But if publishers aren’t looking out for the integrity of the research, who is?
0:27:37 That’s coming up after the break.
0:27:40 I’m Stephen Dubner, and this is Freakonomics Radio.
0:27:56 We feel like when we’re in cultures that there is no way for any of us to change the culture.
0:27:57 It’s a culture.
0:28:00 My God, how could we change it?
0:28:06 But we also recognize that cultures are created by the people that comprise them.
0:28:12 And the notion that we collectively can actually do something to shift the research culture
0:28:19 I think has spread, and that spreading has actually accelerated the change of the research
0:28:21 culture for the better.
0:28:23 That is Brian Nosek.
0:28:27 He’s a psychology professor at the University of Virginia, and he runs the Center for Open
0:28:29 Science.
0:28:34 For more than a decade, his center has been trying to get more transparency in academic
0:28:35 research.
0:28:41 You might think there would already be transparency in academic research, at least I did.
0:28:45 But here’s what Nosek said in part one of the series, when we were talking about how
0:28:49 researchers tend to hoard their data rather than share it.
0:28:54 Yeah, it’s based on the academic reward system.
0:28:56 Publication is the currency of advancement.
0:29:02 I need publications to have a career, to advance my career, to get promoted.
0:29:08 And so the work that I do that leads to publication, I have a very strong sense of, “Oh my gosh,
0:29:15 if others now have control over this, my ideas, my data, my designs, my solutions, then I
0:29:18 will disadvantage my career.”
0:29:22 I asked Nosek how he thinks this culture can be changed.
0:29:27 So for example, we have to make it easy for researchers to be more transparent with their
0:29:28 work.
0:29:32 If it’s really hard to share your data, then adding on that extra work is going to slow
0:29:34 down my progress.
0:29:37 We have to make it normative.
0:29:41 People have to be able to see that others in their community are doing this.
0:29:42 They’re being more transparent.
0:29:46 They’re being more rigorous so that we, instead of saying, “Oh, that’s great ideas and nobody
0:29:49 does it,” you say, “Oh, there’s somebody over there that’s doing it.
0:29:51 Oh, maybe I could do it too.”
0:29:53 We have to deal with the incentives.
0:29:58 Is it actually relevant for my advancement in my career to be transparent, to be rigorous,
0:30:00 to be reproducible?
0:30:03 And then we have to address the policy framework.
0:30:09 If it’s not embedded in how it is that funders decide who to fund, institutions decide who
0:30:15 to hire, and journals to decide what to publish, then it’s not going to be internally and completely
0:30:17 embedded in the system.
0:30:18 Okay.
0:30:20 So that is a lot of change.
0:30:24 Here’s one problem NOSC and his team are trying to address.
0:30:30 Some researchers will cherry-pick or otherwise manipulate their data or find ways to goose
0:30:34 their results to make sure they come up with a finding that will capture the attention
0:30:36 of journal editors.
0:30:42 So NOSC’s team created a software platform called the Open Science Framework where researchers
0:30:49 can pre-register their project and their hypothesis before they start collecting data.
0:30:50 Yeah.
0:30:56 So the idea is you register your designs and you’ve made that commitment in advance.
0:30:59 And then as you’re carrying out the research, if things change along the way, which happens
0:31:02 all the time, you can update that registration.
0:31:04 You could say, “Here’s what’s changing.”
0:31:08 We didn’t anticipate that going into this community was going to be so hard and here’s
0:31:10 how we had to adapt.
0:31:11 That’s fine.
0:31:12 You should be able to change.
0:31:17 You just have to be transparent about those changes so that the reader can evaluate.
0:31:20 And then those data are time-stamped together.
0:31:21 Exactly.
0:31:22 Yeah.
0:31:23 You put your data and your materials.
0:31:25 If you did a survey, you add the surveys.
0:31:28 If you did behavioral tasks, you can add those.
0:31:33 So all of that stuff can be attached then to the registration so that you have a more
0:31:36 comprehensive record of what it is you did.
0:31:41 It sounds like you’re basically raising the cost of sloppiness or fraud, yes?
0:31:44 It makes fraud more inconvenient.
0:31:46 And that’s actually a reasonable intervention.
0:31:52 I don’t think any intervention that we could design could prevent fraud in a way that doesn’t
0:31:55 stifle actual legitimate research.
0:32:00 We just want to make visible all the things that legitimate researchers are doing so that
0:32:04 someone that doesn’t want to do that extra work has a harder time.
0:32:08 And eventually, if everything is exposed, then the person who would be motivated to
0:32:12 do fraud might say, “Well, it’s just as easy to do the research the real way.
0:32:16 So I guess I’ll do that.”
0:32:19 The idea of pre-registration isn’t new.
0:32:22 It goes back to at least the late 19th century.
0:32:25 But there is a big appetite for it now.
0:32:28 The Data Collada team came up with their own version.
0:32:33 Here is Yuri Simonson from the Asade Business School in Barcelona.
0:32:35 It’s a platform that we launched.
0:32:37 It’s called Aspredicted.
0:32:39 And it’s basically eight questions that people write.
0:32:43 People call author, sign on it, it’s time stamped, people, you can show the PDF.
0:32:47 And when we launched it, we thought, “Okay, when do we call this a failure?”
0:32:49 You know, thinking ahead, “When do you shut down the website?”
0:32:54 But all right, if we don’t get 100 a year, we’re going to call that failure.
0:32:56 And we’re getting about 140 a day now.
0:33:02 Brian Nosik says, “The registered report model can be especially helpful to the journals.”
0:33:07 So in the standard publishing model, I do all of my research, I get my findings, I write
0:33:09 it up in a paper and I send it to the journal.
0:33:12 In that model, the reward system is about the findings.
0:33:17 I need to get those findings to be as positive, novel, and tidy as I can so that you, the
0:33:20 reviewer, say, “Okay, okay, you can publish it.”
0:33:25 That’s dysfunctional and it leads to all of those practices that might lead the claims
0:33:28 to be more exaggerated than the evidence.
0:33:35 The registered report model says, “To the journal, you are going to submit, Brian, the
0:33:39 methodology that you’re thinking about doing and why you’re asking that question and
0:33:43 the background research supporting that question being important and that methodology being
0:33:45 effective methodology.
0:33:46 We’ll review that.
0:33:49 We don’t know what the results are, you don’t know what the results are, but we’re going
0:33:52 to review based on, do you have an important question?”
0:33:56 So this is almost like before you build a house, you’re going to show us your plan and
0:34:00 we’re the building department and we’re going to come and say, “Yeah, that looks legit.
0:34:03 It’s not going to collapse, it’s not going to infringe on your neighbor,” and so on.
0:34:04 Is that the idea?
0:34:05 Exactly.
0:34:12 The key part is that the reward, me getting that publication, is based on you agreeing
0:34:15 that I’m asking an important question and I’ve designed an effective method to test
0:34:16 it.
0:34:17 It’s no longer about the results.
0:34:19 None of us know what the results are.
0:34:26 And so even if the results are uninteresting, not new, et cetera, we’ll know they’re legitimate,
0:34:30 but there would seem to be a conflict of incentive there, which is that, “Oh, now do I need to
0:34:33 publish this uninteresting, not new result?
0:34:34 What do you do about that?”
0:34:39 Yeah, so the commitment that the journal makes is we’re going to publish it regardless
0:34:43 of outcome and the authors are making that commitment too.
0:34:47 We’re going to carry this out as we said we would and we’ll report what happens.
0:34:52 Now an interesting thing happens in the change of the culture here in evaluating research
0:34:57 because you said, “Well, if it’s an uninteresting finding, do we still have to publish it?”
0:35:01 It turns out that when you have to make a decision of whether to publish or not before
0:35:06 knowing that the results are, the orientation that the reviewers bring, that the authors
0:35:10 bring is, “Do we need to know the answer to this?”
0:35:13 Regardless of what happens, do we need to know the answer?
0:35:16 Is the question important in other words?
0:35:17 Exactly.
0:35:22 Is the question important enough that we need evidence regardless of what the evidence is?
0:35:25 And it dramatically shifts what ends up being published.
0:35:30 So in the early evidence with register reports, more than half of the hypotheses that are
0:35:35 proposed end up not being supported in the final paper.
0:35:43 In the standard literature, comparable type of domains, more than 95% of the hypotheses
0:35:45 are supported in the paper.
0:35:48 You wonder in the standard literature, “If we’re always right, why do we bother doing
0:35:49 the research?”
0:35:50 Right?
0:35:52 Our hypotheses are always right.
0:35:57 And of course, it’s laughable because we know that’s not what’s actually happening.
0:36:01 We know that all that failed stuff is getting left out and we’re not seeing it.
0:36:08 And the actual literature is an exaggeration of what the real literature is.
0:36:12 I think we should say a couple of things here about academia.
0:36:17 The best academics are driven by a real scientific impulse.
0:36:22 They may know a lot, but they’re not afraid to admit how much we still don’t know.
0:36:28 But they’re driven by an urge to investigate, and not necessarily at least, an urge to produce
0:36:33 a result that will increase their own status.
0:36:39 But academia is also an extraordinarily status conscious place.
0:36:41 I’m not saying there’s anything wrong with that.
0:36:47 If status is the reward that encourages a certain type of smart disciplined person to
0:36:51 do research for the sake of research, rather than taking their talents to an industry that
0:36:56 might pay them much more, that is fantastic.
0:37:02 But if the pursuit of status for status is sake leads an academic researcher to cheat,
0:37:04 well, yeah, that’s bad.
0:37:07 I mean, the incentives are part of the problem, but I don’t think it’s a part of the problem
0:37:09 that we have to fix.
0:37:12 That again is Yuri Simonson from Data Colada.
0:37:17 I think the incentives, it’s like going to be little rub banks because their incentives
0:37:20 are there, but it doesn’t mean that we should stop, you know, rewarding cash.
0:37:26 It just, we should, you know, make our saves safer because it’s good for cash to buy things,
0:37:30 and it’s good for people who publish interesting findings to get recognition.
0:37:36 Brian Nosek says that more than 300 journals are now using the registered reports model.
0:37:43 I think there is broad buy-in on the need to change, and it has already hit the mainstream
0:37:50 of many of the changes that we promote, sharing data materials code, pre-registering research,
0:37:52 reporting all outcomes.
0:37:59 So we’re in the scaling phase for those activities, and what I am optimistic about is that there
0:38:05 is this meta science community that is interrogating whether these solutions are actually having
0:38:07 the desired impact.
0:38:12 And so this is the most exciting part of the movement as I’m looking to the future is this
0:38:16 dialogue between activism and reform.
0:38:17 We can do these things.
0:38:21 Let’s make these changes, and meta science and evaluation.
0:38:22 Is this working?
0:38:24 Did you do what you said it was going to do and et cetera?
0:38:29 And I hope that the tightness of that loop will stay tight because that I think will
0:38:34 make for a very healthy discipline that is constantly skeptical of itself and constantly
0:38:39 looking to do better.
0:38:43 Is Brian Nosek too optimistic?
0:38:44 Maybe.
0:38:51 100 journals is a great start, but that represents maybe 1% of all journals.
0:38:58 For journals and authors, the existing publishing incentives are very strong.
0:39:01 So I think journals have really complicated incentives.
0:39:06 That is Samin Vizier, a psychology professor at the University of Melbourne.
0:39:09 Of course, they want to publish good work to begin with, so there’s some incentive to
0:39:12 do some quality check and cover their ass there.
0:39:16 But once they publish something, there’s a strong incentive for them to defend it or
0:39:19 at least to not publicize any errors.
0:39:27 And here’s a reason to think that Brian Nosek is right to be optimistic about research reform.
0:39:32 Some of his fellow reformers, including Samin Vizier, have been promoted into prestigious
0:39:34 positions in their field.
0:39:40 Vizier spent some time as editor-in-chief of the journal Social Psychological and Personality
0:39:41 Science.
0:39:44 So one of the things the editor-in-chief does is when a manuscript is submitted, I would
0:39:48 read it and decide whether it should continue through the peer review process or I could
0:39:51 reject it there, and that’s called desk rejection.
0:39:54 One thing I started doing at the journal that wasn’t official policy, it was just a practice
0:39:58 I decided to adopt, was that when a manuscript was submitted, I would hide the author’s
0:39:59 names for myself.
0:40:03 So I was rejecting things without looking at who the authors were.
0:40:06 So the publication committee started a conversation with me, which is totally reasonable, about
0:40:10 the overall desk rejection rate, am I rejecting too many things, et cetera.
0:40:14 There was some conversation about whether I was desk-rejecting the wrong people,
0:40:18 whether I was stepping on important people’s toes. An email was forwarded to me from
0:40:24 a quote-unquote award-winning social psychologist: “Simine desk-rejected my paper. I found this extremely
0:40:27 distasteful and I won’t be submitting there again.”
0:40:31 And when I would try to engage about the substance of my decisions, the scientific basis for
0:40:34 them, that wasn’t what the conversation was about.
0:40:37 So it was basically like, do you know who I am?
0:40:38 Yeah.
0:40:39 Yeah.
0:40:41 “I won’t be sending anything to you and that journal then.”
0:40:46 It was a tense few months, but in the end, I was allowed to continue doing what I was
0:40:47 doing.
0:40:52 Vazire recently took on the editor-in-chief job at a different journal, Psychological
0:40:53 Science.
0:40:57 It is one of the premier journals in the field.
0:41:03 It’s also the journal where Francesca Gino published two of her allegedly fraudulent papers.
0:41:08 So I asked Vazire what changes she is hoping to make.
0:41:10 We’re expanding a team that used to have a different name.
0:41:15 We’re going to call them the Statistics, Transparency, and Rigor editors, the STAR editors.
0:41:20 And so that team will be supplementing the handling editors, the editors who actually
0:41:24 organize the peer review and make the decisions on submissions.
0:41:28 Like if a handling editor has a question about the data integrity or about details of the
0:41:32 methods or things like that, the STAR editor team will provide their expertise and help
0:41:33 fill in those gaps.
0:41:37 We’re also going to try, and I’m not sure exactly what form this will take, to incentivize more
0:41:41 accurate and calibrated claims and less hype and exaggeration.
0:41:45 This is something that I think is particularly challenging with short articles like Psychological
0:41:49 Science publishes, and especially at a journal that has a really high rejection rate, where
0:41:51 the vast majority of submissions are rejected.
0:41:55 Authors are competing for those few spots and so it feels like they have to make a really
0:41:57 bold claim.
0:42:00 And so it’s going to be very difficult to play this like back and forth where authors are
0:42:02 responding to the perception of what the incentives are.
0:42:06 So we need to convey to them that actually if you go too far, make too bold of claims
0:42:10 that aren’t warranted, you will be more likely to get rejected.
0:42:12 But I’m not sure if authors will believe that just because we say that.
0:42:16 They’re still competing for a very selective number of spots.
0:42:21 So as a journal editor, how do you think about the upside risk of publishing something
0:42:25 new and exciting against the downside risk of being wrong?
0:42:27 Oh, I don’t mind being wrong.
0:42:29 I think journals should publish things that turn out to be wrong.
0:42:32 It would be a bad thing to approach journal editing by saying we’re only going to publish
0:42:35 true things or things that we’re 100% sure are true.
0:42:39 The important thing is that the things that are more likely to be wrong are presented
0:42:42 in a more uncertain way and sometimes we’ll make mistakes even there.
0:42:45 Sometimes we’ll present things with certainty that we shouldn’t have.
0:42:49 What I would like to be involved in and what I plan to do is to encourage more post-publication
0:42:55 critique and correction, reward the whistleblowers who identify errors that are valid and that
0:43:01 need to be acted upon and create more incentives for people to do that and do that well.
0:43:04 How would you reward whistleblowers?
0:43:05 I don’t know.
0:43:10 Do you have any ideas?
0:43:15 Right now, the rewards for whistleblowers in academia may seem backwards.
0:43:20 Remember, the Data Colada whistleblowers were sued by Francesca Gino, one of the people
0:43:21 they blew the whistle on.
0:43:25 They needed a GoFundMe campaign for their legal defense.
0:43:31 So no, the whistleblowers aren’t collecting any bounties, nor do they cover themselves
0:43:33 in any kind of glory.
0:43:38 Stephen, I’m the person that walks into these academic conferences and everyone is like,
0:43:40 here comes Debbie Downer.
0:43:43 That’s Leif Nelson, another member of Data Colada.
0:43:47 He’s a professor of business administration at UC Berkeley.
0:43:53 In a recent New Yorker piece by Gideon Lewis-Kraus about these fraud scandals, Nelson
0:43:58 and his Data Colada partners were described as having a, quote, “basic willingness to
0:44:03 call bullshit.”
0:44:09 So now that you’ve become part of this group that, collectively, I would think of as
0:44:15 the primary whistleblower or police or steward, whatever word we want to use, against fraudulent
0:44:20 research in the social sciences, what does that feel like?
0:44:24 I’m guessing on one level, it feels like an accomplishment.
0:44:28 On the other hand, it makes me think of a police force where there’s the Internal Affairs
0:44:36 Bureau, where detectives are put there to find the bad apples, and even though everybody’s in
0:44:41 favor of rooting out the bad apples, everybody kind of hates the IAB guys.
0:44:48 And I’m curious what the emotional toll or cost has been to you.
0:44:50 Wow.
0:44:51 Bad question?
0:44:52 No.
0:44:55 Like, it reminds me of how stressful it all is.
0:44:59 We struggle a little bit with thinking about analogies for what we do.
0:45:01 We’re definitely not police.
0:45:03 Police, amongst other things, have institutional power.
0:45:06 They have badges, whatever.
0:45:07 We don’t have any of that.
0:45:09 We’re not enforcers in any way.
0:45:15 The Internal Affairs thing hurts a little bit, but I get it because that’s saying, “Hey,
0:45:18 within the behavioral science community, we’re the people that are watching the behavioral
0:45:20 scientists.”
0:45:22 And you’re right, no one likes internal affairs.
0:45:27 Most of our thinking is that we want to be journalists, that it’s fun to investigate.
0:45:28 That’s true for everybody in the field, right?
0:45:31 They’re all curious about whatever it is they’re studying.
0:45:33 And so we’re curious about this.
0:45:37 And then when we find things that we think are interesting, we also want to talk about
0:45:40 it, not just with each other, but with the outside world.
0:45:45 But I don’t identify as much with being a police officer or even a detective, though
0:45:49 every now and then people will compare us to something like Sherlock Holmes and that
0:45:51 feels more fun.
0:45:55 But in truth, the reason I sort of wince at the question is that the vast majority of
0:46:00 the time, it comes with far more burden than it does pleasure.
0:46:02 Even before the lawsuit?
0:46:10 Yeah, the lawsuit makes all of the psychological burden into a concrete observable thing.
0:46:16 But even prior to that, every time we report on anything, it’s going to be like,
0:46:23 “Look, we think something bad happened here.” Someone is going to be mad at us, and probably
0:46:27 more people are going to be, and I don’t want people to be mad at me.
0:46:32 And I think about some of the people involved and it’s hard because I know a lot of these
0:46:36 people and I know their friends and I know the friends of the friends, and that carries
0:46:40 real, real stress for, I think, all three of us.
0:46:45 In the New Yorker piece, there are still people who call you pretty harsh names.
0:46:47 You’ve been compared to the Stasi, for instance.
0:46:49 Yeah, that’s real bad.
0:46:53 I’m not happy with being compared to the Stasi.
0:46:56 The optimistic take is that there’s less of that than there used to be.
0:47:02 When any of the three of us go and visit universities, for example, and we talk to doctoral students
0:47:06 and we talk to assistant professors and we talk to associate professors, we talk to senior
0:47:11 professors, the students basically all behave as though they don’t understand why anyone
0:47:13 would ever be against what we’re saying.
0:47:18 They wouldn’t understand the Stasi thing; they wouldn’t even understand the objections.
0:47:21 They’re almost at the level of, “I don’t understand why we’re having you come for a talk.
0:47:23 Doesn’t everyone already believe this?”
0:47:26 But when I talk to people that are closer to retirement than they are to being a grad
0:47:32 student, they’re more like, “You’re making waves where you don’t need to, you’re pushing
0:47:34 back against something that’s not there.
0:47:36 We’ve been doing this for decades.
0:47:38 Why fix what isn’t broken?”
0:47:39 That sort of thing.
0:47:42 If they were to say that to you directly, “Why fix what isn’t broken?”
0:47:43 What would you say?
0:47:46 I would say, “But it is broken.”
0:47:48 And your evidence for that would be?
0:47:53 The evidence for that is multi-fold.
0:47:57 After the break, multi-fold we shall.
0:47:58 I’m Stephen Dubner.
0:47:59 This is Freakonomics Radio.
0:48:08 We’ll be right back.
0:48:11 Can academic fraud be eliminated?
0:48:12 Certainly not.
0:48:15 The incentives are too strong.
0:48:21 Also, to be reductive, cheaters are going to cheat, and I doubt there is one field of
0:48:27 human endeavor, no matter how noble or righteous or honest it claims to be, where some cheating
0:48:29 doesn’t happen.
0:48:34 But can academic fraud at least be greatly reduced?
0:48:35 Perhaps.
0:48:42 But that would likely require some big changes, including a new type of gatekeeper.
0:48:48 Samin Vizier, the journal editor we heard from earlier, is one kind of gatekeeper.
0:48:50 Sometimes, for example, we’ll get a submission where the research is really solid, but the
0:48:54 conclusion is too strong, and I’ll sometimes tell authors, “Hey, look, I’ll publish your
0:48:57 paper if you tone down the conclusion,” or even sometimes change the conclusion from
0:49:01 saying there is evidence for my hypothesis to there’s no evidence one way or the other,
0:49:05 but it’s still interesting data, and authors are not always willing to do that, even if
0:49:07 it means getting a publication in this journal.
0:49:11 So I do think that maybe it’s a sign that they genuinely believe what they’re
0:49:15 saying, which is maybe to their credit, I don’t know if that’s good news or bad news.
0:49:20 I think often when we’re kind of overselling something, we probably believe what we’re
0:49:21 saying.
0:49:24 And there’s another important gatekeeper in academic journals, one that we’ve barely
0:49:29 talked about, the referees who assess journal submissions.
0:49:35 Peer review is a bedrock component of what makes academic publishing so credible, at
0:49:40 least in theory, but as we’ve been hearing about every part of this industry, the incentives
0:49:44 for peer reviewers are also off.
0:49:48 Here again is Ivan Oransky from Retraction Watch.
0:49:53 If you add up the number of papers published every year, and then you multiply that times
0:49:59 the two or three peer reviewers who are typically supposed to review those papers, and sometimes
0:50:04 they go through multiple rounds, it’s easily in the tens of millions of peer reviews as
0:50:05 a unit.
0:50:10 And if each of those takes anywhere from four hours to eight hours of your life as an expert,
0:50:14 which you don’t really have ’cause you gotta be teaching, you gotta be doing your own research,
0:50:18 you come up with a number that cannot possibly be met by qualified people.
0:50:19 Really, it can’t.
0:50:21 I mean, the math just doesn’t work.
0:50:23 And none of them are paid.
0:50:27 You are sort of expected to do this because somebody will peer review your paper at some
0:50:30 other point, which sort of makes sense until you really pick it apart.
0:50:35 Now peer reviewers, so even the best of them, and by best I mean people who really sit and
0:50:40 take the time and probe what’s going on in the paper and look at all the data.
0:50:41 But you can’t always look at the data.
0:50:45 In fact, most of the time you can’t look at the raw data, even if you had time because
0:50:47 the authors don’t make it available.
0:50:53 So peer review, it’s become really peer review lite, and maybe not even that, at the vast
0:50:54 majority of journals.
0:50:59 So it’s no longer surprising that so much gets through the system that shouldn’t.
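Oransky’s arithmetic is worth making concrete. Below is a minimal back-of-the-envelope sketch; every input is an illustrative assumption, not a figure from the episode.

```python
# Back-of-the-envelope check of the peer-review workload Oransky describes.
# All inputs are illustrative assumptions, not figures from the episode.

papers_per_year = 3_000_000     # assumed global volume of published papers
reviewers_per_paper = 2.5       # "two or three peer reviewers" per paper
review_rounds = 1.5             # "sometimes they go through multiple rounds" (assumed average)
hours_per_review = 6            # midpoint of "anywhere from four hours to eight hours"

reviews_per_year = papers_per_year * reviewers_per_paper * review_rounds
expert_hours = reviews_per_year * hours_per_review
full_time_years = expert_hours / 2_000  # assuming ~2,000 working hours per expert-year

print(f"peer reviews per year:  {reviews_per_year:,.0f}")   # ~11 million with these inputs
print(f"unpaid expert hours:    {expert_hours:,.0f}")       # ~68 million hours
print(f"full-time expert-years: {full_time_years:,.0f}")    # ~34,000 researcher-years of free labor
```

Even with these deliberately conservative inputs, the unpaid workload comes to tens of thousands of full-time researcher-years, and with higher publication counts it lands squarely in the tens of millions of reviews. That is the mismatch Oransky is pointing at.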
0:51:02 This is a very hot topic.
0:51:07 And that again is Leif Nelson from UC Berkeley and Data Colada.
0:51:13 Editors largely in my field are uncompensated for their job, and reviewers are almost purely
0:51:15 uncompensated for their job.
0:51:19 And so they’re all doing it for the love of the field.
0:51:21 And those jobs are hard.
0:51:24 I’m an occasional reviewer and an occasional editor.
0:51:27 And every time I do it, it’s basically taxing.
0:51:34 The first part of the job was reading a whole paper and deciding whether the topic was interesting.
0:51:38 Whether it was contextualized well enough that people would understand what it was about.
0:51:44 Whether the study as designed was good at testing the hypothesis as articulated.
0:51:48 And only after you get past all of those levels would you say, okay, and now do they have
0:51:50 evidence in favor of the hypothesis.
0:51:58 By the way, we have mostly been talking about the production side of academic research this
0:52:00 whole time.
0:52:02 What about the consumer side?
0:52:07 All of us are also looking for the most interesting and useful studies.
0:52:13 All of us in industry, in government, in the media, especially the media.
0:52:15 Here’s Ivan Oransky again.
0:52:17 We have been conditioned,
0:52:20 and in fact, because of our own attention economy,
0:52:25 we end up covering studies over all else when it comes to science and medicine.
0:52:26 I like to think that’s changing a little bit.
0:52:28 I hope it is.
0:52:33 But we cover individual studies and we cover the studies that sound the most interesting
0:52:36 or that have the biggest effect size and things like that.
0:52:42 You wear red, you must be angry or if it says that this is definitely a cure for cancer.
0:52:43 And journalists love that stuff.
0:52:44 They lap it up.
0:52:48 Like signing a document at the top will make you more likely to be honest on the form.
0:53:00 And on that note, I went back to Max Bazerman, one of the co-authors of that paper, which
0:53:03 inspired this series.
0:53:09 For Bazerman, the experience of getting caught up in fraud accusations was particularly bewildering
0:53:14 because the accusations were against a collaborator and friend whom he fully trusted, Francesca
0:53:15 Gino.
0:53:22 So, you know, when we think about Ponzi schemes, it’s named after a guy named Ponzi who was
0:53:26 an Italian-American who preyed on the Italian-American community.
0:53:32 And if we think about Bernie Madoff, he preyed on lots of people, but particularly many very
0:53:36 wealthy Jewish individuals and organizations.
0:53:40 One of the interesting things about trust is that it creates so many wonderful opportunities.
0:53:45 So in the academic world, the fact that I can trust my colleagues means that we can delegate
0:53:48 the work to the person who can handle it best.
0:53:51 So there’s lots of enormous benefits from trust.
0:53:56 But it’s also true that if there’s somebody out there who’s going to commit a fraud of
0:54:04 any type, those of us who are trusting that individual are perhaps in the worst position
0:54:07 to notice that something’s wrong.
0:54:12 And quite honestly, Stephen, you know, I’ve been working with junior colleagues who are
0:54:18 smarter than me and know how to do a variety of tasks better than me for such a long time.
0:54:22 I’ve always trusted them, certainly for junior colleagues.
0:54:27 For the newest doctoral students, I may not have trusted their competence because they
0:54:28 were still learning.
0:54:34 But in terms of using the word trust in an ethical sense, I’ve never questioned the ethics
0:54:35 of my colleagues.
0:54:39 So this current episode has really hit me pretty, pretty heavily.
0:54:44 Can I tell you, Max, that is what upsets me about this scandal, even though I’m not an
0:54:48 academic, but I’ve been writing about and interacting with academics for quite a while
0:54:49 now.
0:54:53 And the problem is that I maybe gave them overall too much credit.
0:54:59 I considered academia one of the last bastions of, I mean, I do sound like a fool now when
0:55:06 I say it, but one of the last bastions of honest, transparent, empirical behavior where
0:55:11 you’re bound by a sort of code that only very rarely would someone think about intentionally
0:55:12 violating.
0:55:19 I’m curious if you felt that way as well, that you were sort of played or were naive
0:55:20 in retrospect.
0:55:22 Undoubtedly, I was naive.
0:55:27 You know, not only did I trust my colleagues on the signing first paper, but I think I’ve
0:55:30 trusted my colleagues for decades.
0:55:36 And hopefully with a good basis for trusting them, I do want to highlight that there’s
0:55:38 so many benefits of trust.
0:55:44 So the world has done a lot better because we trust science.
0:55:49 And the fact that there’s an occasional scientist who we shouldn’t trust should not keep us
0:55:52 from gaining the benefit that science creates.
0:56:00 And so one of the harms created by the fraudsters is that they give credibility to the science
0:56:09 deniers who are so often keeping us from making progress in society.
0:56:14 It’s worth pointing out that scientific research findings have been refuted and overturned
0:56:17 since the beginning of scientific research.
0:56:20 That’s part of the process.
0:56:26 But what’s happening at this moment, especially in some fields like social psychology, it can
0:56:28 be disheartening.
0:56:33 It’s not just a replication crisis or a data crisis.
0:56:36 It’s a believability crisis.
0:56:39 Simine Vazire acknowledges this.
0:56:42 There were a lot of societal phenomena that we really wanted explanations for.
0:56:47 And then social psych offered these kind of easy explanations or maybe not so easy, but
0:56:50 these relatively simple explanations that people wanted to believe just to have an answer
0:56:51 and an explanation.
0:56:56 So just how bad is the believability crisis?
0:57:01 Danny Kahneman, who died last year, was perhaps the biggest name in academic psychology in
0:57:05 a couple generations, so big that he once won a Nobel Prize in economics.
0:57:10 His work has been enormously influential in many fields and industries.
0:57:15 But in a New York Times article about the Francesca Gino and Dan Ariely scandals, he
0:57:21 said, “When I see a surprising finding, my default is not to believe it.
0:57:26 12 years ago, my default was to believe anything that was surprising.”
0:57:30 Here again is Max Bazerman, who was a colleague and friend of Kahneman’s.
0:57:36 I think that my generation fought against the open science movement for far too long,
0:57:40 and it’s time that we get on the bandwagon and realize that we need some pretty massive
0:57:46 reform of how social science is done, not only to improve the quality of social science,
0:57:49 but also to make us more credible with the world.
0:57:54 So many of us are attracted to social science because we think we can make the world better,
0:57:58 and we can’t make the world better if the world doesn’t believe our results anymore.
0:58:03 So I think that we have a fundamental challenge to figure out how do we go about doing that.
0:58:08 In terms of training, I think that for a long time, if we think about training and research
0:58:13 methods and statistics, that was more like the medicine that you have to take as part
0:58:15 of becoming a social scientist.
0:58:21 And I think we need to realize that it’s a much more central and important topic.
0:58:27 If we’re going to be creating reproducible, credible social science, we need to deal with
0:58:31 lots of the issues that the open science movement is telling us about.
0:58:34 And we’ve taken too long to listen to their advice.
0:58:41 So if we go from Data Colada talking about p-hacking in 2011, you know, there were lots
0:58:44 of hints that it was time to start moving.
0:58:49 And the field obviously has moved in the direction that Data Colada and Brian Nosek have moved
0:58:50 us.
0:58:56 And finally, we have Simine Vazire as the new incoming editor of Psychological Science, which is
0:58:58 sort of a fascinating development as well.
0:59:00 So we’re moving in the right direction.
0:59:06 It’s taken us too long to pay attention to the wise advice that the open science movement
0:59:10 has outlined for us.
0:59:23 I do think there needs to be a reckoning.
0:59:29 I think that people need to wake up and realize that the foundation of at least a sizable
0:59:34 chunk of our field is built on something that’s not true.
0:59:39 And if a foundation of your field is not true, what does a good scientist do to break into
0:59:41 that field?
0:59:45 Like imagine you have a whole literature that is largely false.
0:59:49 And imagine that when you publish a paper, you need to acknowledge that literature.
0:59:53 And that if you contradict that literature, your probability of publishing really goes
0:59:54 down.
0:59:56 What do you do?
0:59:59 So what it does is it winds up weeding out the careful people who are doing true stuff.
1:00:04 And it winds up rewarding the people who are cutting corners or even worse.
1:00:12 So it basically becomes a field that rewards bad science and punishes good science
1:00:14 and good scientists.
1:00:21 Like this is about an incentive system and the incentive system is completely broken.
1:00:23 And we need to get a new one.
1:00:27 And the people in power who are reinforcing this incentive system, they need to not be
1:00:28 in power anymore.
1:00:32 You know, this is illustrating that there’s sort of a rot at the core of some of the stuff
1:00:34 that we’re doing.
1:00:41 And we need to put the right people who have the right values, who care about the details,
1:00:45 who understand that the materials and the data, they are the evidence.
1:00:48 We need those people to be in charge.
1:00:53 Like there can’t be this idea that these are one-off cases, they’re not.
1:00:55 They are not one-off cases.
1:00:56 So it’s broken.
1:01:00 You have to fix it.
1:01:02 That again was Joe Simmons.
1:01:06 Since we published this series last year, there have been reports of fraud in many fields.
1:01:11 Not just the behavioral sciences, but in botany, physics, neuroscience and more.
1:01:16 So we went back to Brian Nosek, who runs the Center for Open Science.
1:01:23 There really is accelerating movement, in the sense that some of the base principles, that
1:01:28 we need to be more transparent, we need to improve data sharing, we need to facilitate
1:01:33 the processes of self-correction, are not just head nods,
1:01:39 “Yeah, that’s an important thing,” but have really moved into, “Yeah, how are we going
1:01:40 to do that?”
1:01:45 And so I guess the theme of 2024 has been: how can we help people do it well?
1:01:48 At his center, Nosik is trying out a new plan.
1:01:53 One of the more exciting things that we’ve been working on is a new initiative that we’re
1:01:56 calling Lifecycle Journal.
1:02:03 And the basic idea is to reimagine scholarly publishing without the original constraints
1:02:04 of paper.
1:02:11 A lot of how the peer review process and publishing occurs today was done because of the limits
1:02:13 of paper.
1:02:17 But in a world where we can actually communicate digitally, there’s no reason that we need
1:02:22 to wait till the research is done to provide some evaluation.
1:02:27 There’s no reason to consider it final when it could be easily revised and updated.
1:02:33 There’s no reason to think of review as a single set of activities by three people
1:02:35 who judge the entire thing.
1:02:40 And so we will have a full marketplace of evaluation services that are each evaluating
1:02:42 the research in different ways.
1:02:46 It’ll happen across the research lifecycle from planning through completion.
1:02:51 And researchers will always be able to update and revise when errors or corrections are
1:02:52 needed.
1:02:57 But the need for corrections can move in mysterious ways.
1:03:04 Brian Nosek himself and collaborators, including Leif Nelson of Data Colada, had to retract
1:03:09 a recent article about the benefits of pre-registration after other researchers pointed out that their
1:03:15 article hadn’t properly pre-registered all of its hypotheses.
1:03:16 Nosek was embarrassed.
1:03:22 My whole life is about trying to promote transparent research practices, greater openness, trying
1:03:24 to improve rigor and reproducibility.
1:03:28 I am just as vulnerable to error as anybody else.
1:03:36 And so one of the real lessons, I think, is that without transparency, these errors will
1:03:39 go unexposed.
1:03:44 It would have been very hard for the critics to identify that we had screwed this up without
1:03:50 being able to access the portions of the materials that we were able to make public.
1:03:57 And as people are engaged with critique and pursuing transparency, and transparency is
1:04:04 becoming more normal, we might for a while see an ironic effect, which is transparency
1:04:11 seems to be associated with poorer research because more errors are identified.
1:04:15 And that ought to happen because errors are occurring.
1:04:18 Without transparency, you can’t possibly catch them.
1:04:26 But what might emerge over time, as our verification processes improve and as we have a sense of accountability
1:04:33 to our transparency, is that transparency may decrease error over time, but
1:04:34 not the need to check.
1:04:36 And that’s the key.
1:04:40 Still, trust in science in the US has been declining.
1:04:45 So we asked Nosek if he is worried that this new transparency, which will likely uncover
1:04:49 more errors, might hurt his cause.
1:04:54 This is a real challenge that we wrestle with and have wrestled with since the origins of
1:05:03 the center is how do we promote this culture of critique and self-criticism about our field
1:05:10 and simultaneously have that be understood as the strength of research rather than its
1:05:12 weakness.
1:05:16 One of the phrases that I’ve liked to use in this is that the reason to trust science
1:05:19 is because it doesn’t trust itself.
1:05:25 That part of what makes science great as a social system is its constant self-scrutiny
1:05:31 and willingness to try to find and expose its errors so that the evidence that comes
1:05:37 out at the end is the most robust, reliable, valid evidence it can be.
1:05:43 And that continuous process is the best process in the world that we’ve ever invented for
1:05:46 knowledge production.
1:05:47 We can do better.
1:05:55 I think our mistake in some prior efforts of promoting science is to appeal to authority,
1:05:57 saying you should trust science because scientists know what they’re doing.
1:06:04 I don’t think that’s the way to gain trust in science because anyone can make that claim.
1:06:06 Appeals to authority are very weak arguments.
1:06:13 I think our opportunity as a field to address the skepticism of institutions generally and
1:06:21 science specifically is to show our work, is by being transparent, by allowing the criticism
1:06:28 to occur, by in fact encouraging and promoting critical engagement with our evidence.
1:06:33 That is the playing field I’d much rather be on with people who are the so-called enemies
1:06:39 of science than in competing appeals to authority.
1:06:43 Because if they need to wrestle with the evidence and an observer says, “Wow, one group is totally
1:06:49 avoiding the evidence and the other group is actually showing their work,” I think people
1:06:51 will know who to trust.
1:06:53 It’s easy to say; it’s very hard to do.
1:06:58 These are hard problems.
1:06:59 I agree.
1:07:00 These are hard problems.
1:07:05 To be fair, easy problems get solved or they simply evaporate.
1:07:11 It’s the hard problems that keep us all digging and we at Freakonomics Radio will keep digging
1:07:13 in this new year.
1:07:15 Thanks to Brian Nosek for the update.
1:07:18 Thanks especially to you for listening.
1:07:20 Coming up next time on the show.
1:07:22 The sign right here is 12 foot tall.
1:07:30 The economics of highway signs and after that, some 30 million Americans think that they
1:07:36 are allergic to the penicillin family of drugs and the vast majority of them are not.
1:07:37 Why does this matter?
1:07:43 Nothing kills bacteria better than these drugs.
1:07:46 We go inside the bizarro world of allergies.
1:07:47 That’s coming up soon.
1:07:54 Until then, take care of yourself and if you can, someone else too.
1:07:56 Freakonomics Radio is produced by Stitcher and Renbud Radio.
1:08:02 You can find our entire archive on any podcast app also at Freakonomics.com where we publish
1:08:04 transcripts and show notes.
1:08:07 This episode was produced by Alina Kullman.
1:08:12 Our staff also includes Augusta Chapman, Dalvin Abouaji, Elinor Osborn, Ellen Frankman, Elsa
1:08:17 Hernandez, Gabriel Roth, Greg Rippen, Jasmine Klinger, Jason Gambrell, Jeremy Johnston,
1:08:21 John Schnarrs, Lyric Bowditch, Morgan Levy, Neil Coruth, Sarah Lilly, Theo Jacobs and
1:08:23 Zac Lipinski.
1:08:26 Our theme song is Mr. Fortune by the Hitchhikers.
1:08:28 Our composer is Luis Guerra.
1:08:33 As always, thanks for listening.
1:08:36 We are a carrot-based organization because we don’t have sticks.
1:08:38 I mean, would you like me to loan you a stick just once in a while?
1:08:39 Yeah, that would be fun.
1:08:53 The Freakonomics Radio Network, the hidden side of everything.
1:08:56 [MUSIC PLAYING]
Probably not — the incentives are too strong. But a few reformers are trying. We check in on their progress, in an update to an episode originally published last year. (Part 2 of 2)
- SOURCES:
- Max Bazerman, professor of business administration at Harvard Business School.
- Leif Nelson, professor of business administration at the University of California, Berkeley Haas School of Business.
- Brian Nosek, professor of psychology at the University of Virginia and executive director at the Center for Open Science.
- Ivan Oransky, distinguished journalist-in-residence at New York University, editor-in-chief of The Transmitter, and co-founder of Retraction Watch.
- Joseph Simmons, professor of applied statistics and operations, information, and decisions at the Wharton School at the University of Pennsylvania.
- Uri Simonsohn, professor of behavioral science at Esade Business School.
- Simine Vazire, professor of psychology at the University of Melbourne and editor-in-chief of Psychological Science.
- RESOURCES:
- “How a Scientific Dispute Spiralled Into a Defamation Lawsuit,” by Gideon Lewis-Kraus (The New Yorker, 2024).
- “The Harvard Professor and the Bloggers,” by Noam Scheiber (The New York Times, 2023).
- “They Studied Dishonesty. Was Their Work a Lie?” by Gideon Lewis-Kraus (The New Yorker, 2023).
- “Evolving Patterns of Extremely Productive Publishing Behavior Across Science,” by John P.A. Ioannidis, Thomas A. Collins, and Jeroen Baas (bioRxiv, 2023).
- “Hindawi Reveals Process for Retracting More Than 8,000 Paper Mill Articles,” (Retraction Watch, 2023).
- “Exclusive: Russian Site Says It Has Brokered Authorships for More Than 10,000 Researchers,” (Retraction Watch, 2019).
- “How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data,” by Daniele Fanelli (PLOS One, 2009).
- Lifecycle Journal.
- EXTRAS:
- “Why Is There So Much Fraud in Academia? (Update)” by Freakonomics Radio (2024).
- “Freakonomics Goes to College, Part 1,” by Freakonomics Radio (2012).