Failure to replicate

What if you set out to replicate a series of 53 “landmark” preclinical cancer studies and found you could confirm only 6 of them? If you’re C. Glenn Begley, you write about it in Nature: “Raise standards for preclinical cancer research”.

What reasons underlie the publication of erroneous, selective or irreproducible data? The academic system and peer-review process tolerate and perhaps even inadvertently encourage such conduct. To obtain funding, a job, promotion or tenure, researchers need a strong publication record, often including a first-authored high-impact publication. Journal editors, reviewers and grant-review committees often look for a scientific finding that is simple, clear and complete: a 'perfect' story. It is therefore tempting for investigators to submit selected data sets for publication, or even to massage data to fit the underlying hypothesis.
But there are no perfect stories in biology. In fact, gaps in stories can provide opportunities for further research; for example, a treatment that may work in only some cell lines may allow elucidation of markers of sensitivity or resistance. Journals and grant reviewers must allow for the presentation of imperfect stories, and recognize and reward reproducible results, so that scientists feel less pressure to tell an impossibly perfect story to advance their careers.

In my experience, reviewers often ask for complexity to be added to a paper by acknowledging weaknesses in the methods and alternative explanations for the observations. This makes papers in paleoanthropology stronger. Of course, if the paper is under submission to a glamor journal, those kinds of reviews usually lead to rejection.