Why won't Science publish replication studies?

An article in Slate by Kevin Arceneaux and coworkers recounts their experiences trying to publish a replication of a high-profile psychology study in Science: “We Tried to Publish a Replication of a Science Paper in Science. The Journal Refused.”

The story concerns a 2008 study in Science that claimed that people react differently to scary pictures depending on whether they are political liberals or conservatives. The study was widely publicized at the time of publication and has become a mainstay of the field.

There’s one problem: It didn’t replicate. Arceneaux and coworkers explain how they got grants to set up expensive equipment in their laboratories and tried to extend the work with hundreds of subjects. And failed. And then they tried to replicate the exact circumstances of the original study, with the input of the original authors, with a larger sample of subjects. And failed.

They wrote it up and submitted it to Science. Desk reject. The story is well worth reading; here is the authors’ bottom line:

We believe that it is bad policy for journals like Science to publish big, bold ideas and then leave it to subfield journals to publish replications showing that those ideas aren’t so accurate after all. Subfield journals are less visible, meaning the message often fails to reach the broader public. They are also less authoritative, meaning the failed replication will have less of an impact on the field if it is not published by Science.

Science is published by the American Association for the Advancement of Science. The cause of science is not advanced by publishing studies that attract huge public attention and then declining to publish the results when those studies fail to replicate.

I am surprised that the editors of the journal do not see the opportunity here to establish a responsible precedent. Well-powered, pre-registered studies that revisit splashy research findings are the way science is going to be done in the future. As it is, Science is catering to researchers who design underpowered studies that produce counterintuitive results. As we’ve seen in the last few years of the “replication crisis”, such studies are very likely to turn out to be bunk.

I would add one thing. Here is a part of the story that, to my mind, is not getting the attention it deserves:

We had raised funds to create labs with expensive equipment for measuring physiological reactions, because we were excited by the possibilities that the 2008 research opened for us.

That’s the power of a research study published in Science: it changes the funding environment for all scientists in a field. Such studies establish for referees and grant agencies what is worth investing time and resources in.

That’s bad. No single study should have that kind of influence. But the reality is that new research directions often come from just such single cases, and a study like this can set off a rush to be in the first wave of researchers investigating a new phenomenon. When those results turn out to be bunk, all that time and money, which could have been spent in more promising directions, is wasted.