A meeting at the Howard Hughes Medical Institute last week took up the question of whether journals should start publishing the reviews they receive on papers. As reported by Jeffrey Brainard in Science, the consensus was yes: “Researchers debate whether journals should publish signed peer reviews”.
Publishing the reviews would advance training and understanding about how the peer-review system works, many speakers argued. Some noted that the evaluations sometimes contain insights that can prompt scientists to think about their field in new ways. And the reviews can serve as models for early career researchers, demonstrating how to write thorough evaluations.
“We saw huge benefits to [publishing reviews] that outweigh the risks,” said Sue Biggins, a genetics researcher at the Fred Hutchinson Cancer Research Center in Seattle, Washington, summarizing one discussion.
Personally, I have mixed feelings about this.
I favor transparency. I would also like to see some sunlight shed on the worst abuses of the peer-review system. I have witnessed terrible, abusive reviews, and I believe that anonymity and the secrecy of the process tend to make them worse. Requiring referees to sign and publish their reviews would give them an incentive to be responsible and temperate in their comments.
However, referee comments apply to earlier revisions of an article, not to the final published version. Publishing the commentaries therefore invites confusion, because the paper has likely changed in response to them. Some journals, like eLife, do a good job of presenting the peer commentary alongside the author responses, so that a reader can follow the changes actually made during the review and editorial process. But that takes extra work and responsibility on the part of the editors.
Early in its history, when PLoS ONE published peer comments, I saw many readers cherry-pick review comments to criticize an article, even though the article had been revised to address them. This is one of the fears cited in Brainard’s article, and it’s not just theoretical: it actually happened when reviewer comments were published alongside articles.
What I suspect is that when referee reports are routinely published alongside papers, we will start to notice referees’ blind spots. Some papers receive only superficial review, while others draw harsh criticism of sections that don’t deserve it. We’ll also see that a large fraction of referees suggest additional, unnecessary analyses.
At least, if the reviews are signed, we’ll know when a referee is delaying a paper intentionally to try to scoop its results!