Time to trash anonymous peer review?

This week’s Science magazine is organized around the theme of science communication. In addition to the John Bohannon “sting” operation I discussed in the last post (“‘Open access spam’ and how journals sell scientific reputation”), there are several other thematic articles. An interesting one is “The Seer of Science Publishing,” a profile of Vitek Tracz, founder of the open access publisher BioMed Central and of the Faculty of 1000.

Tracz has founded a new journal (F1000Research) that publishes its submissions immediately upon receipt, and curates an open process of peer review after publication. He’s down on the concept of anonymous peer review:

In another bold strike, Tracz is taking aim at science's life force: peer review. "Peer review is sick and collapsing under its own weight," he contends. The biggest problem, he says, is the anonymity granted to reviewers, who are often competing fiercely for priority with authors they are reviewing. "What would be their reason to do it quickly?" Tracz asks. "Why would they not steal" ideas or data?
Anonymous review, Tracz notes, is the primary reason why months pass between submission and publication of findings. "Delayed publishing is criminal; it's nonsensical," he says. "It's an artifact from an irrational, almost religious belief" in the peer-review system.

I’ve been signing my reviews for a long time. Still, I recognize that people have some valid concerns about open, signed reviews. Junior reviewers may be less than candid about the weaknesses of papers submitted by powerful, well-connected researchers, because they fear career repercussions. That fear is justified: grants are decided by faceless panels of researchers, conference invitations flow from professional backscratching, and jobs for future students depend on professional standing and connections. Even senior scientists may hesitate before writing an honest review, for any of these reasons, or because they fear putting their name on a review that turns out to be full of mistaken logic. After all, we’ve all gotten reviews from people who totally misunderstood a paper’s methods or data. The clueless reviewer is a well-worn trope for a reason.

Many of these risks loom large because there is no tangible benefit to reviewing, aside from the satisfaction of making science better. The lack of any connection between review performance and professional standing creates a race to the bottom among reviewers. Reviews take months, overrepresent loudmouths and cranks, and vary in quality depending on how many friends an academic editor can call upon.

Signed reviews would create a link between reviewing activity and professional standing, thereby providing benefits for good reviewing. Obviously, if we measure the “worth” of reviews against other forms of professional activity, they do not rank as high as a journal article. But they are probably as worthwhile as technical comments – indeed, a technical comment is really just a review at a later stage of the process. With F1000Research, the publication model is much more like a preprint followed by technical comments that lead to corrections. And the immediate availability of data means that reviewers can attempt to replicate the work.

Still, there’s something very wrong with a publishing model that requires $1000 upfront to publish a paper, when all the reviewing labor is being provided gratis.