John Bohannon in Science writes one of many stories about the Wellcome Trust establishing a new open access journal, in which peer review follows the posting of preprints: “U.K. research charity will self-publish results from its grantees”.
Normally, peer review is anonymous and happens before publication of a paper. The charity’s journal, called Wellcome Open Research, will encourage researchers to post their work immediately on the site, as a full research paper or even just a data set. Only then does the peer review begin, and in this case the reviewers selected by the journal’s editors will be publicly known. "The transparent peer review process will encourage constructive feedback from experts," the Wellcome Trust's press release reads, "[focusing] on helping the authors improve their work rather than on making an editorial decision to accept or reject an article." Some other scientific journals and preprint servers such as arXiv similarly use postpublication peer review, although the concept has so far failed to be fully embraced by the research community.
It makes no sense whatsoever for funders to support scientific work and then tolerate that work sitting behind a paywall that not even the funders can access, much less the public. Bohannon directs some attention to this aspect:
"This really is a potential game changer for a major funder to be taking control of the research output," says Paul Ginsparg, the Cornell University physicist who founded arXiv.org, the massive online scientific preprint server. He hopes that U.S. funding agencies will follow suit. "It would be a miracle."
From the point of view of Wellcome and other nonprofit groups that fund science, academic journals can be an expensive drain on time and money. Publication can take months or even years before anyone gets to read the output of the research they back, and with traditional subscription journals the reader then pays for the privilege.
This seems like the obvious move for large organizations funding science. If you want to increase the value of the work you fund, then you should make it easier to disseminate the results of that work to a broader audience of researchers and the public. The preprint process accelerates science by allowing others to build on new results faster.
Open review of preprints also has the potential to broaden the scope of outside review, bringing more voices into the process. But so far in the biological sciences, it has failed to achieve its potential. Scientists may be reading and using results from preprints, but a robust commentary around new research has yet to emerge in these fields. Physicists and economists have long relied on pre-review dissemination of research; in biology, recognizing such efforts will take a cultural change.
Wellcome’s initiative may help shift the ground by taking some researcher fear out of the process.
The earliest scientific journals had editors who selected or solicited articles from experts, on the basis of interest to scientific readers. The experts of the time, in the eighteenth and nineteenth centuries, were gentleman philosophers, with science as an avocation, not a source of practical income. The best of the early editors were community builders, drumming up interest in scientific work at the same time they tried to advance the standards of scientific practice. This role of curating work continues today. Journals like Nature and Science, and even field-specific journals like the Journal of Human Evolution, have editors who select articles that they think will be valuable for their readers, both for community-building and for displaying scientific importance.
One of the big issues in science today is the fact that researchers are gaming their research outputs toward results that fit the values of journal editors. Editors and reviewers exhibit bias toward results that are “positive” in the sense of demonstrating statistical evidence for a hypothesis (as opposed to a failure to reject the null). They also like “counter-intuitive” results, because they seem novel as opposed to simply replicating what is already known.
Many people have pointed out that exactly these kinds of results are least likely to be true. Scientists are on a massive wild goose chase for false positives. “High-impact” journals print such results disproportionately often, and they pay little price when these results are later shown to be less than initially advertised.
The current system seems to work very well for certain journals, but poorly for funders who want to support solid research. Such problems have well-known solutions. Increase sample sizes. Pre-register studies, so that researchers cannot change them based on what appears to be “significant” in a small sample. Require more from “counter-intuitive” results. So far, most journals have done very little to encourage these solutions.
A preprint server by definition does not select articles. With the arXiv and bioRxiv preprint servers, each submission undergoes a few checks at the time of submission to ensure that it fits the posting criteria, but those checks do not select articles on the basis of interest or perceived importance. Any scientific research that fits the preprint server’s remit will be accepted.
With Wellcome Open Research, the remit is any research funded by Wellcome. This creates a very interesting situation that we haven’t seen before: The curation of research for this venue occurs with funding decisions and not after research is completed.
This open approach is a funder-level equivalent of pre-registration. In essence they are saying: “We will fund projects and provide the infrastructure to publish them, irrespective of how they turn out.” They are making their own version of “impact” by disseminating both the work of researchers that they fund, and the review of that work. By doing so, they show the quality of science they are funding.
The criticism will be that Wellcome is making a walled garden. Sure, the research may look good on the surface, but it hasn’t faced the true competition of work in the major journals. Wellcome can do much to end this line of criticism. The benefit of open review is that its quality is on public display. But that means Wellcome needs to have excellent reviewers take part in the system. This is going to take a change in culture. The best reviewers have many demands on their time. Open review can allow referees to build a reputation for good community interaction, and what Wellcome can do is magnify those incentives. As a funder, Wellcome can demand something from its grantees that would benefit the community. But it should also think about carrots that will bring more reviewers into their system and create more engagement for funded researchers. A great community will build great science.
Will it work?
Obviously the scheme leaves a lot of wiggle-room for researchers to adjust their projects as circumstances require. The individual researcher still has many strong incentives to goose results to attain higher impact, and so far nothing stops them from submitting their favorite work to Cell.
Still, there are many advantages. I have found the eLife collaborative review model to be better than traditional review in many ways, and if Wellcome captures the benefits of a more open review process, most researchers will be eager to publish their work this way. Researchers and institutions worry that publishing in non-traditional venues will hurt their chances of obtaining funding. As a major funder, Wellcome can officially end that worry for their grantees.
I think we will see other funders following this lead, particularly those with strong public missions. It works against the interests of public-facing organizations for the work that they support to be behind a paywall, and at the same time, it is a huge drain of value to pay Elsevier or NPG simply to release work under a Creative Commons license. Organizations can add immense value by taking on the review and dissemination role directly.