As the landscape of scientific publishing looks set to change dramatically over the next few years, a key concern will be the future of peer review. Mike Peacey provides an overview of his team’s recent study, which examined how objective and subjective decision making by reviewers influences the information that peer review transmits. The results of their model help to identify directions for ensuring that review processes work in the interests of a self-correcting scientific discovery system, rather than one merely reflecting a herding mentality.
Scientific publishing is under the spotlight at the moment. The long-standing model of scientists submitting their work to a learned journal, for consideration by their peers who ultimately decide whether it is rigorous enough to warrant publication (a process known as peer review), is now hundreds of years old – the first scientific journals date from the 17th century. Is it time for change? Two major factors are driving the scrutiny this traditional publication model is facing: the astonishing growth in scientific output in recent years, and technological innovations that make print publishing increasingly anachronistic.
At the same time, there are concerns that many published research findings may be incorrect. The true extent of this problem is difficult to know with certainty, but pressure on academics to publish (the “publish or perish” culture) may incentivise the publication of novel, eye-catching findings. However, the very nature of these findings (in other words, that they are surprising or unexpected) means that they are more likely to be incorrect – extraordinary claims require extraordinary evidence. Peer review has been criticised because it sometimes fails, allowing such claims to be published even when it is clear to many scientists that they are extremely unlikely to be true. Is peer review as unsuccessful as is sometimes claimed? And how might it be improved? We explored these questions recently in a mathematical model of reviewer and author behaviour.
In our model we considered a number of scientists who each, sequentially, obtain private information (e.g., through conducting experiments) about a particular hypothesis. The result of each experiment will never be perfect, but will on average be correct (with more controversial topics providing noisier signals). Once they have completed their experiment, the scientists each write an academic paper with the objective of advancing knowledge. Each paper is then reviewed by a peer before a decision is made on whether or not it is published. When a paper is published, it begins to influence the conclusions that later scientists reach. As a result, the amount of new information transmitted decreases.
Figure source: Park IU, Peacey MW, Munafò MR. Modelling the effects of subjective and objective decision making in scientific peer review. Nature. 2013. doi: 10.1038/nature12786.
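To give a flavour of this setup, here is a toy sketch in Python. It is not the specification from our paper: the function names, the evidence score, and the mapping from controversy to signal accuracy are all illustrative assumptions made for this post.

```python
import random

def private_signal(truth: int, controversy: float) -> int:
    """A noisy experimental result, coded +1 (supports) or -1 (contradicts).

    The signal equals `truth` with probability above one half (correct on
    average), but more controversial topics push the accuracy toward 0.5,
    i.e. towards pure noise. The 0.45 scaling is an arbitrary choice.
    """
    accuracy = 0.5 + 0.45 * (1.0 - controversy)  # controversy in [0, 1]
    return truth if random.random() < accuracy else -truth

def authors_conclusion(public_evidence: int, own_signal: int) -> int:
    """What an author concludes after weighing the published record.

    `public_evidence` is the net balance of published conclusions. Once it
    leans strongly one way it outweighs any single private signal, so new
    papers stop adding information -- the dilution described above.
    """
    posterior = public_evidence + own_signal
    return own_signal if posterior == 0 else (1 if posterior > 0 else -1)
```

Note that once the public record reaches a net balance of two or more in either direction, `authors_conclusion` no longer depends on the private signal at all.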
In other words, authors begin to “herd” on a specific topic. We found that the extent to which this herding occurs (and hence the confidence we can have in a hypothesis being correct) depends on the particular way in which peer review is conducted. When reviewers are encouraged to be as objective as possible, they do not use all the information available to them, and so their decisions give other scientists no insight into the reviewers’ own private information. When reviewers are allowed a degree of subjectivity in making a decision (i.e., to use their judgement about whether the results are likely to be correct, as well as the more objective characteristics of the paper they are reviewing), the peer review process transmits more information, and this allows science to be self-correcting.
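Building on the sketch above, the following self-contained simulation contrasts the two regimes. It is again a toy, not our model, and it makes one deliberately strong simplifying assumption: under subjective review the accept-or-reject outcome is visible to the community, so each decision reveals the reviewer’s private signal.

```python
import random

def signal(truth: int, accuracy: float) -> int:
    """Private signal: equals `truth` with probability `accuracy`."""
    return truth if random.random() < accuracy else -truth

def run_community(regime: str, truth: int = 1, n_scientists: int = 200,
                  accuracy: float = 0.7) -> int:
    """Simulate sequential publication; return the final consensus (+1 or -1).

    `evidence` is the public record visible to everyone: positive means
    the published literature favours the hypothesis.
    """
    evidence = 0
    for _ in range(n_scientists):
        s_author = signal(truth, accuracy)
        posterior = evidence + s_author
        # The author concludes from the public record plus their own result;
        # once |evidence| >= 2 the record outweighs the signal (herding).
        conclusion = s_author if posterior == 0 else (1 if posterior > 0 else -1)

        if regime == "objective":
            # The reviewer checks rigour only, so every paper is published
            # and the decision reveals nothing about the reviewer's signal.
            evidence += conclusion
        else:  # "subjective"
            # The reviewer also asks "do I believe this?" and accepts only
            # if their own signal agrees. Assuming the outcome is visible,
            # accept and reject both reveal the reviewer's private signal.
            s_reviewer = signal(truth, accuracy)
            published = (s_reviewer == conclusion)
            evidence += conclusion if published else -conclusion
    return 1 if evidence > 0 else -1

if __name__ == "__main__":
    random.seed(1)
    trials = 2000
    for regime in ("objective", "subjective"):
        correct = sum(run_community(regime) == 1 for _ in range(trials))
        print(f"{regime:>10}: consensus correct in {correct / trials:.1%} of runs")
```

In runs of this kind, the objective regime locks into an incorrect consensus in a noticeable fraction of trials (a cascade seeded by the first few papers), whereas the subjective regime drifts back towards the truth, because every visible reviewing decision injects a fresh private signal into the public record.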
Models such as ours necessarily simplify reality, and typically focus on one aspect of a process to determine how important it is to that process. So the results of our model certainly shouldn’t be taken as definitive; rather, they can help to identify interesting questions which can then be followed up empirically. For example, reviewers usually have to rate manuscripts on a number of dimensions, such as novelty and likely impact. One question might be whether asking reviewers to explicitly rate the extent to which they believe a study’s results would provide useful information. Our results also suggest that opening up other channels through which scientists can make their private information known could be valuable. These could include post-publication peer review, which is growing in popularity, and prediction markets to capture these signals at an aggregate level. The landscape of scientific publishing is likely to change dramatically over the next few years, as open access, self-archiving, altmetrics and other technology-driven innovations become increasingly common. This provides an opportunity to implement changes to a model of scientific publishing that has otherwise remained essentially unchanged for decades.
This piece, which first appeared on the CMPO Viewpoint blog, is reposted with permission. It is based on the findings of Park IU, Peacey MW, Munafò MR, “Modelling the effects of subjective and objective decision making in scientific peer review”, published in Nature in December 2013.
Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns about posting a comment below.
Mike Peacey is a Teaching Fellow at the University of Bath. He has a BSc in Mathematics and an MSc in Economics. He is currently putting the finishing touches to his PhD at the University of Bristol where he has previously taught.