The current review system for many academic articles is flawed, hindering the publication of excellent, timely research. There is a lack of education for peer reviewers, either during PhD programmes or from journal publishers, and the lack of incentives to review compounds the problem. Thomas Wagenknecht offers up some solutions to the current system, including encouraging associate editors to use their authority to mitigate the impact of bad reviewers, shortening the entire peer review process, and increasing peer reviewer education during PhD and even Masters programmes. However, there are also opportunities for more significant reforms, by adopting post-publication peer review and by exploiting new distributed ledger technologies.
Compared to open review, double-blind peer review is a good system for ensuring quality, as it focuses assessment on the content rather than on the person(s) behind the research. However, the current review system for many academic publications is flawed. It hinders the publication of timely and excellent research for three main reasons. The examples of mediocre reviewer suggestions given below are drawn from my own experience.
Three problems with the current system
First, while academic writing is already taught to undergraduates, academic reviewing is not even discussed in many PhD programmes. Second, not having been taught best practices during their education, researchers rarely receive them from publishers either. Ferreira and colleagues (2016), analysing journals in biology, argued that many journals do not even provide guidance, let alone forms and defined criteria. From my personal experience, this holds true for other disciplines as well. Third, there are few incentives for academics to conduct thorough peer assessments, while the number of review requests is simultaneously very high.
These reasons (and others) leave a significant portion of reviewers delivering mediocre or bad assessments. About one third of reviews are considered useless or misleading, while another third are merely OK (Prechelt et al. 2017). Sometimes dubbed “Reviewer 2” in memes and jokes, some reviewers ask authors to include research from their own study sub-field (sometimes suggesting they cite the reviewer's own publications). They ask researchers to include more literature and extend certain sections without explicitly saying why, or what is missing. Others are overly caustic and seem to be in an aggressive mood. Moreover, many reviewers ask for additional analysis using their preferred method, or ask authors to conduct additional experiments.
However, academia does not need to accept the current state. There are solutions both within the current system and beyond.
Three solutions to the current system
In terms of solving some of the problems within the prevailing setup, publishers should aim to both empower associate editors (AEs) and streamline the process, while universities should incorporate academic reviewing in their curricula.
First, some AEs do not use their power to mitigate the impact of bad reviewers. Rather than simply concurring with the reviewers, they should encourage reviewers to be constructive and non-aggressive. When reviewers do not provide concrete guidance, AEs should single this out, either by asking the reviewers to add information or by exempting the authors from having to respond to certain comments. Publishers should encourage their AEs to enforce such behaviour by setting clear rules and guidance.
Second, review processes take too long. Both authors and reviewers regularly have 8-12 weeks to complete a revision or review. This leaves papers stuck in review for well over a year and – in many cases – significantly longer. Yet this is unnecessary. AEs could simply ask both authors and reviewers to complete their work within 2-4 weeks or less. If more pressing matters get in the way, they can still ask for an extension. This would reduce waiting times – and the career uncertainty that comes with them – and, more importantly, would give others earlier access to cutting-edge research.
Third, senior researchers and publishers should make the effort to more thoroughly educate reviewers on best practices for reviewing. These should find their way into PhD education, or even into classes at Masters level. Casnici et al. (2016) found that senior researchers are more likely to reject revision requests; through this education, junior researchers might realise that they do not have to comply fully with reviewers' every demand, as some currently do.
Moving forward with post-publication peer review
Yet these are rather small improvements. Beyond the current system, there are various proposals to improve the quality of academic reviews more significantly. One rather prominent suggestion is post-publication peer review (PPPR). The principle is simple: papers are published quickly, after a light sanity check by an editor. Once a paper is published, referees conduct the same review process as in pre-publication review (i.e. the current system), but their reviews are published right next to the article – possibly including the name and affiliation of the reviewer. Crucially, reviewers need not provide a recommendation to publish or withhold the paper (Hunter 2012). It is enough for them to point out problems and possible solutions. Authors can then either write rebuttals or try to address the problems; reviewers could even help address them too. This in turn could increase the incentives to conduct reviews in the first place. Furthermore, PPPR could reduce the risk of reviewers suggesting unsubstantiated extensions of sections without providing concrete guidance (e.g. full citations).
Moreover, PPPR would require authors to share their data more openly, in order to allow reviewers of all backgrounds to scrutinise their work. As both psychology and economics have recently suffered replication crises, PPPR could contribute to restoring trust.
In effect, authors would profit from better reviews and higher-quality revisions, while reviewers could more easily receive credit for their input (da Silva & Dobranszki 2015). Some journals have taken up this idea and adopted open peer review, where authors can ask their reviewers to publish their critique directly next to the article. Publons, meanwhile, collects and evaluates peer reviews. Yet these are only small changes.
Using technological advances to reclaim the reviewing system
New technologies offer the chance to improve the review system even further. For instance, Ferreira et al. (2016) argue that reviewing should be separated from journals and moved to a centralised platform that sets uniform standards. They also argue that reviewing should be mandatory for all scientists, and that monetary incentives should be provided to ensure this. Using distributed ledger technology (DLT), such as blockchain, and building upon open access platforms, these proposals could be implemented in the real (academic) world. DLTs would be a powerful tool to support PPPR as they make it easy to trace the provenance and development of an article. The technology would even allow for anonymous, yet still trusted, PPPR. Reviewers could also be compensated more easily, for instance based on the readership of a paper (see Steem). DLT could support data sharing too. This would make the whole process more transparent and encourage reviewers to be more constructive, while simultaneously gaining recognition for their reviews.
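To make the provenance argument concrete, the sketch below shows, in deliberately simplified form, the one property a DLT would contribute to PPPR: an append-only, hash-chained log in which every submission, review, and rebuttal references its predecessor, so any retroactive alteration becomes detectable. This is a minimal illustration in Python, not any particular blockchain product; all names here (ReviewLedger, the event types, the pseudonymous author keys) are invented for the example, and a real DLT would add distribution and consensus on top of this single in-memory structure.

import hashlib
import json
import time

def _hash(record: dict) -> str:
    # Deterministically hash a record by serialising it with sorted keys.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ReviewLedger:
    """A minimal append-only, hash-chained log of publication events."""

    def __init__(self):
        self.entries = []

    def append(self, event_type: str, payload: dict, author_id: str) -> dict:
        entry = {
            "index": len(self.entries),
            "timestamp": time.time(),
            "event_type": event_type,  # e.g. "submission", "review", "rebuttal"
            "payload": payload,
            # A pseudonymous key rather than a name: anonymous, yet still accountable.
            "author_id": author_id,
            # Each entry stores the hash of its predecessor, forming the chain.
            "prev_hash": self.entries[-1]["hash"] if self.entries else "genesis",
        }
        entry["hash"] = _hash(entry)
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute every hash and check the chain links; False if anything was altered.
        for i, entry in enumerate(self.entries):
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["hash"] != _hash(body):
                return False
            prev = self.entries[i - 1]["hash"] if i else "genesis"
            if entry["prev_hash"] != prev:
                return False
        return True

if __name__ == "__main__":
    ledger = ReviewLedger()
    ledger.append("submission", {"title": "A Study of Peer Review"}, author_id="author-key-1")
    ledger.append("review", {"comments": "constructive suggestions..."}, author_id="reviewer-key-7")
    print(ledger.verify())  # True: the history is intact
    ledger.entries[1]["payload"]["comments"] = "glowing praise"  # tamper with a review
    print(ledger.verify())  # False: the altered review breaks the chain

Run as a script, the final check prints False once the review is edited after the fact, illustrating how hash chaining makes the provenance of an article and its reviews tamper-evident without requiring reviewer identities to be revealed.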
These changes to the system are bold, yet thankfully many researchers are open to change (Prechelt et al. 2017; The Cost of Knowledge). To some extent, however, these changes are not in the interest of some of the major players in academia. Publishers profit disproportionately from free reviews and cheaply provided articles for their journals, which lower their production costs even as they continue to raise subscription prices for libraries and academic institutions. Predatory journals only make the situation worse (da Silva & Dobranszki 2015). Changing the review system could thus contribute to changing the entire academic value chain for societal good. It might therefore be worthwhile to revise our review system substantially.
PS: reviewers interested in providing better reviews should avoid any of these formulations.
The author would like to thank Benedikt Fecher for his valuable insights and suggestions that improved this article (in a streamlined, helpful process).
This blog post originally appeared on the Elephant in the Lab blog and is republished under a CC BY-SA 3.0 license.
Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
About the author
Thomas Wagenknecht studied Media Management at the University of Mainz and Political Science at the University of Erfurt. He has worked in the financial industry for Deutsche Börse Group, the rating unit of the German savings banks, and a consultancy. From 2015 to 2018, he was a research scientist at FZI in the field of Information Process Engineering, focusing on participation and blockchain technology. Since June 2018, he has been a consultant at Accenture Strategy.
Comments

Many reviewers have agendas – political, or personal and rooted in ignorance. Don't trust them!
This is a perfect discussion topic. However, it still neglects the proverbial elephant in the room: the processes are all part of the old patriarchal, gatekeeping, boys' club attitude underpinning most academic practices within institutions such as universities and related services.
How much is this driven by the need for ‘productivity points’ for research intensity and impact? Who are the reviewers? Are they in turn reviewed by the same colleagues whose works they review? Is this a problem caused by cronyism, networking, disciplinary silos, pedagogical distinctions, or inherent biases within the various disciplines and subject foci?
How do research productivity measures drive the entire process of academic publication?
I have edited a journal for 15 years. It is worth noting that editors have discretion to ignore gratuitous or hopeless reviews, and they will not ask those people again, either. Reviewing is an academic obligation, so I encourage all staff to do it, in our university and beyond. I do at least 20 a year – I suspect this is on the high side. I should say that the higher the level of ambition of the academic, the less they respond to review requests – perhaps regarding reviewing as some sort of ‘dead time’ that will not benefit them personally. Meanwhile, I have had dedicated people with serious constraints who have still submitted reviews – from hospital, for instance!
I thank the hundreds of reviewers I have dealt with. By the way, there are national ‘ideal types’ – I will not name them, but academics in a couple of countries almost never do me the courtesy of replying to a review request. I thought this was the fault of individuals, but now I see it is actually a statistically significant number. (They are both in Europe, by the way.)