
May 20th, 2015

Passing Review: how the R-index aims to improve the peer-review system by quantifying reviewer contributions.

Peer review is flawed. Look no further than the storm of attention over sexist reviewer comments. A new index proposes a simple way to create transparency and quality-control mechanisms. Shane Gero and Maurício Cantor believe that giving citable recognition to reviewers can improve the system by encouraging not only more participation but also higher-quality, constructive input, without the need for a loss of anonymity.

Peer review is a pervasive necessity in academia. Often illustriously proclaimed to be ‘the gatekeepers of truth’, reviewers are supposed to ensure scientific quality by scrutinizing new research prior to publication. So as productive early career scientists, we spend part of our time diligently reviewing manuscripts for journals. We do so with the understanding that others will do the same when we submit papers ourselves.

Lamentably, this doesn’t always work out as nicely as it sounds. It turns out that many academics do not do their fair share of reviews (Petchey et al 2014). Furthermore, editors have a hard time finding appropriate and effective reviewers who will agree to conduct them; when reviewers do agree, they are often late or limited in their constructive commentary; and in many cases the reviewers, editors, or sometimes even the journals involved actually outnumber the authors (Hochberg et al 2009). All of this combined has led the broader community to conclude that the current system of peer review is broken.

One could easily point the finger at anonymity as being largely responsible for the maintenance of such bad reviewer behaviour in our community. Economists, sociologists, and even primatologists will tell you that pro-social norms are maintained by the threat of public reprisal. Blind review allows poor and unethical reviewing to continue. A recent example is the sexist comments sent to two female co-authors suggesting that adding male authors would improve the manuscript. This prompted the journal to “remove” both the reviewer from their database and the editor from their position. Yet the only punishment faced by the anonymous reviewer is that they will get fewer review requests; something that many might strive for if success is measured solely by publications. As a result, even though there is evidence that open knowledge of reviewers’ identities results in higher-quality and more courteous reviews which are more likely to recommend publication, there is still strong resistance to open review.

Image credit: AJ Cann, AJCann.Wordpress.com, Flickr (CC BY-SA)

So how then do we move forward without having to completely revolutionize our current system of review and rebuild journal culture from the bottom up? The answer to this question has been hotly debated in the coffee rooms of many academic institutions since long before either of us was born. Over the years, editors have tried punishing reviewers by delaying publication of their subsequent manuscripts, as well as the exact opposite: rewarding conscientious reviewers for their efforts by ensuring speedy review of their work. They have even considered paying cash for reviews! Yet, despite these attempts, there is still no widely accepted method to credit reviewers for their time and expertise.

In the era of Scientometrics, the science of measuring and analyzing science, academics have shown a growing interest in measuring productivity and reputation. Although attitudes within the community towards the use of metrics span from condemnation to praise, they seem to be unavoidable shortcuts for evaluating an academic’s productivity. How many of us haven’t Googled a peer’s H-index? As a result, we believe that simply giving citable recognition to reviewers can improve the peer-review system by encouraging not only more participation but also higher-quality, constructive input, without the need for a loss of anonymity.

In our recent paper in Royal Society Open Science, we outline the R-index, a straightforward way of quantifying contributions through review. The R-index aims to track scientists’ efforts as reviewers, accounting not only for the quantity of reviewed manuscripts, but also for the length of the manuscripts as a proxy for effort, the impact factor (IF) of the journal as a proxy for standing in the field, and, perhaps most importantly, a quality score based on the editor’s feedback on the punctuality, utility and impact of the reviews themselves. The quality control built into the R-index allows editors to record not only how useful the review was to the decision to publish, but also how constructive the commentary was for the authors and, what should be the most basic of all courtesies, whether it was returned on time.
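To make these ingredients concrete, here is a minimal sketch, in Python, of how a score along these lines could be computed for a single reviewer. The field names, the normalization by a “typical” manuscript length and the simple multiplicative weighting are illustrative assumptions for this sketch, not the exact formula defined in the paper.

```python
# Minimal, illustrative sketch of a reviewer score built from the three
# ingredients described above. The weighting and field names are assumptions
# for this example, not the published R-index formula.

from dataclasses import dataclass


@dataclass
class Review:
    manuscript_pages: float  # length of the reviewed manuscript (proxy for effort)
    journal_if: float        # impact factor of the journal (proxy for standing in the field)
    quality_score: float     # editor's rating of punctuality, utility and impact, e.g. 0-1


def r_index(reviews: list[Review], typical_pages: float = 20.0) -> float:
    """Sum each review's contribution: effort x journal standing x editor-rated quality.

    `typical_pages` normalizes manuscript length so that a review of an
    average-length paper in an IF-1 journal with a perfect quality score
    contributes 1.0. This normalization is itself an assumption.
    """
    return sum(
        (r.manuscript_pages / typical_pages) * r.journal_if * r.quality_score
        for r in reviews
    )


# Example: three reviews of varying effort, venue, and editor-assessed quality.
reviews = [
    Review(manuscript_pages=25, journal_if=3.2, quality_score=0.9),
    Review(manuscript_pages=12, journal_if=1.5, quality_score=1.0),
    Review(manuscript_pages=40, journal_if=8.0, quality_score=0.4),  # late, less useful review
]
print(round(r_index(reviews), 2))
```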

In this way, the R-index differs greatly from other efforts to recompense reviewers. We not only hoped to create a tool to assess contributions, but also one which improves the system itself. Perhaps the most well-known metric of review is the website Publons. Publons shares the same principles we do of encouraging participation and transparency of reviews by turning peer review into a measurable research output. However, where Publons has succeeded in creating an easy-to-use social network and ranking metric for reviewers, its Merit metric is limited in that it is based entirely on the number of reviews produced. As a result, the metric is biased towards career stage (the longer you have been an academic, the more reviews you have been asked to do and the more reviews you are likely to have done), and it cannot speak to the utility and quality of the reviews themselves. In contrast, our simulations of the R-index’s performance demonstrate that it is egalitarian across career stages. Hard-working early career scientists such as doctoral students and post-docs, who tend to complete the bulk of the reviewing load in their field, score as well as leading scientists who review only for high-impact, multidisciplinary journals. Furthermore, the R-index is difficult to game: the quality score prevents anyone from boosting it by quickly producing a number of poor reviews, and because the journal’s IF is considered, churning out a large number of reviews for predatory, or less reputable, journals does not help either.

On the whole, the R-index provides several benefits to the academic community:

  1. R-index quantifies contributions as a reviewer. Academic productivity can no longer justifiably be based solely on publications. Relating an academic’s output as an author (H-index) and as a reviewer (R-index) allows for a more holistic view of their contributions.
  2. R-index creates transparency. By its design it encourages journals to make basic data on reviewers available. A specific link between reviewers and papers need not be made public; only the number, length, and quality score of the reviews completed by each academic.
  3. R-index judges journals on their utility to authors, not alleged impact. A journal’s R-index, averaged across its pool of reviewers, adds a metric for assessing the journal as a practical and efficient service in the publication process.
  4. R-index makes life easier for editors. Editing is another unquantified and thankless endeavor in academia, and R allows editors to manage their stable of reviewers and avoid those who are poor, ineffective or repeatedly late, to say nothing of blatantly sexist ones.
  5. R-index is easy to implement. It uses simple data that most journals already collect, and its calculation can be automated by existing cross-publisher databases including Google Scholar, Scopus, or Web of Science (a rough sketch of such an automated tally follows this list). We are currently working with publishers and databases to implement the calculation of R across fields.
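As a rough illustration of points 3 and 5, the sketch below tallies per-reviewer scores from the kind of records an editorial system might already hold, and averages them into a journal-level figure. The record layout, field names and weighting are again assumptions made for illustration, not a prescribed implementation.

```python
# Hypothetical sketch: compute each reviewer's score from records a journal's
# editorial system might already hold, then average across the reviewer pool
# to obtain a journal-level figure. Layout and weighting are assumptions.

from collections import defaultdict
from statistics import mean

# (reviewer_id, manuscript_pages, journal_if, editor_quality_score) per completed review
review_log = [
    ("r001", 25, 3.2, 0.9),
    ("r002", 12, 3.2, 1.0),
    ("r001", 18, 3.2, 0.6),
]


def review_contribution(pages, journal_if, quality, typical_pages=20.0):
    # Same illustrative weighting as in the earlier sketch.
    return (pages / typical_pages) * journal_if * quality


# Per-reviewer totals.
per_reviewer = defaultdict(float)
for reviewer_id, pages, jif, quality in review_log:
    per_reviewer[reviewer_id] += review_contribution(pages, jif, quality)

# Journal-level average across its pool of reviewers (point 3).
journal_mean_r = mean(per_reviewer.values())
print(dict(per_reviewer), round(journal_mean_r, 2))
```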

Reviewers with high R-scores are our community’s unheralded pillars; our proposed metric emphasizes this aspect of scientists’ productivity, increasing their visibility without revolutionizing the peer-review system. The R-index subtly shifts the perception of conducting reviews from an unrecognized obligation to a measurable contribution to one’s academic profile and to the progress of one’s field.

Post Script: One of the reasons we chose to submit our metric to Royal Society Open Science is that it supports open review; and, if you’re interested, you can see the entire review of our R-index manuscript online alongside our publication here.

Note: This article gives the views of the authors, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Authors

Shane Gero is an FNU Research Fellow in the Marine Bioacoustics Lab in the Department of Zoophysiology at Aarhus University in Denmark. For the last ten years, Shane’s research has driven The Dominica Sperm Whale Project, a long-term study on wild sperm whale social structure and communication. He tweets at @sgero and you can follow his research program @DomWhale.

Mauricio Cantor is a doctoral candidate in the Biology Department at Dalhousie University, Canada. His research focuses on the ecology of interactions between individuals and between species, lying at the interface of fields such as behavioral, population and community ecology.



Posted In: Academic publishing | Evidence-based research | Knowledge transfer | Research ethics
