
Taster

April 17th, 2019

Self-organising peer review for preprints – A future paradigm for scholarly publishing


Preprints – rapidly published, non-peer-reviewed research articles – are becoming an increasingly common fixture in scholarly communication. However, without peer review they serve a limited function, as they are often not recognised as high-quality research publications. In this post Wang LingFeng discusses how the development of preprint servers as self-organising peer review platforms could be the future of scholarly publishing.

As of April 2019, the number of preprint databases registered on ROAR (the Registry of Open Access Repositories) had reached 4,733. In some disciplines preprints have become commonplace, especially those in which competition to establish the priority of research is intense. Preprints are relatively inexpensive and allow for the almost immediate publication of research findings, which has led some to argue that they represent the most viable future for scholarly publishing. However, unlike traditional scholarly publications, they have not been peer reviewed.

Do preprints need peer review?

In 2013 the arXiv China Service Group, working on behalf of the National Science Library of the Chinese Academy of Sciences, investigated the attitudes of Chinese users of arXiv. The results were interesting: the majority of respondents supported submitting manuscripts to arXiv, but had not done so themselves. The explanation for this inconsistency was simple – their institutes did not recognise preprints. Those researchers who favoured preprints seemed to do so to establish priority, rather than as a form of formal publication. This finding mirrors studies of European and American biomedical researchers, who were similarly disinclined to use preprints.

The lack of peer review is also perceived as making preprint servers harder to use, as finding high-quality papers amidst a sea of other papers is a time-consuming, if not impossible, task. A global survey conducted by arXiv in 2016 showed that about 58% of respondents agreed that arXiv should “offer a rating system so readers can recommend arXiv papers that they find valuable”.

The current view is that arXiv could implement a Community Peer Review model, in which registered readers are allowed to comment on and rate manuscripts, with these ratings collated to form a ranking. There are three obvious weak points to Community Peer Review. First, it could be easily gamed and promote unethical behaviour, such as soliciting favourable comments from colleagues and friends. Second, some papers may be quickly reviewed by a number of readers, while others may be unduly neglected. Third, as reviewing is voluntary, there is little control over whether reviewing takes place at all, or over the time taken for papers to be reviewed.
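
To make the collation step concrete, the short Python sketch below shows one way reader ratings could be aggregated into a ranking. The paper identifiers and the use of a simple mean are illustrative assumptions on our part; arXiv has not specified any particular aggregation method.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical reader ratings: (paper_id, rating on a 1-5 scale).
# Community Peer Review would collate these into a per-paper ranking.
ratings = [
    ("arXiv:1901.00001", 4), ("arXiv:1901.00001", 5),
    ("arXiv:1901.00002", 3), ("arXiv:1901.00003", 5),
]

by_paper = defaultdict(list)
for paper_id, score in ratings:
    by_paper[paper_id].append(score)

# Rank papers by their mean reader rating (assumption: mean aggregation).
ranking = sorted(by_paper.items(), key=lambda kv: mean(kv[1]), reverse=True)
for paper_id, scores in ranking:
    print(paper_id, round(mean(scores), 2), f"({len(scores)} ratings)")
```

A ranking of this kind also makes the second weakness visible: papers that attract few or no ratings simply sink to the bottom, regardless of their quality.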

Self-organising peer review

In order to address these issues, we propose a system of self-organising peer review (SOPR), operating in accordance with eight rules (a minimal sketch of the data model these rules imply follows the list):

  1. Only corresponding authors can submit articles to the preprint server and all authors of submitted papers are automatically registered as reviewers.
  2. A registrant can submit several papers per year, but a maximum of six manuscripts will be peer reviewed.
  3. Papers are reviewed in the order in which they are submitted.
  4. After submission the author’s information is concealed before the article is posted on the server. Only after the review is finished will the identity of the author/s be revealed on the article.
  5. All papers are rated on a scale of 1 to 5, with 5 indicating the best quality.
  6. Each new registrant is given a reviewer qualification level, also set on a scale of 1 to 5. The reviewer qualification level is determined at first registration by the registrant’s publication and citation record, such as their h-index or other scientometric indicators, and is adjusted every three years.
  7. Each manuscript is reviewed by 3 registrants.
  8. A penalty mechanism applies if an author or reviewer does not accept a review assignment, or does not complete a review on time: their own papers will not be reviewed and their right to use the preprint database will be suspended for a period of time.
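
As a minimal sketch of what these rules imply for a preprint server's data model, the Python fragment below encodes the fixed quantities (the 1-5 rating and qualification scales, the yearly cap of six reviewed manuscripts, and three reviewers per paper) together with the per-registrant and per-paper state the rules refer to. The class and field names are our own illustration, not part of the formal proposal.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

RATING_SCALE = range(1, 6)         # Rule 5: papers rated 1-5, with 5 the best quality
QUALIFICATION_SCALE = range(1, 6)  # Rule 6: reviewer qualification level 1-5
MAX_REVIEWED_PER_YEAR = 6          # Rule 2: at most six manuscripts reviewed per registrant per year
REVIEWERS_PER_PAPER = 3            # Rule 7: each manuscript is reviewed by three registrants

@dataclass
class Registrant:
    registrant_id: str
    institution: str
    qualification_level: int                 # set from the h-index or similar at first registration (Rule 6)
    suspended_until: Optional[date] = None   # Rule 8: penalty suspends use of the database for a period

@dataclass
class Paper:
    paper_id: str
    corresponding_author: str                # Rule 1: only corresponding authors can submit
    author_ids: list[str]                    # all authors are automatically registered as reviewers (Rule 1)
    submitted: date
    ratings: list[int] = field(default_factory=list)  # filled in as the three reviews arrive (Rules 5 and 7)
    authors_revealed: bool = False           # Rule 4: author identity concealed until the review is finished
```

Because papers are reviewed in submission order (rule 3), a simple queue keyed on the submission date would be enough to drive the matching process described below.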

 

Matching reviewers to papers

Papers can then be matched to reviewers in a four-step process (a sketch of the matching logic follows the steps):

Step 1. Papers are given an estimated quality level, based on the average rating of recent papers submitted by the corresponding author.

Step 2. Possible reviewers are identified amongst authors publishing in the same area.

Step 3. Papers and reviewers are matched based on their relative rankings: a paper with an estimated rating of N should be reviewed by reviewers whose review qualification level is N-1, N or N+1. Reviewers from the same institution as the authors, or who have published with any of the authors in the past three years, are excluded, as are those who are currently being penalised or who have completed a review within the previous 15 days, effectively providing prolific reviewers with a break.

Step 4. If more than three reviewers meet these criteria, a further set of rules is used to select three of them. If fewer than three reviewers satisfy the above rules, the paper waits for the next round of paper-reviewer matching. The matching process could be run at a convenient or fixed time, such as at weekends.
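
Read together, Steps 1-4 amount to a filter-and-select pass over the registrant pool. The Python sketch below is one possible reading of those steps: the Candidate class, the function names, and the fallback to a mid-scale estimate for authors with no rating history are illustrative assumptions, while the level window (N-1 to N+1), the exclusions, and the deferral of under-matched papers come directly from Steps 3 and 4.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from statistics import mean
from typing import Optional

@dataclass
class Candidate:
    registrant_id: str
    institution: str
    qualification_level: int           # 1-5, as in Rule 6
    last_review: Optional[date] = None
    suspended_until: Optional[date] = None

def estimated_quality(recent_ratings: list[int]) -> int:
    """Step 1: estimate a paper's rating from the average rating of the
    corresponding author's recent papers (rounded onto the 1-5 scale)."""
    return round(mean(recent_ratings)) if recent_ratings else 3  # assumption: default to mid-scale with no history

def match_reviewers(paper_rating: int, author_institution: str,
                    coauthors_3y: set[str], candidates: list[Candidate],
                    today: date) -> Optional[list[Candidate]]:
    """Steps 2-4: filter same-field candidates by level and exclusions,
    then pick three or defer the paper to the next matching round."""
    eligible = []
    for c in candidates:                                   # Step 2: candidates already share the paper's research area
        if abs(c.qualification_level - paper_rating) > 1:
            continue                                       # Step 3: level must be N-1, N or N+1
        if c.institution == author_institution:
            continue                                       # same institution as the authors
        if c.registrant_id in coauthors_3y:
            continue                                       # co-published with an author in the past three years
        if c.suspended_until and c.suspended_until > today:
            continue                                       # currently penalised (Rule 8)
        if c.last_review and (today - c.last_review) < timedelta(days=15):
            continue                                       # reviewed within the previous 15 days
        eligible.append(c)
    if len(eligible) < 3:
        return None                                        # Step 4: too few reviewers, wait for the next round
    return eligible[:3]                                    # Step 4: placeholder pick; the extra tie-break rules are not detailed here
```

Because every exclusion is checked mechanically, the same logic could run unattended at a fixed time, such as the weekend matching rounds mentioned in Step 4.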

Overall, we believe SOPR would address the weaknesses of community peer review: because the matching system is rules-based, it would reduce the potential for gaming. It would also ensure that every paper is reviewed and incentivise registrants to complete reviews on time.

Next generation preprint databases

An ideal research publication paradigm should have four characteristics: fast review, fast publication, low cost and free access. First-generation preprint servers have three of these characteristics: fast publication, low cost and free access. However, the absence of peer review means that they are unlikely to gain wide recognition and acceptance within the academic community beyond their current role in registering research findings. OA journals currently have fast review processes relative to traditional journals, but they have also proven to be expensive and have spawned a range of predatory publications. If open access and open science initiatives, such as Plan S, are to succeed they need a new publication model. We believe that a “self-organising review + pre-print database” is such a model and represents an emerging paradigm for scholarly publishing.

 

This post is based on the author’s co-authored paper, A conceptual peer review model for arXiv and other preprint databases, published in Learned Publishing.

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: Starling Murmuration via Wikimedia Commons (Licensed under a CC BY-SA 2.0 licence)

About the author

Wang LingFeng is a researcher at the Guilin University of Electronic Technology Business School and works on topics such as peer review and open science. Interested readers are welcome to contact him via e-mail at 1493352071@qq.com; his ORCID is 0000-0001-6217-8865.

 

Print Friendly, PDF & Email

About the author

Taster

Posted In: Academic communication | Open Access | Open Research | Peer review | Research evaluation
