Jon Tennant takes a look at the transformations underway aimed at tackling the widespread dissatisfaction with the system of peer review. He provides further background on the platform ScienceOpen, which seeks to enable a process of transparent, self-regulating peer review, where knowledge sharing is encouraged, valued, and rewarded. By adopting a more transparent process of research evaluation, we move one step closer towards a fairer and more democratic research process.
What is peer review? Fundamentally, it is supposed to be the process through which academic peers provide feedback on research via expert analysis and discussion. It is a process of enormous importance for managing the content of the published scientific record, and with that the careers of the researchers who produce it. Peer review is perceived as the gold standard of scholarly publishing and forms a vital component at the core of research communication and the structure of academia.
In the era of the global ‘open research’ movement, peer review is undergoing a phase of transformation. There is mounting evidence, and an increasingly common perspective, that peer review is less about evaluating research and more about stifling creativity while maintaining the status quo of established networks. Numerous opinion pieces and analyses of the different aspects of peer review have been published over the last decade, but one comment that perhaps best captures the current trend is that peer review is a “model that simply may have run its course given societal and technological change.”
Image credit: jaymantri.com CC0 Public Domain
At ScienceOpen, we are attempting to take advantage of, and spearhead, this transformation through a system of transparent, community-driven, public peer review. Research communities are best placed to evaluate research, and this can only be achieved by providing a platform that enables a process of transparent, self-regulating peer review, where knowledge sharing is encouraged, valued, and rewarded.
There are four core messages that underpin the peer review process at ScienceOpen, each one with a fascinating history of discussion and development.
1. Getting credit for peer review
A recent survey found that a majority of researchers would prefer greater levels of feedback and acknowledgement for their work as referees, especially from their research institutes. For this, the peer review process would have to become de-anonymised (see point 2) to enable a reward or recommendation system.
At ScienceOpen, we value referee reports as a fundamental and integral part of research. By making reports open, research becomes more of an evolving dialogue. To facilitate this, we make all of our referee reports publicly available under a Creative Commons attribution (CC BY) license. We complement this by assigning DOIs to all reports, so that they become fully citable, re-usable, and formally recognisable as research outputs. Personal review histories can be used to supplement academic profiles or CVs, for example, by adding them to ImpactStory profiles.
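As a rough illustration of what DOI registration buys a referee report, here is a minimal sketch that resolves a report's DOI against the public Crossref REST API and extracts its citable metadata. The DOI used is a placeholder, and this is not ScienceOpen's own tooling.

```python
# A minimal sketch (not ScienceOpen's own tooling): look up a DOI-registered
# referee report on the public Crossref REST API, showing that an openly
# licensed review is a citable, machine-readable research output.
# The DOI below is a placeholder, not a real report.
import requests

def fetch_review_metadata(doi):
    """Return basic citable metadata for a DOI-registered output."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    response.raise_for_status()
    record = response.json()["message"]
    return {
        "title": (record.get("title") or ["(untitled)"])[0],
        "type": record.get("type"),  # Crossref registers reviews as "peer-review"
        "licenses": [entry["URL"] for entry in record.get("license", [])],
        "times_cited": record.get("is-referenced-by-count", 0),
    }

print(fetch_review_metadata("10.1000/placeholder-review-doi"))
```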
2. To sign or not to sign?
The quality of peer review has been shown to be influenced very little by whether or not referees sign their reports. The only negative effect observed when anonymity is removed is that referees become more likely to decline to review. But why might this be?
Many researchers, particularly those earlier in their careers, feel that by being overly critical (which some would call ‘thorough’), they might find themselves on the receiving end of a backlash from senior researchers. That such a fear runs through academia is quite disconcerting: we should expect constructive feedback on our work to be well received. Furthermore, retaliation of this kind represents a serious case of academic misconduct, and by providing transparency in the peer review process, such cases, should they ever occur, can be dealt with.
To that end, we see transparency through signing referee reports as a mechanism for accountability, for both authors and referees. At ScienceOpen, we expect peer review to be conducted in a professional, civilised, and courteous manner. Through this, we expect the quality of the entire research communication process to increase. By signing reviews, additional context is gained into the process and, perhaps more importantly, recognition and credit can be distributed accordingly.
3. Referee reports as re-usable outputs
One main aspect of open peer review is that referee reports are made publicly available after the peer review process. This underpins the idea that peer review is a supportive, progressive, and collaborative process designed to continuously assess the quality of research and transfer knowledge. By opening up reviews to inspection, a further layer of quality control is injected into the review process.
In 2000, when BioMed Central launched, it did not take long for it to innovate by publishing both the reviewer names and the review history alongside published manuscripts in the medical journals of the BMC series. Since then, newer journals like PeerJ have adopted a system where both the reviews and the names of the referees can optionally be made open. The Frontiers series also publishes all referee names alongside articles.
ScienceOpen supports open peer review primarily through a post-publication system. Each article we publish is open to two layers of evaluation: (1) a formal peer review process that requires a minimum of five publications attached to a user’s ORCID account, to sustain a level of expert review; and (2) a recommendation and commenting service open to all members, to encourage early career researchers and other members to contribute to the evaluation of scientific research. By employing this dual approach, we ensure that peer review operates fairly and publicly, while maintaining the integrity and reliability of professionally conducted peer review by the expert research community. Combined with open licensing and the assignment of DOIs, referee reports are fully open to public and expert scrutiny, and available for re-use and sharing.
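To make the first gate concrete, here is a minimal sketch of the kind of check involved, using the public ORCID API to count the works on a researcher's record. It is illustrative only, assuming a simple count of distinct works; it is not ScienceOpen's actual implementation.

```python
# A minimal sketch of the kind of eligibility check described above, using
# the public ORCID API to count the works on a researcher's record.
# Illustrative only; not ScienceOpen's actual implementation.
import requests

MIN_PUBLICATIONS = 5  # the threshold described in the text

def eligible_to_review(orcid_id):
    """Return True if the ORCID record lists at least MIN_PUBLICATIONS works."""
    response = requests.get(
        f"https://pub.orcid.org/v3.0/{orcid_id}/works",
        headers={"Accept": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    # ORCID groups duplicate versions of the same work together, so the
    # number of groups approximates the number of distinct publications.
    return len(response.json().get("group", [])) >= MIN_PUBLICATIONS

print(eligible_to_review("0000-0002-1825-0097"))  # ORCID's public example record
```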
4. Post-publication peer review to the rescue?
Several venues have adopted post-publication peer review, based on the adventurous principle that all research deserves the opportunity to be published. Here, the filtering system via peer review occurs subsequent to publishing, and inbuilt systems for post-publication peer review now exist at RIO, The Winnower, F1000 Research, and ScienceOpen.
Criticism of research does not and should not cease simply because that research has passed a single round of traditional peer review. Articles submitted to ScienceOpen are not pre-selected based on any subjective estimate of their perceived impact – we prefer to let research speak for itself. But perhaps more radically, ScienceOpen now offers the same post-publication peer review tools for over 11 million articles on the site. This means that however and wherever pre-publication peer review was carried out, the discussion can continue on the ScienceOpen platform. What better way is there to get credit for sharing your expertise with the scientific community in the form of peer review?
Ultimately, we see peer review at ScienceOpen as a legitimate alternative to traditional methods of pre-publication peer review: cheaper, faster, more efficient, and less prone to bias. We believe that by adopting a more transparent process of research evaluation, we move one step closer towards a fairer and more democratic research process.
Note: This article gives the views of the author, and not the position of the LSE Impact blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.
Jon Tennant is currently finishing up his PhD (apparently) at Imperial College London, where he is researching deep time evolutionary patterns in groups like dinosaurs and crocodiles. Alongside this, he currently works as a PLOS Paleo Community Editor, is the new Communications Director for ScienceOpen, a freelance science writer, and author of the kids’ dinosaur book Excavate Dinosaurs! He spends far too much time in the pub or on Twitter (as @Protohedgehog) talking about open access.
The current issue of Learned Publishing (Jan 2016), now published by Wiley, is a special issue on peer review.
Don’t underestimate the fear of retaliation for providing a negative review. If I do not have tenure yet, or if I am applying for a grant, many other scientists will have a chance to say whether they think I deserve tenure or the grant. Giving a negative assessment of me when asked about my standing in the field for tenure, or giving a slightly negative review of my grant proposal, would not count as academic dishonesty, and it would be impossible to prove any wrongdoing.
What could easily happen is this: I review a paper, point out major flaws, and sign my name. The author gets my review and either forever has a negative association with my name or feels that I am not smart enough to understand his logic. The author might instead respond favourably, forever remembering me as the person who showed him the holes in his argument, and as a smart person. But surely you must see the real problem with signing your name to any review of someone’s work.
No other field places 100% importance on what your peers think of you. Feedback from people in your own company, who basically want the company to succeed, does not compare with feedback from other companies. And at a company you are not fired for failing to get funding or tenure. Your entire career as an academic is built on what others think of you, and those others are not your audience or readers; they are your direct competition.
I agree with the points given. But who are the stakeholders that can make this happen?
If the purpose of revealing reviewers is to count workload, all journals should be required to publish the names of reviewers and the number of articles each reviewer has reviewed. Software could then be developed to publish this information automatically. The raw count should be good enough; there is no need to link names with the reviews themselves.
This review-counting software could also help identify researchers who never review for others, and automatically blacklist them for future submissions.
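For what it is worth, here is a toy sketch of the commenter's proposal, assuming a hypothetical in-house log of completed reviews rather than any real journal API; whether automatic blacklisting is desirable remains a policy question the code does not settle.

```python
# A toy sketch of the commenter's proposal, using a hypothetical in-house
# log of completed reviews (no real journal API is assumed). It tallies
# reviews per referee and flags authors with no recorded reviews.
from collections import Counter

MIN_REVIEWS = 1  # hypothetical threshold for staying off the list

def review_counts(review_log):
    """review_log: iterable of records like {'reviewer': 'A. Smith'}."""
    return Counter(entry["reviewer"] for entry in review_log)

def flag_non_reviewers(authors, counts):
    """Authors whose recorded review count falls below MIN_REVIEWS."""
    return {author for author in authors if counts[author] < MIN_REVIEWS}

log = [
    {"reviewer": "A. Smith"},
    {"reviewer": "A. Smith"},
    {"reviewer": "B. Jones"},
]
counts = review_counts(log)
print(counts)                                               # per-reviewer totals
print(flag_non_reviewers({"B. Jones", "C. Lee"}, counts))   # {'C. Lee'}
```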