What do researchers expect of the peer review process? And do their experiences deliver on these expectations? Elaine Devine reports on the findings of recent research that sought answers to these questions, findings now being used to inform improved training, support resources, and guidelines. Researchers felt strongly that peer review should, and mostly does, improve the quality of research articles; that academic fraud is detected less often than expected; and that regional and seniority bias are relatively prevalent in peer review.
Over 7,400 views from across the research community and from all disciplines and career stages; more than ten hours of focus groups to explore and support the quantitative data; survey responses from every continent; a year from the initial idea – to run a research project and publish the results – to the first set of findings going online. What’s the topic that had people hooked? Could it be anything else but peer review?
Always the topic of most interest during our workshops on publishing in journals, peer review is something every researcher seems to have a view on. But what would those engaged in it on a day-to-day basis – whether as reviewers, reviewees, or journal editors – say when we questioned them about it in depth? Would there be consensus? Could the responses give us, a publisher responsible for the systems and processes that facilitate peer review, clear indicators of what researchers believe is important, with a strong evidence base to guide future developments?
The results gave us so much data that I had to break it into two publications: the first a white paper focusing on the purpose, mechanics, and ethics of peer review, and the second an accompanying supplement on motivations, training, and support (plus all the data). From responses to the online survey, to discussions in the focus groups (where participants in the UK could easily have talked for twice the allotted time, with one group having to be gently eased out the door at the end…), this project was completely absorbing. Some of the more notable findings are outlined below.
Note: On a scale of 1 (strongly disagree) to 10 (strongly agree), respondents rated a series of objectives according to what they believed the purpose of peer review should be. Using the same scale, they were then asked to rate the extent to which they agreed or disagreed that peer review currently achieves those same objectives. Comparing the two scores gives a “reality gap” between the ideal and the real-world experience. In most cases, science, technology and medicine (STM) and humanities and social sciences (HSS) responses were closely aligned, so the statistics below show a combined mean score.
- What did researchers think peer review should achieve? Across HSS and STM fields, researchers rated “checking for an appropriate and robust methodology” and “improving the quality of the published article” highest.
- The biggest gap between respondents’ expectations and reality in STM was in detecting academic fraud, with an ideal-world rating of 7.7 versus a mean rating of 6 in reality; in HSS it was in providing polite feedback, with an ideal-world rating of 8.4 versus a real-world rating of 6.8.
- The only area in which reality exceeded expectation was in correcting grammar, spelling, and punctuation: this was happening more often than researchers expected it to.
- Researchers thought there was a low prevalence of gender bias but a higher prevalence of regional and seniority bias in peer review. The double-blind model was felt to be the most capable of preventing reviewer discrimination.
- Most reviewers and journal editors agreed that waiting 15–30 days was a reasonable time for initial feedback, and almost all reviewers said they achieved this on their most recent review. But the vast majority of authors reported waiting longer. What adds to the time? The hidden processes and checks happening behind the scenes, from reviewer search and selection, to possible plagiarism checks, to tiered editorial decision-making, all of which are hugely important to get right but remain invisible to authors.
- There remains a strong preference for double-blind review among all respondents, with a rating of 8 or above out of 10 (echoing other large-scale peer review studies, conducted both before and after ours). Yet there are also balanced views on open review, open and published review, and post-publication review, with mean ratings of between 5 and 6 out of 10 (though HSS editors were less supportive of these than their STM counterparts).
- Playing a part as a member of the academic community, reciprocating the benefit, and improving papers were given as the most important reasons for agreeing to review across all disciplines. This was also true when we examined the data by age.
- Over two thirds of authors who have never peer reviewed would like to, and a similar proportion would also like formal training; yet 60% of editors report difficulty in finding qualified reviewers.
So, what have we done since completing analysis of the latest findings last year? Respondents had clear views on training and on communication around the peer review process: improved guidance should (hopefully) help people better understand the hidden work that can slow down an author receiving their initial report, and support researchers in feeling confident about accepting a review request. In response, we refreshed and enhanced our guidance, from reviewer guidelines to information on what to expect as an author.
We also developed training on “being an effective reviewer”, bringing together the best of the guidance from across our teams into one workshop. Intended to offer a complete overview of the process, each session covers what to focus on as a reviewer and how to give feedback, what review reports can look like and how to use the system, plus the ethical standards that underpin every review. It ends with a practical exercise so attendees can apply everything they’ve learnt, followed by lots (and lots) of discussion. We’ve run this session at institutions worldwide, a very rewarding activity aimed at giving the next generation of reviewers the information they need from the outset.
And then on to process. Time taken was, unsurprisingly, a big issue (with one respondent commenting: “an extremely lengthy and frustrating wait for your paper”). Yet this needed to be balanced against the huge importance placed on quality (“the worst reports are short, snitty and patronising…the best are critically engaged, add something and improve quality”).
Alongside clearer explanations of what happens “behind the scenes”, we’ve also piloted Peerage of Science on a number of journals. A service to which authors can submit their manuscript before sending it to a journal, Peerage of Science enables researchers to set deadlines and allows multiple journals to view the peer review process simultaneously. Authors can then accept a direct publishing offer from a journal or choose to export the reviews to the journal of their choice, all with the aim of streamlining the process.
We’re also working with Publons so that reviewers get verified recognition for their review activity and can use this to build a profile of contributions for inclusion on their CV and in funding or promotion applications. Currently running on Taylor & Francis journals in the sciences, this is optional for reviewers. We’ll be looking at the results of both these pilots in the coming months.
And this is by no means it. Peer review, in all its guises, is still very much at the heart of scholarly communication, and a fascinating, thorny, and personal subject to many. Improving this process (whatever those improvements might be) is an ongoing activity; it deserves constant examination and shouldn’t have an end point. As with all good policy, an evidence base to inform decisions is vital, and we’ll continue to use this data to guide us.
Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics.
About the author
Elaine Devine is Senior Communications Manager at Taylor & Francis Group, and is responsible for communications and outreach activity across their journals portfolio, including managing the PR and media, social media and multimedia teams. Prior to this role, she was responsible for author outreach activity at Taylor & Francis, including overseeing the redevelopment of their author-focused digital communications channels. Elaine has spent the majority of her career working in academic and professional publishing or in the charity sector, and has previously worked for (amongst others) the National Trust and the Royal College of Nursing.