The purpose of peer review is often portrayed as a simple, ‘objective’ test of the soundness or quality of a research paper. However, it also performs other functions, primarily through linking and developing relationships between networks of researchers. In this post, Flaminio Squazzoni explores these interconnections and argues that to understand peer review as simply an exercise in quality control is to be blind to the historical, political and social dimensions of peer review.
Peer review has a bad name on social media and in the press. Type “peer review is…” into Google and among the first suggestions you will find “peer review is broken”. It seems that peer review is now one of the most popular means through which academic frustration finds expression. Hyper-competition and the dominant ‘publish or perish’ culture in academia do not help.
I believe that this also reveals a profound misunderstanding of what this institution truly is. For instance, the general view is that peer review is a quality screening mechanism for scholarly journals, and it is often studied as such. In brief, if manuscripts had an intrinsic objective quality, referees and editors would only need to be sufficiently smart, disinterested and unbiased to recognise it. High-quality manuscripts would appear in prestigious journals, middle-quality manuscripts would find their way to less prestigious ones, while creepy science would simply not pass the filter, or would feed some online predatory journals. Much research measures the quality of the process in terms of referee disagreement: the higher the disagreement, the stronger the evidence for reviewers’ subjective bias is taken to be. On this view, three experts cannot have different opinions on a manuscript, because quality is an intrinsic property of the manuscript; it speaks for itself. Expert judgement must be consistent. This is what everyone expects from robust, solid peer review. There is some truth in this view; at least it is coherent with the stratified, hierarchical organisational nature of academic prestige and worth.
However, peer review is also something more, if not something completely different in the first instance. Besides being a quality control device, peer review is a distributed effort for recognising and increasing the value of manuscripts, and so is inherently ‘constructive’. It is simultaneously a context in which experts develop, adapt and enforce standards of judgement, a form of (direct and indirect) connection and cooperation, and a disciplined, mediated discourse between (often unrelated) experts in a “safe” (though often disorganised and ambiguous) environment. And so it is also inherently ‘social’. If this is true, peer review cannot be seen as a ‘prediction game’ on the objective quality of manuscripts. Unlike a crowd guessing the weight of an ox, the activity that so fascinated the British scientist Francis Galton in the early 1900s, there is no pre-established, unambiguous weight or value that can be attributed to a research paper.
In a recent large-scale collaborative project, “PEERE”, we have tried to explore these alternative views on peer review and to stimulate a debate on the multiple functions and constituencies of peer review as a complex social institution.
For instance, in an article published in Scientometrics, we tracked all manuscripts rejected by a journal and traced their subsequent publication in other outlets. We found that rejected manuscripts that were later published in other journals benefitted, in terms of subsequent citations, from having been exposed to more than one round of review before rejection, from having received a more detailed reviewer report and, more importantly, from having been subjected to higher reviewer disagreement at the rejecting journal. Some of them were published in journals that had a higher impact factor than the rejecting journal. This would suggest that peer review can be considered a ‘value multiplier’, and that reviewer disagreement and additional rounds of review are not symptoms of inconsistent reviewer judgement but could help authors improve their manuscripts.
In a more recent article published in the Journal of Informetrics, we used the term “invisible hand” of peer review to describe how the process is, in the first place, a connection between knowledgeable scholars. We mapped the connections between all scholars who submitted and/or reviewed manuscripts for a multidisciplinary journal strictly following double-blind peer review. The idea was to measure authors’ and referees’ respective positions in the network structure of the community before and after they were matched by double-blind peer review. We found that referees tended to recommend more positively submissions by authors who were within three steps of them in their collaboration network before they were paired by the journal editor. We found that when these closest referees were demanding, i.e., requested major revisions, the manuscripts were cited more when eventually published. More interestingly, we found that co-authorship network positions changed after peer review, with collaboration distances between scientists decreasing more rapidly than could have been expected had the changes been random. We found cases of referees collaborating directly with authors they had previously reviewed, to whom they were not previously connected. We also found cases in which these ex-post collaboration patterns were not determined by manuscript publication: future collaboration was more probable even when manuscripts were eventually rejected, and so referees did not know the manuscript authors’ names. This would suggest that peer review not only reflects, but also creates and accelerates, scientific collaboration. If this is true, the real ‘bias’ is not the subjectivity of expert judgement but the lack of diversity and representativeness in referee selection.
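The “within three steps” measure above is a shortest-path distance on the co-authorship network. A minimal sketch of how such a distance could be computed, on a toy adjacency-list graph with invented names (this is purely illustrative and not the PEERE data or code):

```python
from collections import deque

def collaboration_distance(graph, source, target):
    """Breadth-first search for the shortest co-authorship path
    between two scholars (number of collaboration steps)."""
    if source == target:
        return 0
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour == target:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # no path: the scholars sit in disconnected components

# Toy undirected co-authorship network (hypothetical scholars).
coauthors = {
    "author":  {"a", "b"},
    "a":       {"author", "c"},
    "b":       {"author"},
    "c":       {"a", "referee"},
    "referee": {"c"},
}

d = collaboration_distance(coauthors, "author", "referee")
print(d)  # 3: within three steps, i.e. a "close" referee in the study's sense
```

The study then compares such distances before and after editors paired authors and referees, to test whether peer review itself pulls scholars closer together in the network.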
To sum up, these are only examples of an attempt to look at peer review in a way that is more coherent with the historical, social and political dimensions of this institution. As historians like Alex Csiszar remind us, peer review reflects the public status of science and has always had multiple functions and dimensions. Studying it only as a quality screening context does not do justice to the complexity of this key institution, around which the autonomy, legitimacy and credibility of science is built.
About the author
Flaminio Squazzoni is professor of sociology at the University of Milan, Italy where he leads Behave, a new centre for research and training on behavioural sociology. He is editor of JASSS and works on peer review, social norms and behavioural research. Interested readers are welcome to contact him via his e-mail: email@example.com and/or dialogue with him on @squazzoni.
Note: This article gives the views of the authors, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
I agree, but what if the reviewer rejects a paper without actually reading it? For instance, we recently had a rejection based on the claim that “there is a major error in the paper because of a problem P in some cases”, while we had a full section demonstrating how we avoided P in all cases, and we also provided code that the reviewer could use himself to verify our general method of solving P. It was in fact a major point of the paper, emphasized in the introduction. What should we do in that case? Appealing the editorial decision is futile, because it would mean that the editor himself failed miserably, since he did not read the section titles of our paper and/or the report.