October 16th, 2017

The next stage of SocArXiv’s development: bringing greater transparency and efficiency to the peer review process


Almost 1,500 papers have been uploaded to SocArXiv since its launch last year. Up to now the platform has operated alongside the peer-review journal system rather than seriously disrupting it. Looking ahead to the next stage of its development, Philip Cohen considers how SocArXiv might challenge the peer review system to be more efficient and transparent, firstly by confronting the bias that leads many who benefit from the status quo to characterise mooted alternatives as extreme. The value and implications of openness at the various decision points in the system must be debated, as should potentially more disruptive innovations such as non-exclusive review and publication or crowdsourcing reviews.

Since SocArXiv launched last year, researchers have uploaded almost 1,500 papers. With expanding outreach and community building, the system is heading toward greater growth and impact. Some papers we host are works in progress, others are under review or already accepted at traditional journals, some are shared as free versions of paywalled papers already published, and many include replication packages of data, code, and other research materials. However, although SocArXiv contributes to making our scholarship better, more engaged, and more efficient, we operate alongside the peer-review journal system rather than seriously disrupting it. In the next year, we will pursue ways to bring more transparency and efficiency to the peer review process itself.

Figure 1: Number of papers hosted on SocArXiv, July 2016 – October 2017.

There is a lot of movement in the open science and open access communities to rethink or redesign peer review. We will benefit from the work of others, mostly in biosciences and other STEM fields, including research such as the OpenAIRE survey recently reported here, and the open peer review system already in use at F1000Research. But norms, expectations, and language sometimes vary markedly across disciplines, so without reinventing the wheel we nevertheless need to focus on our social science community and its needs.

Just as our partners at the Center for Open Science are working on the technology to integrate peer-review functions into their platform, we are developing our ideas for what functions we need, and how to govern them. So it’s a good time to put some issues on the agenda for discussion in the social sciences. Here are the ones that seem most pressing to me.

Institutional bias

First, there is a conservative bias in our discussions about the scholarly communication system. All employed research academics participating in the debate over peer review got where they are through the current system. We’re a community with a reason to be biased in favour of the status quo. This creates a challenge for our imaginations, and makes ideas that are merely original or provocative seem outlandish or extreme.

Let’s shift the burden of proof. My own presumption is that openness is better because, all else being equal, transparency makes us more accountable, improves collaboration, and facilitates honest and meaningful communication. Of course, all else may not be equal. There may be reasons why openness is not feasible, and so we need to consider the benefits of, for example, double-blind peer review. But I don’t want to start from the presumption that a closed system is better until proven otherwise.

Arguments for the current system need to be persuasive too. In the history of social science, double-blind peer review is a relatively recent innovation, and not something that was instituted following a democratic process we are compelled to honour.

Where and when to open

Second, we can think of openness in the peer review process as a series of decisions. How and what to share at each stage of the review and publication process will have various implications, as will the overall shift toward a more open orientation.

Initial submissions: when papers are ready to be reviewed, they are submitted to a peer review system. At this point the paper can be publicly available or not. Also, whether it is under review can be publicly disclosed or not. Reasons to share papers generally are obvious. But when it’s entering a peer review process the calculus changes, because now the author risks the consequences of a negative outcome or bad reviews becoming public. On the plus side, if the review is announced the reviewing system can be held accountable for its process and outcomes, for example providing a check on discrimination, which might otherwise occur invisibly.

Reviews: either through the coordination of an editor or some other (open or closed) process, reviewers read and evaluate the work. Sharing reviews publicly has obvious potential benefits for readers beyond the original author, who can learn from the author’s mistakes, gain additional insights from the comments, and learn from observing the process itself. And again, the quality of the reviews can be used to reflect on the quality of the reviewing body and its outcomes. Further, the reviewers might finally have their scholarly contributions recognised. Someday, producing reviews that benefit the community at large – and which might include original writing and data analysis – could carry weight in hiring and promotion decisions. On the other hand, the potential downsides to open reviews include embarrassment to the author, bad will between authors and reviewers, or damage to the reputation of critical (or less critical) reviewers. (As an intermediate option, reviews might be made public but the identity of the reviewers concealed.)

Revisions: in the current system, unless an author decides to share versions through an alternative channel, readers of most journals never see the version history of papers (which often has many stages, as the review-revise cycle repeats). There is pedagogical purpose to sharing these, but perhaps more importantly, sharing versions along the way gets research out faster, allowing people to learn from and respond to the work, and maybe collaborate with the author, before it is “complete”. On the other hand, there is a risk that sharing work that is bad or wrong will cause harm.

Decision: in the current journal system, the public is informed when work is accepted but not when it is rejected. When I send a paper to a journal, I do not tell them (or their reviewers or readers): “previous reviewers have deemed this work unacceptable. Let’s hope you disagree!” And if the work is never accepted and published it simply never becomes part of the scholarly record. Should rejection be part of that scholarly record? And by what standard of rejection? Editorial decisions in the current system are sometimes based in part on things like arbitrary page limits or the size of the interested audience, rather than an evaluation of whether the work makes a contribution to knowledge.

This raises profound questions. Should we judge academics based on their rejected work, or just the work they eventually publish? Should the public be notified that work is wrong, rather than just having it disappear? If a paper is rejected, should it remain available to the public, along with an open record of the process? Again, this makes accountability possible where none exists presently. And here you may see the bias of current academics, who are sitting in positions earned through accepted publications and have no interest in having their errors or low-quality work exposed. What is the interest of everyone else in this question?

An important wrinkle here is that peer-review decisions ultimately need not be binary in nature. Instead of a dichotomous outcome, peer review could produce research evaluations on one or more continuums.

More radical alternatives

The above discussion is based on a view of the peer review process as it is now (at least in sociology), with interaction between an author and an editor, who facilitates the interaction with peer reviewers and makes a decision to accept or reject the work. There are more disruptive alternatives that would blow up the current process while still representing a system of peer review.

Non-exclusive review and publication: one option is to open papers to review by more than one editorial body at a time. Papers could be posted openly and then reviewed by any “journal” interested in them. Authors could decide whether to revise their work in response to none, any, or all of the reviews, and have the work “accepted” by multiple editorial bodies. Or the work could be forked, with different versions accepted after different revision paths. This is not so different from how some of us work now, with extended projects over multiple papers, but professional rules currently prohibit simultaneous submission. The advantages here might include expanding the network of interaction, and inspiring collaboration and exchange across research areas. For example, editors or reviewers in the area of economic sociology might be interested to see demographers’ reviews of the same paper. The current system is highly inefficient when papers bounce from journal to journal, with reviewers at every stage. If the process were open, could we do this better?
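
To make the non-exclusive model a little more concrete, here is a minimal sketch of how the underlying records might be structured. It is purely illustrative: the class and field names are invented, and nothing here reflects an existing SocArXiv or Center for Open Science design. The point is simply that a single openly posted paper could accumulate reviews and acceptances from multiple editorial bodies, along one or more diverging revision paths.

```python
# Hypothetical data model for non-exclusive review. All names and fields are
# invented for illustration; they do not describe any existing SocArXiv or
# Center for Open Science system.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Review:
    reviewer: str          # could also be an anonymous identifier
    editorial_body: str    # the "journal" that coordinated this review
    text: str
    public: bool = True


@dataclass
class Version:
    number: int
    parent: Optional[int]  # version this one revises; None for the first posting
    reviews: List[Review] = field(default_factory=list)
    accepted_by: List[str] = field(default_factory=list)  # zero or more editorial bodies


@dataclass
class Paper:
    title: str
    authors: List[str]
    versions: List[Version] = field(default_factory=list)

    def fork(self, from_version: int) -> Version:
        """Start a new revision path from an earlier version, so that different
        editorial bodies can accept different descendants of the same paper."""
        new = Version(number=len(self.versions) + 1, parent=from_version)
        self.versions.append(new)
        return new
```

In a model like this, “acceptance” becomes a property attached to particular versions rather than a single terminal event, and forking is what would allow, say, an economic sociology body and a demography body to endorse different revisions of the same work.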

Crowdsourcing reviews: reviews could be conducted by anyone interested in the work. This could work with some adjudication of reviewer qualifications. For example, a reviewer could be judged a qualified peer for any study, so they could log in and just start reviewing anything; or people could apply to be a reviewer for a particular paper, or maybe within a particular field. Editors could be notified when a paper has received a certain number of reviews, in order to make a decision. Or acceptance could be granted by an algorithm based on the ratings of qualified reviewers (further, reviews could be weighted according to the reputation of the reviewers, based on the quality of their previous reviews or other status indicators).

The advantage here may be in the more organic process flow, in which reviewing work becomes more integrated into research. For example, if I’m working on a certain topic, I might decide to review the existing submissions in an area, offering my critical responses while learning from them and incorporating their innovations in my own work – rather than waiting to be invited to review for a sub-field based on work I published years ago and with which I am not currently engaged. The risk here is that people could game this by recruiting friends to do reviews, trading positive reviews, ganging up on work they don’t like, and so on (all of which happens now to some unknown degree, of course). The type of moderation would be crucial in any system like this.
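
As a thought experiment, the weighting idea described above might look something like the following sketch. Everything in it is assumed for illustration: the 1–5 rating scale, the fixed reputation weights, the minimum review count, and the acceptance threshold; none of it reflects an existing system or a planned SocArXiv feature.

```python
# Illustrative sketch of algorithmic acceptance from weighted reviewer ratings.
# The scale, weights, and thresholds are all invented for this example.

def weighted_decision(ratings, reputations, min_reviews=3, threshold=3.5):
    """ratings: {reviewer_id: rating on a 1-5 scale}
    reputations: {reviewer_id: weight > 0, e.g. based on past review quality}
    Returns "accept", "reject", or "pending" (not enough qualified reviews yet).
    """
    qualified = {r: score for r, score in ratings.items() if reputations.get(r, 0) > 0}
    if len(qualified) < min_reviews:
        return "pending"  # an editor could be notified at this point instead
    total_weight = sum(reputations[r] for r in qualified)
    weighted_mean = sum(score * reputations[r] for r, score in qualified.items()) / total_weight
    return "accept" if weighted_mean >= threshold else "reject"


# Example: three reviewers, one carrying a higher reputation weight.
print(weighted_decision(
    ratings={"rev_a": 4.0, "rev_b": 3.0, "rev_c": 5.0},
    reputations={"rev_a": 1.0, "rev_b": 0.5, "rev_c": 2.0},
))  # -> "accept" under these invented numbers
```

Returning the weighted mean itself, rather than a binary verdict, would also be one way to realise the evaluation-on-a-continuum idea raised earlier.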

It’s our system

I don’t have a plan to impose a new peer review regime. But I do intend to challenge our current one, and I’m delighted to be working with the SocArXiv group and the Center for Open Science to generate and test ideas on an open platform. Today’s journal system owes a lot to decisions made long ago by for-profit or status-hoarding actors: fallible people working in very different intellectual and technological contexts. Let’s interrogate that system to see if it deserves our allegiance, and hold it up to alternatives that our imaginations – and our new technology – make possible.

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

About the author

Philip Cohen is a professor of sociology at the University of Maryland and author of The Family: Diversity, Inequality and Social Change (W. W. Norton, 2014). He is the co-editor of Contexts.
