Progress to open access has stalled. After two decades of trying, the proportion of born-free articles is stuck at 20%. Kicking off the Impact Blog’s Open Access Week coverage, Toby Green suggests the solution to our financially unsustainable scholarly publishing system may lie in rethinking traditional processes using internet-era norms. Embracing the principle of “fail fast”, all papers should first be published as freely available preprints to test whether they “succeed” or “fail”, with journals then competing to invite authors to publish. This would reduce the costs of the expensive, straining peer review system while ensuring all papers are available to all readers.
Let’s face it, progress to open access has stalled. No progress has been made over the past year – roughly 80% of all new articles published this year will be paywalled – same as last year. As Open Access Week dawns, let’s take a closer look at why.
No one has been idle these past 12 months. Librarians have been getting tougher with publishers, most notably in Germany and Sweden; publishers have innovated with Read and Publish offers; and, with the EU’s blessing, 13 funders are peddling Plan S. My feeling is that these efforts are the final throes of the tired “Green-Gold-Diamond” approach to open access which seeks a flip to a supply-side funding model from the traditional consumption-side model. A flip that’s flawed because all it does is transfer inequity of access to inequity of authoring; i.e. previously those without funds couldn’t read, now they won’t be able to publish. A flip that’s failed because after two decades of trying, we’re stuck at 20%.
In thinking about this problem, I have come to the conclusion that open access is the wrong target, it’s beside the point. The crux of the matter is that scholarly publishing is unsustainable both financially and in terms of human effort. Let me count the ways.
- The funds available to pay for publishing research are not growing fast enough to keep pace with the growth in research budgets and, consequently, the number of articles that emerge. The number of articles submitted for publication is growing ~6% per annum; the library and funder budgets that pay for publishing are not.
- Publons’ report on peer review shows a system under severe strain: it’s taking longer to find reviewers and they are less likely to complete a review quickly. Peer review costs around US$1,500 per paper; that’s a lot of money if the result is rejection.
- It is somewhat ironic that the weakest papers cost the most to publish. Authors are encouraged to re-submit rejected papers to another journal, sometimes only to be rejected once more, before the paper finally finds a home in a title further down the food chain. Every submission and rejection costs money. Elsevier alone rejects over 4,000 papers every working day – that’s an estimated daily cost of US$100,000.
- Authors are “double-dipping”: they increasingly post their articles as preprints to share their findings with peers fast, then submit to impact-factored journals to boost their career and grant-winning prospects. With changes between the former and the latter versions being small, we’re paying to publish the same content twice. This is not to cast blame. Authors need the internet-era speed of preprints to counter the analogue-era timescale of formal publishing. They need traditional, impact-factored journals to counter the exclusion of preprints from the reputation economy on which their careers depend.
Until the scholarly publishing ecosystem is transformed in line with the digital age, I argue, open access simply cannot be afforded. So, how might we transform it?
I think the answer lies in “digital transformation”, the rethinking of traditional processes using internet-era norms. An example is the process to apply for a British passport. Previously, application involved lots of form-filling, “peer review” in the form of a signature from another passport holder, and other user-unfriendly, bureaucratic, pen-pushing practices. Today’s online system is user-centric and would make any internet start-up proud. It’s undoubtedly a lot less costly for the UK authorities too.
So, inspired by this example, how could we rethink the process of scholarly publishing? One internet-era principle is “fail fast” – if your project fails, you stop and move on in another direction. What if all papers were first published on preprint servers to test whether they “succeed” or “fail”? If a paper succeeds, journal editors would compete to invite its authors to publish in their journal, flipping the submission process. If it fails to garner interest, no matter: the paper remains on the preprint server (perhaps to gain attention later as a slow-burner) and the author moves on in another direction.
Let’s assume that half of the preprints succeed in gaining the attention of a journal editor and that, of these, half survive peer review – the saving on the current publishing system would be significant. Cutting 15% off today’s cost of publishing journal articles would save US$1.5bn. Yet, in terms of getting papers in front of readers, nothing would change: all would be available online, just as they are today.
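The arithmetic behind that claim can be sketched as a back-of-the-envelope calculation. The figures are illustrative assumptions, not audited data: the 15% figure applied to the US$1.5bn saving implies a total annual spend on journal publishing of roughly US$10bn, and the per-submission handling cost of ~US$1,500 comes from the Publons figure cited earlier.

```python
# Back-of-the-envelope sketch of the savings argument.
# All figures are assumptions for illustration, not audited data.

TOTAL_ANNUAL_SPEND = 10_000_000_000  # US$; implied by "15% ... is US$1.5bn"
SAVING_RATE = 0.15                   # assumed fraction of costs stripped out

P_EDITOR_INTEREST = 0.5  # assumed: half of preprints attract a journal editor
P_SURVIVE_REVIEW = 0.5   # assumed: half of those survive peer review

def annual_saving(total_spend: float, rate: float) -> float:
    """Savings if `rate` of today's publishing costs are avoided."""
    return total_spend * rate

def fraction_formally_published(p_interest: float, p_survive: float) -> float:
    """Share of preprints that end up in a journal under the funnel above."""
    return p_interest * p_survive

print(f"US${annual_saving(TOTAL_ANNUAL_SPEND, SAVING_RATE) / 1e9:.1f}bn saved")
print(f"{fraction_formally_published(P_EDITOR_INTEREST, P_SURVIVE_REVIEW):.0%} "
      "of preprints reach formal publication")
```

Under these assumptions only a quarter of papers would pass through the costly journal pipeline at all, which is where the headline saving comes from.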
One catalyst is needed: the “reputation economy” (comprising tenure, promotion, and grant-giving committees) must value preprints just as it values articles published in impact-factored journals today. To help this process along, preprint servers need comment fields like those on TripAdvisor and Airbnb. Just as consumers trust other consumers when choosing where to eat and stay the night, readers will trust other readers when choosing what to read next. Perhaps reader comments could be codified and included in altmetric scores?
Nothing comes for free and this proposal implies another change: authors will have to do more to promote their papers. Funders are increasingly looking to measure the impact of the research they fund so this is something authors will have to do more of in any case. There is a danger that those who are already well-known will do better than newcomers (the Matthew Effect) but I would argue that a preprint system open to all offers newcomers a greater chance of breaking through than today’s closed world of peer-reviewed journals.
Once significant costs have been stripped out of the system, it should be possible for libraries and funders to fund both open preprint repositories and open access journals without the need for paywalls or play-walls. But until the costs come down, I fear we’ll remain stuck with the same frustrations we have today, only things will become more heated. Worst of all, I bet I’ll be writing that the number of articles born-free is still stuck at ~20% in 12 months’ time.
This blog post is based on the author’s preprint article, “We’re still failing to deliver open access and solve the serials crisis: to succeed we need a digital transformation of scholarly communication using internet-era principles” (6 September, 2018), available via Zenodo (DOI: 10.5281/zenodo.1410000).
Featured image credit: Open innovation: The new bright idea, by opensource.com (licensed under a CC BY-SA 2.0 license).
Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.
About the author
Toby Green has spent 35 years in scholarly publishing working with commercial, society, IGO and NGO organisations on all types of content: books, journals, databases, A&I services and encyclopedias – always with an eye on the reader experience. He writes this piece in a personal capacity and in the hope that it contributes to thinking about how to find a sustainable and effective way to make scholarship available to all. His ORCID iD is 0000-0002-9601-9130.
I find this a very refreshing view. Thanks for working it out in so much detail and for (re-)sharing it in this digestible format. Re-designing the process in an all-digital (born-digital) way is the way to go.
I would think one of the biggest challenges will be dealing with the legacy systems around research evaluation and tenure, as you also refer to. Do you have more thoughts on that?
Thanks, Yvonne, for your kind comments. As for dealing with legacy systems, I’m under no illusions: changing the way researchers are evaluated for promotion and grants is going to be challenging, but it is possible, if only juries and panels would realise it. I saw yesterday that researchers in Norway are still judged on whether they publish in ‘category 1’ or ‘category 2’ journals. I can understand that in an analogue age, measuring at the level of the journal was probably the only cost-effective method – but in today’s internet era, measuring at the level of an article is eminently possible and affordable. So, all it takes is for panels and juries to look at article-level ‘altmetric’ assessment tools, and I’m sure change could happen fast.
You’re hitting the nail on the head with the idea of digital transformation. Replicating structures that perhaps were sensible in the print environment, in the current digital environment, is making us miss many opportunities for improvement. From the structure of the documents (our attachment to the concept of “page” and static texts is increasingly indefensible), to the structure of scientific communication (the idea that only journals can “publish” or vet contents for quality).
To me it doesn’t make sense that a manuscript (another term that perhaps should be phased out) may have to sit on the author’s hard drive for months (sometimes years) until a couple of people, who might not even be that interested in the topic of the article, decide whether it is worthy of being published in Journal X. Nowadays anyone can publish a text and make it available to everyone else, so there’s no need to perpetuate the “bug” of limited dissemination inherent in the print environment as if it were a desirable feature.
As you say, succeed or fail fast – or, in other words, let each paper stand or fall on its own merits. So yes, preprint first, always. Of course, it will be the responsibility of readers to accept or disregard the validity of the claims made in the paper (but this was also true before). The review culture should also adapt to these changes. The people who read a paper should act as self-appointed reviewers and comment on what they think are its strong and weak points. In my experience, this almost never happens at the moment.
Hi Toby,
Thanks for an interesting post. What do you think of the F1000 model (proprietary, I know) that allows authors to publish what are effectively preprints within seven days, then have external peer review conducted transparently, with reviewers’ reports posted alongside the original article and subsequent versions? The F1000 APC for a long article (over 2,500 words) is about US$1,000, with no charge for subsequent versions of the same paper.
All the best,
Scott
That model is interesting but it locks the author into one channel. I would prefer to see preprints on subject-based servers and have journals compete to publish the best preprints. This would oblige journals to deliver better value.
Blockchain has been proposed for this: https://www.icoexaminer.com/blog/the-blockchains-bid-to-democratise-scientific-publishing/
One possible (unintended) consequence of this approach is that the article is never “published”, because the “article” will be a living, breathing thing that is constantly evolving as input is added from other researchers. (Or it dies/hibernates in the blockchain until someone revives it.)
I predict journals of the future will only exist as these blockchain platforms and publishers will charge entrance fees to join them. The question then becomes: Who pays for the entrance fees?
I see another challenge if we’re on that line of thought. In his most recent blogpost on “Towards Self-Publishing in Scholarly Communications” (http://www.davidworlock.com/2018/10/towards-self-publishing-in-scholarly-communications/), David Worlock raises:
“how…put together and scaled? And how can we ensure the dataflows are unpolluted by self-promotion or lack of verification and validation?…”
You are right, Toby. Digital transformation is the way out for open access in practice. There is a paradox here: the STM publishers’ association, which Hindawi has criticised for a weak commitment to the OA movement, engages its members with digital transformation far better than the so-called promoters of OA – Hindawi and its project, OASPA. Moreover, trust in OASPA is undermined by its wish to be the sole authority on OA publishing ethics worldwide. We know that among OASPA’s members there are several whose ethics are quite questionable. Beall’s list of predatory publishers contains some of them, a fact OASPA consistently ignores. Consider this case: a smaller publisher was recently accepted into OASPA (https://oaspa.org/member/llc-cpc-business-perspectives/). This publisher is on Beall’s list too, and OASPA ignores this fact. More paradoxical still, DOAJ removed this publisher’s journals from the DOAJ journal list a year ago (24 October 2017), for a serious reason – editorial misconduct by the publisher. In August 2018, OASPA and DOAJ declared that they would combine their efforts on new applications from journals and publishers. And what do we have now? DOAJ removes the publisher’s journals, and at the same time the same publisher is accepted by OASPA! This case undermines trust in OASPA, and in the end the lack of accountability inside DOAJ and OASPA will discredit both. So, you are right – we should judge each publisher on its actual results in promoting OA, including digitalisation.
Toby makes an interesting point. It is not about open access itself as a goal (in this context Rob is right too – OASPA and Hindawi push the idea forward, but they do it too single-mindedly). It is about open access as an instrument. In this context, I also agree that STM, with its multi-service members, is much more constructive than all those who want open access right now and by any means. If “right now and by any means” happens, it will devalue scholarly research considerably. So it should proceed step by step, used simply as an instrument to promote research results and their authors effectively – effectively meaning with an impact on the reputation of researchers through impact factors and citations. I can point here to the statement on DOAJ’s website that it does not recognise any impact factors. I do not understand such a blunt statement from DOAJ, one of the promoters of open access. What does DOAJ exist for? Whose interests does it serve? I would expect it to serve the interests of scholars first of all – those who aspire to increase their research impact. It seems to me that DOAJ, OASPA, and Hindawi are very close to losing their way. Are they going to promote research results through open access, or do they want to be the sole regulators of publishing standards? If they want both, that becomes a conflict of interest, because they are both participants (publishers like Hindawi fund DOAJ and OASPA) and self-elected regulators. That is not a recipe for the smooth growth of a wise open access idea. As for Rob’s reference to the small publisher recently accepted by OASPA (Business Perspectives), I looked through its website and found that this small publisher does not just care about open access by publishing open access papers, but also wants to evaluate other publishers’ journals through its own journal index (https://jicindex.com/) and regulate the market that way.
This is certainly a conflict of interest – publishing journals and evaluating journals are activities that cannot be merged by a single publisher in the way this small publisher has done (the owners of the publisher and the heads of the JIC index project are the same individuals!). It supports the idea that OASPA and Hindawi want to use open access to gain control over a new market, granting membership to a publisher that does not just want to publish, but also to control and regulate the market. We do not need that kind of open access movement.