
Designer Science – Why big brand journals harm research

Damian Pattinson

George Currie

July 9th, 2024


The transition towards an open access scholarly communication system has transformed the technology and underlying function of journals. Damian Pattinson and George Currie argue that one consequence of this change has been the rise of journals as vehicles for brand value rather than markers of research quality, and offer three ways of escaping this dynamic.


It would be convenient if we kept all the good research in one place. Or in a handful of well-known, rubber-stamped journals. This was the central conceit of scholarly publishing for the past century. But does it still hold up?

A brand name doesn’t tell us much about quality. Yet how we value research has become so focused on a few prestige journals – exalted, exclusive and expensive – that it rests on the assumption that it does. You publish in them and your work is important; you don’t and it’s not.

Research assessment – whether formal, as part of researcher evaluation in institutions, or informal, as in deciding whether a paper is worth reading – should at least engage with the piece of research itself. Yet it often doesn’t.

The book is judged by the cover and the research by the journal. How did we get here and how do we fix it?

Brand power

Open Access, as envisioned in 2002, viewed science in its broadest sense as a social good. But one of the biggest impacts of the transition has been how it has restructured the marketplace of academic publishing. With the introduction of APC-model publishing, the author has become the customer.

Incentivised by this model to publish as much as possible, publishers are turning away good money with every rejection. Journal cascade systems – where rejected research is redirected to other journals in a portfolio – became a way of maintaining the power of existing brands through perceived impact, quality, or exclusivity, without having to refuse the money.

Journals have also been leveraging their brands through brand extensions; reusing the journal name in related or sub journals. The Nature name is a prime example with over 30 journals proudly wearing it and more planned this year.

Thinking about journals primarily as brands changes, or perhaps only highlights, one of the purposes of research in those journals. Journals are using research as content marketing. Instead of being filters for “bad research”, journals now stratify it according to brand values.

The biggest brands

Publishers are also leveraging the power of their brands together with the power of scale. Since 2018, we’ve seen an increase in the pace of consolidation of the OA marketplace. Seemingly driven by funder mandates for Open Access, big publishers have been acquiring smaller OA outfits, such as Wiley’s acquisition of Hindawi and Taylor & Francis’ acquisition of PeerJ. This consolidation of the options available to authors makes substantial change much more difficult without funding mandates to force it, while strengthening cascade approaches to article retention.

Transformative Agreements have also led to increasing monopolisation. Country-level deals are much easier for larger publishers to negotiate, while limiting author choice to the journals those publishers produce. This disadvantages smaller OA and society publishers, who are either left out of such deals or included on terms they have limited power to negotiate.

Rather than transform, Transformative Agreements consolidate power in a few publishers, stifle innovation, and limit author choice.

Proxies for quality

The brandification of journals has been encouraged by an over-reliance on proxies for quality and on citation-based journal rankings such as the Journal Impact Factor.

Despite originally being developed as tools to help librarians decide where to allocate their resources, these metrics have become a shorthand for research, and even researcher, quality (even those responsible for the Impact Factor warn that it should not be used in this way). Conflating citation metrics with “quality” requires the assumption that citations are a meaningful measure of the methodological rigour and importance of research. Even if that is the case, we must then make the leap that past citations of a journal reliably predict future citations of any one piece of research.

Chasing these metrics warps the way journals behave too. By creating targets that are proxies for what we really want to measure, we incentivise gaming the system to meet them. Our obsession with citations exacerbates publication bias, with flashy headlines more likely to make the cut. Journals may delay articles to maximise their citation potential, or publish excessive numbers of literature reviews, as they bring three times more citations than original research. Some publishers also engage in citation stacking – encouraging new research to cite recently published articles in the same or sister journals. Finally, publishers might fish for high Impact Factors: launching many journals in a field, knowing that if even one gets a decent score it will be flooded with submissions, reaping huge financial rewards.

Incentives and integrity

The power of a brand lies not only in the what, but in the who. Brands are status symbols.

Publishing in a big name or high-impact-factor journal is a premier status symbol in academia. This prestige economy leads to some researchers gaming the system. The incentive structure and rewards system are so warped that prominent figures say the rational thing to do is to cheat. Those who don’t are forced to play by the same wonky rules – and to play against the cheats.

Academic cheating ranges from subversion to plain deceit. Methods such as HARKing (Hypothesising After Results are Known) and p-Hacking turn the scientific process on its head. And practices such as data fabrication and image manipulation are plain dishonest.

Our current incentive structures also play a role in what research gets done in the first place. Researchers may be more likely to form and test hypotheses that, if supported, would be significant enough for publication in their desired journal.

These practices are driven by widespread publication bias and the importance we place on significant and positive results. Positive results are more likely to be published, more likely to be published in a higher-impact journal, and more likely to be published sooner. If you want your work to be published where it matters, if you want to progress your career, positive results are table stakes.

Our reward system doesn’t incentivise robust, reliable or trustworthy science; it incentivises headlines.

The way out

To overcome these challenges we need to embrace change on both practical and cultural levels. There are signals that change is coming: one is the Bill & Melinda Gates Foundation announcing that its grants will not support APCs beyond 2024. Another is the widespread support for DORA and the responsible use of quantitative indicators.

While policy change is powerful, signatures and words come easier than action. The challenges are widely recognised; why is there so much hesitation to act? Publishers, institutions, and funders have the power to change this system.

eLife (where we both work) was founded to disrupt and reform scholarly communication. Eighteen months ago, we adopted a publish-review-curate approach to publishing. By removing the accept–reject decision and providing public expert evaluation, we put the focus squarely back on the research rather than the journal name. With the aim that other publishers will move to this approach, we’ve created open-source technology that enables research to be shared as Reviewed Preprints along with public reviews and editor assessments. Our new model isn’t about competing with other journals; we want them to do what we’re doing, and we want to make the tools available to everyone.

But the eLife model is only one possible solution and there are other approaches and commitments that can help move the needle toward a fairer system of research assessment.

Adopt research assessment that directly engages with the research being assessed.

This means ditching reliance on proxies such as journal names and citation-based metrics. Journals could adopt open peer review and editorial assessments that inform meaningful assessment of research in institutions or for funding.

Prioritise the value of the hypothesis, the rigour of the methodology, and the transparency of the findings over the results.

How one conducts science is a better measure of good science (and a good scientist!) than what’s found. Practical examples of this are preregistration of research and results-blind peer review, in which reviewers assess a study with the results redacted.

Decouple researcher assessment from publication.

Instead, focus on the merits of the process and the outputs, as above, as well as all the other contributions researchers make to science and society, such as adopting more open ways of sharing research and results, teaching, and participating in peer review.

If we recognise and address the flaws in a hyper-competitive, often gamed, and sometimes rigged system, we can build more robust systems of science communication – including incentive structures, reward systems, and measures of quality – and reap the benefits of more reliable science.


The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: calimedia on Shutterstock


About the author

Damian Pattinson

Damian is Executive Director at eLife. He started his publishing career at the BMJ, where he worked as an Editor on BMJ Clinical Evidence and Best Practice. He later joined PLOS ONE as Editorial Director, before moving to Research Square as VP of Publishing Innovation, where he launched the Research Square preprint server. He holds a PhD in Neuroscience from University College London.

George Currie

George joined eLife in 2023 to help communicate the need for Open Science. He previously worked at OUP where he developed Open Access author value propositions, and as Brand Manager at Hindawi where he gained a deep understanding of the systems and challenges in research and research publishing.
