Random audits could shift the incentive for researchers from quantity to quality

Blog Admin

April 30th, 2018

The drive to publish papers has created a hyper-competitive research environment in which researchers who take care to produce relatively few high-quality papers are out-competed by those who cut corners so their bibliometrics look good. Adrian Barnett suggests that one way to push back against the pressure to “publish or perish” is to randomly audit a small proportion of researchers and take time to assess their research in detail. Auditors could examine complex measures of quality that no metric could ever capture, such as originality, reproducibility, and research translation. Concern over the possibility of being audited would also encourage good research practice across the community.

Every researcher is under pressure to publish in high-ranking journals. The most blatant pressure I ever experienced was over a decade ago, when my then head of department created a league table of staff based on annual paper numbers. At the top were the well-funded and established professors; at the bottom were researchers who had recently had a baby or been away collecting data. It was incredibly demoralising.

The drive to publish piles of papers has created a hyper-competitive research world. Researchers who work with care to produce relatively few high-quality papers are being out-competed by researchers who cut corners so that their bibliometrics look good. But, as the authors of yet another metric themselves admitted, “[no] metric should be taken as a substitute for the actual reading of a paper in determining its quality”. Which, to my mind, completely undermines the point of designing another metric.

But how can we read papers to determine research quality given the huge volume of papers being published every day? This is why imperfect metrics thrive, because researchers can be compared in minutes rather than the days or weeks that would be needed to read and understand their contribution.

Our idea to push back against the pressure to “publish or perish” is to randomly audit a small proportion of researchers and take time to assess their research in detail. The auditors could examine complex measures of quality that no metric will ever capture, such as originality, reproducibility, and research translation. The audits would encourage good research practice across the research community because of the concern about being audited.

The same approach is used to discourage drink-driving and tax avoidance. Of course, both of these problems still exist, but there is good evidence to show that the introduction of random breath tests greatly reduced fatalities.

Many researchers react with horror to the idea of random audits and raise valid concerns about who the auditors would be and the power they would wield. It would likely be stressful to be audited, but if the audits achieved their aim of taking a deep look at quality then researchers who used care, imagination, and dedication should have nothing to fear.

In fact, an auditing system should allow diligent researchers to remain true to their scientific principles. I’ve heard many researchers complain that they’d like to forget the rat race and work more carefully, but fear the consequences for their career. Peter Higgs, who won a Nobel prize in 2013, said that he would not be “productive enough” in today’s research world because he spent time deeply considering a problem rather than “churning out” papers.

To examine the potential of random audits we used an existing simulation of the research world, in which simple Darwinian principles showed how paper numbers quickly trump quality when researchers are rewarded based purely on their numbers. We added random audits that were able to spot researchers producing poor-quality work and remove them. Importantly, the audits also improved behaviour in the wider research community, because researchers whose colleagues were audited were prompted to produce fewer papers of a higher quality.
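To make the mechanics concrete, here is a minimal sketch in Python of this kind of Darwinian model with audits bolted on. It is not the published simulation: the population size, the effort/quality trade-off, the audit threshold, and the spillover rule are all illustrative assumptions.

```python
import random

N_LABS = 100     # number of labs in the population (assumption)
N_STEPS = 2000   # selection rounds to simulate (assumption)
P_AUDIT = 0.02   # per-round chance a lab is audited (from the post's 2%)
Q_MIN = 0.3      # quality an audit must find, else the lab is removed (assumption)

def clamp(effort):
    """A lab is reduced to one trait, 'effort' in (0, 1]: more effort
    means fewer papers but higher quality (a deliberately crude trade-off)."""
    return max(0.05, min(1.0, effort))

def run(audits_on, seed=0):
    random.seed(seed)
    labs = [clamp(random.uniform(0.1, 1.0)) for _ in range(N_LABS)]
    for _ in range(N_STEPS):
        # Darwinian selection on raw output: the lowest-effort lab publishes
        # the most papers, so it is copied (with mutation) over a random lab.
        best = min(labs)
        labs[random.randrange(N_LABS)] = clamp(best + random.gauss(0, 0.05))

        if audits_on:
            for i in range(N_LABS):
                if random.random() < P_AUDIT and labs[i] < Q_MIN:
                    # The audit catches poor-quality work: remove the lab...
                    labs[i] = clamp(random.uniform(0.1, 1.0))
                    # ...and colleagues respond by lifting their own effort
                    # (the behavioural spillover described above).
                    labs = [clamp(e + 0.02) for e in labs]
    return sum(labs) / N_LABS  # mean effort, our proxy for quality

if __name__ == "__main__":
    print(f"mean quality without audits: {run(False):.2f}")
    print(f"mean quality with audits:    {run(True):.2f}")
```

Even in a toy model like this, the dynamic is visible: without audits, selection drives effort towards its floor; with audits, the combination of removal and spillover holds average quality up.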

We found that the competitive spiral to produce ever greater paper numbers could be avoided by auditing around 2% of all published work. For research funded by the US National Institutes of Health, we estimated this would cost around US$16 million per year. This is a relatively small price to pay to maintain research quality, and it would likely pay for itself many times over in the avoided costs of policies and practices based on poorly conducted research.
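For a sense of the shape of such an estimate, the calculation is simply the share audited times the publication volume times the cost per audit. Both inputs below are hypothetical placeholders, not the figures used in the paper:

```python
# Shape of the cost estimate: (share audited) x (volume) x (cost per audit).
# Both inputs are hypothetical placeholders, not the paper's figures.
papers_per_year = 80_000   # assumed NIH-funded publication volume
cost_per_audit = 10_000    # assumed cost of one detailed audit, in USD
n_audits = 0.02 * papers_per_year
print(f"{n_audits:.0f} audits/year -> ${n_audits * cost_per_audit / 1e6:.0f}M/year")
```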

Ours was a very simplistic simulation of audits that leaves many unanswered questions about how the audits would work in practice. It’s not clear whether auditors would really be able to spot poorly performing researchers, or if conflicts of interest could be managed for what would be a very important role.

Despite the many unanswered questions, audits are worth considering as a way to reduce research waste, and were even suggested back in 1988 when the research world was far less competitive than it is today. The current system of rewards is deeply ingrained and we need strong and potentially radical actions to shift the focus from quantity to quality.

My old head of department took the next logical step with their league table and “cut the tail of the distribution” in order to increase the average output per staff member. But many of the people in the tail played vital roles in the department that weren’t captured by this simple metric. Metrics will never be able to summarise our complex research world and no metric is going to solve the problem of research waste. If we really want to understand research quality then we need to stop taking shortcuts and spend more time reading research.

This blog post is based on the author’s co-written article, “Randomly auditing research labs could be an affordable way to improve research quality: A simulation study”, published in PLoS ONE (DOI: 10.1371/journal.pone.0195613).

Featured image credit: Patrick Fore, via Unsplash (licensed under a CC0 1.0 license).

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns on posting a comment below.

About the author

Adrian Barnett is a statistician and vice-president of the Statistical Society of Australia. He works at the Australian Centre For Health Services Innovation at Queensland University of Technology. He has an NHMRC Senior Research Fellowship in meta-research, which uses research to improve research. Adrian’s ORCID iD is 0000-0001-6339-0374, and he can be found on Twitter @aidybarnett.

