The quality of academic research is a driver of scale, not vice-versa, argues Mark Leach of University Alliance, whose latest report concludes that concentration of resources on the basis of size will not improve research excellence.
University Alliance’s commitment to research is central to our broader aim of driving innovation and growth in the UK. We have been consistent in our calls to fund research on the basis of excellence; however, we wanted to better understand the evidence base that shows why this is the right approach for ensuring the long-term sustainability and health of the UK research base.
But the debate about concentration is not new, and it usually divides into two camps: one that argues research funding should be concentrated around existing ‘critical mass’ in fewer institutions, and another that would direct funds towards areas of proven quality, wherever they are found.
Through our research, it has become clear that quality is in fact a driver of scale, and not vice-versa. We commissioned Evidence to carry out the research, and they concluded that concentrating resources on the basis of scale would eliminate many areas of excellence – often small or medium-sized, and with an essential dynamism on which the health of UK research depends.
The relationship between size, performance and productivity
The first part of the research analysed the relationship between the size of research units and their performance and productivity. Size is indicated by the number of full-time equivalent (FTE) Category A staff (the most research-active staff), performance by the outcomes of RAE2008 and by citation impact, and productivity by the number of papers per FTE Category A staff.
What is very clear is that there is no continuous relationship between research unit size and performance in RAE2008. It is also apparent that there are small and median-sized units which perform as well as, and in some cases better than, the largest units.
The diagram shows that the positive correlation between size and performance may be strongest for smaller research units, and that above a certain threshold no significant improvement is observed. This diagram also uses a dashed line to show how the relationship between size and performance would appear if it were continuous and linear. The data may, therefore, indicate the existence of a true ‘critical mass’ below which units are not viable. However, this is likely to be relatively small and to differ between subjects. Nevertheless, there is no evidence that concentrating resources in the largest units would substantially improve overall performance.
Small and median-sized research units tend to be at least as productive as large units. Also, peak productivity is not generally associated with the largest units, but is often found around the median. There is variation between different Units of Assessment (UoAs), but a general pattern emerges that demonstrates small units can perform as well as the largest units and peak productivity is generally observed amongst the smaller to median-sized units.
This scatter plot shows the relationship between papers per RAE2008 Category A FTE Staff mapped to UoA12 (Allied Health Professions and Studies) against the number of RAE2008 Category A FTE Staff submitted to this UoA. It appears broadly that there is no correlation between research unit size and researcher productivity in UoA12. In fact there are small and median-sized research units which appear to perform as well as larger research units (give or take the outliers) and the highest productivity is observed for units with around the median number of RAE2008 Category A FTE Staff. Therefore, the data shows that concentration of resources on the basis of size alone will not improve overall performance.
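The kind of check described above can be sketched in a few lines. This is purely illustrative – the unit sizes and papers-per-FTE figures below are invented, not RAE2008 data – but it shows how one would test whether productivity rises with unit size: compute a correlation coefficient and see whether it is meaningfully different from zero.

```python
# Illustrative sketch only: synthetic numbers, not RAE2008 data.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical research units: (FTE Category A staff, papers per FTE)
units = [(5, 3.1), (12, 4.0), (18, 3.4), (25, 4.2), (40, 3.3), (60, 3.6)]
sizes = [u[0] for u in units]
productivity = [u[1] for u in units]

r = pearson(sizes, productivity)
print(f"correlation between size and productivity: r = {r:.2f}")
```

A coefficient near zero, as the report finds for UoA12, is exactly the “no correlation” pattern described above: knowing a unit’s size tells you little about how productive its researchers are.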
This is consistent with a study of recent US NIH data concluding that “middle sized labs do best”. The data shows time and again that funding on the basis of scale would not improve overall productivity but might eliminate some of the best units.
The distribution of research quality
The second part of the report uses Evidence’s Impact Profiles to assess the distribution of citation impact. Citation data are highly skewed with many papers receiving no citations and few receiving many citations. Impact Profiles allow such distributions of citations to a body of papers to be visualised.
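The skew described above is easy to demonstrate with invented numbers (these are not Evidence’s data, and this is not their Impact Profile method – just a minimal sketch of why a full distribution is more informative than an average):

```python
# Illustrative sketch: a typical skewed set of citation counts, where a single
# highly cited paper pulls the mean well above the median.
from statistics import mean, median

citations = [0, 0, 0, 0, 1, 1, 2, 2, 3, 5, 8, 45]  # one outlier: 45 citations

uncited_share = citations.count(0) / len(citations)
print(f"mean citations:   {mean(citations):.1f}")  # inflated by the outlier
print(f"median citations: {median(citations)}")
print(f"uncited papers:   {uncited_share:.0%}")
```

With distributions like this, a single average figure is misleading; profiling the whole distribution, as Impact Profiles do, shows what share of a group’s papers sits in each citation band.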
There is, for most of the Units of Assessment (UoAs) analysed, little difference in the profiles of the University Alliance institutions, the group of institutions with fewer than the median number of Category A Staff, and the UK as a whole. A similar percentage of the research papers published by each of these groups receive equivalent numbers of citations. Where there are differences in the profiles these can be explained by low volumes of papers being mapped to the relevant UoA, or because of the strategies by which institutions select papers for RAE submissions.
Because the quality profile of each of the groups of institutions analysed is similar, there is no evidence that removing funding from any particular group would increase overall performance.
The Government is due to publish a research and innovation strategy in the autumn, filling the void left by the HE White Paper, which did not seek to tackle research issues. This may well reopen the debate about research concentration, but it is imperative that any such debate is led by evidence. Our own report, backed up by countless other studies, points resoundingly to a policy of always directing public resources towards excellence. If we do not, we risk withering our research base and ultimately damaging our chances of future economic growth.
Mark Leach (@markmleach) is a Senior Policy Adviser for University Alliance, a group of 23 business-engaged universities committed to world-class research and a quality student experience. Read the full report here.
He is also founder and editor of wonkhe, a site dedicated to higher education matters.