
Sierra Williams

July 8th, 2014

The rejection of metrics for the REF does not take account of existing problems of determining research quality.


Amidst heavy scepticism over the role of metrics in research assessments, Martin Smith wonders whether the flaws of the current system have been fully recognised. There is no system of research assessment that is perfect, and peer review may well be a better, although problematic, measure of quality than metrics. But the REF has become disproportionate. The question that arises is whether a system of metrics, whilst still imperfect, would allow less time to be spent on assessing research and more time doing it.

Before the results of the 2014 REF are even finalised, HEFCE is beginning the process of reviewing the assessment process and in doing so is again considering the role of metrics in assessing the quality of research. The nearly automatic response of social scientists is that metrics are a poor mechanism for capturing research quality. And, of course, in many ways this is true. There are numerous flaws with metrics, including the fact that they are poor proxies for quality and that they are often inadequate, inaccurate and incomplete.

Nevertheless, the rejection of metrics does not take account of the problems of the existing REF. The first is that, considering the amount of money being distributed, the current mechanism is disproportionate. In political science, a burdensome structure of assessment has been set up, internally and externally, to distribute about £14 million a year. In addition, 50 per cent of funding goes to 10 per cent of universities. Consequently we are focussing considerable activity and effort on a relatively small pot of money (especially in relation to teaching income). Eighteen colleagues have spent months of their lives assessing research in politics, and this process is now shadowed by university departments putting considerable resources into their own grading, creating an invidious situation where we are grading colleagues' work in the name of inclusion and exclusion. The question that arises is whether we should in effect bite the bullet and recognise that, despite the inefficiencies and inaccuracies, a system of metrics would allow less time to be spent on assessing research and so more time doing it.

Image credit: neil conway on Flickr (CC BY 2.0)

The second problem is that the rejection of metrics is based on an assumption that the current system can capture research quality when metrics cannot – that peer review is an effective system. However, the flaws of peer review are as great as those of metrics. Anyone who has had an article reviewed or edited a journal knows that referees often disagree and that, whatever safeguards are put in place, there is a strong element of subjectivity in peer review. Indeed, we now have a situation where an article could go through a rigorous review process in a leading journal and be published, but be regarded by the REF panel as a weak piece of work (and with no way of knowing whether or not such a decision has occurred). At least with referee comments for a journal we see the judgement and some reasoning, but in the case of the REF the grading for individuals is never made clear. The lack of clarity is justified by the idea of assessing a unit, but in effect it is the work of individuals that is being marked. There is an argument that what we need to do is improve peer review by making the process more open. However, it seems unlikely that HEFCE would open up the can of worms of transparent review, and such a process would only add further to the costs of the whole exercise.

So metrics could make the system much more cost-effective and remove the biases of a peer review system. Yet before jumping for metrics, a number of issues need to be cleared up. One fundamental question is what the purpose of the REF actually is. Initially the idea of assessment was as a mechanism for distributing funding, but increasingly it is used as a ranking of departments. Under the current system we get very stark rankings, often based on relatively small differences in performance. The process would be less fraught if it returned to a focus on funding rather than ranking. Second, there is the question of which metrics are used. Much of the focus has been on citations (which are clearly flawed in many ways), but there are many other metrics that reflect research activity (many of which are already gathered for the REF), such as research funding, number of PGRs, number of articles published in ISI-ranked journals, number of university press books – the possibilities are many.

Similarly, another way to simplify the process and iron out much of the noise would be to aggregate the statistics at institutional level, so that each university could submit its statistics as a unit and hence receive a block grant (as happens for other sources of funding such as HEIF or studentships within DTCs). There is, of course, the objection that in this way weak research units in strong universities would receive funding, but this would create a strong incentive for universities to improve performance across the institution. It would also mean that the REF returned to institutional rather than individual assessments and so avoided the sorts of pressures created around individual REF performance in the current system. To continue with peer review, it is necessary to show that the accuracy of the assessments is enough to justify the extra cost. That case is very hard to make, and with the right metrics the process could become less of a burden for all.

There is no system of research assessment that is perfect, and peer review may well be a better, although problematic, measure of quality than metrics. But the REF has become disproportionate. There is little relationship between the sums being distributed and the burden of the process. A process of using a basket of metrics may produce anomalies and may encourage game playing, but no more than the current system. What it would do is simplify the process and allow us to get on with teaching and research.

This post originally appeared on the Political Studies Association (PSA) blog under the title Why we should not reject a metric REF out of hand and is cross-posted here with the author's permission.

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Author

Martin Smith is Professor of Politics at the University of York.

Print Friendly, PDF & Email


