December 16th, 2014

Predicting the results of the REF using departmental h-index: A look at biology, chemistry, physics, and sociology.


Image credit: public domain (Wikipedia)

Can metrics be used instead of peer review for REF-type assessments? With the stakes so high, any replacement would have to be extremely accurate. Olesya Mryglod, Ralph Kenna, Yurij Holovatch and Bertrand Berche looked at two metric candidates, including the departmental h-index, and four subject areas: biology, chemistry, physics and sociology. The correlations are significant, but comparisons with RAE indicate that while the departmental h-index is the best metric, it would not have been good enough to replace the peer review exercise. A more important question is whether we should seek to measure research quality using metrics at all.

Academic research is a very special kind of human endeavour. It is often founded purely on curiosity and useful applications may not be immediately obvious. Curiosity-driven research has, however, led to some of the most important practical advances our civilisation has produced. These include the internet, GPS, and progress in genetics and in social network theories. Scientists and academic researchers involved in such discoveries and developments typically follow their career paths in pursuit of knowledge rather than for financial gain. Indeed, commercial exploitability and profitability may be impossible to predict or entirely absent from blue-skies research. For this reason curiosity-driven research is mostly carried out at universities and research institutes funded by the public purse.

To check that society is getting the best possible value for money, some governments appraise the research emanating from higher education institutes on a regular basis. One of the world's most developed assessment exercises is the UK's Research Excellence Framework (REF), the results of which are due on the 18th of December. But does the REF itself provide value for money? It is based on peer review, which is considered by some to be the most reasonable tool for comprehensive research evaluation. But, although it takes place only every 5-7 years, it is costly, disruptive and time-consuming. Is this a price worth paying to measure research? Indeed, can one reasonably measure this special human activity, which combines creativity and a special way of thinking?

Image credit: Fritz Cohn, Wikimedia (Creative Commons Attribution-Share Alike 3.0)

If one can do it, can it be done cheaply and non-invasively instead? It has been suggested that a set of automated, scientometric or bibliometric indicators may form a suitable basis for a substitute for, or component of, peer review at the level of the research group or department. Indicators and metrics are certainly cheap and easy for managers to use (perhaps too easy: they reflect only a simplistic aspect of the research process). And because they can be monitored continuously, they would avoid the disruption and tension in the run-up to REF time. So, can metrics be used instead of peer review for REF-type assessments?

The stakes in this game are very high. Besides determining the amount of money which society donates to universities for research, the REF is the primary source for research rankings and therefore contributes to the reputations of universities, departments and research institutes in the UK. So any replacement for the REF has to be extremely accurate to be accepted by policy makers and the academic community.

In recent papers [1,2] we compared a citation-based indicator to the results of the UK's last appraisal, the Research Assessment Exercise (RAE). Conducted in 2008, this was also based on peer review. Although RAE2008, like REF2014, delivered a quality profile for each submission, this can be compacted into a single quality estimator using the post-RAE funding formula used by the Higher Education Funding Council for England (HEFCE). We denote the resulting statistic by s (as it is some measure of research strength per head). Our objective was to try to find a bibliometric indicator which correlates well with s.
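To make this concrete, here is a minimal sketch of how we read the construction of s, assuming the post-RAE HEFCE weighting of 4*, 3* and 2* research in the ratio 7:3:1 (discussed further below); the exact normalisation is our illustrative reading rather than a quoted HEFCE formula:

\[
s \;\propto\; 7\,p_{4^*} + 3\,p_{3^*} + 1\,p_{2^*},
\]

where \(p_{k^*}\) is the fraction of a department's submission rated k*; multiplying by the number of submitted staff would give an absolute research strength rather than a strength per head.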

We looked at two candidates and four subject areas: biology, chemistry, physics and sociology [3]. The best was a departmental version of the Hirsch index (h-index). As in Dorothy Bishop's blog, a departmental h-index of n means that n papers, authored by staff from a given department, and in a given subject area, were cited n times or more in a given time period. The departmental h-index is easily calculated using a database such as Scopus. There are many differences between the data sets behind s and h. For example, unlike the RAE or REF, all researchers in a department contribute in principle to h, not just a select few. Also, while the outputs of a researcher who has moved institutions during the REF period can count towards the RAE/REF submission of the new domicile, contributions to the departmental h-index are based on affiliations as recorded on the Scopus database (for example). Despite these differences (and more), the results were (surprisingly) not too bad. We found correlation coefficients between about 0.55 and 0.8. But are these good enough to make predictions?
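As an illustration of the definition, a minimal Python sketch of the departmental h-index computed from a list of citation counts; the function name and example counts are hypothetical, and in practice the counts would be pulled from a database such as Scopus for a given department, subject area and time window:

```python
def departmental_h_index(citation_counts):
    """Largest n such that n papers from the department have at least n citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one department's papers in a 7-year window
example_counts = [44, 30, 21, 12, 9, 9, 6, 3, 1, 0]
print(departmental_h_index(example_counts))  # -> 6: six papers cited 6 times or more
```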

Fig. 1. h2008 versus the peer-review based measure s for research groups from different HEIs in sociology. The Pearson correlation coefficient here is equal to 0.62.


Before discussing this, we ask if we can improve these results. First we tweaked the formula for s; instead of basing it on the post-RAE HEFCE funding formula, which valued 4*, 3* and 2* research in the ratio 7:3:1, we used the more recent formula involving the ratio 3:1:0. We found no improvement. Actually, s encapsulates three aspects of research: the outputs themselves (mostly publications), the research environment and esteem indicators. Since the h-index only involves outputs, we also restricted the calculation of s to that component of RAE2008. Again there was no improvement. We conclude that our crude statistic s is pretty robust as a summary of RAE profiles.
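For illustration, a minimal sketch (with made-up quality profiles) of how the two weighting schemes turn an RAE-style quality profile into a single score; the department names and profiles are hypothetical, and the normalisation follows the illustrative reading of s given above:

```python
# Hypothetical quality profiles: fractions of a submission rated 4*, 3*, 2*, 1*, unclassified
profiles = {
    "Dept A": [0.25, 0.45, 0.25, 0.05, 0.00],
    "Dept B": [0.15, 0.40, 0.35, 0.10, 0.00],
}

def score(profile, weights):
    """Weighted sum of the 4*, 3* and 2* fractions (1* and unclassified carry zero weight)."""
    w4, w3, w2 = weights
    p4, p3, p2 = profile[0], profile[1], profile[2]
    return w4 * p4 + w3 * p3 + w2 * p2

for name, prof in profiles.items():
    old = score(prof, (7, 3, 1))   # post-RAE2008 HEFCE weighting, 7:3:1
    new = score(prof, (3, 1, 0))   # more recent weighting, 3:1:0
    print(name, round(old, 2), round(new, 2))
```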

The h-indices we use in Fig. 1 were measured at the beginning of 2008 and involved the same time window as RAE2008, i.e., papers which appeared between 2001 and 2007. Citation counts, of course, change with time and to investigate the evolution of the Hirsch metric we also determined h as of 2009, 2010, etc., each based on publications appearing in the preceding 7 years. We found that while the h-indices grow gradually, the ranks of the various institutions do not change significantly year on year and the correlation coefficients do not become stronger with time. This means that, if one wants to use departmental h-indices based on citations within a limited time window, it is as reasonable to do so early in the game as later; one does not have to wait for citations to accumulate when dealing with entire departments.
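The rolling-window calculation can be sketched in the same minimal style, reusing the hypothetical departmental_h_index function above; the paper records (publication year and citation count per paper) are assumed fields for illustration only:

```python
def h_index_for_window(papers, first_year, last_year):
    """Departmental h-index restricted to papers published within the window."""
    counts = [p["citations"] for p in papers
              if first_year <= p["year"] <= last_year]
    return departmental_h_index(counts)

def yearly_h_series(papers, assessment_years):
    """h as of each assessment year, based on the preceding 7 publication years
    (e.g. h for 2008 uses papers from 2001-2007)."""
    return {year: h_index_for_window(papers, year - 7, year - 1)
            for year in assessment_years}
```

One could then compare the resulting year-on-year rankings (for example with a rank correlation) to check, as we did, that they are stable over time.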

This brings us to our conclusions for RAE and our predictions for REF. The correlations of between 0.55 and 0.8 which we measured (see Fig. 1 for the case of sociology) would certainly not have been good enough to replace RAE2008 by the departmental h-index. Various higher education institutes (HEIs) for the four subject areas are ranked in Tables 1-4 (not all HEIs are listed, for technical reasons; see [3]). For example, the University of Essex was in second place in the list of HEIs in sociology when ranked using the RAE2008 score s, but in 20th place using the departmental h-index. We have yet to see how this plays out for REF2014. However, we can still try to predict changes in the ranked positions of HEIs by comparing their new and old departmental h-indices. For example, the departmental h-index predicts that Oxford and Cambridge will both rise in the sociology ranks at REF2014 while Manchester will fall. In this sense, perhaps the h-index can be used as a navigator between REFs. These and other predictions are contained in the tables and in [3]. Now we have to wait for the REF2014 results to see what happens.
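For illustration, a minimal sketch of the rank-shift comparison behind such predictions; the h-index values here are hypothetical placeholders rather than our measured ones, and the sign convention (positive means a predicted rise) is ours:

```python
def ranks(scores):
    """Map each department to its rank (1 = highest score)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {dept: i + 1 for i, dept in enumerate(ordered)}

h_2008 = {"Oxford": 30, "Cambridge": 28, "Manchester": 33}   # hypothetical values
h_2014 = {"Oxford": 36, "Cambridge": 35, "Manchester": 31}   # hypothetical values

old, new = ranks(h_2008), ranks(h_2014)
for dept in h_2008:
    shift = old[dept] - new[dept]   # positive = predicted to rise in the ranking
    print(dept, shift)
```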

In summary, comparisons with RAE indicate that the departmental h-index is perhaps the best metric we have but it would not have been good enough to replace that peer review exercise. Time will tell how it performs in relation to REF or if it can be used as a “navigator”.

In our opinion, a more important question is whether we should seek to measure research quality using metrics at all. We believe we should not. Their introduction would encourage managers to force researchers to change direction and pursue metrics. This would undermine academic freedom itself, the foundation of basic research. It would therefore be devastating to an endeavour which is at the very heart of what it is to be human and a foundation of our society: curiosity itself.

If we have to monitor research quality, let us stick with peer review. Let us accept that REF distorts the very thing it seeks to measure, but let us turn that to our advantage. REF can be used as a driver not only for research quality but also for the conditions to enable top-quality research to thrive. In recent years these have become so severely distorted as to be damaging not only to science but to scientists themselves in the new metrics-driven culture of publish and perish.

Table 1. The list of British HEIs in Biology, ranked by the RAE2008 score s, by h2008 and by h2014 (the corresponding values of the h-indices are shown in parentheses). Here h2014 denotes the departmental h-index based on the publication period 2007-2013, used for comparison with REF2014.


Table 2. As in Table 1 but for Chemistry.


Table 3. As in Table 1 but for Physics.


Table 4. As in Table 1 but for Sociology.


Note: This article gives the views of the authors, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Authors

Olesya Mryglod is a Researcher at the Laboratory for Statistical Physics of Complex Systems, Institute for Condensed Matter Physics of the National Academy of Sciences of Ukraine.

Ralph Kenna is Professor of Theoretical Physics at the Applied Mathematics Research Centre, Coventry University.

Yurij Holovatch is Research Head at the Laboratory for Statistical Physics of Complex Systems, Institute for Condensed Matter Physics of the National Academy of Sciences of Ukraine.

Bertrand Berche is Professor at the Statistical Physics Group, IJL, Université de Lorraine, France.

