The Higher Education Funding Council for England (HEFCE) remains deeply conservative in not using citations analysis for academic assessment. But it has now abandoned its previous ‘asking for the moon’ approach to judging the external impacts of academic research. Patrick Dunleavy finds that HEFCE’s definition of what counts as an external impact has been greatly broadened. The criteria for allocating ‘star’ numbers to impact case studies have become a lot simpler and more sensible. And a new ‘impacts template’ form has been introduced, although only impact case studies will attract funding. The template should at least allow universities and departments to offer more by way of systematic metrics of their overall impacts activity, to set their case studies in context.
With impeccable timing, HEFCE has chosen deep summer to go out to final consultation on how its review panels will operate the Research Excellence Framework (REF) in 2014. Given that administrative agility, innovativeness and flexibility are not inscribed in HEFCE’s genes, this is the last real opportunity for academics and researchers to come back with comments by October.
Almost all the coverage so far has focused on how the academic elements of the REF will operate, where only the physical science panels have yet woken up to the pervasive availability of high-quality citations data, which makes any ‘eyeballing’ by committee members seem amateurish and irrelevant. The ready availability of Harzing’s Publish or Perish software, based on Google Scholar, and the imminent launch of Google Scholar Citations (evaluated in beta on this blog) mean that HEFCE’s position looks increasingly non-credible – defending the horse cavalry in the age of the machine gun.
Every serious department at every serious university in the UK already looks at the current and likely future citations records of new job applicants before hiring them. Departments look even more carefully at publications and citations before promoting anyone to senior lecturer or reader. And finally, no one gets (or should get) to be a professor without a substantial citations track record. Yet because HEFCE takes advice from a traditional set of bibliometricians, the REF remains locked in the dark ages of nominally ‘eyeballing’ 200,000 books and articles that REF panels have neither the time nor the ability to assess in any credible way.
But not all is gloom and RAE-like stagnation. On external impacts HEFCE does seem to have made some positive steps forward, away from its previously over-stringent and impossible-to-meet concept of what external impacts from academic research are. Its latest definition (in Annex C, paragraph 5) shows a new and commendable pluralism:
“Impact includes, but is not limited to, an effect on, change or benefit to:
• the activity, attitude, awareness, behaviour, capacity, opportunity, performance, policy, practice, process or understanding
• of an audience, beneficiary, community, constituency, organisation or individuals
• in any geographic location whether locally, regionally, nationally or internationally”.
This wider approach now fits much better with a 21 July joint statement on external impacts made by HEFCE together with Research Councils UK and Universities UK. It makes life a great deal easier for all the social sciences and the humanities, whose characteristic ‘collective’ kind of external impacts fit a lot better into this definition. And it means that civil society impacts are now on a level pegging with business and government impacts. Even for the physical sciences, the new approach has many practical advantages, by placing more emphasis on changing awareness.
Nor do the changes stop here. Impacts are awarded grades from 4* at the top, down to 1* or unclassified at the bottom. Table 1 (below) shows the original HEFCE definition of these grades at the time when universities were asked to provide pilot impacts case studies in five disciplines, for which HEFCE published the results in late 2010:
Table 1: The earlier HEFCE star criteria for external impacts
| Grade | Criterion |
|---|---|
| Four star | Exceptional: Ground-breaking or transformative impacts of major value or significance with wide-ranging relevance have been demonstrated |
| Three star | Excellent: Highly significant or innovative (but not quite ground-breaking) impacts relevant to several situations have been demonstrated |
| Two star | Very good: Substantial impacts of more than incremental significance, or incremental improvements that are wide-ranging, have been demonstrated |
| One star | Good: Impacts in the form of incremental improvements or process innovation of modest range have been demonstrated |
| Unclassified | The impacts are of little or no significance or reach; or the underpinning research was not of high quality; or research-based activity within the submitted unit did not make a significant contribution to the impact |
The problems here should be obvious, especially the impossibilist ‘ground-breaking or transformative’, and the logic-chopping distinctions (‘but not quite ground-breaking’ or ‘substantial impacts of more than incremental significance’). No wonder 98 per cent of working academics reacted by thinking – ‘Well, that counts me out!’
But now, as Table 2 (below) shows, HEFCE has come up with a much simpler and far more realistic set of criteria. To get the top star ranking, external impacts now need only be ‘outstanding’ compared to other departments in the same discipline, and all references to ‘ground-breaking or transformative’ changes have been exorcised.
Table 2: The new HEFCE star criteria for impacts
The criteria for assessing impacts are ‘reach’ and ‘significance’:

| Grade | Criterion |
|---|---|
| Four star | Outstanding impacts in terms of their reach and significance. |
| Three star | Very considerable impacts in terms of their reach and significance. |
| Two star | Considerable impacts in terms of their reach and significance. |
| One star | Recognised but modest impacts in terms of their reach and significance. |
| Unclassified | The impact is of little or no reach and significance; or the impact was not eligible; or the impact was not underpinned by excellent research produced by the submitted unit. |
The previous apparent separation of ‘reach’ (meaning something like how many people or activities were affected by an impact) and ‘significance’ (meaning either how radical or original the impact was, or perhaps how ‘lucky’ the researchers got in terms of having some distinctive effect, attributable to them alone) still lingers. But the new text seems to make clear that panels will somehow just fudge the two together.
The reference to an Impacts Template (backed up by a new 4 or 5 page form) also marks a commendable shift away from the previous sole reliance on case studies to allocate the impacts share of total funding (which is 20 per cent). A few weeks back I discussed with a top HEFCE official the unfairness of case studies for (say) a scientist who invests years of effort working with a company on an applied technique, only for a later bankruptcy, a foreign take-over, or simply a re-shuffling of executives in the firm to lead to production using the technique being scrapped. In the end nothing changes in how the firm produces its outputs, not because the academic research was poor, but simply because the scientist was unlucky in her choice of partner. ‘That’s just tough’, the HEFCE person responded. ‘We’re only asking for one case study per ten academics. So a department that was consistently unlucky like that just does not deserve to get further funding’.
I hope that the Impacts Template marks a belated recognition by HEFCE of how ill-founded this earlier position was. Scientists or academics want to work with any outside stakeholders whom they can get to take an interest in their work. But they are not experts in futurology who can somehow finely forecast organizational politics years ahead so as to know what will work out well and what will not. What should matter in assessing impact is not these random factors, but the whole approach that departments make to getting their work out there, to working with external stakeholders and thereby seeking to change society for the better.
The Impacts Template now at least gives departments 4 or 5 pages in which to demonstrate their strategies in this respect. The focus is squarely on each department (although wider university initiatives of direct relevance can also be mentioned) and HEFCE stresses that it wants ‘evidence and specific details or examples of the submitted unit’s approach’, not broad generalizations. In a long overdue concession that mountains of text and ‘narrative’ may not cut the mustard with anyone outside academia, Annex F grudgingly admits that ‘Completed templates may include … tables and non-text content’. The way is at last opened then for well-organized departments to submit detailed metrics and statistics of all their impacts activities, of the kind advocated in our Impacts Handbook.
So overall these changes are welcome. HEFCE has apparently listened. But just as the academic assessment still ignores citations analysis, so the overwhelming emphasis of the external impacts part still remains on the case study. In the last session of the LSE Impacts Conference a succession of HEFCE bigwigs and other research funders insisted that external impacts simply must be assessed somehow if UK higher education is to retain its Treasury funding.
But what all the politicians and top civil servants I have ever met want in exchange for millions of pounds of funding is a targeted report, a detailed PowerPoint or briefing with some good tables, convincing and credible statistics, and a sound econometric or ‘value for money’ assessment. So what is the point of building a paper mountain of 5,000 impacts ‘narratives’ that no one in Whitehall will ever read, or even glance at? The idea that saturation-bombing civil servants with these massed ‘fairy tales of influence’ will convince anyone is still absurd.