October 22nd, 2012

REF Advice Note 1: Understanding Hefce’s definition of Impact


English universities have begun spending millions of pounds and thousands of staff hours on preparing for the 2014 Research Excellence Framework. Some of these resources will be devoted to the 5,000 Impact Case Studies that will be used by the funding council (Hefce) to allocate a fifth of government research funding. In a new series the LSE Impacts project team presents free Advice Notes on how to prepare Impact Case Studies from start to finish, focusing on the social sciences and humanities.

To begin the series Patrick Dunleavy explores how to see past the warped official language of Hefce (with its systematically over-claiming labels) so as to begin identifying what achievements of your department might make feasible impact cases.

In 2014 the Higher Education Funding Council for England (Hefce) will begin allocating 20 per cent of government research support to universities (QR funding) on the basis of how far departments and research units have achieved external impacts – that is, worthwhile effects outside academia on business, the economy, government, civil society and the development of public and policy debates. Metrics are undeniably less developed here than they are for assessing the academic influence of research. Hefce’s chosen method of evaluation instead requires departments to submit one ‘Impact Case Study’ for roughly every ten academic staff included in the REF – in other words, an impact case for every 36 to 40 research output publications they submit.

If we cash out the sums involved on a pro rata basis (and also assume that current funding numbers do not change), then a top-rated Case Study (graded as 4* in Hefce-speak) could be worth around £720,000 to a department and university over five years. So the REF Impacts funding stream holds out the prospect of significantly reviving the financial viability of more applied, accessible and socially useful work across the social sciences and humanities – reversing years of the previous RAE process, which squeezed out work with exactly these qualities.
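For readers who want to see where a figure of this order of magnitude comes from, the pro rata logic can be sketched roughly as follows. The annual QR total and the grade weightings Hefce will apply are not given in this post, so the symbols below are placeholders rather than Hefce’s actual allocation formula – this is only a sketch of the arithmetic, not a definitive calculation.

\[
\text{Value of one 4* case over five years} \;\approx\; 5 \times 0.2\,Q \times \frac{w_{4*}}{\sum_{i=1}^{N} w_{g_i}}
\]

Here \(Q\) stands for total annual QR funding, \(0.2\,Q\) for the slice of it allocated on impact, \(N \approx 5{,}000\) for the expected number of Impact Case Studies, and \(w_{g}\) for the funding weight attached to a case graded \(g\) (with unfunded grades contributing zero to the denominator). On the author’s assumption that current funding levels persist, this pro rata share works out at roughly £720,000 for a top-rated case.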

How we define possible cases, and which out of many ‘possibles’ we choose to submit for audit, are critically important decisions. The livelihoods of many researchers and staff funded by ‘soft-budget’ monies may hang on getting these decisions right.

Decoding Hefce-speak

A mixed bureaucratic/academic audit exercise like the REF cannot be conducted in ordinary language. Instead the Hefce officials and Panel members have to create their own, closed discourse-world, into which they can retreat so as to justify their behaviours to themselves, each other, and (at the end of the process) the Treasury and politicians. This discourse is marked throughout by an ‘impossiblist’ inflation of judgments about ‘excellence’. Its general operation is powerfully embodied in the ridiculously over-claiming ‘grade’ boundaries that Panels are asked to apply to academic outputs. Here everything has to be ‘world class’ or it is inadmissible. Work of mere ‘international significance’ is graded 2* and assigned no government funding at all.

For the Impact Case Studies the same sort of impetus gets translated into two criteria – ‘Reach’ and ‘Significance’. Apparently, then, a good Impact Case Study might score highly by having a concentrated and provable effect in a small, focused area; or by a more modest effect across a wider area of social and economic life; or some combination of the two. When asked for more clarification Hefce quickly acknowledged that no objective criteria of ‘reach’ or ‘significance’ are available, and that all such judgements will be highly relative (‘case by case’) and involve Panels developing some pragmatic rules of thumb.

Far more fundamental problems rest with Hefce’s demand that claims of external influence from academic research should be ‘auditable’ – that is, provable, evidence-based and capable of being demonstrated in detail. There is nothing wrong in principle with this, but in Hefce’s inflated discourse it all gets expressed as demands that academics prove what they inherently cannot prove.

As the chart below shows, Hefce’s concept of an impact is drawn very widely, involving all the yellow and blue boxes, while the pink box is reserved for the Panels themselves. Any intellectually honest social scientist will readily admit that Hefce is asking for the moon in terms of what departments are apparently being asked to show.

The boxes that are wholly or partly yellow at the top left of the chart show what academics and departments can actually offer some useful evidence about, that is:

  • The number and type of ‘occasions of influence’ on which their research work has been communicated to, or seriously considered by, businesses, government agencies or NGOs. This should be widely feasible to document.
  • A ‘gross’ association or apparent convergence (perhaps implying some degree of causal linkage) between research being so considered and changes in what firms, agencies or NGOs actually do. Some of these changes are on the public record and hence can be evidenced. Apparent influence (or a degree of convergence) here at least speaks to the ‘timeliness’ and broad ‘relevance’ of the research. Even if no action is taken by the external body, showing that your work was seen as relevant to the choices considered and the final decision made is still useful.
  • The contribution which the research made to public debate and discussion in that specific area at the time. Here researchers do not need to demonstrate that actors altered activity, but just that the research contributed to ongoing public discussions. Again this can be very usefully covered by metrics and by demonstrating a ‘digital footprint’ in general or specialist media.

    Chart 1: Why the REF Impact Case Studies are so difficult to write
    (especially in the social sciences and humanities)

Although Hefce has no metrics for measuring external impacts, the official forms for Case Studies still demand that academics provide evidence of direct change that followed from their research. In a complex and inherently multi-causal social world you must still show that your research was somehow ‘the difference that made the difference’. The inflated language of Hefce requirements more or less forces academics and departments to begin constructing ‘fairytales of influence’. If the impact claimed is on business or government, then departments must somehow ‘net out’ other factors so as to claim a specific net causal effect made by particular research projects or publications. If we took Hefce-speak at face value, academics must show that business or government decision-makers were solely or uniquely activated by the research in question, as shown by the first of the blue boxes in the chart.

Nor is it by any means enough for Hefce that departments make such claims in general terms about a cumulative or collective stream of work, or the lifetime achievements of an academic or research group. Impacts claimed must instead be specifically tagged to particular individual outputs, known to be of at least 2* quality (‘international significance’ on the research outputs scale), and carried out in the defined pre-period (1993 to 2007) or the REF period itself (2008 to 2013). Similarly, the impacts claimed must be seen to have occurred in the five-year REF period – and tough luck if they span the boundary at either end. We shall say a lot more about these silly over-requirement elements in later blogs in this series.

Hefce’s inflated discourse applies too for contributions to public debate, cultural development and enhancing public engagement with research. Even if the case study is primarily a ‘public engagement’ one, departments must still claim that their voice (amongst many others) somehow rose above the din to be uniquely or distinctively efficacious. The guidance from Panel D for the humanities seems a little less inflated and more realistic in this respect than that from the main social science Panel (C).

It is worth bearing in mind that research users on the Panels run the impact part of the REF assessments. So the assessment shown in the pink box in the chart above is undertaken by government, business and civil society people who actually utilise research as part of their day-to-day roles. It seems clear that they too will be able to see the bureaucratic nature of Hefce’s requirements. But they will also be able to spot academic hubris and over-claims just as easily.

Hold your nose and plunge in

Many thousands of academics are currently looking at the impact case requirements and grappling with the question – ‘Does any of my work fit the bill?’ The dominant reaction, related to us by hundreds of interviewees, contacts and Case Study authors, has been self-doubt and dismay. How can I honestly hold my head up and make the kinds of grandiose claims for ‘reach’ and ‘significance’ that Hefce seem to require? How can I ever demonstrate, not just that a book, an article or even a whole stream of research reached social actors, government or business – but that (in and of itself) my work swung their behaviours onto a new course (one that would not have happened anyway), and that this change unambiguously enhanced social welfare?

The only honest answer here is that you can’t demonstrate this, and nor can anyone else. So don’t let this apparent ‘reality gap’ in Hefce’s approach put you off. Instead, take a deep breath, hold your nose tightly, and plunge into the process. You will have to accept that you’ll be doing stuff that stinks a bit, and that your hands will get a bit dirty. Make the best, most rational, most evidenced case you can, and let the Panel members grapple with the nature of their ‘grading’. Eschew false modesty and over-scrupulousness, while maintaining as much academic integrity as possible. Recognizing the farcical elements of Hefce’s discourse, you must none the less give your research and your department the best possible hearing they can receive.

The remaining blogs in this series address how to handle Hefce’s demands in detail, beginning with criteria that can help you envision and design your case study as a whole.

Note: This article gives the views of the author(s), and not the position of the Impact of Social Sciences blog, nor of the London School of Economics.

About the author:
Patrick Dunleavy
is Professor of Political Science and Public Policy at the London School of Economics and Political Science, where he has worked since 1979. He has authored and edited numerous books on political science theory, British politics and urban politics, as well as more than 50 articles in professional journals.
Posted In: Impact | REF2014
