Alongside petitions against the REF, we have also seen the growth of campaign groups that promote the impact of academic research. Simon Smith charts the concerns and counterarguments made by HEFCE and its critics and ends up finding cause for optimism.
My interest in the REF really began when I became aware of the UCU petition against the inclusion of impact in HEFCE’s new framework for research assessment, which in 2009 eventually attracted nearly 18,000 signatures, including those of 3,000 professors. Perhaps the key phrase was that “researchers [must] enjoy academic freedom to push back the boundaries of knowledge in their disciplines”, unhindered by considerations about social and economic benefit.
What struck me as interesting about recent debates on the status of scientific research was a sort of return to the Victorian era: a renewed concern for the autonomy of the scientific enterprise, accompanied by some divergent rhetorical strategies. So alongside the UCU petition we also saw campaigns like Making the Case for the Social Sciences and Science is Vital, unashamedly claiming how useful research can be in the disciplines they represent.
Indeed the REF consultations enacted a micro-cycle of their own: respondents to the 2008 consultation invoked the social and economic value of research to highlight one of the weaknesses of bibliometrics, but once this devil was replaced by the spectre of impact, respondents to the second consultation tended to engage in purifying boundary work, opposing a scholarly ‘pull’ to the government ‘push’ for impact measurement.
Academic concerns meet HEFCE’s response
We published a paper that analysed in greater detail the arguments deployed in the second public consultation. Our analysis of the responses identified several predominant concerns in the academic community:
1. ‘Expert review’ and the threat to field autonomy
HEFCE’s line: users should be involved in the assessment of impact; bringing in peer users does not threaten academic autonomy – on the contrary, it could extend quality control further downstream from knowledge exploration to exploitation.
Counter-arguments: researchers as a professional community have a right to self-regulation; practical difficulties of achieving a ‘shared understanding’ of quality, without which evaluation would lack rigour and confidence.
2. Debates about how to handle the collective dimensions of knowledge production
HEFCE’s line: coherent research units should integrate exploration and exploitation, or at least have a system for connecting the two sets of activities.
Counter-arguments: fine, but the template of the REF is too rigid to account for networked knowledge production.
3. ‘Building on’ excellent research: attachments and detachments
HEFCE’s line: excellence is paramount and any impact has to be traceable back to preceding excellent research.
Counter-arguments: in that case, the REF will have difficulty capturing the variety of interactive processes through which knowledge translations occur – the template is only really suited to simple excellence/relevance combinations and to those with a linear tale to tell.
What we saw were two sets of arguments: arguments against the inclusion of impact, and arguments from those who favoured its assessment in principle but were worried about the crude way HEFCE was proposing to carry it out. The arguments against the inclusion of impact evaluation were made either at the level of professional jurisdictions or at an epistemological level.
The professional jurisdictions of research
The professional jurisdictions argument claimed that it isn’t legitimate to evaluate research according to a criterion that does not relate to a certain, quite narrow definition of the researcher’s craft. The epistemological argument claimed that evaluation will have counter-productive effects insofar as it influences research choices in ways that discourage the kinds of curiosity-driven research that, history shows, have often led to innovations with far-reaching but at the time unpredictable social, economic or cultural consequences.
Predictably, the argument about professional jurisdictions was taken up most forcefully by well-established academic disciplines positioned furthest from the ‘user’, such as some of the physical sciences and the humanities; and by representatives of elite institutions with the strongest interest in protecting ‘quality’ criteria and generally favourable to the concentration of resources that has resulted from RAEs.
Equally predictably, however, the impact agenda has been supported by members of disciplines whose research interests position them close to the user, such as the health, medical and pharmaceutical sciences or educational researchers.
Indeed, HEFCE found some improbable allies. For example, several papers were published by geographers who favoured radical co-productive / participatory research approaches, and who hoped that the evaluation of impact would strengthen the incentives to invest in reciprocal relationships with communities by adding academic value to collaborative / public engagement activities that generally go unrewarded in career terms.
The durability of curiosity-driven research
The epistemological argument seems more incontrovertible at first sight. The idea of curiosity-driven research is almost sacrosanct to many scientists, and the history of science is littered with examples of breakthrough discoveries which had no predictable application at the time of the exploratory research, and which, the argument goes, could only be contemplated because the very institutional structure of science contains protective enclaves in which experimental research can go on without having to demonstrate any immediate economic or social value. It is these enclaves that would be eroded by the incentive system put in place by impact evaluation, claimed HEFCE’s opponents.
Is this a realistic fear? Organisational studies show that enclaves which protect nascent ideas from sceptical scrutiny – scrutiny that could veto them before they have a chance to prove themselves – are found in abundance in commercial research environments. If a commercial imperative doesn’t eliminate them, why should a 20%-weighted, retrospectively-assessed impact imperative? More fundamentally, we need to ask: is ‘exploration’ really best served by divorcing it from considerations of ‘exploitation’? Should we isolate for evaluation just that portion of the chain of investments required to move from a ‘pure’ scientific research finding to its most distant or banal use, or should the evaluation mechanism take a perspective on the whole value chain?
A cause for optimism?
My position is that it’s better to have a holistic perspective, not only because the REF is used to distribute public money and therefore ought not to be indifferent to measures of ultimate use, but also because it can encourage a more reflexive science.
If only curiosity-driven researchers were aware of just how entangled their everyday technical decisions are with society, they might be more confident when it comes to thinking about questions of societal relevance! From this perspective, thinking about impact is a crucial phase in all scientific investigation, since any research process involves a range of ‘collaborators’, whose selection conditions not only the social effects of knowledge but the type of knowledge that will be produced.
My optimistic suggestion has therefore been to look upon the REF as an institution capable of enhancing the reflexivity of researchers with regard to these ties with the ‘real world’, ties that exist anyway but are often not routinely recognised. Could the REF help to embed impact considerations among the routine reflexive tools for the research process?
I have reservations about how the tool is calibrated at present – in particular, its implicit assumptions about linearity and its insistence on attributing impacts to identifiable pieces of research in particular research units are at odds with the messy, collaborative nature of most knowledge creation processes. But I think it is possible to take advantage of the REF – to work between its lines, individually but especially collectively – in order to get beyond the audit game and approach it as an exercise in reconstructing the chains of knowledge translation that our past research enacted. This would serve as both an accounting exercise and an ‘enlightenment tool’ which could help us to visualise and critically review the practical engagements we have entered into throughout our research experiences.