The inclusion of impact measurement in the 2014 REF has generated anxiety and unease for academics, especially those in the humanities. This anxiety is connected to the university’s need to preserve disciplinary autonomy, writes Jon Adams, who considers how impact passes a crucial element of control out of the hands of the departments and into the hands of the public.
Among academics, there is apparently a widespread resistance to governmental assessment. The decision to include ‘impact’ as part of the 2014 Research Excellence Framework has been met with a special degree of scorn and derision.
Alongside the worry that some disciplines will be better equipped to fulfill these new obligations, there is also a particular concern that including ‘impact’ as a criterion will inevitably lead to vulgarization – either in the modern vernacular sense of debasement through skewing to prurience, or in the only slightly less snooty Victorian sense of sharing rarified thought with the vulgate.
Still, not everyone seems so concerned, and we can probably infer from the silence of medical researchers, petrochemists, legal scholars, and engineers a quiet confidence in their ability to score well on the new criteria. It is, after all, easier to imagine and to measure impacts proceeding from engineering or medical research – their productions are intended to achieve tangible goals. In other words, impacts are not an incidental feature of the work they do in the universities but their primary aim.
It is the humanities (and to a lesser extent, the social sciences) that have been most vocal in their protestations, because they feel most vulnerable. Measurement of any kind requires quantification, and the work of the humanities is usually held to be inimical to the types of reduction that quantification requires. The humanities are notoriously difficult to assess – a macro-level difficulty that runs all the way down to the fuzzy marking of undergraduate essays and exams.
A threat to the constitution of universities?
The ways in which the humanities have articulated their anxieties about the REF 2014 are actually quite interesting for other disciplines, and point to ways in which the resistance to assessment generally, and to impact in particular, emerges from principles fundamental to the constitution of the modern university.
Stefan Collini’s 2009 response to the announcement that impact would be included as part of the next assessment exercise is exemplary for my purposes. Collini is derisive about impact, especially its applicability to the humanities. He concludes, as such arguments often do, with a foundationalist assertion that the value of the humanities is self-evident and irreducible.
Collini’s more particular concern is that impact is an external measure – it records successes that have occurred outside the discipline. Understanding this is important for understanding the special objection impact elicits. An academic who is being measured by external values (televisual popularity, for example) has effectively slipped the regulatory yoke of the discipline he or she nominally represents.
It’s a worry because participation in public engagement is vetted by systems of assessment that intersect only incidentally (if at all) with those used by the disciplines themselves, and in so doing, bypass the universities’ own mechanisms of quality control. In other words, you needn’t be an especially good practitioner to be a good populariser.
Part of the anxiety surrounding the impact criterion stems from exactly this sense that external forces will be allowed to intrude on the universities’ in-house systems of quality assessment. Not only this, but by effectively forcing public engagement, the impact criterion removes the sort of independence from the market forces of public engagement that previously kept the knowledge productions of academics safe from capture and distortion by the vicissitudes of populism and current affairs.
The worry is that an academic who is encouraged to appeal to a large number of people will begin to make deliberately provocative statements. Like many complainants, Collini expects impact will act as an incentive to undignified behaviour. He pointedly calls this “hustling and hawking.” Underlying these complaints is an abiding sense that the wrong people were being privileged by the RAE and that the REF will – by adding this new impact criterion – only exacerbate that problem.
Collini is well aware that the REF’s ultimate goal is funding allocation, and recognizes that the purpose of scoring departments is to allocate resources equitably, such that the “highest-scoring departments then receive a greater share of the funding.” The idea that only peer review matters – the fetishisation of that idea – provides a consoling buffer against the greater exposure and salaries available to journalists, thinktank employees, and R&D people doing superficially similar work outside the universities. And for those subject to it, talking up the strictness of peer review is a compliment reflexively paid to oneself.
University systems of quality control lead to distinctive identities
This goes some way to accounting for why impact in particular is so vilified. But it also points to why assessment generally is unpopular. Individual disciplines want to retain their own systems of quality control because that is how they maintain their distinctive identities. This idea, in essence, is the foundation of the principle of disciplinary autonomy, which is the right of each field to define its own tenets of success. In the absence of an overarching theory of knowledge and vocabulary capable of mediating between disciplines, individual disciplines set their own epistemic standards. Disciplinary autonomy allows individual disciplines to decide what counts as a good idea, and who is a good thinker.
In the main, that system works perfectly well. It works to insulate disciplines from the imperialism of neighbouring disciplines (it stops economists from telling sociologists they’re doing it all wrong), and in so doing it allows a healthy methodological pluralism to flourish.
How does one compare a physicist with a legal theorist with a literary critic? However invidious that question might seem, in the absence of even a rough answer, it’s not clear how we should decide how much support economics departments should receive in relation to departments of biochemistry or history. Given that this sort of ranking is exactly what funding bodies at the governmental level are required to do, we have a better sense of why the assessment systems (generally) are so unpopular.
There is here a fundamental tension between the demands of disciplinary autonomy – which seeks to obey only internal criteria – and the demands of centralised funding, which must generate metrics capable of arbitrating between achievements in widely disparate fields. It is a tension between the need for governments to have some means of allocating finite resources equitably, and the need for disciplines to be independent of trans-disciplinary metrics.
So the complaint isn’t simple protectionism as such – it’s not just that universities shouldn’t have to compete on these grounds, it’s the requirement that this crucial tenet of academic self-rule be surrendered. The worry, so far as I can make out, is that impact passes a crucial element of control out of the hands of the departments, and into the hands of, well: you.
The REF, and other research evaluation exercises around the world, do not replace “disciplinary autonomy” in determining what counts as quality research. In fact, I believe they are now favouring peer review over citation metrics. But on the issue of cross-disciplinary evaluation, if an economist thinks that a social science idea is economically unviable, and the economist’s criticism is supported by evidence and validated by peer review, then this is a valid criticism of the social scientist (and vice versa). The real problem is that the critical citation still contributes to the positive measurement of impact. One day, when bibliometrics research develops qualitative metrics that take advantage of textual analysis software to measure criteria like citation sentiment, the percentage of ideas supported by evidence, whether or not a research method contains the expected elements of that method, and the success rate of hypotheses and predictions, we will witness a new suite of metrics that goes beyond both the vulgarity of quantitative citation metrics and the subjectivity of selective peer review panels.
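To make that speculation concrete, here is a minimal, hypothetical sketch of how citation-sentiment scoring might look in practice. It is not any existing bibliometric tool or method: the cue-word lists, the three-way classification, and the aggregation rule are all illustrative assumptions, intended only to show how a critical citation could be made to count against, rather than towards, a measure of impact.

```python
# Hypothetical sketch: score the "sentiment" of citation contexts with a tiny
# hand-made cue-word lexicon. The cue lists and the aggregation rule are
# illustrative assumptions, not an established bibliometric method.

SUPPORTIVE_CUES = {"confirms", "supports", "extends", "consistent with", "builds on"}
CRITICAL_CUES = {"fails to", "overlooks", "contradicts", "disputes", "is refuted"}


def classify_citation(context: str) -> int:
    """Return +1 (supportive), -1 (critical) or 0 (neutral) for one citing sentence."""
    text = context.lower()
    if any(cue in text for cue in CRITICAL_CUES):
        return -1
    if any(cue in text for cue in SUPPORTIVE_CUES):
        return +1
    return 0


def sentiment_weighted_citations(contexts: list[str]) -> int:
    """Aggregate citing sentences so that critical citations subtract from 'impact'."""
    # A raw citation count would simply be len(contexts); here criticism counts against.
    return sum(classify_citation(c) for c in contexts)


if __name__ == "__main__":
    citing_sentences = [
        "This result is consistent with Smith (2010).",
        "Smith (2010) fails to account for selection effects.",
        "We follow the sampling procedure of Smith (2010).",
    ]
    print(sentiment_weighted_citations(citing_sentences))  # +1 - 1 + 0 = 0
```

Under a scheme like this, the economist’s evidence-backed criticism in the example above would no longer register as a straightforwardly positive contribution to the social scientist’s citation count.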