Academics should be engaged with the wider world, but impact, if it is routinised, loses its potential to change the dynamic of a system. Chris Hackley writes that research which influences policy should be celebrated, but that we must be wary of the risk that the measurement of impact will begin to define what is to be measured.
On the face of it, it doesn’t seem much of an imposition for universities to demonstrate that a small proportion of their research is noticed by the world at large. We are publicly subsidised institutions, after all, and accountable to the taxpayer. In fact, we really ought to play a bigger part in public policy debate than we do. But does the way this engagement is framed by HEFCE as ‘impact’ risk the intellectual neutrality of universities?
More specifically, if universities self-consciously pursue policy-related research in order to maximise revenues from impact, are they effectively conscripting their academics into the dominant ideology of government?
I’ve been fortunate in my career never to have experienced an ‘over-managed’ research environment, and at Royal Holloway independent research is especially well-supported. But it seems likely that, over time, the measurement and funding of impact could exert an influence on university or department missions and research strategies.
It may be inevitable that the measurement of impact will change universities’ behaviour, just as the presence of researchers changes the phenomena being investigated. The amount of funding attached to impact is not insignificant, and there is already prima facie evidence that the REF is having an effect on resource allocation and research priorities. Some universities responded in pretty left-field ways to previous RAE initiatives, so it is unsurprising if they over-think and over-manage impact. It’s up to Vice Chancellors if they want to hire hacks to write their impact case studies or policy wonks to shape their research agenda, although it’s probably unnecessary. Similarly, grant-awarding bodies that ask for an impact plan with proposals are probably just wasting everyone’s time, but there may be other risks in trying to ‘over-manage’ impact.
I’ve written two impact case studies for my school of management to consider for the REF. In each case, the research began without any thought of impact or policy engagement. What is more, the findings ran up against the prevailing government line at the time. I think there is, in each, a powerful circumstantial case that a contribution to public understanding was made that eventually filtered through to government and business policy circles, but it seems to me that the integrity of this is vested in the fact that our findings suggested that government policy (in two quite different spheres) was initially misconceived in important respects.
On the one hand, I feel slightly shamefaced about crafting these impact cases, after the fact, in terms of a goal that was never part of the initial motive for doing the research. It’s probably unwise to admit to fortunate timing or serendipity when rationalising impact, but both played a part. On the other hand, isn’t there an added integrity to incidental impact? Shouldn’t an ‘impact’ disturb equilibrium? That being so, research that is conceived for its impact potential might disappear up its own axiology. Impact, routinised, loses its potential to change the dynamic of a system. If a study is designed to dovetail into existing policy, is its value compromised? Designing impact into research might also be designing impact out of it.
There is another danger of trying to design impact into research. Impact is more than media coverage, but the latter is often implicated in the former, at some level, either as a proxy for contributions to public debate and understanding, or as a matter of public record in minutes and reports. The supporting evidence I’ve marshalled in my impact cases includes URLs to media coverage and to PDFs of consultation and policy documents. URLs make good evidence, I should think. But there is a process of selection that takes place in the way research findings are noticed, written about, referenced and shared in media stories, in policy papers and on the internet, by researchers, journalists, policy analysts and so on. This selection is inevitably framed by institutional values and imperatives.
A liberal-pluralist view might hold that pushing universities to engage with public policy debates is a good thing for democracy. As I said at the beginning, I think we should have our conversations with the wider world as well as with each other, and trying to engage the public with research can be salutary for academics and, I think, valuable to all. A critical Marxist viewpoint, on the other hand, would suggest that institutional bias, unconsciously internalised by journalists, researchers and other actors, predisposes the selection of findings towards those that fit the prevailing ideology. Research becomes noticed, repeated, shared and incorporated into policy fora not because it challenges the prevailing assumptions, but because it supports them. If academics touch this Petri dish of ideology, the argument goes, we too might become infected.
There is sometimes an assumption that universities merely furnish policy debates with evidence bases, but evidence is never neutral. In a policy context, epistemology becomes political, and evidence merely another rhetorical device. Herman and Chomsky’s propaganda model, as a kind of trade-off between the two extremes, might offer a dialectical version of how research distils through interlocking media and policy arenas. Policy contributions from research, in the form of papers posted on the web or presented at policy meetings, blogs or press releases, shared, tweeted and blogged, bounce back and forth between ideological resistance from institutional forces on the one hand, and the liberal imperative to texture media debate with heterogeneous views on the other.
That a small proportion of university research might have an influence on cultural or economic policy or public understanding is something to be highlighted and celebrated, and perhaps built into accountability systems. After all, universities have always done this anyway. What we must beware of is the risk that the measurement defines what is to be measured.
Note: This article gives the views of the author(s), and not the position of the Impact of Social Sciences blog, nor of the London School of Economics.
About the author:
Chris Hackley established the Marketing teaching and research group at Royal Holloway, University of London, as its first Professor of Marketing, in 2004. His research and writing take a critical and socio-cultural stance on marketing and consumer cultural policy issues. Recent published work includes a Bakhtinian analysis of youth binge drinking, a critical appraisal of Ofcom’s new paid-for TV product placement rules, and an anthropological analysis of the fading TV phenomenon that was X Factor.