Following on from the recent debate at the ‘From Research to Policy: Academic Impacts on Government’ conference, Jane Tinkler finds that the academic expertise and luck required for a piece of research to be considered valuable by government in policymaking are not valued by the Research Excellence Framework.
This month saw the second major event from the Impact of Social Science team. The conference, From Research to Policy: Academic Impacts on Government, looked at how academic research can impact on the policy process, the problems and possibilities that the relationship between these two groups presents and how the situation may be improved. We had a number of speakers with experience from both within and outside universities (for those interested, podcasts from the sessions are available here).
But the discussion again brought home to me that what government essentially wants from academics is not what the Research Excellence Framework (REF) seeks to capture. And it definitely can’t measure some of the chance ways impact on policymaking happens.
What government really wants from academics is ‘wise advice’. When those in power seek academics out, they usually want the result of experience and expertise built up over an academic’s career rather than just the findings from a particular piece of research. Policymakers want advice that is targeted on particular policy issues. There may also be a time dimension as, for example, external events have brought a particular issue to the fore and they need answers to questions very quickly.
It is this wise counsel that means academics are extensively used by government on advisory boards and expert panels, and as witnesses and panel chairs. Again, the choices for these positions are often based on expertise built up over time. Indeed, these ‘academic service’ roles are sometimes not directly related to the academic’s core research: instead it may be that the academic’s expertise provides a fresh perspective or matches well with others on the panel.
So policymakers explicitly want academic expertise rather than necessarily the results of a specific piece of research (or even set of research findings). However, this expertise and these academic service roles are not always considered in themselves to be evidence of impact by the REF process. Panel C guidance states:
Acting as an adviser to a public body, for example, does not of itself represent impact. However, providing advice based on research findings from the submitted unit, which has influenced a policy, strategy or public debate, would constitute impact if there is evidence that the advice has had some effect or influence.
Academics need to be ‘lucky’ in matching their current research grant applications with potential future policy needs. Most academics that have talked to us as part of the Impact project mention the part that serendipity plays in dealing with government. There are some academics who are good at spotting research trends and put plans, or more importantly funding applications, in place to ensure research is available when it might be of most use in the policy cycle. But it can be that the biggest ‘impact’ from a particular piece of research comes from a chance meeting at a seminar that gets your research into the right policymaker’s hands.
There is also the unlucky academic: those who have been working on a particular research problem that, following an election, is no longer politically popular. This frankly means it will not have an impact on current policymakers. It may influence the debate for those in opposition, or for those bodies who work with and around government. But some research findings will simply not be picked up in certain political climates.
It is of course difficult for any evaluation to take into account the luckiness or otherwise of an academic. But when it comes to having an impact on government, luck seems to be a more than average problem.
The distinctiveness of academic research is not always seen as a benefit in government.
Long-run programmes of academic research that address particular policy problems are a vital resource, yet they can sometimes go unvalued by policymakers precisely because of the time it takes to reach conclusions. We academics also have a reputation for ‘sitting on the fence’ and not making conclusive judgements, which is seen at best as unhelpful and at worst as obstructive. But it is exactly this rigour, comprehensiveness and quality that are the value of academic work. And the lack of one definitive answer reflects the thoughtful nature of trying to look across a range of methodologies and approaches and bring them together in a way that is helpful for complex real-world policy problems. As Huw Davies pointed out at our seminar, research doesn’t speak for itself and it doesn’t stand alone. Working with and creating research findings is a dynamic and iterative process that sometimes involves rethinking what we thought we knew. This is not a comfortable position for policymakers to be in.
It will be a negative and surely unintended consequence of the REF if the work that academics do with government and policymakers cannot be used in the evaluation process. We know that academics felt the Research Assessment Exercise (RAE) pushed them into producing single-author, middle-of-the-discipline, traditional-output work. We also now know that the most impactful work is often multi-authored and sits on the borders between disciplines. Social scientists need to engage with government: academic work, with its rigour, quality and questioning, is valuable in policy debates. It feels very much that we need more academics feeding into government and policymaking, not fewer.
There is a whole research literature on research utilization. I am not very familiar with it but I know that it identifies different types of research use and that instrumental use (that direct kind the REF seems to value) is only about 12% of all research use. The most common kind is conceptual, wherein research helps policy makers (or other users of research) think differently about the issues. Advisory board service and other forms of “wise advice” are crucial here.
It is a shame that the REF does not make use of the research on research utilization in forming its own policies.
I am basing the following comments on this presentation: https://www.publicengagement.ac.uk/sites/default/files/NCCPE%20REF%20update%20February%202012%20FINAL.pdf.
Taken out of context, the quote from guidance for panel C (above) about mere participation as an adviser to a public body not in itself being sufficient to constitute impact sounds restrictive. Only advice that produces some effect on public debate or policies would count as having an impact. Thus, it is not mere expertise that counts, but rather expertise that is actually useful to or used by policy makers that would count as having an impact.
Whether one’s advice is actually used by policy makers in crafting a policy may be in part a matter of luck — one has to have picked the right questions and perhaps even have come up with the right answers in order to be useful to certain policymakers. But I am not convinced that it’s all luck and no skill. Moreover, as I understand the REF, there is no requirement that every piece of research conducted by every researcher (even those in the areas covered by panel C) have an impact.
Responding to the REF is thus something that requires universities (not individual researchers acting alone) to put together the research that HAS HAPPENED to have had an impact. Whether a particular piece of research happens to have an impact may involve a bit of luck; but demonstrating that research has had an impact is a matter of crafting a convincing account. On the face of it, then, it seems reasonable to me to say that citing mere participation as an adviser to a public body is not in itself evidence of impact. One would need to say more to give a convincing account.
The key question is whether the REF provides ways that universities can make a case for the impact of the research they choose as that which best demonstrates impact. Universities need room to maneuver, and whether the REF provides sufficient room is the key question we ought to ask in assessing the assessment framework. So, let’s see.
First, the REF does not provide a list of impacts. That’s a good thing, since it provides some room to maneuver (as we suggest here: https://blogs.lse.ac.uk/politicsandpolicy/2012/03/17/impact-accountability-holbrook-and-frodeman/). HEFCE seems to realize the force of this point (from slide 33 of the presentation linked above):
“In drawing up its assessment criteria and the advice to submitting institutions, the main panel agreed that providing HEIs with detailed lists of impacts and evidence and/or indicators for those impacts would be unhelpful because these could appear prescriptive or limiting.”
Second, what universities need to provide is a narrative account that links research to some sort of impact (effect on the audience). That’s usefully vague. Most importantly, these narrative accounts will be judged by peers. These peers ought to appreciate the fact that providing absolutely definitive evidence of impact is a tall order. What’s needed is judgment, and peer judgment seems a fairer way of operationalizing that than a predetermined set of supposedly objective indicators of impact. This is not to suggest that indicators might not prove useful in helping to inform such judgment; but they shouldn’t be used alone, precisely because they limit the peers’ freedom to exercise their own judgment.
Again, the REF seems to have got this right (from slide 60 of the presentation linked above):
“Within their narrative account in the case study, institutions should provide the indicators and evidence most appropriate to the impact(s) claimed, and to support that chain. The sub-panels will use their expert judgement regarding the integrity, coherence and clarity of the narrative of each case study, but will expect the key claims made in the narrative to be supported by evidence and indicators.”
So, one must construct a narrative account that will be susceptible to expert judgment. ‘I was an adviser to government’ clearly does not meet that standard, but that seems reasonable. What’s needed is something more to construct a plausible narrative. To continue the quote from slide 60:
“The main panel anticipates that impact case studies will refer to a wide range of types of evidence, including qualitative, quantitative and tangible or material evidence, as appropriate. Individual case studies may draw on a variety of forms of evidence and indicators. The main panel does not wish to pre-judge forms of evidence. It encourages submitting units to use evidence most appropriate to the impact claimed.”
That’s precisely the sort of room to maneuver that’s necessary for the REF to allow. Luck may play a role in which research a university selects to demonstrate impact. But that doesn’t mean that it’s impossible to construct a narrative account of impact that is susceptible to positive judgments from an expert panel. Does it?
I’ve published a response that goes a bit beyond a comment here: http://cas-csid.cas.unt.edu/?p=3285.
The questionable validity of the impact criteria isn’t the only problem. The exercise of preparing impact statements is distracting academics from their core activities and taking up inordinate amounts of time and money that HEIs can ill afford. See http://tinyurl.com/6olcfaq
@deevybee You write:
“For the REF 2014, the rules have now changed, so that we don’t only have to document what research we’ve published: we also have to demonstrate its impact. Impact has a very specific definition. It has to originate from a particular academic paper and you have to be able to quantify its effect in the wider world.”
I think this overstates the burden on researchers, as I understand the REF. First, not every researcher must demonstrate impact. Second, impact has a usefully vague definition, not a very specific one. Third, you don’t have to be able to quantify the effect of a single paper — all sorts of arguments for impact are available to be included in the narrative account.
The fact that universities are taking the REF seriously strikes me as a good thing, rather than the waste of time and money you suggest. Here’s a radical suggestion: what if academics took impact seriously, as something that they ought to be doing as part of their jobs? Then documenting it might seem less burdensome. And if one actually has an impact, then one might even see the REF as an opportunity rather than as a burden.
At York University (Toronto, Canada) we have brokered many wonderful research collaborations with non-academic partners that have resulted in significant impact. One graduate student intern produced a “heat registry” with an inner-city neighbourhood, and that research is now informing the cooling policies for all 2.5 million Torontonians. One small collaboration on evaluating a new way of providing immigrant settlement services resulted in the creation of 86 new jobs and an expanded service model for new Canadians. A collaboration between the university and a community agency resulted in a Green Economy Centre that provides green business services to rural businesses, creating 18 jobs and sustaining 220 jobs. And none of these impacts came from the outcomes of a research program. They were collaborative projects, many dependent on students not faculty. Amazing outcomes that produced real-world impacts, yet they might not be REF-able under its narrow rules.
@David Phipps
I keep going back to documents published about the REF, such as this: http://www.hefce.ac.uk/research/ref/pubs/2011/02_11/02_11.pdf.
Here is what it says in Part 3, Section 3, Paragraph 146:
“146. The REF aims to assess the impact of excellent research undertaken within each submitted unit. This will be evidenced by specific examples of impacts that have been underpinned by research undertaken within the unit over a period of time, and by the submitted unit’s general approach to enabling impact from its research. The focus of the assessment is the impact of the submitted unit’s research, not the impact of individuals or individual research outputs, although they may contribute to the evidence of the submitted unit’s impact.”
Were the students in the examples you cite not affiliated with a research program at all? Or could a case be made that, in fact, they were? Were these not students at a university? Were these students not engaged in learning through research? And was that research being performed by something that would count as a research program? Maybe these students went off totally on their own, with knowledge they gained from the internet rather than from school, and did great things. In that case, no case could be made that the HEI’s research program had anything to do with the impact. On the other hand, maybe there is a connection that could be made.
And the impact need not be connected to a specific research output, according to what HEFCE’s guidance actually says.
@J. Britt Holbrook
The relationships and collaborations we broker are opportunistic and not part of an ongoing research program. Sometimes the collaborations provide research and evidence that result in a paper or a thesis; other times the work is more of a one-off and the results have a greater community impact than academic impact. Of concern for me is the line “underpinned by research undertaken within the unit over a period of time”. As one-off, instrumental instances of research, I am not certain they would stand up as the traditional scholarship against which one could measure extra-academic impact.
@David Phipps
Well, now we are getting philosophical, which suits me fine. 😉
I think one implication of impact requirements (whether ex ante, such as RCUK’s Pathways to Impact, or ex post, such as in the REF) is that our notion of ‘traditional scholarship’ has to be rethought.
The relationship between academic impact and broader societal impact also needs to be rethought.
It is precisely here that impact policies provide us with an opportunity. Maybe we should rethink our standards of academic rigor. Maybe we should reconsider criteria for promotion and tenure. And maybe we should leverage impact policies to assist us in rethinking the relationship between the production of knowledge and its use. Questions like these are what make websites like yours and this very site so interesting. Maybe blogging should count; maybe Twitter should count; maybe having more impact outside the academy than inside should count more.
We should also take advantage of the leeway we have with impact criteria to help shape how impact policies are interpreted and enacted. Based on a cursory look at the heat registry case, I would bet that a narrative account could be constructed that would link the student to the research program of which she is a part. Her program generally does that type of environmental studies research, and it encourages community engagement — so the impact her program has had should certainly count.
I guess what I am suggesting is getting inside the criteria and moving around to create a better fit, rather than refusing to try them on until one gets a bigger size.