
Blog Admin

December 21st, 2018

2018 in review: round-up of our top posts on research evaluation and impact

The concept of research impact pervades contemporary academic discourse – but what does it actually mean?

Research impact is often talked about, but how clear is it what this term really means? Kristel Alla, Wayne Hall, Harvey Whiteford, Brian Head and Carla Meurk find that academic literature discusses research impact but often without properly defining it, with academic discourses mostly drawing on bureaucratic definitions originating from the UK. The authors highlight four core elements that comprise most research impact definitions and propose a new conceptualisation of research impact relevant to health policy.

Why has no other European country adopted the Research Excellence Framework?

Most European countries have followed the UK’s lead in developing performance-based research funding systems (PRFS) for their universities. However, what these countries have not done is adopt the UK’s own system, of which the Research Excellence Framework is the most recent iteration. Instead, many use indicators of institutional performance for funding decisions rather than panel evaluation and peer review. Gunnar Sivertsen has examined systems throughout Europe and finds the REF to be quite unique as a combination of performance-based institutional funding and research evaluation. While most countries do both, they do so in independent setups and with different, less expensive methodologies.

The RAE/REF have engendered evaluation selectivity and strategic behaviour, reinforced scientific norms, and further stratified UK higher education

The UK’s periodic research assessment exercise has grown larger and more formalised since its first iteration in 1986. Marcelo Marques, Justin J.W. Powell, Mike Zapp and Gert Biesta have examined what effects it has had on the submitting behaviour of institutions, considering the intended and unintended consequences in the field of education research. Findings reveal growing strategic behaviour, including high selectivity of submitted staff, the reinforcement of scientific norms with respect to the format and methodological orientation of submitted research outputs, and an explicit concentration of funding.

From invisibility to impact: radically different measures are needed to capture the true impact of research

Academics are increasingly expected to produce directly applicable solutions to hard-to-solve “real-world” problems such as poverty, development, and environmental degradation. However, conventional assessments of science have not yet been adequately adapted to capture the diverse effects of this type of problem-centred research. Examining a prominent recent example of multidisciplinary research on consumption, environment and sustainability in Ireland, Henrike Rau, Gary Goggins and Frances Fahy show how certain narrowly defined measures of scientific relevance can fail to capture the actual impact of research.

Resist? Welcome? Co-opt? Ignore? The pressures and possibilities of the REF and impact

The increased focus on impact in research evaluation represents a range of possibilities and pressures to those academics whose work is being assessed. For some it offers an opportunity to progress social justice causes and engage in participatory, bottom-up research approaches with less powerful groups; to others it is further evidence of the managerial audit culture that is corrupting universities and trammelling academic freedom, and which must be resisted. Robert MacDonald considers both perspectives and suggests that even if the REF is an example of increased governmental control, it might yet provide space to engage in a positive, progressive politics of research.

Hitting the QR sweet spot: will new REF2021 rules lead to a different kind of game-playing?

Universities’ REF 2021 preparations are well under way, with additional guidance published last autumn in the form of new REF rules designed to reduce game-playing behaviours among institutions. However, as Simon Kerridge observes, the rule changes may have introduced, or rather enhanced, some hidden dangers around universities’ FTE and impact submissions. Projections of funding allocations demonstrate why submitting institutions might be given pause for thought, with the incentive to exclude staff in order to stay below an impact case study threshold possibly even stronger than last time.

The hidden costs of research assessment exercises: the curious case of Australia

Research assessment exercises provide the government and wider public with assurance of the quality of university research, with the guiding principles being accountability, transparency, and openness. But is there the same accountability and openness when it comes to the public cost of these large-scale exercises? Ksenia Sawczak examines the situation in Australia as the research sector looks ahead to the new Engagement and Impact Assessment later this year. There seems little doubt this exercise will demand significant resources, with no guarantee it will achieve its stated goal of improving how universities engage with industry. Until the hidden costs of assessment exercises are revealed and a thorough consideration of their general utility is undertaken, questions will remain as to whether they are a responsible use of public monies.

Looming REF deadlines lead to a rush in publication of lower quality research

The increased significance of research assessments and their implications for funding and career prospects has had a knock-on effect on academic publication patterns. Moqi Groen-Xu, Pedro A. Teixeira, Thomas Voigt and Bernhard Knapp report on research that reveals a marked increase in research productivity immediately prior to an evaluation deadline, which quickly reverses once the deadline has passed. Moreover, the quality of papers published just before deadlines is lower, as measured by citations. Those who design research assessments should consider having cycles of varying lengths across different fields, affording researchers the time and opportunity to pursue more novel, risky projects.

A brief history of research impact: how has impact assessment evolved in the UK and Australia?

Over the last couple of decades there has been an international push around the assessment of the wider societal impact of research. Kate Williams and Jonathan Grant document the evolution of research impact assessment in the UK and Australia, and how policies in the two countries have been seemingly interdependent, a back-and-forth process developed through international learning. Continued political commitment to impact assessment is likely in both countries, with debate centred around reducing the costs and burden through the use of impact metrics.

The role of the self in the research process: reflections on researching the REF as a PhD student

In this short, reflective post, Emily Yarrow considers her experiences as a PhD student researching women’s lived experiences of research evaluation in the UK and particularly the anxieties she felt as a junior researcher interviewing very senior, esteemed academic colleagues. It is important to reflect on the role researchers play in the interviewing and data collection process, and also on how gender, gendered power dynamics, and one’s position in the academic hierarchy can potentially affect interactions with participants from the outset.

Impact is crippling higher education. But it is still part of the solution

Now a fixture of the higher education landscape, the “impact agenda” is partly fuelled by a cost-benefit framework that encourages universities to focus on demonstrating the economic value of their interventions. As a consequence, a clear pattern emerges with the government as the main beneficiary of impact, not wider society. Tina Basi and Mona Sloane argue that REF 2021 offers the opportunity to frame a discussion on the purpose of universities that is less focused on economics and more focused on people and public engagement, returning closer to the Humboldtian model of higher education.

Your research has been broadcast to millions – but how do you determine its impact?

The potential of broadcast programming to reach millions of people holds obvious appeal to researchers looking to maximise the dissemination of their work. But when it comes to impact, having vast reach is just one part of the equation – how can the significance of broadcast research be determined? Melissa Grant, Lucy Vernall and Kirsty Hill developed a mixed-methods approach, using questionnaires and focus groups, that sought to measure the impact of health-related research broadcast in two programmes on prime time television. Follow-up work conducted after the broadcasts showed that participants’ understanding of the issues had subsequently been enhanced, with a number revealing that they had changed their behaviours as a result of the research.

Guidance on testimonials and statements to corroborate impact

One of the more compelling forms of evidence submitted to REF2014 by universities looking to demonstrate research impact was the corroborating statement or testimonial from a research user or partner organisation. Stephen Kemp provides clear guidance on what these statements should include and aim to convey, while also sharing advice on how they might be sourced, as well as other, more easily overlooked considerations.

How should we balance the research impact ecosystem?

Currently there is much discussion around research impact as REF 2021 preparations intensify. However, universities that are preoccupied with impact case study submissions to the next exercise may be missing the bigger picture. Jenny Ames emphasises the importance of establishing and nurturing a research impact culture; one that can help a university to achieve its vision more broadly and deliver benefits beyond the REF.

Making research evaluation processes in Europe more transparent

Researchers repeatedly cite career advancement as a key incentive for their practices and behaviours. This is critical to understanding the pace of change in scholarly communications, as those researchers inclined to innovate or experiment with new forms of research outputs, methodologies, or communication styles risk being penalised by the evaluation systems used by many research institutions that are slow to adapt to the modern research environment. Sarah Slowe, Gareth Cole, Jon Tennant and Charlie Rapple are gathering data on current promotion and hiring guidelines used throughout Europe and will analyse how these compare to researchers’ perceptions of “publish or perish” culture and of the impact factor as the key determinant of career advancement. A number of recommendations will follow from this analysis, with the ultimate aim of fostering a more informed evaluation, promotion, and recruitment system for researchers.

There is an absence of scientific authority over research assessment as a professional practice, leaving a gap that has been filled by database providers

Research metrics have become more established as a means to assess research performance. This is understandable given research institutions’ and funders’ demand for assessment techniques that are relatively cheap and universally applicable, even though the use of such metrics remains strongly contested within scientific communities. But to what extent does the academic research field of evaluative citation analysis confer legitimacy on research assessment as a professional practice? Arlette Jappe, David Pithan and Thomas Heinze find that the growth in the volume of ECA publications has not led to the formation of an intellectual field with strong reputational control. This has left a gap which has been filled by commercial database providers who, by selecting and distributing research metrics, have gained a powerful role in defining standards of research excellence without being challenged by expert authority.

Lining up the dominoes: lessons from art research on how to evidence impact

Demonstrating research impact is becoming more and more important, so being able to convincingly evidence that impact is a valuable skill. Lesley Brook has studied how the impact of art research was evidenced during the 2014 Research Excellence Framework and shares lessons also applicable to a broader range of disciplines. While achieving impact is not a simple linear pathway, a complex domino fall can serve as a useful analogy: starting with one domino, a cascade of research outputs and related activities falls along multiple branching pathways, contributing collectively to the overall impact of the research and to the evidence of that impact. Be sure to remove superfluous dominoes, fill in any gaps, and don’t end your lines too soon!

Impact from critical research: what might it look like and what support is required?

As demands for demonstrating impact are increasingly woven throughout the funding and institutional architectures of higher education, concerns have been raised that the impact agenda could adversely affect critical and blue-skies research, favouring instead research that lends itself more easily to societal uptake. Ahead of REF 2021, Ruth Machen considers what impact from critical research could look like and how assessment frameworks could support, rather than squeeze out, space for critical research. Four modes of critical research impact are outlined: challenging policy; empowering resistances; platforming voices; and nurturing new critical publics.

The messy business of impact for the social sciences: fear and failure, stealth and seeds

Failure is an inevitable part of any academic career. This may feel especially true for those researchers working to have an impact on politics and policy, with research work always vulnerable to rejection or disregard. Matthew Flinders explains how such precarity brings into sharp focus the messy business of impact for the social sciences: the great problem of sowing seeds in a political context is that you can never be absolutely sure they will germinate. This situation carries the risk of decisions regarding the investment of institutional resources being taken with an eye not on the intellectual vibrancy of a project or the need to cultivate a culture of engaged scholarship, but on a short-term calculation as to whether the outlay is likely to deliver a high-quality impact case study.

Better, fairer, more meaningful research evaluation – in seven hashtags

Considering the future of research assessment, Elizabeth Gadd outlines how she believes research evaluation could be made better, fairer, and more meaningful. The resulting seven guiding principles, neatly framed as hashtags, range from understanding our responsibilities to researchers as people, through to ensuring our evaluations are a more formative process, offering valuable, constructive feedback.

Could it all be much ado about nothing? A tragicomic perspective on research impact

The contemporary drive to understand exactly how academic research has had an impact on society represents a major undertaking, with significant resources being expended. However, researchers acknowledge there may be occasions where no amount of time, effort, or funds will identify the impact arising from certain research. Given the considerable effort that has been dedicated to research assessment processes, and the challenge of identifying impact that may be less apparent, Joanne Doyle ponders whether it may all be much ado about nothing.

The evaluative inquiry: a new approach to research evaluation

Contemporary research evaluation systems are often criticised for negative effects they can have on academic environments and even on knowledge production itself. Established in response to many of these criticisms, the evaluative inquiry is a new, less standardised approach to research assessment. Tjitske Holtrop outlines the four principles that give shape to the evaluative inquiry’s method: employing versatile methods; shifting the contextual focus away from the individual; knowledge diplomacy; and favouring ongoing engagement ahead of open-and-shut reporting.

The A to Z of writing an impact case study

With submission to REF 2021 now less than two years away, university staff and academics are stepping up work to present their best examples of research impact in the form of compelling impact case studies. In thinking about how to approach writing these documents, Sally Brown has compiled this useful A to Z; from understanding your impact aim, all the way through to capturing the zeitgeist.
