
Michael Taster

December 21st, 2021

2021 In Review: Measuring and Assessing Research


Estimated reading time: 6 minutes


The second of this year’s annual reviews focuses on research assessment. Examining the quantitative and qualitative measures that govern and shape research, impact and the management of higher education institutions, this list brings together a selection of posts featured on the LSE Impact Blog in 2021.


The Absurdity of University Rankings:

University rankings are imbued with great significance by university staff and leadership teams, and the outcomes of these ranking systems can have significant material consequences. Drawing on a curious example from their own institution, Jelena Brankovic argues that taking rankings as linear-causal proxies for quality or performance is a fundamentally ill-conceived way of understanding the value of a university, particularly when publicly embraced by none other than scholars themselves.


An alternative approach to measuring community engagement in higher education:

Universities across the globe are increasingly being called on to contribute to their surrounding communities and regions, especially as they are mobilised in response to the impact of the COVID-19 pandemic. Reflecting on these emerging demands within Europe, Thomas Farnell presents the TEFCE project and its role in developing a European framework for community engagement in higher education, one based on a qualitative and participatory approach rather than one driven by metrics. Outlining the aims of the framework and what makes it different from previous approaches, he explains why the time might be right for such a project to embed itself in a European context.


Impact Monoculture – Are all impact case studies the same old story?

The impact of quantitative research measures on academic behaviours has been widely discussed, but the impact of qualitative assessment regimes is more often thought of as benign. Drawing on an analysis of impact case studies submitted to REF 2014, Justyna Bandola-Gill and Katherine E. Smith explore how the narrative turn in research assessment has created four distinct narratives for impact case studies. Finding that these narratives diverge from more complex accounts of real-world impact, they assess the relative value of this ‘governance by narrative’.


Multilingualism is integral to accessibility and should be part of European research assessment reform:

Developing research systems that promote diverse, multilingual and relevant research for different audiences is a key and often overlooked element in making research accessible. However, biases in traditional research assessment often place researchers looking to produce multilingual research outputs at a disadvantage. Reflecting on the European Commission’s recently published aims for the reform of research assessment, Janne Pölönen, Emanuel Kulczycki, Henriikka Mustajoki and Vidar suggest that the omission of multilingualism from this agenda risks undermining the project’s aims of supporting high-quality and accessible research.


Industry not harvest: Principles to minimise collateral damage in impact assessment at scale:

The recent institutional submissions and the conclusion of the first phase of the REF, coupled with the announcement of a wide-ranging review of research assessment in the UK, have provided space for renewed thinking on the state of research assessment. In this post, Julie Bayley, Kieran Fenby-Hulse, Chris Hewson and Anne Jolly present reflections on the wider systemic effects of research and impact assessment within higher education institutions during the most recent round of the REF, and discuss how principles derived from these observations might inform an approach to research assessment that is more inclusive, more consistent and less prone to unintended consequences.


Peer review for academic jobs and grants continues to be shaped by metrics, especially if your reviewer is highly ranked:

The aim of peer review for research grants and academic hiring boards is to provide expert independent judgement on the quality of research proposals and candidates. Based on findings from a recent survey, Liv Langfeldt, Dag W. Aksnes and Ingvild Reymert find that metrics continue to play a significant role in shaping these decisions, especially for reviewers who are highly ranked themselves.


Without a clear sense of purpose, what is the future of national research assessment exercises in Australia?

Commenting on the recent review of Australia’s ERA and EIA research assessment exercises, Ksenia Sawczak argues that they lack a clearly defined purpose or return on investment for Australian universities. In a climate of declining trust in the Australian Research Council, together with a confused idea of how research should be funded, she suggests the assessment regime itself is at a critical juncture.


The REF’s singular focus on excellence limits academic diversity:

Research assessment exercises such as the REF ostensibly serve to evaluate research, but they also shape and manage it. Based on a study of REF submissions in the fields of economics, history, business and politics, Engelbert Stockhammer argues that the REF promotes a narrow vision of economics determined largely by work published in particular journals, and calls for a wider distribution of research funding to prevent fields being captured by dominant academic cultures.


Creating what we seek to measure – How to understand the performative aspect of impact evaluation?

A common feature of evaluation mechanisms across many fields of activity is the influence they have on shaping perceptions and practices within them. In the UK, a key argument in favour of the inclusion of impact within the REF has been the way in which, for better or worse, it has sensitised UK academics to how research can impact society. In this post, Jorrit Smit and Laurens Hessels draw on a recent analysis of different impact evaluation tools to explore how they constitute and direct conceptions of research impact. Finding a common separation between evaluation focused on scientific impact and evaluation focused on societal impact, they suggest that bridging this divide may help produce research that has public value, rather than research that merely achieves particular metrics.


When it comes to gender inequality in academia, we know more than what can be measured:

In academia, gender bias is often framed in terms of research productivity and differentials surrounding the academic work of men and women. Alesia Zuccala and Gemma Derrick posit that this outlook inherently ignores a wider set of variables affecting women, and that attempts to achieve cultural change in academia can only be realised by acknowledging variables that are ultimately difficult to quantify.


We won’t get to a more equitable knowledge ecosystem if we don’t have more equitable ways to assess research and knowledge:

The ways in which research quality and research impact are defined and measured are deeply embedded in practices and concepts derived from the Global North. Drawing on examples from the Global South, Jon Harle argues that a fundamental shift is required that understands the value of research – and the institutions producing it – according to the contexts in which knowledge is needed, produced and used.

 


Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns about posting a comment below.

Image Credit: Adapted from Diana Polekhina via Unsplash. 



About the author

Michael Taster

Michael Taster is the managing editor of the LSE Impact Blog.

Posted In: Annual review | Measuring Research | Research evaluation
