
Claire Fraser

Elizabeth Gadd

December 2nd, 2024

Unanswered questions in research assessment 1 – Whose values lead value-led approaches?



Reflecting on the ongoing reform of research assessment, Claire Fraser and Elizabeth Gadd question some existing approaches. Here they discuss how reform efforts may need to reconsider the usefulness of value-led strategies.


This is the first part of a three-part series written with Noemie Aubert-Bonn, Haley Hazlett, and Karen Stroobants, on Unanswered questions in research assessment.


This is the first of three reflections from a group of women who have dedicated a significant part of their careers to advancing international reforms in research assessment. Collectively we have worked for the Declaration on Research Assessment (DORA), the Global Research Council (GRC) Responsible Research Assessment (RRA) Working Group, A Global Observatory on Responsible Research Assessment (AGORRA), the Coalition for Advancing Research Assessment (CoARA) and the International Network of Research Managers (INORMS) Research Evaluation Group. All are international organisations, but all were largely born, led, and/or based in the Global North.

Our passion for this work emanates from a deep desire for equity and inclusion across a research ecosystem that is hampered by inadequate research assessment. However, across these endeavours there is an underlying universalist logic: given the global nature of research, any attempt to reform the way research is assessed must also be global. Whilst a global approach to reform is essential, it is ironic that many of the original calls for reform emanated from privileged English-speaking institutions in the Global North. In all of these roles we sincerely sought to engage with global counterparts, but what was not obvious at the time was the incongruity of trying to embed equity at the heart of research assessment from an inequitable foundation.


what was not obvious at the time was the incongruity of trying to embed equity at the heart of research assessment from an inequitable foundation. 

The first challenge we face in this respect is that our approaches to responsible research assessment often prize normative ‘value-led’ approaches that might not translate on a global scale. Many of the foundational principles of responsible research use terms such as ‘transparency’ and ‘humility’, which may seem self-evident, but can be perceived differently in other parts of the world. The INORMS SCOPE Framework for responsible evaluation has as its first stage ‘Start with what you value’, and the Humane Metrics Initiative has as its strapline ‘Live your values’, perhaps in the expectation that, firstly, we can agree that a value-led approach is the right approach; secondly, that we have enough autonomy as evaluators to impose our own values on evaluations; thirdly, that if we work hard enough, we can all agree on what we value; and fourthly, that what we value is likely to look different to what we actually assess in most evaluations. Let’s take each of these assumptions in turn before we return to our own motivations and advocacy.

Is a value-led approach the right one?

Value-led approaches resonate strongly with institutions in the Global North. It is common for research performing and funding organisations to have a values statement, and if an individual is described as having strong values, this is high praise. Yet, the value placed on ‘values’ and what is ‘valued’ differs around the globe. What if evaluators say they only value citations and grant income? What then? By taking this approach, we impose a specific cultural framing on the development of better research assessment approaches that may not only be presumptuous, but may ultimately impede the movement’s effectiveness.

Do evaluators have the autonomy they need to impose a value-led approach?

Researchers and research performing institutions are subject to external pressures and expectations that limit their independence. In some contexts, there are legal constraints. For instance, academic freedom is not clearly defined or universally applied. In some contexts, researchers enjoy the freedom to explore a wide range of topics and methodologies; in others, academic freedom may be constrained or challenged. The same is true of those seeking to assess research. Some institutions will have the freedom to set their own recruitment, promotion and tenure criteria. In others, these are set at governmental level. Even where institutions do have the autonomy to be flexible about their assessment mechanisms, it is likely only those in senior positions have the authority to make decisions.

So how can evaluators set value-led assessments if they are having to bend to the rules and expectations of others? The differences in autonomy levels influence the type of research undertaken and the criteria and methodologies with which research and researchers are assessed.

Can we agree on what we value?

Another challenge is that, because the responsible research assessment movement has roots in the Global North, the terms of the debate were set in these countries about a decade before others had the chance to contribute. Institutions in the Global North have been exposed to the debate and have had a chance to discuss, define, and embed forms of research assessment that align with their values and norms.

As we discuss elsewhere in this series, even where such thinking has led to agreement on the need for a values-led overhaul of assessment practices, the reasons that organisations, including in the Global North, engage in reform remain diverse. When conversations with global participants then take place, there can be misconceptions that we are all on the same page: that we all agree that the Journal Impact Factor is bad, and peer review is the ‘gold standard’. 


We have to be aware of the influences that have led to our beliefs that assessment requires reform, and that so many of these influential forces derive from the Global North.

We are therefore not only assuming that colleagues from around the world are all in the same place in their thinking, but that, had they had the same chance to do that thinking, they would have come to the same conclusions and would seek to take reform in the same direction.

We have to be aware of the influences that have led to our beliefs that assessment requires reform, and that so many of these influential forces derive from the Global North. In Latin America, the scholarly communication system is dominated by university presses and non-commercial providers, and whilst Latin American researchers are still subject to the assessment regimes set by international journals and league tables, the solutions they propose are likely to look quite different. In some African nations, there is a strong emphasis on research that has tangible impact(s) on communities and intellectual property (IP) development to stimulate economic growth. As the global academic community grapples with the continued emphasis on metrics, it becomes even more important to learn how others have developed their assessment strategies to focus on context-specific outcomes. 

Advocating for the potential and not the route to reform

With all this in mind, how can we respect different values and cultural norms in the discussions we are engaged with? And how can individuals who are developing new (or reforming existing) research assessments both learn lessons from others and still be able to build in local context?   

We think that for assessment reform to be successful, it requires international co-creation and co-leadership. We also believe that collaboration on this scale offers greater opportunities to improve assessments, improve the data underpinning the assessments, and support a better understanding of the constraints and limitations of certain approaches. We will continue to work globally and seek to ‘create space’ for these discussions. However, we do not want these spaces to recreate unequal power structures. We feel the need to advocate for the idea of reform, but not the route for that reform. As a movement we should seek to be conscious of our own assumptions and positionality, and to adopt a greater level of humility and openness as we work with global peers to co-create the future of research assessment.


We would like to acknowledge the thought-provoking contributions of Anna Hatch to the discussions leading up to the creation of these posts.

The content generated on this blog is for information purposes only. This article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns about posting a comment below.

Image Credit: KOBU Agency on Unsplash.



About the author

Claire Fraser

Claire is a Senior Policy Adviser at Research England, a council within UK Research and Innovation. Claire’s work focuses on research culture and assessment policy both in the UK and internationally.

Elizabeth Gadd

Dr Elizabeth (Lizzie) Gadd chairs the INORMS Research Evaluation Group and is Vice Chair of the Coalition for Advancing Research Assessment (CoARA). In 2022, she co-authored 'Harnessing the Metric Tide: Indicators, Infrastructures and Priorities for UK Research Assessment'. Lizzie is the Head of Research Culture and Assessment at Loughborough University, UK and champions the ARMA Research Evaluation SIG. She previously founded the LIS-Bibliometrics Forum and The Bibliomagician Blog and was the recipient of the 2020 INORMS Award for Excellence in Research Management and Leadership.

