
Elizabeth Gadd

January 24th, 2024

How efforts to assess university contributions to the Sustainable Development Goals fall short


The UN Sustainable Development Goals are widely used by research institutions and metrics providers as a mechanism for assessing the impact of universities on the wider world. Elizabeth Gadd argues attempts to quantify academic contributions to these goals can miss the mark.


In our efforts to better ‘measure what matters’ in higher education, attention increasingly turns to the United Nations’ Sustainable Development Goals (SDGs) as a way to define ‘what matters’ and to frame assessments accordingly. Many institutions are embedding the SDGs in their strategies and turning to the Times Higher Education Impact Rankings (based on the SDGs) as a Key Performance Indicator. At one point, SDGs were even mooted as a sub-structure for the next UK Research Excellence Framework.

SDGs may be inherently A Good Thing, but unless we critically assess the implications of using the SDGs as an assessment framework, we may be storing up future problems.

This is understandable. In a sector ever more concerned about societal impact, what could be better to measure the worth of universities than to identify the extent to which they have contributed to these 17 globally agreed targets? Who can argue with no poverty and zero hunger? With climate action and reduced inequalities? Not me. And Goal 4, ‘quality education’, is of course the core business of universities, so what’s not to like? Well, whilst the SDGs may be inherently A Good Thing, unless we critically assess the implications of using them as an assessment framework, we may be storing up future problems.

1. A focus on the SDGs risks creating disciplinary winners and losers

The first thing that concerns me about relying on the SDGs as a framework for assessing a university’s contribution to the world is that the SDGs aren’t everything we care about. The SDGs define a specific, very important set of targets. However, they are not our only targets. And the problem with enshrining them above all others is that whilst all disciplines might have the option of contributing to achieving the SDGs, some disciplines have more opportunity than others. So, whilst musicologists, art historians and literary theorists might only occasionally find themselves involved in projects that support the SDGs, others in the pure and social sciences might find that everything they do is aligned to the SDGs. If institutions then start assessing their own worth based on the extent to which they contribute to the SDGs, there are going to be clear disciplinary winners and losers.

According to Scopus, 60% of papers produced by Oxford University in 2022 were not mapped to an SDG at all. Leaving aside for now the challenge of doing this mapping accurately, are we saying that all those unaligned papers are worthless and should never have existed? Of course not. However, by relying on the SDGs to define what matters in universities, we risk downgrading many contributions to knowledge that just wouldn’t directly be associated with an SDG.

2. Assessing sustainable development isn’t always sustainable

The second thing that concerns me about this drive to frame our contributions in terms of the SDGs is that universities don’t just want to show that they’re contributing, but that they’re contributing more than everyone else. This drives competition over collaboration. Ultimately, competition requires quantifying contributions, and this is where the project fails. Because it’s hard to put boundaries around what does and doesn’t constitute a contribution to any particular SDG, and even harder to then count and weight those various contributions. (See Stephen Curry’s articulate and incisive critiques of the Times Higher Education Impact Rankings’ attempts to do just this.)

It’s hard to put boundaries around what does and doesn’t constitute a contribution to any particular SDG and even harder to then count and weight those various contributions.

Of course, bibliometric data is a real favourite here because publications and citations are so eminently countable. However, a paper produced by Armitage, Lorenz & Mikki in 2021 showed that two different attempts to allocate research papers to the 17 SDGs came up with two very different answers.
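To see why such allocations can diverge, it helps to picture how keyword-based SDG mapping typically works: each goal gets a query set that is matched against titles, abstracts and keywords, so any difference between two query sets propagates straight into the counts. Below is a minimal, purely hypothetical sketch in Python; the abstracts and keyword lists are invented for illustration, and the real mappings compared in that paper are vastly larger and more sophisticated.

```python
# Purely illustrative sketch: two invented keyword-based SDG mappers
# applied to the same made-up abstracts. Nothing here reproduces any
# real mapping; it only shows why different query sets diverge.

abstracts = [
    "Solar microgrids for rural electrification in sub-Saharan Africa",
    "Gendered barriers to secondary school completion",
    "Phase transitions in two-dimensional spin glasses",
]

# Two deliberately different (hypothetical) keyword sets for the same goals.
mapper_a = {
    "SDG 4": {"school", "education"},
    "SDG 5": {"gender", "gendered"},
    "SDG 7": {"solar", "electrification", "energy"},
}
mapper_b = {
    "SDG 4": {"education", "learning"},   # misses "school"
    "SDG 5": {"women", "equality"},       # misses "gendered"
    "SDG 7": {"renewable", "grid"},       # misses "solar"
}

def classify(abstract, mapper):
    """Return every SDG whose keyword set overlaps the abstract's words."""
    words = set(abstract.lower().replace("-", " ").split())
    return sorted(sdg for sdg, keywords in mapper.items() if words & keywords)

for text in abstracts:
    print(f"A: {classify(text, mapper_a)}  B: {classify(text, mapper_b)}  <- {text}")
```

On these made-up examples the two mappers barely agree, and the third abstract maps to nothing under either: both the paper’s finding and the ‘unmapped papers’ problem above, in miniature.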

The real irony around the use of bibliometric data in SDG assessments is that the main data sources used are hugely biased towards the global north. Indeed, journals from the ‘global south’ in Elsevier’s Scopus are outnumbered six-to-one by those based in the ‘global north’. That is deeply ironic given SDG 10’s call for reducing inequality. Some of the THE Impact Rankings’ indicators are equally problematic, such as allocating points to institutions for having “targets to admit students who fall into the bottom 20% of household income in the country”, when in some countries those in the bottom 20% are barely finishing primary school.

3. Appearing to, or actually contributing to the SDGs?

My biggest concern about a fixation with appearing to be contributing to the SDGs is that it takes universities away from actually contributing to the SDGs. It concerns me to think about the time and effort universities put into compiling data and evidence to prove how big an SDG player they are and how that time might have been spent actually working to achieve the SDGs had they not felt such a pressure to showcase themselves.

Further, the SDGs have their own targets and indicators that don’t bear any relation to the things that win you points in the THE Impact Rankings. In fact, none of the indicators used in the THE rankings (e.g. ‘preventing student hunger’ and ‘food waste tracking’) appear in the SDGs at all. Does the United Nations care about how many journal articles universities might have published using keywords that may or may not map to an SDG? Its ambitions are categorically different.

Does the United Nations care about how many journal articles universities might have published using keywords that may or may not map to an SDG?

The real SDG indicators focus on a decline in the “proportion of the population living below the international poverty line by sex, age, employment status and geographical location (urban/rural)”. They want to count the “number of people who died or disappeared in the process of migration towards an international destination”. And they want to see the “proportion of population using safely managed drinking water services” at 100%.

I’m not saying that publishing papers in these domains might not ultimately contribute to meeting some of these targets. I’m saying that these are the targets we should be fixated on, and not proxies invented for the purposes of making universities look good. The SDGs aren’t an opportunity to look good, they are an opportunity to do good. Efforts to divert attention away from the real ambitions of the SDGs are in poor taste.

At the risk of repeating myself, I’m not saying the SDGs aren’t important. I’m saying the opposite: they are too important to use as a vanity project. If universities want to assess their contribution to the SDGs, they should seek to understand the actual SDG targets and indicators, and then weigh up how their missions and investments align. If they can then evidence any genuine contribution towards meeting those actual targets, then all well and good. But don’t just tell me how many papers you published whose keywords align with the SDGs. We need to make sure that by showcasing our contributions to the SDGs we aren’t compounding the very problems we’re trying to solve.


I should like to acknowledge the helpful input of Professor Dan Parsons and Dr Julie Bayley on earlier drafts of this piece. All opinions, and errors, are my own.

The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: wutzkohphoto on Shutterstock.



About the author

Elizabeth Gadd

Dr Elizabeth (Lizzie) Gadd chairs the INORMS Research Evaluation Group and is Vice Chair of the Coalition on Advancing Research Assessment (CoARA). In 2022, she co-authored 'Harnessing the Metric Tide: Indicators, Infrastructures and Priorities for UK Research Assessment'. Lizzie is the Head of Research Culture and Assessment at Loughborough University, UK and champions the ARMA Research Evaluation SIG. She previously founded the LIS-Bibliometrics Forum and The Bibliomagician Blog and was the recipient of the 2020 INORMS Award for Excellence in Research Management and Leadership.

