
Elizabeth Gadd

January 31st, 2022

A Narrative CV for Universities?


In an attempt to move away from overly quantitative assessments of researchers, many research funding bodies are turning to narrative CVs. In this blogpost, Elizabeth Gadd argues that offering universities a narrative format in which to describe their contributions would, in the same way, give them the opportunity to proactively and independently show how they are so much more than their rank.


Narrative CVs for researchers are being piloted all over Europe with the latest such announcement being made by UKRI. The idea is that instead of assessing researchers based on quantitative indicators (citations, publications, income) or lazy shortcuts (which university they went to, who they collaborated with), they are instead assessed on a qualitative description of their actual contribution to scholarship: to knowledge, to the development of others, to the research community and to society itself. Yes, a researcher might include some indicators in their narratives (‘100% of my work is open access’ for example), but the overall approach is descriptive, not quantitative.

Early investigations have shown that it’s not an easy task to write these things, nor to assess them. However, they provide a much richer, more nuanced picture of an individual scholar’s contribution, which should lead to recognising and rewarding a much broader range of scholarly activity than is currently the case.

But it’s not only individual researchers who are the victims of unfair, over-quantified assessments which don’t truly represent their contribution to scholarship. For the past two decades, universities have been subject to increasingly metricised scrutiny. Some of this is at the hands of governments, through national quality assessments, but some of the poorest evaluations – especially consequential for countries reliant on overseas student recruitment – are performed by the global university rankings.

The rankings are particularly reliant on reductive indicators, which are often a very poor proxy for the quality they claim to indicate (Nobel prize-winning alumni as an indicator of teaching quality for example). These indicators are then weighted in various ways to form composite scores which may or may not reflect the missions of those institutions they seek to represent.

Despite the tendency of rankings to count what can be counted, rather than what actually counts, and to do so in ways that would make any self-respecting statistician blush, the global rankings continue to grow in power and influence. Students use them to select where to study, academics to select where to work, funders to select who to fund, and governments to guide their educational investments. In light of such heavy usage, university leaders feel powerless to do anything but play the rankings game – even though it is the large, old, wealthy, white, English-speaking Higher Education Institutions in the global north who always win.

In my experience there are really only two institutional responses to the rankings game. Institutions either play happily (appointing dedicated rankings staff, emailing colleagues to solicit their engagement, and establishing ambitious ranking-based KPIs) or they play unhappily (keeping a quiet eye on them, announcing any useful ‘wins’ in marketing, and using them as a lever to support otherwise independently useful courses of action). But everyone plays. Because not to play at all is tantamount to financial and reputational suicide.

As a solution to this problem, I recently proposed an initiative called ‘Much More Than Our Rank’. The initiative would enable all universities – from the ranking toppers to the unranked – to sign up, add a logo to their institutional web pages, and proceed to describe how they have contributed so much more to students, to scholarship and to society, than the rankings’ narrowly scoped and poorly constructed indicators would currently have us believe.

A bit like a narrative CV in fact.

Which set me to wondering whether, with a globally sourced template (and I mean globally sourced; this can’t just be another initiative imposed on the international community from the global north), a narrative CV for universities could provide a serious shift in the way we value and evaluate our HEIs.

The beauty of this approach is that it doesn’t require a wholesale and immediate boycott of the rankings which, although many would be thrilled to see it, is just not a realistic option for student recruitment-reliant HEIs (i.e., all of us). Instead, it provides an opportunity for universities to acknowledge their uncomfortable cognitive dissonance and say: yes we engage with the rankings because we have to, but we also recognise their limitations – and their negative impact on less privileged HEIs – so we’re engaging in this playing-field-levelling activity where everyone gets an equal opportunity to shine.  And suddenly, instead of shoving our partner institutions down the rankings to gain a little more height, we can celebrate our partnerships because that’s a feature of a thriving university.

Of course, the objections that are already surfacing regarding researcher narrative CVs will apply equally to university CVs: they will take time to write (as UK colleagues involved in writing REF environment statements will testify) and time to assess (no lazy shortcuts for funders seeking to identify the ‘top 200’ universities here). English speakers may still have the advantage, and those who’ve been around longer will have more to say. Others might argue that an institution’s web pages are already a long-form narrative CV. Still others might fear that they’d be open to abuse: full of over-exaggerated and unverifiable claims. True. All true.

But they would allow third parties to make a more considered comparative judgement about the kinds of institutions they are looking at: what an institution is proud of and what its priorities and ambitions are. They might provide a bit of history and a bit of national context. They might even link out to some relevant and independently verified data, and what they don’t say is probably going to be just as meaningful as what they do.

However, for me, the real win would be the opportunity for HEIs to put some distance between themselves and the poor construction, validity, application and impact of the university rankings; to highlight the rankings’ deficiencies and to take back some control over how universities are assessed. It feels like a small but manageable step in a journey towards something better; that’s as qualitative, fair and inclusive as the future we’re trying to create for our researchers.

It has been thrilling to see the metric tide turn, and to at last be moving forward with a serious alternative to quantitative assessments of researcher careers. But we can’t stop here. Whilst research performing organisations continue to be assessed in heavily metricised ways, this is always going to cascade down to impact the lives of individual researchers. We must now expand our suite of alternative assessment mechanisms to universities themselves.  And what better way than to co-design a narrative ‘CV’ template for universities?

Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

Image Credit: Toa Heftiba via Unsplash. 

About the author

Elizabeth Gadd

Dr Elizabeth (Lizzie) Gadd chairs the INORMS Research Evaluation Group and is Vice Chair of the Coalition on Advancing Research Assessment (CoARA). In 2022, she co-authored 'Harnessing the Metric Tide: Indicators, Infrastructures and Priorities for UK Research Assessment'. Lizzie is the Head of Research Culture and Assessment at Loughborough University, UK and champions the ARMA Research Evaluation SIG. She previously founded the LIS-Bibliometrics Forum and The Bibliomagician Blog and was the recipient of the 2020 INORMS Award for Excellence in Research Management and Leadership.

