
Kyle Grayson

Paul Grayson

September 11th, 2023

What would honest university rankings look like?


Estimated reading time: 10 minutes


University rankings and the league tables built from them presuppose that higher education institutions sit in a linear hierarchy and that presenting information in this way is useful to prospective students. Deploying a methodology comparable to the rankers', Kyle Grayson and Paul Grayson argue that UK universities largely fall into two non-hierarchical groups with comparable characteristics.


The emergence of a global league table culture based on contested indicators, controversial methodologies, and opaque algorithms developed by private interests has raised a series of important questions about the positive and/or pernicious effects on university systems across the globe.

Within the UK, one justification offered by the organisations that create league tables is that the tables provide indicators of quality, helping prospective students make informed choices about where to apply to university. League tables are thus presented as an essential source of information for ensuring that prospective students (consumers) choose well. This is particularly pertinent in the UK context because, according to the OECD, average tuition fees for an undergraduate education at a public British university are the highest in the world.

In a recent open access article, we decided to test rankers by taking them at their word: do their metrics and rankings provide distinctive indicators of quality that meaningfully distinguish institutions from one another?

The short answer is no. By creating indices in which minute differences amongst institutions are magnified, rankers present their data inappropriately. Put differently, they use a presentation technique that is bound to confirm their underlying premise that there is a hierarchy of institutions in UK higher education. The result is league tables that give no indication of whether the ranking differences identified amongst institutions are statistically defensible or reflect meaningful variations in the student learning experience or research environment.
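To see the magnification problem concretely, consider a hedged, hypothetical illustration in Python: the scores below are invented, and min-max rescaling is one common way rankers build indices, not necessarily any particular ranker's exact formula. Three universities whose raw scores sit within less than a point of each other end up spread across the entire index:

    # Hypothetical raw satisfaction scores (out of 100) for three universities.
    raw = {"A": 81.2, "B": 81.5, "C": 82.0}

    # Min-max rescaling: the best raw score maps to 100 and the worst to 0,
    # however close together the underlying values actually are.
    lo, hi = min(raw.values()), max(raw.values())
    index = {k: round(100 * (v - lo) / (hi - lo), 1) for k, v in raw.items()}
    print(index)  # {'A': 0.0, 'B': 37.5, 'C': 100.0}

A 0.8-point raw gap becomes a 100-point index gap, which readers then see as a decisive difference in rank.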

By creating indices in which minute differences amongst institutions are magnified, they present their data inappropriately

Using the same data and indicators as the Guardian and Complete University Guide for 2018-19, chosen to avoid any temporary distortions from COVID-19, we employed cluster analysis to see whether groups of UK universities share common characteristics. In this technique, all institutions placed in one group have more in common with one another than with universities in any other group. If any such groups were found, their existence would point to the incoherence of ranking institutions hierarchically.
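As a rough sketch of what a grouping exercise of this kind looks like in practice, here is a minimal Python example using scikit-learn. The file name, column names, and choice of hierarchical clustering are assumptions for illustration, not necessarily the exact procedure we used in the article:

    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import AgglomerativeClustering

    # Hypothetical input: one row per university, one column per indicator
    # (column names are illustrative, not the study's actual variables).
    df = pd.read_csv("university_indicators.csv")
    cols = ["value_added", "satisfaction", "research_quality", "research_intensity"]

    # Standardise so that no single indicator dominates the distance metric.
    X = StandardScaler().fit_transform(df[cols])

    # Hierarchical (agglomerative) clustering. Asking for two clusters mirrors
    # the result reported below; in practice the number of groups can also be
    # chosen from the data, e.g. via silhouette scores or a dendrogram.
    df["group"] = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X)
    print(df.groupby("group")[cols].mean())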

Using cluster analysis to group institutions, and discriminant analysis to determine whether differences between groups were statistically significant, we examined four indicators: value added in learning, overall student satisfaction, research quality, and research intensity. We also undertook regression analyses where applicable to determine whether groupings based on indicator outcomes were related to other student, staff, and/or institutional characteristics. We then added a ‘research cost’ variable, meant to reveal whether indicators of research ‘quality’ were also influenced by the sheer magnitude of research resources available within some institutions.
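Continuing the sketch above, the group-separation step can be approximated as follows. The linear discriminant fit and the Welch's t-test are illustrative stand-ins, under the same assumed variable names, rather than a reconstruction of the exact tests in the article:

    from scipy import stats
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Can a linear combination of the four indicators separate the two clusters?
    lda = LinearDiscriminantAnalysis().fit(X, df["group"])
    print("within-sample classification accuracy:", lda.score(X, df["group"]))

    # Per-indicator check: does value added differ significantly between groups?
    # (The 0/1 cluster labels are arbitrary stand-ins for Blue/Green.)
    blue = df.loc[df["group"] == 0, "value_added"]
    green = df.loc[df["group"] == 1, "value_added"]
    t, p = stats.ttest_ind(blue, green, equal_var=False)
    print(f"Welch's t = {t:.2f}, p = {p:.4f}")

High classification accuracy and small p-values would indicate that the groups are statistically distinguishable on these indicators.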

Contrary to league table rankings, we found that UK universities clustered into only two groups based on these indicators. To minimise bias, we labelled them the Blue Group and the Green Group. 53% of UK undergraduates were enrolled in Blue Group institutions and 47% in Green Group institutions.

We found that UK universities clustered into only two groups based on these indicators

Universities in the Green Group had higher value added in learning than those in the Blue Group; the differences were statistically significant and relatively large (19%). Differences between the groups on student satisfaction were negligible. The Blue Group had much higher research index scores (77%) than the Green Group (43%), and these differences were statistically significant. However, the Green Group produced research at about one tenth of the cost of the Blue Group, a difference that also attained statistical significance. While these distinctions may in part reflect differential costs associated with different types of research, they may also indicate more efficient or more diverse kinds of knowledge production within the Green Group. A definitive answer would require further investigation.

We then examined university and student characteristics for these groups. Compared with the Green Group, Blue Group institutions had lower student-staff ratios, higher spending per student, higher institutional incomes, higher entry tariffs, lower percentages of students from low-income areas, lower percentages of students from state schools, lower percentages of students living at home, higher percentages of students progressing into their second year, and higher percentages of graduates employed in a graduate job after six months (an indicator of both socio-economic class background and regional prosperity). It is worth noting that student-staff ratios, spending per student, entry tariffs, and graduate employment are key indicators that work against Green Group members in most league tables.

It is worth noting that student-staff ratios, spending per student, entry tariffs, and graduate employment are key indicators that work against Green Group members in most league tables.

We then used traditional mission group categories to classify institutions in the Blue and Green groups. 100% of Russell Group institutions and 93% of research-intensive institutions could be found in the Blue Group. In contrast, 96% of post-92 institutions were located in the Green Group.

A cynic might argue that all we have shown is that there is a hierarchy, albeit at a group level, and one that maps onto popular understandings of the higher education landscape. However, we do not think this is the case. Research is the defining characteristic of the Blue Group, while the Green Group is defined by its better value-added outcomes in student learning. The importance attached to these characteristics in determining university quality will depend upon the eye of the beholder.

Our findings provide an evidence-based impetus to move away from the hierarchical league table culture promoted by media outlets, the government, and universities

Our findings provide an evidence-based impetus to move away from the hierarchical league table culture promoted by media outlets, the government, and universities. Instead, there are good reasons to concentrate on the contributions to learning and research provided by the two institutional groups we have identified, including how the Green Group delivers excellent outcomes for nearly half of UK undergraduate students despite operating in more challenging environments with fewer resources. In any other national context, this contribution would be presented by governments, the media, and the higher education sector as an unqualified success. Unfortunately, market-level incentives and a broad cultural predisposition within British society to reproduce hierarchies leave us stuck in a mentality in which the important social, cultural, and economic contributions of the Green Group are undervalued.

This post draws on the authors’ article, University quality, British league tables and student stakeholders, published in Quality in Higher Education. 

The content generated on this blog is for information purposes only. This Article gives the views and opinions of the authors and does not reflect the views and opinions of the Impact of Social Science blog (the blog), nor of the London School of Economics and Political Science. Please review our comments policy if you have any concerns on posting a comment below.

Image Credit: Detail of The Ladder of Divine Ascent, Monastery of St Catherine, Sinai, 12th century, via Wikimedia (Public Domain).



About the author

Kyle Grayson

Kyle Grayson is the Head of the School of Geography, Politics, and Sociology and Professor of Security, Politics, and Culture at Newcastle University, UK. He is also the Chair of the British International Studies Association.

Paul Grayson

Paul Grayson is a Professor of Sociology in the Faculty of Liberal Arts and Professional Studies at York University, Canada.

