Despite being the focus of sustained critique, university rankings have proven a resilient feature of academic life. Considering recent moves by U.S. institutions to withdraw from rankings, Julian Hamann and Leopold Ringel explore this relationship and suggest that rankers and their critics in fact play a role in sustaining each other.
Shock waves rippled through U.S. higher education recently, as prestigious law schools and medical schools announced their withdrawal from the U.S. News & World Report rankings. This immediately became news, because rankings and elite higher education institutions (HEIs) have developed a mutual dependence: rankings rely on the participation of elite HEIs and their prestige to enhance their own reputation. Elite HEIs rely on rankings because alumni, applicants, and policy stakeholders judge them based on their rank.
Yet, some elite HEIs were ready to end this relationship. In November 2022, the dean of the Yale Law School stated that “rankings are useful only when they follow sound methodology and confine their metrics to what the data can reasonably capture”. Striking a similar tone, the dean of the Harvard Law School expressed “concerns about aspects of the U.S. News ranking methodology”. A number of other law schools and medical schools followed suit.
how do university rankings prevail despite facing widespread (and at times hostile) criticism?
The withdrawals raise the question of what this development means for the university ranking industry as a whole. Is this the beginning of the end, as some have speculated? Are rankings mortal after all? Our recent study offers some insights in this respect. Our point of departure: how do university rankings prevail despite facing widespread (and at times hostile) criticism?
One important insight of our study is that vocal criticism is anything but new. If rankings are a successful business model, so too is critical research on rankings (receiving an invitation to write for this blog being a case in point). We identified two modes of ranking critique that dominate both scholarly discourse and debates in the public domain (e.g., the LSE Impact Blog itself, but also Twitter).
The first mode draws attention to negative effects. Rankings purportedly produce and consolidate inequalities, encourage opportunistic behavior among those trying to anticipate ranking criteria, and infringe upon the independence of higher education and science. The second mode is concerned with methodological shortcomings. Critics argue that the measures in use are too crude and simplistic, that methodology and data are not transparent enough, and that neither validity nor reliability is sufficient. Remarkably, the latest critique by law schools and medical schools in the U.S. mirrors exactly this sentiment, as it invokes methodological issues.
How do rankings navigate these challenges? We found that ranking producers have developed two modes of response. The first mode might best be characterized as defensive: rankers either downplay their own influence and state that there are many approaches to measuring university performance, or they emphasize that rankings are inevitable and ‘here to stay.’ The second mode (confident responses) claims a broad demand for evaluations of university performance, demonstrates scientific proficiency, or temporalizes rankings by framing them as always in need of improvement. When discussing matters of improvement, rankers often allude to the importance of getting help from researchers. For example, in a brochure on the methodology of the Impact Rankings, Times Higher Education invites critics by stating: “we can improve the approach!” Relatedly, representatives of the U.S. News & World Report rankings outline this ostentatiously constructive stance: “(al)though the methodology is the product of years of research, we continuously refine our approach based on user feedback, discussions with schools and higher education experts, literature reviews, trends in our own data, availability of new data, and engaging with deans and institutional researchers at higher education conferences.”
The general point here is that rankers try to disperse responsibility by co-opting their critics. We should note that for this strategy to succeed, critics have to accept the offer and actually participate in public conversations. Methodological criticism, it seems, is the key lever to initiate and maintain such mutual engagement: even when becoming heated, these kinds of discussions always remain constructive in the sense that, according to their underlying logic, rankings should become more rigorous, reliable, and valid.
the co-optation of critics is neither a matter of mischievous, deceptive rankers nor of researchers becoming corrupt accomplices of the ranking industry.
Constructive conversations between rankers and critics can be witnessed at different venues: in publications, in which representatives of ranking organizations and think tanks publish alongside esteemed higher education researchers, at events such as workshops and summits organized by ranking organizations (e.g., the IREG Observatory conferences), and not least on social media, where rankers and critics engage regularly.
To be sure, the co-optation of critics is neither a matter of mischievous, deceptive rankers nor of researchers becoming corrupt accomplices of the ranking industry. Rather, we interpret the never-ending constructive conversation between rankers and critics as a discursive phenomenon in its own right, irrespective of what intentions participants may, or may not, have. As a result, university rankings develop a quality for which we have coined the term discursive resilience: the ability to engage with critics in a productive way in order to navigate a potentially hostile environment.
as critical researchers we may unintentionally contribute to the institutionalization of university rankings
We believe that our study makes an important contribution to higher education research, which has paid only limited attention to the “discursive work” that renders university rankings resilient. For a comprehensive understanding of the subject matter, it is necessary to further explore university rankings as discursive phenomena, which means treating both the praise and the critique they receive on equal terms. This also implies that we should be more aware of our own agency: whether we like it or not, as critical researchers we may unintentionally contribute to the institutionalization of university rankings.
How, then, should we interpret the recent withdrawals from university rankings in U.S. higher education? In light of our findings, and U.S. News & World Report’s latest pledge to listen to the critics and improve its methodology, we are inclined to project that the company will return with a brand new ranking next year. Most likely, this episode is not the end of the ranking regime, but it may turn out to be just another opportunity to build discursive resilience.
Image Credit: Adapted from Yan Krukau via Pexels.