
Jelena Brankovic

March 22nd, 2021

The Absurdity of University Rankings


University rankings are imbued with great significance by university staff and leadership teams, and their outcomes can have significant material consequences. Drawing on a curious example from her own institution, Jelena Brankovic argues that taking rankings as proxies for the quality or performance of a university, in a linear-causal fashion, is a fundamentally ill-conceived way of understanding its value, in particular when the practice is publicly embraced by none other than scholars themselves.


Earlier this month, QS published its annual World University Rankings by Subject, spurring excitement across academic social media. “Thrilled to be part of the world’s No. 1,” said a faculty member. “Proud alumna and staff member,” wrote another. “So proud to be part of the team. Well done everyone”… And so on. You get the picture.

There’s nothing inherently wrong with people liking their job and enjoying working in a stimulating environment. There is, however, something unsettling about scholars taking things like a ranking produced by QS, Shanghai, or whichever organization, as confirmation, or evidence, of how good—or bad for that matter—they are having it as compared to everyone else.

One may wonder, do scholars respond to rankings in this way because they are just carried away in the moment? Or because they take rankings seriously? Or is it, perhaps, something else?

To be clear from the start, my intention here is not to criticise rankings. At least not in the way this is usually done. In this sense, this is not a story about rankings’ flawed methodologies or their adverse effects, about how some rankings are produced for profit, or about how opaque or poorly governed they are. None of that matters here.


What I wish to do, instead, is to draw attention to a highly problematic assumption to which many in academia seem to subscribe: the assumption that there is, or that there could possibly be, a meaningful relationship between a ranking, on the one hand, and what a university is and does in comparison to others, on the other.

I will start by telling a story that involves my own university, and I will conclude with a more general argument about why publicly endorsing rankings does a disservice to academia.

What’s in a rank?

In September 2019, the German magazine Der Spiegel published an article reporting on the then just-released Times Higher Education (THE) World University Rankings. Germany’s status as “the third most represented country in the Top 200,” the article said, was once again confirmed. As “particularly noteworthy,” the article continued, another piece of news stood out: “Bielefeld University jumped from position 250 to 166.”

Although Bielefeld had participated in this particular ranking since the first edition in 2011, for the most part it had been placed somewhere between positions 201 and 400. Then, in the 2020 edition of the ranking, Bielefeld found itself in the Top 200. The news was well received at home. Bielefeld’s Rector thanked everyone “who has contributed to this splendid outcome.”

However, Bielefeld’s leadership was quite puzzled by the whole thing: they could not put their finger on what exactly they had done to cause this remarkable improvement. They must have done something right, for sure, but what? Aiming to get to the bottom of it, they decided to investigate.

It was quite obvious from the start that the “jump” probably had to do with citations. Anyone even remotely familiar with the methodology of this ranking can understand how this would make sense. And indeed, on citations alone, Bielefeld ranked 99th in the world and sixth in Germany.

Our university library confirmed the citations hypothesis. Moreover, when the scores over time were recalculated, it turned out that Bielefeld had improved by more than 120 places in just two ranking cycles. And not only did it “improve” exceptionally, it was also among the top performers worldwide in terms of how much it progressed from one year to the next.

Bielefeld, it seemed, was an extreme outlier.

As it turned out, huge leaps in THE rankings had already been linked to a large international collaboration in global health, the Global Burden of Disease study. THE’s method of counting citations is such, another source said, that in some cases it took no more than a single scholar participating in this study to improve the rank of a university, and even significantly so, from one year to the next.

We were struck by the finding that ten articles alone brought as much as 20% of Bielefeld’s overall citations in those two years. Each one could be linked to the Global Burden of Disease study. All but one were published in The Lancet and co-signed by hundreds of authors. One of the authors—and one only—came from Bielefeld.
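
For readers who want the arithmetic spelled out, here is a minimal, hypothetical sketch in Python. All of the citation figures in it are invented; only the roughly 20% share comes from our analysis. It also assumes, purely for the sake of illustration, that a ranking credits a paper’s citations in full to every institution with at least one co-author on it.

```python
# Hypothetical illustration of the share calculation described above.
# All numbers are invented; only the ~20% share is taken from the text.
# Assumption (for illustration only): a paper's citations are credited
# in full to every institution that has at least one co-author on it.

total_citations = 50_000  # invented: all citations to the university's output over two years

# Invented citation counts for ten heavily cited, massively co-authored papers,
# each with a single local co-author.
collaboration_papers = [1_400, 1_300, 1_200, 1_100, 1_000, 950, 900, 850, 800, 500]

collaboration_total = sum(collaboration_papers)
share = collaboration_total / total_citations

print(f"Citations credited via the ten papers: {collaboration_total}")
print(f"Share of the university's total citations: {share:.0%}")  # -> 20%
```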

Bielefeld’s rise in the ranking was, our analysis showed, clearly caused by one scholar. This is how meaningful the relation between the “performance” of Bielefeld University—an entire institution—and its rank really was.

It is tempting to think that, all things considered, Bielefeld accidentally “gamed” this particular ranking. It is even funny, especially when one thinks of the inordinate amounts of money some universities put into achieving this kind of result in a ranking.

The logical fallacy

When THE published the ranking, the president of the German Rectors’ Conference said he believed that “Germany’s rise was ‘closely connected’ to the government’s Excellence Initiative.” He certainly wasn’t alone in the effort to explain rankings by resorting to this kind of linear-causal reasoning, as the same article makes clear.


This detail, however, points to what may be the most extraordinary and at the same time the most absurd aspect of it all. Almost intuitively, people tend to explain the position in a ranking by coming up with what may seem like a rational explanation. If the university goes “up,” this must be because it has actually improved. If it goes “down,” it is being punished for underperforming.

Bielefeld’s example challenges linear-causal thinking about rankings, and spectacularly so. However, in truth, there is nothing exceptional about this story beyond it being a striking example of how arbitrary rankings are.

THE may eventually change its methodology, but changing the methodology won’t change the fact that its rankings are artificial zero-sum games. Artificial because they force a strict hierarchy upon universities. Artificial also because it is not realistic that a university can improve its reputation for performance only at the expense of other universities’ reputations. Finally, artificial because reputation is in and of itself not a scarce resource, but rankings make it look like it is. With this in mind, THE’s rankings are not in any way special, better, or worse than other rankings. They are just a variation on a theme.

Numbers, calculations, tables and other visual devices, “carefully calibrated” methodologies, and all that, are there to convince us that rankings are rooted in logic and quasi-scientific reasoning. And while they may appear as if they were works of science, they most definitely are not. However, maintaining the appearance of being factual is crucial for rankings. In this sense, having actual scientists endorse the artificial zero-sum games rankers produce contributes critically to their legitimacy.

To assume that a rank, in any ranking, could possibly say anything meaningful about the quality of a university relative to other universities, be it as a workplace or as a place to study, is downright absurd. It is, however, precisely this assumption that makes rankings highly consequential, especially when it is not only left unchallenged but also openly and publicly embraced by scholars themselves.

 


The section of this post relating to Bielefeld’s position in the THE rankings appeared in German, in a longer form, in the Frankfurter Allgemeine Zeitung under the title “So verrückt können Rankings sein” (“That is how crazy rankings can be”) on March 11, 2020.

Note: This post gives the views of the author, and not the position of the LSE Impact Blog, or of the London School of Economics.

Image Credit: Ian Taylor via Unsplash. 


 


About the author

Jelena Brankovic

Dr. Jelena Brankovic is a postdoctoral researcher at the Faculty of Sociology, Bielefeld University. Her research interests span the subject areas of higher education, sociology of organizations, and transnational governance. Currently she is studying the role rankings and other forms of public comparison play in the institutional dynamics within and across sectors. She is serving as Books Editor on the Editorial Board of Higher Education and is a coordinator of the international network of Early Career Higher Education Researchers (ECHER). Twitter: @jelena3121.

