University ranking season is always filled with vibrant discussion on the opaque and subjective criteria used to establish the infamous league tables. Jonathan Albright focuses his analysis here on the QS World University Rankings by Subject. By removing the highly subjective score for employer reputation, the results change substantially, especially for universities outside global cities.
Each year, parents, prospective students, and administrators across the world anxiously await the release of the “top university” rankings. Whether it is the Times Higher Education rankings, the UK’s Good University Guide, Shanghai Jiao Tong’s science-focused ARWU, US News and World Report’s Best Colleges, or the old standby, the QS World University Rankings, these tables continue to be hot topics amongst many a statistician, professor, and student.
While it’s hard to fathom how a single compiled figure could possibly summarise all the strengths and weaknesses of a university, or even of a diverse faculty, QS has recently expanded its subject rankings, where scores are much easier to grasp in the context of disciplines, a category corresponding roughly to that of department and, in some cases, school or research centre. This year’s QS World University Rankings by Subject provide not only an overall figure but also the four individual metrics for each area. To explore how the organisation ranks subject areas in Australia and New Zealand, and to compare the evaluation metrics beyond the one-dimensional “overall score”, I have compiled data for the 21 universities in Australia and New Zealand ranked in a single area, Communication and Media Studies, with some interesting results.
First, however, let’s talk about the criteria: QS evaluates subjects on four weighted metrics. They are, in decreasing order of impact: academic reputation (50%); the H-index, a metric representing the quality of research output (20%); employer reputation (20%); and number of citations (10%).
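Under this methodology, the overall figure is simply a weighted sum of the four metric scores. A minimal sketch, using hypothetical department scores purely for illustration:

```python
# QS subject-ranking weights, as described above
WEIGHTS = {
    "academic_reputation": 0.50,
    "h_index": 0.20,
    "employer_reputation": 0.20,
    "citations": 0.10,
}

def overall_score(metrics):
    """Weighted sum of the four QS subject metrics (each scored out of 100)."""
    return sum(WEIGHTS[name] * score for name, score in metrics.items())

# Hypothetical scores for an imaginary department, for illustration only
example = {
    "academic_reputation": 85.0,
    "h_index": 80.0,
    "employer_reputation": 60.0,
    "citations": 70.0,
}
print(overall_score(example))  # 0.5*85 + 0.2*80 + 0.2*60 + 0.1*70 = 77.5
```

Note that the two survey-based reputation scores together control 70% of this sum, which is why their composition matters so much.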
Two of these scores, employer reputation and academic reputation, are worth 70% of the overall total and are determined from responses to global surveys sent out by QS. The other two metrics are the number of citations for work published within the past five years (for departments meeting a minimum threshold of indexed output) and the Hirsch (H) index, a mathematical indicator of research output quality. All of these metrics have their caveats, perhaps the most problematic being that both of QS’s reputation survey scores are heavily biased towards international opinion: domestic responses to its surveys account for only half the weight given to responses from outside the respective country of each university.
In Australia, where only 2.5% of all reputation surveys came from domestic respondents, and New Zealand, whose local respondents comprised a mere 1.2% of the total, this is hugely problematic. Why? Because in the 2013 Media and Communication rankings, these two countries combine for 21 of the top 200 spots (nearly 11% of the total), more than any other locale save the United Kingdom and the United States.
Yet if QS’s methodology is described correctly, domestic reputation input can account for at most half of 3.7% (2.5% + 1.2%), so no more than roughly 1.8% of the total reputation score. That means Switzerland, at 3% of all reputation survey responses for 2013, potentially has more impact on ANZ rankings than both countries combined. The UAE and Saudi Arabia, the latter being New Zealand’s highest-value export education market by spending outside of tuition fees, together make up more than 7%, giving them potentially four times as much influence.
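The arithmetic behind that ceiling is straightforward; a quick sketch using the survey-response figures quoted above:

```python
# Maximum possible domestic share of QS's reputation input for ANZ,
# given that domestic responses count at half weight (figures from the text)
australia_domestic = 0.025    # 2.5% of all reputation survey responses
new_zealand_domestic = 0.012  # 1.2%

domestic_cap = 0.5 * (australia_domestic + new_zealand_domestic)
print(f"{domestic_cap:.2%}")  # 1.85%: no more than ~1.8-1.9% of the total

# Comparisons drawn in the text
print(0.03 > domestic_cap)    # True: Switzerland (3%) alone outweighs ANZ
print(0.07 / domestic_cap)    # ~3.8x: UAE + Saudi Arabia's relative influence
```

(Strictly, the weighted share would be marginally higher once the denominator is re-weighted too, but the half-of-3.7% figure is the relevant upper bound.)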
Given these subject rankings include a 20% weighted score for “employer reputation”, an ambiguous metric related to a department or school’s graduate employment profile, flagship institutions like the University of Melbourne, the University of Sydney, and the University of Auckland are likely to do well in this category: their names correspond to global alpha and beta cities, they maintain huge international enrolments, advertise heavily in international markets, and send graduates back to their home countries. And this doesn’t even consider that these top cities are now ranked separately in overlapping categories by QS “stars”.
But remember these are subject rankings, and ironically, this key employment metric is highly subjective: it says little about a subject area’s quality, its facilities, or the effectiveness of its teaching, and barely considers what graduates and employers in its own country have to say about it. If we remove this single score for employer reputation, leaving the other three metrics, including academic reputation, a metric that is also subjective and heavily internationally biased, the new QS subject rankings change substantially:
The above graph plots these three weighted academic metrics versus “employer reputation” for Communication and Media Studies, and two outliers are notable: La Trobe University and Queensland University of Technology (QUT). Both sit far below the median QS subject employer reputation score of 82.0, yet rank highly in all other categories.
On this graph, Queensland University of Technology’s media programme moves into third place in Australia and New Zealand, within a few decimal points of second-ranked University of Sydney. When sorted by H-index and number of citations, the two more objective ranking metrics related to the quality and depth of a subject area’s research output, QUT is the leading media and communication programme in Australia and New Zealand. And in both of these academic-only views, New Zealand’s University of Otago rises to sixth place overall, behind the University of Sydney and the University of Melbourne. The above graphs can be explored and downloaded here.
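The reshuffle described above can be sketched as a simple re-weighting: drop the employer reputation score and renormalise the remaining weights (50/20/10) so they sum to 1. The department scores below are hypothetical placeholders, not actual QS figures:

```python
# QS subject-ranking weights, per the methodology described earlier
WEIGHTS = {
    "academic_reputation": 0.50,
    "h_index": 0.20,
    "employer_reputation": 0.20,
    "citations": 0.10,
}

def academic_only_score(metrics, drop="employer_reputation"):
    """Score over the remaining metrics, with weights renormalised to 1."""
    kept = {k: w for k, w in WEIGHTS.items() if k != drop}
    total = sum(kept.values())  # 0.8 after dropping employer reputation
    return sum((w / total) * metrics[k] for k, w in kept.items())

# Two made-up departments: A is strong on employer reputation,
# B is strong on research output
departments = {
    "Uni A": {"academic_reputation": 92, "h_index": 70,
              "employer_reputation": 95, "citations": 65},
    "Uni B": {"academic_reputation": 88, "h_index": 96,
              "employer_reputation": 55, "citations": 90},
}

ranked = sorted(departments,
                key=lambda d: academic_only_score(departments[d]),
                reverse=True)
print(ranked)  # ['Uni B', 'Uni A']: the research-strong department rises
```

Any department whose employer reputation score lags its academic metrics, as with QUT and La Trobe above, will climb under this view.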
Surveying the impact
The fact that the H-index, a respected indicator of the quality of academic output, carries the same weight in a subject ranking as QS’s employer reputation survey, heavily biased by international responses, puts departments at universities like QUT and Otago at a distinct disadvantage in these types of global rankings. It helps to sway thousands of future enrolment choices away from programmes at these institutions, especially among new international students, and it suggests that departments at flagship universities around the world are likely to maintain a statistical advantage based on biased and subjective criteria before academic, teaching, and research quality ever become part of the equation.
Australia’s $20 billion and New Zealand’s $2.5 billion export education markets are growing at more than 10% a year. Hence the restructuring game: the race to reshuffle departments and schools helps universities distance themselves from their domestic peers by aligning their organisational configurations directly with global ranking methodologies, providing an advantage on paper.
This, of course, is then leveraged to attract more students, especially high-fee-paying international ones, securing more enrolments and funding and perpetuating the cycle of prestige. It’s business. Universities that cannot pay to play are losing out. But individual subject rankings mean that departments have now become part of this game.
So how do schools like La Trobe cope with this problem? Will top programmes at lesser-known universities like QUT and Otago ever be able to overtake their alpha counterparts in university subject rankings? Perhaps La Trobe, in the midst of a recent faculty restructuring dispute, and Otago, which outspent all of its NZ counterparts in advertising last year, are already hard at work on this issue.
This was originally posted at Advice to Graduates and is reposted with the author’s permission.
Note: This article gives the views of the author, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.
Jonathan Albright is a PhD Candidate at The University of Auckland. His research concerns participatory mediation, where he uses quantitative, qualitative and computational methods to engage with the impact of emergent mechanisms such as the hashtag in the evolution of news media. Jonathan specializes in the empirical analysis of socially-mediated news events on Twitter, data-driven journalism and in thematic data visualization techniques. He can be found on Twitter @d1gi.