What to make of the 2020 Emerging Economies University Rankings, just published by the Times Higher Education magazine? Congratulations would appear to be in order for the three institutions in the Middle East that feature in a top 20 dominated by nine Chinese universities. King Abdulaziz University in Jeddah, Saudi Arabia, finds itself in 13th position, followed by the United Arab Emirates’ Khalifa University in 15th and another Saudi institution, Alfaisal University in Riyadh, in 20th place.
Doubtless all three will be encouraged by their standing, which will also influence where prospective students choose to study. But do these rankings really do justice to these universities, or help students to make the right choice? A closer look at the methodology behind them suggests not.
The Emerging Economies rankings are based on the same data compiled by Times Higher Education (THE) for its World University Rankings, in which King Abdulaziz University features in the 201-250 band for 2020.
One reaction to the Emerging Economies ranking is that as well as being a cynical commercial exercise in data repurposing, it is also patronizing. Citizens of the UAE and Saudi Arabia might be surprised to learn they are living in an “emerging economy,” a classification invented by another Western institution, the London Stock Exchange. As a guide to investors, the FTSE Group ranks all economies that it deems to be not fully developed as “advanced emerging,” “secondary emerging” or “frontier.”
Saudi Arabia and the UAE, incidentally, are not even considered to be “advanced emerging” economies but are merely “secondary emerging,” along with Kuwait and Qatar.
So, leaving aside colonial-era perspectives, how does THE evaluate institutions?
All its rankings are based on 13 performance indicators that “judge institutions on their teaching, research, knowledge transfer and international outlook.” Universities may be awarded up to 100 points distributed across five categories: teaching (worth 30% of the total), research (also 30%), citations for published research (20%), “international outlook” (primarily what proportion of staff and students are from overseas – 10%) and research income from industry (10%).
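The weighting scheme above amounts to a simple weighted sum. As a sketch only (the weights are from the article; the function name and the per-category scores are invented for illustration, and THE's actual calculation involves normalization steps not shown here), the arithmetic looks like this:

```python
# Illustrative weighted-sum scoring, using the five category weights
# described in the article. Per-category scores (0-100) are hypothetical.

WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.20,
    "international_outlook": 0.10,
    "industry_income": 0.10,
}

def overall_score(category_scores):
    """Combine per-category scores (each 0-100) into one 0-100 total."""
    return sum(WEIGHTS[c] * s for c, s in category_scores.items())

# Hypothetical university: strong citation performance lifts the total
# even where teaching and industry income scores are middling.
example = {
    "teaching": 55.0,
    "research": 60.0,
    "citations": 90.0,
    "international_outlook": 80.0,
    "industry_income": 40.0,
}
print(round(overall_score(example), 1))  # prints 64.5
```

The example illustrates the article's later point: because citations alone carry 20% of the total, research output can move a university's overall position substantially, independent of anything happening in its classrooms.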
THE does not visit the universities, talk to staff or students or assess teaching abilities or outcomes on the ground. Instead, the largest proportion of marks – more than half of the 60% allocated to teaching and research, the two largest categories – is derived from a subjective annual Academic Reputation Survey, which examines “the perceived prestige of institutions in teaching [and] a university’s reputation for research excellence among its peers.”
Peers? Of the 10,000 respondents to the latest survey, 39% were from the Asia-Pacific region, 33% from Western and Eastern Europe, 20% from North America – and just 1% from the Middle East.
In the survey, respondents are “questioned at the level of their specific subject discipline [and] asked to … name at most 15 universities that they believe are the best in each category, based on their own experience.”
The key word here is “believe.” How likely is it that, for example, a business lecturer from the American Midwest will even be aware of a university in the Middle East, let alone be in a position to judge the performance of its many departments?
Not only is the survey heavily biased toward responses from regions far from the Middle East, but its very premise is undermined by the inevitably narrow perspective of the respondents. Of those, 14.6% teach in physical sciences, 14.5% in clinical and health subjects, 13.4% in life sciences and 13.1% in business and economics. Social sciences (8.9%), computer science (4.2%), education (2.6%), psychology (2.6%) and law (0.9%) are far less well represented.
Another 20% of potential marks is awarded for “research influence,” measured by the number of citations earned by a university’s published research. One danger of this category is that faculties where teaching is more of a priority than research could find themselves pushed into research to boost their university’s ranking rather than because it benefits their students.
This category also puts pressure on institutions to publish research in English, rather than in the language spoken by their staff, students and country. THE’s assessment of the number of times a university’s published work is cited is based on an analysis of 21,100 active and 14,000 inactive journals on a database run by THE partner Elsevier. But only three of those journals are published solely in Arabic. Even those published in both Arabic and English amount to only seven.
THE is not alone in creating an artificial competitive rankings environment; the QS World University Rankings and the Academic Ranking of World Universities, among others, operate similar systems. Each of them ranks individual universities differently – another indication that, far from offering the scientifically objective analysis they claim, each ranking system is the product of an artificial and subjective methodology.
Rankings companies all have something else in common – the fact that they are businesses, with a business model driven by creating the pressure of rankings. THE, for example, offers “branding services … to raise your university’s global profile” and a consultancy service that “provides strategic insights for growth amid a fast-transforming global higher education landscape.”
Many of the universities listed in the 2020 Emerging Economies University Rankings will be making the most of their “achievement.” The best of them, however, will know that, while they are obliged to play the superficial rankings PR game, what really matters is what goes on day in, day out, in their labs and classrooms, and in the hearts, minds and achievements of their students.
Jonathan Gornall is a British journalist, formerly with The Times, who has lived and worked in the Middle East and is now based in the UK. This article was provided by Syndication Bureau, which holds copyright.