Focus on: Do international university rankings serve a useful purpose?

From Eurydice

Date of publication: 14 September 2017

There are a few great orchestras in the world, thank goodness. Although some people do put them in ranking order, it's not like a snooker match. Each orchestra has different things to offer. – Simon Rattle

For a few years now, international university rankings, such as the Times Higher Education (THE) World University Rankings, the QS World University Rankings, and the Shanghai Academic Ranking of World Universities, have been very influential in shaping university priorities and even government policy on higher education. But does the exercise of placing universities in a ranking order, like football teams in a league, make sense? Or would it be preferable to recognise, in the spirit of Simon Rattle's comment on orchestras, that higher education institutions all have different qualities?

Whenever one of the annual international university rankings is published, a lot of ink is devoted to how universities perform and what that means for a country's education system, with positions in international rankings presented as a matter of national pride and prestige. Over the years, some universities have adjusted their operations to match the criteria used in rankings, while governments have also acted to improve their institutions' place on various lists. Russia, for example, recently selected 15 universities to receive special grants to improve their performance against ranking criteria, while Germany, Japan and Singapore are among the countries that have invested in programmes designed to create so-called 'world-class' universities.

The publishers of rankings claim to be providing reliable information to help students, policy makers and other stakeholders make informed decisions. Times Higher Education, for example, argues that its World University Rankings can help students choose universities, universities find partners, governments determine how to fund higher education, and employers recruit. It also claims that university rankings are unavoidable, and that they provide information in a world that lacks global educational oversight.

Critics, on the other hand, point out that the lack of global oversight in education means that there is no reliable way of collecting comparable data to rank universities. For example, should a claim to list the 'best' universities be taken seriously if there are no meaningful criteria about educating students – arguably the most important role of universities? If the criteria are not relevant to most stakeholders, a ranking is likely to bias and distort thinking about higher education – misleading the very people it claims to inform.

Other objections are more technical. Research has shown that ranking methodologies are opaque and difficult to replicate. The quality of the underlying data cannot always be verified, and some universities may even deliberately manipulate their figures.

Given these limitations, how should rankings be used? People probably notice rankings because their information is easy to report and to digest, and in a complex world, such simple – even simplistic – information can be comforting. So perhaps well-ranked universities should be more wary of marketing themselves as the 'x best university' when they are aware that the foundations for the claim are rather hollow. Equally, institutions outside the rankings – but still providing life-changing opportunities to many people – should not feel that they have to justify their existence.

Rankings should also be used with care in other policy areas. For example, Denmark and the Netherlands are among countries that use rankings as a basis for immigration decisions, with graduates from the highest-ranked universities awarded more points. Rankings thus provide a short cut to identify 'highly skilled migrants'. The problem is that, as rankings focus on the 'top' 500 or 1 000 world universities, they completely ignore more than 25 000 other higher education institutions, and the approach therefore excludes many potentially highly skilled people.

The fact that people use rankings – despite their faults – highlights the need to develop better information about higher education. One alternative that has been supported by the European Commission is U-Multirank, a multidimensional tool which provides information across five areas of university activity, allowing users to set their own priorities to identify suitable universities. Unlike international rankings, the tool does not use weighting to provide a common list, nor does it focus on research activity. It also compares institutions with similar profiles. While problems of data quality persist, this approach is clearly a major advance on centralised university ranking systems.

In an era of big data, the future no doubt holds the possibility of developing systems that assemble much more data on many different aspects of higher education. But in the meantime, rankings are indeed unlikely to disappear. All that can be hoped is that other forms of information – particularly research on the impact of higher education, concrete statistical data, and the work of education information networks like Eurydice – continue to be developed, promoted and used.


Authors: Kardelen Kala and David Crosier