by Dr. Wong Kah Keng
The rapid growth in demand for access to higher education has fuelled a worldwide appetite for information on academic quality. University rankings have so far served as a rough guide to help prospective students make informed choices. Moreover, the lucrative business of higher education has spurred global competition to attract students of the highest quantity and quality, and university rankings have assumed a key role in this pursuit.
However, university rankings continue to face criticism, and there is no international consensus on how teaching and research quality should be measured. All rankings are imperfect. One may want different rankings for different purposes, such as ranking by research versus ranking by teaching. Even for rankings with the same purpose, whether research or teaching, there is no consensus on which factors are important or how to measure and weigh them. The public may dispute the criteria used in any individual ranking, even though both of the following approaches are used in some international rankings:
1) Objective approach: an evidence-based system that gathers a wealth of information on input measures (e.g. entry standards) and output measures (e.g. graduate unemployment rates, quality of scholarly research) and distils it into a ranking. While this approach might get fine-grained details “wrong”, it nevertheless produces a broadly accurate coarse-grained ranking.
2) Subjective approach: a system based on word of mouth, employers’ or students’ feedback, and peer review, all gathered from a limited number of people or selected authorities. This methodology is vulnerable to prejudice and has attracted criticism.
Rankings often end up measuring what is quantifiable rather than what is truly important. Nevertheless, several aspects of university rankings lend them accuracy.
The main strength of rankings is the positive correlation between different rankings based on varying criteria and weights; for instance, the positions of United Kingdom universities correlate well between international and UK-based rankings. The precise rank at which a university appears varies from table to table, but the correlation remains high. One may be unable to tell precisely which university is ranked 30th or 35th in the world, but international rankings are still informative in distinguishing universities in the top 30 from those ranked at, say, 90th and beyond, and more so as the gap widens.
It is also wise “not to let the best be the enemy of the good”: becoming so preoccupied with the imperfections or unfairness of some rankings that one abandons them altogether means losing the helpful information they can impart. Rankings motivate universities to improve their performance, and their achievements are reflected in the next cycle of the various, cumulative rankings. In the absence of rankings, institutions are more likely to drift along in comfortable mediocrity; it is wiser for universities to strive to climb the league tables than to potter along in their own world, blissfully unconcerned about rankings.
University rankings that attract a great deal of public attention might exacerbate competition and rivalry between universities, potentially thwarting cooperation in the process. As far as research is concerned, especially in the biomedical or fundamental sciences, it is a long shot for a laboratory with a limited number of tools, specimens and researchers to produce research of the highest quality. Modern research often requires a collaborative enterprise to reach a far-reaching discovery, and potential collaborations should not be barricaded by issues arising merely from rankings. Institutions on both sides should recognise the mutual benefits of concerted effort and should be at least neutral, if not effusive, when opportunities for collaboration arise.
University rankings also need to be interpreted carefully, and the public should bear in mind that there is no “holy grail” among rankings. The assessment of research quality and quantity reflected in virtually all international rankings overlooks scholarly publications in languages other than English in citation data. Many scientific journals are published in German, Japanese, French, Chinese or Russian, but very few of these papers are considered when interpreting bibliometric data or included in the Science Citation Index (SCI)1, and so they are unlikely to be regarded as meeting the “standards” set by the compilers of university rankings. This language bias has been downplayed by most university rankings.
In addition, funding bodies should not be heavily influenced by a handful of rankings, but should instead assess research groups on a case-by-case basis. No single research group can catapult a university to the top of a league table, nor drag it down to the bottom tier. Generalising about a group’s potential from its university’s position in the rankings is likely to be inaccurate.
Judging from the continued publication of global university rankings for nearly a decade, and the attention they receive each year from the media and the public, university rankings are here to stay. They will continue to shape the landscape of higher education worldwide, affecting not only the decisions made by students and education providers, but also the educational policies set by governments.
Finally, it is indisputable that rankings have motivated attempts to produce more elite universities. However, league tables can reflect the true value of a university only so far: the core motivation that drives an academic to labour on is genuine passion for his or her field of research and teaching, even when the university sits at the top of the leagues. It is hoped that, in the course of improving local universities’ standing in the rankings, the chances of discovering such pools of bona fide talent will also grow.
1. van Leeuwen TN, Moed HF, Tijssen RJW, et al. Language biases in the coverage of the Science Citation Index and its consequences for international comparisons of national research performance. Scientometrics 2001;51:335-346.
About the author
Dr. Wong Kah Keng is the co-founder of Scientific Malaysian and holds a DPhil (Oxon) in cancer biology. He is currently a Principal Investigator in tumour immunology at the Department of Immunology, Universiti Sains Malaysia (USM). He can be contacted at [email protected]
All images in this article were obtained from Flickr.com under the Creative Commons license.