Such surveys have to be comprehensive
Phil Baty, editor, Times Higher Education World University Rankings, talks about the differences in the results. Education | Updated: Sep 22, 2010 09:20 IST
Why is there such a wide gap in the position of the same institution?
The 2010-11 Times Higher Education World University Rankings use a new methodology employing 13 different indicators across five broad categories of criteria. We believe it’s the most sophisticated and robust comparison of global university performance ever produced. It relies less on subjective measures of reputation and puts more weight on hard measures of excellence. It is impossible to compare ranking positions between the previous system and this new one.
What should aspiring international students or researchers make of such different results?
The new methodology that has been introduced for the 2010-11 rankings provides an accurate and reliable picture of global higher education. Some institutions, and even whole countries, have not come out well under the new system. Others look much better. The 2010 THE WUR shows a balanced picture of global universities as institutions of teaching, research and knowledge transfer, and it's designed to inform decisions by students, academics, researchers, policy makers and others.
Because of the change to the methodology, any movement up or down since 2009 cannot be seen as a change in performance by an individual country or institution. We do contend, however, that these tables are realistic, and so in some cases they may deliver an unpleasant wake-up call that the days of trading on reputation alone are coming to an end.
What made THE reduce the weightage given to the international mix (of students and faculty) aspect?
While we think it is important to get a sense of an institution’s diversity and commitment to globalisation, and it is clear that the ability to attract overseas staff and students is indicative of its competitiveness, there were concerns that these indicators were easy for institutions to manipulate, and were not very strong proxies for real excellence. There were complaints that some institutions could deliberately bolster their score in these areas by recruiting large numbers of low-quality students, or even staff. Given there is no way of measuring the quality of a student intake, for example, we felt it was safer to give these indicators a lower weighting.
Why did THE do away with the employer survey (in the reputation category) metric?
We believe it is important to make sure such surveys are highly rigorous and comprehensive. We asked many fundamental questions about the value of employer surveys, and there were concerns about the size of the sample, how representative it was and how inclusive it was. For example, do you look only at major global corporations, and so disadvantage some institutions in terms of geography? Do you look at national employers, or local employers?
How do you select them?
We are looking into such a survey for the future, but we will only conduct one if we feel we can make one that is truly meaningful.
Interviewed by Rahat Bano