Interview with Ben Sowter, Head of Rankings, Research and Analysis at QS Network
1. How did the idea of a ranking come about? Who took the initiative, the Times or QS?
I think it was a co-creation in 2004 between the managing director of QS and the then editor of the Times Higher, who were seated next to each other at a dinner, started talking, and the idea emerged from that conversation. So I wouldn't really allocate the initiative to one or the other party; I guess we have equal "blame" for the scale of the project.
2. Does QS bear most of the responsibility for the ranking?
Well, that's not quite right. Initially, the Times Higher, who had been involved in other ranking projects and had commented on other ranking projects over the years, had a very significant involvement in the development of the methodology. So, when we came up with the indicators that we are using and when the weightings were decided upon, they were very heavily involved in that.
Obviously, once the methodology is in place and you're moving along, there's less requirement for them to get involved, because you're doing the same thing. But even now, at the initiation of each year's project, the Times Higher and the QS team get together and talk about what additional angles we can take, what other indicators we might be able to include, and whether the weightings are still pertinent and relevant. And it's the Times Higher that ultimately decides upon the relative weightings between the different criteria, not QS.
3. There were changes in methodology last year. Who made that decision?
Although they're a little more detailed, the criteria remain the same. What has happened is, firstly, that we've clarified some of the definitions and now collect some of the data more accurately. For example, if we ask a university to provide us with the number of students they have, some of them will give us the number of full-time students, some will give us the total number of students, some will include distance learning, some will include part-time students, and some will give us the full-time equivalent number. Obviously, when we compile a large list of student numbers from universities around the world, if they're not compiled according to the same definition, then we'll potentially have some problems with how the results are put together. So what we have done on that front is to require one specific number and to come up with a way to calculate it where it's not available. We are still asking for student numbers; we're just getting a little better at asking for exactly the information that we require.
In other areas, we have changed the source of our citations data, for a number of reasons, one being that the new source offers data on a much larger number of universities.
And then the other thing that we've done is to introduce a Z-score methodology for aggregating the different indicators into the overall score. That is designed both to stabilize the rankings, which have been a bit more volatile than we would like, and to apply the weightings transparently across the whole data set.
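The Z-score aggregation described here can be sketched in a few lines: each indicator is standardized to mean 0 and standard deviation 1 before the weights are applied, so no single indicator's raw scale dominates the overall score. The indicator names, values, and weights below are purely illustrative assumptions, not QS's actual data or weightings.

```python
from statistics import mean, stdev

def z_scores(values):
    """Standardize raw indicator values to mean 0, standard deviation 1."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

# Hypothetical raw scores for three universities on two indicators,
# each paired with an illustrative weight (these are NOT QS's figures).
indicators = {
    "peer_review": ([78.0, 65.0, 92.0], 0.4),
    "citations":   ([12.5, 30.1, 22.4], 0.2),
}

n = 3
overall = [0.0] * n
for raw, weight in indicators.values():
    z = z_scores(raw)
    for i in range(n):
        overall[i] += weight * z[i]

# Order universities by their weighted sum of standardized scores.
ranking = sorted(range(n), key=lambda i: overall[i], reverse=True)
```

Because each indicator is centred and rescaled before weighting, a university's position changes only relative to how far it sits from the field's average, which is what smooths out the year-on-year volatility mentioned above.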
4. Is it the universities themselves that provide you with the main information? How do you verify it?
We get the information through a third party called Evidence, who interpret the Thomson database on our behalf, and what we found was that a lot of universities were dissatisfied because the numbers disagreed with their own perception of what the numbers should be. So what we chose to do early on this year, when we were still planning to use the Thomson database as a source, was to allow universities to give us the numbers that they thought were right. Not because we were going to use them for the ranking, because obviously they could submit any number and verifying it would be difficult, but the objective was to collect those numbers in order to verify the numbers that we were getting from Thomson.
What we do next year is still under consideration, but certainly we're going to be working with the universities to identify all of the different names under which their research might be published.
5. Do you consider the Shanghai ranking a competitor to the THES-QS?
Well, we don't see it as competition. Actually, we see it as making the whole space work. If there was only one ranking out there, there would be a lot less cause for debate, consideration and decision. What we would like to say, and we say it quite up front, is that rankings should only be the first consideration in making decisions about universities, and they should never be the last. We encourage people to do a lot more research on universities, to try to meet people face to face, and to try to understand who's strong in the specific discipline that they want. So we definitely encourage people to only use us as an initial reference or a short-listing tool. And the fact that there's another ranking out there crystallizes that, because the two rankings come up with very different results, and what that immediately does is make people think, "OK, how are these different, what are they measuring, what makes them different?", and that educates them toward a decision.
That’s one of the reasons why we're working towards building a personalized ranking system where people can come and use our data to develop a ranking of universities that looks at what is important to them specifically in terms of university search, or whatever.
6. So there is no ideal ranking based on objective criteria?
No, and the main reason for that is that you just can't. I mean, what is an objective criterion? Even if each individual criterion were absolutely objective, the selection of those criteria is subjective. Plus there are also constraints on the availability of information. You might want to collect information on the return on investment of a particular university degree, for instance, but you've got economic factors to take into account, and you've got the fact that some universities give their education away for free and some are very expensive. You've got the fact that an institution that specializes in business, say, will have a higher return on investment, not because it has anything to do with the quality of the institution, but because business graduates are maybe paid more than medical graduates, for example. There are so many factors that influence each individual criterion, before you even talk about the subjective choices about how to combine those criteria.
So the ultimate answer to that question, I think, is that yes, there is an ideal ranking methodology out there for each individual person who cares about the answer. Everybody is different, every university is different, and every criterion is different, and the only way to come up with a ranking at all is to make those subjective decisions about what to measure, and how, in assessing the strengths of institutions.
7. Do you think media presence can be a criterion of a university's quality?
Obviously this is a great example that also speaks to the previous question. Because yes, it could. But if it were a criterion, would that encourage institutions to engage the media in non-constructive ways? That's question number one. Secondly, the measurability of that is a challenge. I am not saying it's impossible, but it's difficult: what media do you include, what's important, what's not? How do you measure and track all that media in the space of only a year that you've got to compile the rankings each time? And what period of time do you measure it over? In principle, yes, it could be a criterion. In practice I think it would be very difficult to devise a methodology to effectively measure it.
8. Are universities brands? Can we speak of an international market for universities?
Yes, we are definitely seeing a trend suggesting that universities are casting their eyes overseas, and also that universities are becoming increasingly corporate in nature, beginning to use techniques more conventionally reserved for the corporate world, such as marketing and branding. So you begin to see logo designs, consistent business cards, consistent messaging, and consistent brand treatment in certain universities. Others are weaker and lagging behind, but it's becoming a real concern for them.
And then internationally, more and more students, postgraduates in particular, are casting their eyes overseas at potential study choices. And international cooperation between universities, exchange programs, and all these things are becoming more and more important as components to attract international students and international faculty, and indeed international investment into universities in various places.
But in many cases they are also competing on a specialization-by-specialization basis. Institutions like LSE don't necessarily compete with Harvard as an entire university, but they would like to compete with Harvard in the social sciences, for example, and that's their focus. So that is definitely taking place.
What we're looking to do in that area in the future is potentially to start evaluating universities' partnerships, potentially the ranking of the universities they are partnering with, potentially also including exchange programs and faculty exchange programs to sort of diversify that international indicator.
9. How exactly does the peer review criterion work?
Essentially, what we do is survey academics around the world: we hand out an invitation to complete the survey to a large number of academics, in the region of about 200,000, through various databases.
So what we do each year is invite them to complete an online survey, which identifies who they are, which university they are at, what subject area they express most familiarity with, and what regions they know the most about in terms of university strength. Based on those questions, they get either one question or a series of questions asking them to evaluate universities for research in their broad subject area of expertise. And that simply invites them to select up to 30 universities they consider excellent in their area, from a list that is devised from where their geographical knowledge resides.
10. What is the response rate to this survey?
You have seen the numbers: we get 5,100 or so responses over the three years, and we send out about 200,000 e-mails. Sometimes when you get invited to a survey, for whatever reason, they pursue you until you fill it out, right? You get e-mail after e-mail trying to get you to respond. But we specifically only send out one, and at most two, e-mails, because we don't want to force people to give responses that they haven't thought about, if that makes sense.
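For reference, the figures quoted here imply a response rate that is easy to work out; a quick check using only the numbers given in the answer above:

```python
# Figures quoted in the interview: ~5,100 responses over three years
# from roughly 200,000 e-mail invitations.
responses = 5_100
invitations = 200_000

rate = responses / invitations
print(f"Implied response rate: {rate:.2%}")
```

That works out to about 2.55%, which is consistent with a single-invitation (at most two) e-mail policy rather than repeated follow-ups.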
11. Who are QS's main clients?
QS is a very diverse organization, with in the region of a hundred employees today, and it has been in existence since 1990. In terms of full-time staff, there are about three people who work on the rankings. So that gives you an idea of the scale of operations at QS; the rankings are not its main activity. We run education fairs, we produce publications, we run websites, we do a whole range of things, principally in the education space at postgraduate and MBA level, but also in the careers space, trying to help MBA graduates and other graduates find careers once they complete their education. So in terms of main customers, universities and business schools are our main customers, but not directly in relation to the rankings. And the research team is entirely independent of the interests of the clients for those services.
12. In general, how do you respond to criticism of the ranking?
Frankly, I think it is brilliant. If we weren't being criticized, I would think there was something very, very badly wrong. Obviously we have taken a particular approach to come up with a ranking of universities, and it is a highly complex consideration.
What criticism does is invite users and readers to think twice about making decisions too quickly on the basis of the ranking. And we don't want decisions made too quickly on the basis of the ranking, because anybody should be doing a lot more reading and a lot more research before they make a final decision. We try to be as upfront and transparent as possible about how we are doing it, what we are doing, and why we do it. And wherever we can, we try to respond to criticism as best we can. A lot of the enhancements that I spoke about at the beginning of the conversation, that we made this year, were in direct response to some of that criticism. If there's nothing we can do about something, we can't do anything about it. But where we can, we are trying to improve our service and our offering.
13. How do you explain universities' changes from one year to the next?
Separating ourselves from that particular case for a moment, we have already spoken to a number of people at Sciences Po to explain the specifics of what might have contributed to Sciences Po's situation. But looking at it more generally over the years, a significant problem up to this year has been the volatility of the ranking. You could go up by, say, 10% in international students, which is only a 5% indicator, and if you happened to be at the right point of the curve, that could make a significant difference to your overall ranking, even though nobody would actually say that improving your international student numbers by 10% represented any significant improvement in quality as an institution as a whole. So the implementation of Z-scores this year, even though it has perhaps contributed to some of the most profound changes this year, is actually designed to help stabilize the rankings and make those changes much smoother year on year. So if our simulations are accurate, we won't see institutions moving by 150-200 places next year.
Obviously it called into question the validity of the ranking, because people don't believe that, if you were able to genuinely and objectively measure university strength, it would in any way be possible for a university to improve itself by a hundred places in world terms year on year. It is a highly complex, interdependent system, which has led to a fairly volatile set of final results, which hopefully some of the changes we have made will smooth considerably, making the ranking a more reliable benchmark.
14. Should rankings influence universities' decisions?
In general, no, I don't think so, but in certain cases what we have noticed is that rankings have alerted some universities to aspects of their policies that they might have let slide a little bit. But rankings shouldn't directly influence policy. If you look at the peer review, and a university decides that, as a result of not doing as well as they hoped in it, they are going to get more of their top researchers to publish in top journals, get more of their research published in English, and get more of their academics represented and speaking at international conferences and getting their work out there, then I'd say that overall that's a positive thing. But that is slightly more complex than a simple response to the ranking; it is a response that most people would suggest would be sound strategic action anyway. You wouldn't necessarily consider it healthy to respond too strongly to the rankings in a scenario where a university has a mandate that reaches much more broadly than the measures the ranking is able to evaluate.