Measure for Measure
Ruud Hendrickx is a lecturer at the Department of Management and the Department of Econometrics and OR. His fields of research are cooperative and non-cooperative game theory, games of skill and games of chance, and operations research.
Measuring things is part and parcel of scientific endeavor. Measuring is an integral part of an academic’s genetic makeup. Hence, it should not come as a surprise that one of academia’s favorite hobbies is measuring… itself.
That the world at large has an incentive to measure the relative performance of universities is clear. Prospective students want to investigate where best to study. Financiers (in most countries, taxpayers) want to ascertain whether they get value for money. Regulators want to know whether appropriate standards are maintained. There are many stakeholders, and it is only proper that they receive the information they need.
More problematic are recent attempts to somehow impose yardsticks of (allegedly) absolute performance. Measures like the share of students graduating within a given period of time sound like useful information for the stakeholders involved. But when the Ministry of Education imposes such a measure as an indiscriminate hurdle for all programs, it does not take a very extensive knowledge of moral hazard to conclude that this creates perverse incentives. Does anyone remember InHolland?
It is not only outside stakeholders that measure academic performance, but also academia itself, as stated in the introduction. This might be regarded as a hobby (if measuring is your forte, why not apply it to an environment you are familiar with?), but there are also other forces at work.
Let us start with education. At Tilburg University, education is almost completely measured by the students. First and foremost, there are the formal evaluation forms. Nowadays these are available only online, which leads to a worryingly low response rate. In addition to these forms, there are sounding boards and other committees involving students that keep a check on teaching performance. In the Econometrics and OR program, we teachers are lucky to receive additional feedback from students via Asset | Econometrics, which is usually much more informative than the bare numbers produced by the official forms.
Research output is also measured. The challenge in measuring the quality of a research paper is obvious: to judge whether a paper on, say, finance is better than a paper on macroeconomics, you have to be an expert in both fields. Since the variety of fields within economics and business is very large, our faculty uses a proxy for the quality of a paper, namely the quality of the scientific journal in which it is published. But this merely shifts the problem: comparing a journal in finance with a journal in macro still boils down to comparing apples and oranges.
A few years and a couple of name changes ago, our faculty (before we were relegated to a “school” – yuk) very fittingly used pictures of various types of fruit on its website. I never understood at the time what the fruit was doing there, but now I see that it symbolized the various strands of research that the faculty pretends to be able to compare and translate into lists and rankings. Unfortunately, I still do not know whether I fall within the category of apples or oranges.
Whatever your view on the merit of rankings, I reckon you will agree that a deluge of different rankings serves no purpose whatsoever. A constant stream of messages with variations on the theme “Wow! We found another ranking in which we do well!” is not only pathetic, but after a while it blunts its own message. How much more convincing it would be to leave the measuring to outside stakeholders and let the results speak for themselves. But, hey, probably the fruit of marketing is not part of my daily diet.
Text by: Ruud Hendrickx