Apart from ranking universities as a whole, the recently released QS World University Rankings also include a list of the top 200 medical and life sciences schools in the world.
In theory, rankings seem like a good idea. After all, parents of budding doctors will be reassured that their offspring will benefit from excellent training and career prospects, and prospective and current students can bolster their self-esteem and motivation to face the hardships of gruelling medical programmes, knowing it will pay off in the future. And of course, the schools themselves get a lot of free publicity, which in turn may attract more students, more faculty staff, and more funds.
With a few outliers in Asia and Australia (like Melbourne, Singapore, or Tokyo), the “usual suspects,” namely North American and British medical schools, lead the pack, with Harvard Medical School at the helm, followed in the second and third tiers by a mix of other North American, Northern and Southern European, Asian, and Australasian schools.
It doesn’t surprise me to see traditional centres of excellence like Singapore or Hong Kong taking up several respectable positions, but you could ask how some centres from mainland China and Thailand have elbowed their way into the list ahead of many institutions in the Western world. Likewise, I was surprised not to see certain schools from countries like Brazil or Argentina, which have an excellent track record of medical education. Furthermore, will we be seeing schools from the Middle East and Africa surfacing in the future?
There’s much to be said about both university rankings and medical research. At first glance, it all seems to make perfect sense, and I reckon most people would agree with that “natural pecking order.” But if we scrutinise the methodology a little, we realise that treating these rankings as the most reliable source of information may be as misleading as treating a cross-sectional study as the most solid type of medical research. After all, these rankings are based on three criteria: academic reputation (which carries around 50% of the overall weighting for medical schools), employer reputation, and citation rates per paper. The academic and employer reputation scores are obtained through surveys of academics and potential employers, with 659 and 346 respondents, respectively. Many people have a similar or greater number of friends and followers on Facebook and Twitter whom they could survey informally. So does this sample have enough “power”?
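To put those sample sizes in perspective, here is a minimal back-of-the-envelope sketch, assuming, purely for illustration, simple random sampling and a 95% confidence level (neither of which an opt-in reputation survey can actually claim), of the worst-case margin of error each survey would carry:

```python
import math

Z_95 = 1.96  # z-score for a 95% confidence interval

def margin_of_error(n: int, p: float = 0.5) -> float:
    """Worst-case (p = 0.5) margin of error for a proportion
    estimated from n responses, under simple random sampling."""
    return Z_95 * math.sqrt(p * (1 - p) / n)

for label, n in [("academic reputation survey", 659),
                 ("employer reputation survey", 346)]:
    print(f"{label}: n={n}, margin of error ~ +/-{margin_of_error(n):.1%}")

# academic reputation survey: n=659, margin of error ~ +/-3.8%
# employer reputation survey: n=346, margin of error ~ +/-5.3%
```

Even under those generous assumptions, the smaller survey carries a margin of error above five percentage points, and the real weakness, self-selection of respondents rather than random sampling, can only make matters worse.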
These rankings say little or nothing about the quality of a school’s clinical education, or about whether these schools will train competent and humane doctors. Indeed, in the survey of academics, the methodology states that “respondents are asked to list up to ten domestic and thirty international institutions that they consider excellent for research.” Much of what we do as clinicians is carried out in an environment of extreme privacy and behind closed doors, so it is of course not easy to compare which schools teach clinical or communication skills better, or which provide enough opportunities for students to gain, say, sufficient operating or procedural experience. Moreover, I’m sure we would find significant differences between centres that employ similar methodologies, like problem based learning, owing to differences in local working culture and in the quality and experience of the faculty.
In all, even though they may have some value and may more or less reflect the intuitive, preconceived idea that many people already hold, these rankings still read like a highly biased piece of medical research. They are biased because they centre too much on the research potential of schools rather than on the excellence of their pre-clinical and clinical training. The leading schools are already strong, prestigious “brands” in their own right, which may tend to inflate their perceived value to the detriment of less marketed or less well known schools that may be doing a great job.
Most medical graduates these days will still end up practising clinical medicine most of the time, perhaps doing some research as well, but definitely not full time research. So if we want more accurate rankings that reflect the clinical and humanistic qualities of the doctors being trained, we definitely need alternatives to this sort of cross-sectional style of research. I’m sure we could easily do better by resorting to other types of indicators and to qualitative and quantitative methodologies. We might find some surprises then.
Tiago Villanueva is a general practitioner based in Lisbon, Portugal, and a former BMJ Clegg Scholar and editor, studentBMJ