I’m much amused by the pious positions taken by researchers and BMJ editors in the spirited dispute over qualitative research. The researchers are upset that The BMJ largely excludes qualitative research, and the editors insist that they do so to provide readers with research more likely to be useful to them. Both sides have hidden, less noble motives.
I was an editor at The BMJ for 25 years, and it took a long time to dawn on me that selecting a handful of research articles from the millions produced each year and sending them to clinicians was a pointless exercise. Despite the elaborate and expensive process of selecting from the 5000 studies we received, we were, when you consider the whole body of research, sending almost random bits of information to clinicians. Few were relevant to them, and even when they were, no sensible clinician would change practice on the basis of a single study.
Fortunately, it didn’t matter much because the clinicians didn’t read them. If you read for 30 minutes a week, the average for most clinicians, it doesn’t make sense to spend that time reading one or perhaps two densely written original research papers. It makes much more sense to read reviews and educational articles, and that’s what clinicians do read if they manage to divert themselves from obituaries, columnists, and fragments of news.
The BMJ editors know how clinicians—the main audience for the paper BMJ—read, and that’s why the current paper version carries just brief summaries of three or four research papers. The person who applied for the editorship of The BMJ at the same time as me in 1990 advocated dropping research papers altogether, and one of the other editors suggested the same while I was the editor. After all, much of the editors’ time was spent working on something that readers rarely read. How could that make sense?
But we didn’t drop the research. Why not? One reason is that much of the prestige of the journal (and its coverage in the mass media) comes from the research, even though the prestige really belongs to the authors, not the journal. More crucially, the income of journals depends on research. The biggest source of income for most scientific journals is the subscriptions (often horribly inflated) that academic libraries pay. And libraries are paying to access the research, not the flimflam that amuses readers. (There is a deep irony here in that academic institutions pay large sums to access material whose value is added not by the journals but by their own employees, who do the research.)
Another factor is that publishers, more so even than editors, are obsessed with the impact factor of their journals. The higher the impact factor of a journal the more likely libraries are to think that they must subscribe to the journal, no matter the price. A higher impact factor also means you have more research submitted, setting up a virtuous circle that leads to a still higher impact factor.
There are many games a journal can play to raise its impact factor, which, you’ll remember, is the number of citations the journal receives in a year to items it published in the previous two years, divided by the number of citable articles it published in those two years. Research articles are always citable. The impact factor rises immediately if the publishers and editors can convince the company that produces it to shrink the denominator by excluding certain categories of article from the citable count. This has happened many times. Another route to a higher impact factor is to publish fewer research articles and concentrate on those likely to be highly cited.
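As a worked illustration with invented numbers (not The BMJ’s actual figures), suppose a journal receives 1000 citations in a year to items published in the previous two years, of which 400 items count as citable:

```latex
\text{impact factor}
  = \frac{\text{citations to items from the previous two years}}
         {\text{citable items from the previous two years}}
  = \frac{1000}{400} = 2.5
```

If the indexer is persuaded to reclassify 100 of those items as non-citable, the citations they attract still count in the numerator while the items themselves vanish from the denominator:

```latex
\frac{1000}{300} \approx 3.3
```

The journal’s output and readership are unchanged, but the headline number has jumped by a third.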
This is where we get to qualitative research. As BMJ editors write in a letter to authors of a rejected qualitative study: “Our research shows that they are not as widely accessed, downloaded, or cited as other research.” More qualitative research means a lower impact factor, which is anathema to publishers and most editors.
But why should qualitative researchers care that The BMJ isn’t interested in their research? There are hundreds of other journals, some of them specialising in qualitative research. They can simply go elsewhere.
Ironically, they too are driven by the impact factor—or prestige—of the journal. Academics are measured by where they publish: the higher the impact factor of the journal, the better they do and the more funding they receive. As I’ve explained above, journals that publish more qualitative research generally have lower impact factors, so qualitative researchers like to get into journals that publish other sorts of studies as well.
The journals with the highest impact factors—Nature, Science, the New England Journal of Medicine, and the Lancet—hardly ever publish qualitative research, but The BMJ was a place where qualitative researchers could hope to publish. Now even that door has largely closed, and this upsets them.
Maybe I’m too cynical, but I think that both sides in this debate are producing fine arguments while being less open about their uglier, self-serving motives.
Richard Smith was the editor of The BMJ until 2004.
Competing interest: RS was the editor of The BMJ and now receives a pension from the BMA, the sole owner of The BMJ.