This month PLoS Medicine (short for Public Library of Science Medicine), the open access journal with the highest impact factor, will publish a set of articles on research misconduct. The main articles are divided between research misconduct in high-income countries and research misconduct in low- and middle-income countries (LMIC).
I am second author on the LMIC article. My excellent co-authors are from Nigeria, China, and the UK (the BMJ’s own former editor Richard Smith). There was a fifth co-author who would have brought a lot to the paper, but who withdrew from the publication out of concern that the article’s content could affect his or her ability to work in his or her home country. Research misconduct can be a touchy subject, and not everyone wishes to be part of the debate.
There is an assumption that published research studies are good, that being published is enough, on its own, to speak for the quality of the research. Or I should say, among those who publish rarely or not at all, the assumption that research studies are good if they have been accepted for publication, having been critically reviewed during the peer review process.
I deal with this erroneous assumption of quality frequently when people talk about designing or evaluating programs. The fallback statement seems to be, “We will look at the published literature,” as if that alone constitutes quality. My follow-on questions are always, “And what tool will you use to critically appraise the published literature?” and “And how will you identify unpublished studies of the same intervention?”
I realize that my blanket statements must be a buzz kill to those around me, but don’t the people whom I serve deserve this level of scrutiny in decisions about health and health policy? Don’t we all want healthcare and health policy that has gone through similar scrutiny?
Let me tell you now: there is a LOT of rubbish out there, a lot of published studies, even in top journals, in which the data tell us one thing and the authors tell us another in the discussion and conclusion sections. How do I know this? I am a systematic reviewer. Critical appraisal of included studies is a hallmark of systematic review, which involves a great deal more than just meta-analysis of randomized controlled trials.
When I am doing a systematic review, judge someone’s published paper to be of poor quality (in its data collection, data analysis, or the presentation of its results, discussion, and/or conclusions), and then write about that judgment and publish it in my review, it does NOT make me the favorite person of the paper’s authors.
A friend of an author whose work is confusing—the numerical analysis does not add up to what is argued in the discussion and summarized in the conclusion sections—once said to me, “He was up to his elbows in data. You don’t know how hard it was for him to get to where he could write a paper.” And? While I empathize completely as a researcher and author, in my systematic review I can only say that the paper in question does nothing to prove the effectiveness of the intervention.
Could it be research misconduct? It might be, if it was done intentionally, perhaps to promote an intervention for which the evidence is weak. I do not think that is the case with the paper in my systematic review. However, the myth that researchers have no agenda when it comes to publishing is just that: a myth. Most researchers fund their work through competitive grants, so there is a constant fight for funding. How do you generate funds? By publishing your work and building the reputation of your program. A scientist going for publication therefore has an inherent interest in making sure that his or her work appears in the highest-quality peer-reviewed journal, so that it will be seen, read, and cited by others.
Health research is a big business, and there are many scientists and researchers with colossal egos interested in promoting their own agendas. This fact used to floor me, especially in global health, where I had always thought we were all working together to improve the lives of the poor. I was wrong, I was naïve, and I definitely learned the hard way.
Perhaps more relevant to a US or UK audience, I often think of the now discredited work on links between the measles vaccine and autism. This was research misconduct. Yet years after the physician-researcher was discredited and the article retracted, there are well-intentioned but misinformed parents who refuse to immunize their children, putting their children, their neighbors, and the entire population at risk.
It is great to see that research misconduct is getting press time. I am delighted to be part of the discussion. We must be constantly vigilant.
Tracey Pérez Koehlmoos is the special assistant to the assistant commandant of the Marine Corps and senior program liaison for community health integration for the U.S. Marine Corps.
The opinions expressed in this article are her own and in no way reflect the opinions of the U.S. Marine Corps, the Department of Defense, or any other agency.