Authors evaluating their own studies in meta-analyses is common, but is it problematic?
Systematic reviewers and meta-analysts are widely seen as arbiters of the state of knowledge. That influence makes conflict of interest an acute issue for them in general. Yet on top of the usual author conflicts, they can have a type of potential conflict that’s totally meta and unique to them: being an author of the very studies they are choosing and weighing up. I think it’s problematic, and the systematic reviewing community has a blind spot about it.
When people are reviewing their own studies, it feels to me like an echo chamber rather than an objective analysis of a body of evidence. Are the people responsible for the research really the ones most likely to keep an open mind, and to see whether an entire field has overlooked critical issues or keeps making the same mistakes?
John Ioannidis and Peter Gøtzsche have written about experiencing this problem as co-authors of meta-analyses, observing that “Primary authors are likely to defend their results and see the meta-analysis as an opportunity to advance their views.”
From my point of view, author bias of this sort can just about scream out from a systematic review. When authors of a methodologically flawed study nevertheless rate their work as high quality in a subsequent systematic review, that can distort the review’s findings.*
It’s tough, because I can see the value of researchers systematically reviewing research papers to stay on top of the evidence before doing their next study. And people who have done the primary studies obviously have a lot of insider knowledge. Yet it seems to me that people underestimate how subjective many of the steps in systematic reviewing and meta-analysis are. And so they underestimate, in turn, the potential for being taken off course by confirmation bias or allegiance bias.
Even financial conflicts of interest haven’t been studied much in systematic review authorship, so we don’t know much about whether any conflicts translate to serious bias in systematic reviews. One thing we do know, though, is that authors evaluating their own studies is common. And that can indirectly be a financial interest, too, if the systematic review helps them get funding for the type of study the review says is needed.
A 2016 study found that 9% of a sample of 100 Cochrane reviews disclosed that one or more of the review’s authors was also an author of a study or studies included in the review, and 15% had authors of relevant studies that were not included in the review. Cochrane authors are supposed to disclose this; requiring it is unusual for a journal.
The authors of that study didn’t trawl through the included studies looking for self-authorship that hadn’t been disclosed. Another group did, though, in their 2016 study of 95 systematic reviews of psychotherapies. They found that 34 of the reviews (36%) included studies authored by one or more of the systematic review authors. The relationship was disclosed in only two reviews, which were both published in journals with policies requiring it to be disclosed. I think all journals should require this disclosure.
These authors also rated the amount of spin in the reviews’ conclusions: they found it in 27 (28%) of the 95 reviews. There was some suggestion that spin was more likely when authors were reviewing their own studies [OR=2.08 (95% CI 0.83 to 5.18)], but the confidence interval spans 1, so we can’t know for sure without bigger studies in a variety of subject areas.
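For readers less used to odds ratios, the statistic and its interval can be reproduced from a 2×2 table. The counts below are back-calculated from the reported totals (95 reviews, 34 with self-authored studies, 27 with spin) so that they match the published figures; the paper isn’t quoted here with this breakdown, so treat the split as an illustration. A minimal sketch in Python:

```python
from math import exp, log, sqrt

# Hypothetical 2x2 counts, back-calculated from the reported totals
# (95 reviews, 34 including self-authored studies, 27 with spin).
# They reproduce the published OR and CI, but this exact split is an
# illustration, not a figure quoted from the paper.
a, b = 13, 21   # spin / no spin among reviews with self-authored studies
c, d = 14, 47   # spin / no spin among the remaining reviews

or_ = (a * d) / (b * c)                 # odds ratio
se = sqrt(1/a + 1/b + 1/c + 1/d)        # standard error of log(OR), Wald method
lo = exp(log(or_) - 1.96 * se)          # lower bound of the 95% CI
hi = exp(log(or_) + 1.96 * se)          # upper bound of the 95% CI

print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# -> OR = 2.08, 95% CI 0.83 to 5.18
```

Because the interval runs from well below 1 to well above it, these data are compatible both with no association and with a substantially raised odds of spin, which is exactly why larger studies are needed.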
It would help, too, to have research on how often the reviewers’ conclusions were in line with those of their own studies in contested areas, for example, and whether their own studies get a critical enough assessment. We need more meta-research on this type of meta-conflict. The little we have suggests that there could be a problem here. And there’s a big problem with lack of awareness and disclosure.
Viswanathan and colleagues argued that there are several options for managing non-financial conflicts of interest in systematic reviewing: “disclosure followed by no change in the systematic review team or activities, inclusion on the team along with other members with differing viewpoints to ensure diverse perspectives, exclusion from certain activities, and exclusion from the project entirely.”
It’s not clear, though, that any of this nullifies the impact. “Exclusion from certain activities” seems to me to be the obvious minimum. If an author has done any of the included studies—or any that might be eligible—I think readers deserve to be sure that they had no role in choosing the studies, no role in determining the criteria by which quality will be assessed, and no role in assessing the quality. Come to that, what about choosing the outcomes, extracting data, and making recommendations about future research? And that’s pretty much most of the game. The more I think about it, the more it seems to me that the conflict is too great for a study’s author to have much of a role in a systematic review at all.
*I have written about an example of a deeply flawed study being rated as high quality by one of its authors in a systematic review, and the influence that had here.
Hilda Bastian is a scientist, blogger, and cartoonist. She is currently studying some factors affecting the validity of systematic reviews. Twitter @hildabast
Competing interests: I am currently working on a PhD on some issues affecting the validity of systematic reviews. I have never accepted funding from a manufacturer of a drug, device, or similar health product. More than 20 years ago, I received funding from a not for profit health insurer, and from a private health insurers’ association for participation in a conference.
This is an adaptation of an article that was first published on the PLOS Blogs Network, Absolutely Maybe.