How Can Journal Editors Fight Bias in Polarized Scientific Communities?

By Brian D. Earp

In a recent issue of the Journal of Medical Ethics, Thomas Ploug and Søren Holm point out that scientific communities can sometimes get pretty polarized. This happens when two different groups of researchers consistently argue for (more or less) opposite positions on some hot-button empirical issue.

The examples they give are debates over the merits of breast cancer screening and the advisability of prescribing statins to people at low risk of heart disease. Other examples come easily to mind. The one that pops into my head is the debate over the health benefits vs. risks of male circumcision—which I’ve covered in some detail elsewhere (see the further reading below).

When I first started writing about this issue, I was pretty “polarized” myself. But I’ve tried to step back over the years to look for middle ground. Once you realize that your arguments are getting too one-sided, it’s hard to go on producing them without making some adjustments. At least, not without losing credibility, and no small measure of self-respect.

This point will become important later on.

Nota bene! According to Ploug and Holm, disagreement is not the same as polarization. Instead, polarization only happens when researchers:

(1) Begin to self-identify as proponents of a particular position that needs to be strongly defended beyond what is supported by the data, and

(2) Begin to discount arguments and data that would normally be taken as important in a scientific debate.

But wait a minute. Isn’t there something peculiar about point number (1)?

On the one hand, it’s framed in terms of self-identification, as in: “I see myself as a proponent of a particular position that needs to be strongly defended.” OK, that much makes sense. But then the criterion makes it sound as though this position-defending has to go “beyond what is supported by the data.”

But who would self-identify as someone who makes inadequately supported arguments?

We might chalk this up to ambiguous phrasing. Maybe the authors mean that (in order for polarization to be diagnosed) researchers have to self-identify as “proponents of a particular position,” while the part about going “beyond the data” is what an objective third party would say about the researchers (even if that’s not what they would say about themselves). It’s hard to know for sure.

But the issue of self-identification is going to come up again in a minute, because I think it poses a big problem for Ploug and Holm’s ultimate proposal for how to combat polarization. To see why this is the case, though, I have to say a little bit more about what their overall suggestion is in the first place.

Polarization as a conflict of interest

Ploug and Holm’s major suggestion is this: the polarization of a scientific community can generate conflicts of interest for particular researchers. Specifically:

[T]he threat is that a polarised group may nourish an interest in advancing the position and views of the group, and that this interest may come to be a main criterion and goal for the choice of methods, the reporting of findings and the provision of policy advice. This interest may well be based on an honest conviction that one is right and thus not be in any way morally reprehensible, but even honestly held convictions can introduce potential biases in research and reporting.

That would be a serious problem. As Ploug and Holm explain, it could “threaten the objectivity of science, and may in turn bias public debate and political decision-making.”

So they are onto something really important. In fact, in many areas of biomedicine (as well as in other scientific fields), you often get the feeling that a particular group of researchers (whether they’re direct collaborators or not) is ultimately more interested in scoring points for its “side” than in getting to the bottom of a genuine dispute.

One situation in which this can happen is when you have a tricky moral (or political) question hanging in the balance—so that individual studies start to look like so many chess pieces. This definitely happens in the debate over male circumcision. Since it’s a religious ritual for some groups—and one that is at least prima facie harmful—it has become very important (for some researchers) to show that “health benefits” can be ascribed to it, because these benefits can then be used to mount a “secular” defense of the practice.

On the other side, you have moral and even human rights objections to circumcision, which are a much easier “sell” if you can demonstrate harm. The result is a very strange cocktail of religion, science, and ethics in the circumcision literature (which you start to figure out if you dig deep enough into it). There is no such thing as a “neutral” publication about circumcision.

Of course, simple career interests can play a role here, too: the need to save face by defending your prior work, or the work of your friends or ideological allies. There are many other factors as well, and they can apply to any contested topic in science or medicine.

So polarization is a genuine problem. How do Ploug and Holm propose to resolve it?

A simple solution?

Their basic suggestion is that researchers should self-report polarization as a “conflict of interest” on the standard forms they fill out when submitting their papers. They might end up writing something like this:

  1. This article reports research in a polarised field.
  2. The research group I/we belong to generally believe that the intervention we have researched should/should not be introduced in healthcare.

Is this a promising solution to the problem of polarization?

Probably not

Imagine that you are a researcher with enough self-awareness and personal integrity to identify yourself as “polarized” on a conflict of interest disclosure form (if in fact that’s what you are).

How likely is it, in this scenario, that you are also the sort of person who would conduct polarized research—and write up polarized articles—to begin with? Not very likely, I think.

Like I said before, once you realize that you’re getting too dogmatic about pressing a particular viewpoint (as in: failing to seriously engage with decent points from the other side), you can’t just go on submitting the same sorts of papers, as though this realization had no force. At least, you can’t if you have any sense of personal integrity—which is precisely what (self) disclosure of conflicts of interest requires.

A role for editors?

So what about shifting the onus to editors? It seems to me that any journal editor who is responsible for making a publication decision about a particular manuscript should know at least enough about the field in question to judge whether it’s a polarized area.

I’m not saying they have to be experts in every field.

But if they don’t know enough about the subject of the manuscript they’re handling to assess whether polarization is an issue, then I don’t see how they could be qualified to make the other sorts of important assessments that are needed to, say, make a recommendation about publication based on the referees’ reports.

So, in practical terms, if they aren’t sure whether the subject is polarized, they should probably recuse themselves from evaluating the manuscript and send it to an editor who knows more about the field. On the other hand, if they can assess polarization—and if the manuscript sits at one extreme pole—they can choose from the following options:

  • (a) encourage the author(s) to re-submit the manuscript in a less polarized form (i.e., by taking more seriously the best arguments and data from the other side and responding to them in a charitable fashion)
  • (b) invite a commentary or response paper (prior to publication) from a respectable researcher on the “other side”
  • (c) publish the paper as it is, but with an editorial statement alerting the reader to the polarized nature of the research and/or its author(s) (perhaps with a list of references to credible opposing arguments)
  • (d) some combination of the above.

Conclusion

It shouldn’t be the responsibility of individual researchers to “out” themselves—on a conflict of interest disclosure form—as making inadequately supported arguments (remember: this is built into the very definition of polarization). After all, anyone with the integrity to do this would not be making such arguments in the first place!

Instead, journal editors who are directly handling manuscripts need to make sure that they know at least enough about the relevant field of research to judge whether it is polarized—and then let their readers in on their assessment.


Target paper

Ploug, T., & Holm, S. (2015). Conflict of interest disclosure and the polarisation of scientific communities. Journal of Medical Ethics, 41, 356-358.

Further reading

Earp, B. D. (2015). Do the benefits of male circumcision outweigh the risks? A critique of the proposed CDC guidelines. Frontiers in Pediatrics, 3(18), 1-6.

Earp, B. D. (2015). Sex and circumcision. American Journal of Bioethics, 15(2), 43-45.

Earp, B. D., & Darby, R. (2015). Does science support infant circumcision? A skeptical reply to Brian Morris. The Skeptic, in press.

Goldman, R. (2004). Circumcision policy: A psychosocial perspective. Pediatrics and Child Health, 9(9), 630-633.

About the author: 

Brian D. Earp is a researcher in science and ethics at the University of Oxford, and an Associate Editor at the Journal of Medical Ethics. He blogs regularly at the Practical Ethics blog hosted by the Uehiro Centre for Practical Ethics at the University of Oxford, and contributes a monthly blog here at the JME Blog as well. Follow Brian on Twitter at @briandavidearp.

* Note that this entry is being cross-posted at the Practical Ethics blog.
