The BMJ Today: Antidepressants, FDA warnings, and suicide under the microscope . . . again

For those who like “journalology,” today’s The BMJ has many of the ingredients for a rich case study.

The latest published letters to the editor are dominated by those taking issue with a previously published research paper.

The paper at issue—published this June and authored by Lu and colleagues—probed whether the Food and Drug Administration’s (FDA) 2003-04 warnings about the risk of suicidality in children and adolescents taking antidepressants had “unintended consequences.” Their research found that they did: the authors concluded that after the FDA warnings, consumption of antidepressants decreased while suicide attempts increased. If true, the FDA warnings had the exact opposite of their intended effect.

But are the findings sound? In seven separate letters, critics challenge the study’s methods and its findings. In particular, the study’s proxy measure for suicide attempts comes under attack. If the proxy is unreliable, so are the results.

“We doubt that poisoning by psychotropic drugs is a ‘validated proxy for suicide attempts,’” write Mark Olfson and Michael Schoenbaum from Columbia University and the National Institute of Mental Health.

Peter Gøtzsche of the Nordic Cochrane Centre calls the outcome variable “a poor surrogate . . . People on SSRIs who attempt suicide don’t usually poison themselves (and cannot really do so with SSRIs), they tend to use violent methods like hanging.”

John Nardo, a retired psychiatrist at Emory University, calls the study “flawed from the start by the absence of usable E codes indicating deliberate self harm,” plus other “unjustifiable assumptions.”

Thomas Moore, of the Institute for Safe Medication Practices, is equally strong in his criticism: “Lu and colleagues’ study contains four substantial flaws, any of which would be fatal . . . ”

In response, Lu and colleagues defended their findings. “We agree with Olfson and Schoenbaum that quasi-experimental or observational studies are open to varying interpretation,” they write, before setting out the arguments that support their favored interpretation of the data.

The discussion raises some fundamental questions that go beyond the specific debate—questions about how knowledge in medicine is produced, disseminated, and, when necessary, corrected. In their letter, Catherine Barber, Deborah Azrael, and Matthew Miller, from Harvard School of Public Health and Northeastern University, say that “the evidence shows no increase in suicide attempts or deaths in young people after the FDA warnings,” clearly suggesting that Lu et al got it wrong. They conclude: “It is important that we get this right; sounding unnecessary alarms does nothing to protect our young people.”

But if the critics are correct, just how do “we get this right?” Who is “we” anyway, and how does one “unsound” an unnecessary alarm?

There is no question that the original paper had impact. The article’s Altmetric score—which claims to track “the buzz around scholarly articles and datasets online”—is 411, putting it among the highest ever scored in the journal. The paper clearly had some traction—far more than letters ever get.

If this were a clinical trial, RIAT might be helpful. For misreported clinical trials, colleagues and I proposed a possible mechanism for correcting the scientific literature called RIAT: Restoring Invisible and Abandoned Trials. The premise is that, by using the underlying data from trials, such as clinical study reports and individual participant data, third parties can help correct the scientific record by republishing misreported trials.

But the Lu et al paper did not report on a trial. It was a retrospective, observational study of healthcare claims data. What is under debate is the appropriateness of the methods, not accusations of misreporting.

I have no doubt that “getting it right” is good advice, but I am not sure anybody has worked out just how to do it. Journals are likely just one of the many actors that need to think deeply about this question. The lay media’s general inability to distinguish between experimental and non-experimental research, and between correlation and causation, coupled with its general lack of interest in what happens to papers post-publication, also seems to be part of the problem. Perhaps The BMJ’s new policy of publishing pre-publication histories and open peer review will aid bloggers, journalists, and others who want to really dig into stories, understand how manuscripts become publications, and help us all “get it right.”

Peter Doshi is an associate editor for The BMJ

Competing interests: I know some of the authors of the published letters referred to in this blog.