6 Jan, 12 | by BMJ Group
“Research highlights” is a weekly round-up of research papers appearing in the print BMJ. We start off with this week’s research questions, before providing more detail on some individual research papers and accompanying articles.
- What effect does inclusion of unpublished data have on the results of meta-analyses of drug trials?
- How many clinical trials funded by the US National Institutes of Health are subsequently published?
- Do US drug trials comply with mandatory reporting on ClinicalTrials.gov?
- What is the potential for bias in meta-analyses that use individual participant data?
- Why are reports describing randomised controlled trials sometimes not retrieved in literature searches?
- To what extent do three types of documents for reporting clinical trials (reports posted in trial results registries, clinical study reports submitted to regulatory authorities, and journal publications) provide sufficient information to enable trial evaluation?
Where have all the trials gone?
Back in 1986, oncologist and epidemiologist R John Simes called for prospective registration of all clinical trials in a publicly accessible registry. This, he argued, would aid discovery of ongoing and unpublished trials and would reduce the bias towards positive results in the evidence base.
In 2000 the US National Library of Medicine, as a result of the FDA (Food and Drug Administration) Modernization Act 1997, launched what is now the world’s biggest publicly accessible registry, ClinicalTrials.gov. Many other registries subsequently opened around the world. But registration was voluntary and patchy until the influential International Committee of Medical Journal Editors (ICMJE) ruled in 2005 that clinical trials had to be registered prospectively to qualify for publication in one of its member journals. There were then 12 members, including the BMJ, and now there are 14.
The ICMJE rules greatly increased rates of trial registration, but other journals following the committee’s uniform requirements for manuscripts didn’t have to adopt the rules, and most trialists were still off the hook. Then in 2007 the FDA Amendments Act (FDAAA) made prospective registration at ClinicalTrials.gov mandatory for all clinical trials of drugs, devices, or biological agents with at least one site in the United States (excluding phase I studies and early feasibility trials of devices). The act also required the posting of basic results within one year of the completion of each trial that was registered and ongoing in September 2007, and brought in fines of $10,000 per infringement. So the act had teeth.
Or did it? Andrew Prayle and colleagues’ analysis of ClinicalTrials.gov and the US database of FDA approved drugs, Drugs@FDA, found that only 22% of eligible registered drug trials had results posted at the registry. Studies funded solely by the drug industry complied better than the rest (40% v 9%) but, as Prayle and colleagues put it, “if the reporting rate does not increase, the laudable FDAAA legislation will not achieve its goal of improving the accessibility of trial results.”
Furthermore, the act requires posting only of “basic results” of eligible registered trials. So Joseph Ross and colleagues searched to see whether trials funded by the US National Institutes of Health, registered at ClinicalTrials.gov, and completed at least 30 months earlier had yet been published in peer reviewed journals indexed in Medline. They found publications for fewer than half overall. And, a median of 51 months after trial completion, a third of trials remained unpublished.
Ask a simple question, but can you find the answer?
Doctors systematically explore a patient’s story. They hunt for red flag symptoms, review body systems, and explore hidden agendas. They use various sources: they might glean a collateral history, review notes, or read letters. With the full clinical picture they can help to answer a patient’s query. When information is missing, perhaps because the patient has cognitive impairment or the notes are incomplete, doctors will recognise the frustration, and the risk of making the wrong decision, that follows.
It is similar for researchers and missing data. And implications of missing research reach down from the ivory towers to the heart of interactions with patients. Research contributes to the evidence based medicine summaries, patient information leaflets, guidelines, and policies we read and use from day to day. If information is missing, and some conclusions are incorrect, medicine has a problem. Three papers in this issue give doctors a flavour of the problems researchers face.
Medline, an important source for researchers searching for trials, contains labelling mistakes: Susan Wieland and colleagues describe the differences between correctly and incorrectly labelled randomised controlled trials there. Beate Wieseler and colleagues look at three types of document related to clinical trials, including the journal publication, and measure whether they contain the information researchers need. And Beth Hart and colleagues explore whether the conclusions of existing systematic reviews change when unpublished studies are included in meta-analyses.
Is there anything researchers can do? Options have increased in recent years, according to a linked Research Methods and Reporting article by An-Wen Chan. Trial registries, protocols, some pharmaceutical companies, and regulatory authorities all hold unpublished data that can help. But this is no cause for complacency, because “the current situation is a disservice to research participants, health systems, and the whole endeavour of clinical medicine,” Richard Lehman and Elizabeth Loder write in their linked editorial. Better systems are needed.
Identifying women with suspected ovarian cancer in primary care
Julia Hippisley-Cox and Carol Coupland derive and validate an algorithm to estimate the absolute risk of having ovarian cancer in women with and without symptoms.
Timing of onset of cognitive decline
Results from the Whitehall II prospective cohort study show evidence of cognitive decline in UK men and women at all ages between 45 and 70, report Archana Singh-Manoux and colleagues.