Hans Lund: A brief introduction to the concept of evidence-based research

This blog is part of a series of blogs linked with BMJ Clinical Evidence, a database of systematic overviews of the best available evidence on the effectiveness of commonly used interventions.

The scientific ideal

On 15 February 1676, in a letter to his colleague (and rival) Robert Hooke, Sir Isaac Newton wrote the well-known sentence: “If I have seen farther it is by standing on the shoulders of giants.” Newton was referring to influential scientists before him, such as Copernicus, Galileo, and Kepler, and emphasising one of the fundamental aspects of science: science is cumulative, with each new discovery dependent on previous knowledge. Lord Rayleigh presented the same line of thought in 1884, at the 54th meeting of the British Association for the Advancement of Science in Montreal:

“If, as is sometimes supposed, science consisted in nothing but the laborious accumulation of facts, it would soon come to a standstill, crushed, as it were, under its own weight. (…) The work which deserves, but I am afraid does not always receive, the most credit is that in which discovery and explanation go hand in hand, in which not only are new facts presented, but their relation to old ones is pointed out.”

So, well over a century ago, Lord Rayleigh realised that the simple accumulation of research results does not benefit anyone. Each new result needs to be interpreted in the context of earlier research. Otherwise, our scientific achievements will be worthless! In 1964, the same principle was endorsed by the World Medical Association:

“The Helsinki Declaration states that biomedical research involving people should be based on a thorough knowledge of the scientific literature. That is, it is unethical to expose human subjects unnecessarily to the risks of research. Ideally, the introduction should include a reference to a systematic review of previous similar trials or a note of the absence of such trials.”

Not only would our scientific efforts be worthless or wasted; in the majority of cases it would also be unethical not to base new research on earlier results.

The assumption

Many might, however, assume and argue that they have never come across an article published in a scientific journal that did not refer to earlier results. So are we not already complying with the scientific ideal? A scientific committee recently put this argument into writing, in the following response to an application for a new PhD programme proposing that all PhD students take an evidence-based research approach: “Strictly speaking, it seems hard to imagine any research that is not evidence-based. At least it seems impossible to imagine that articles published in journals with a high impact factor do not relate to earlier research.”

Why, then, introduce the new concept of evidence-based research and demand that researchers adhere to it? Has research not always been evidence-based?

The evidence

In its guidance on reporting randomised controlled trials, the CONSORT group stated that “The introduction should include a reference to a systematic review of previous similar trials.” The main point of a systematic review is to avoid selection bias as far as possible: all relevant studies should be included, providing an exhaustive summary of the literature bearing on the research question. The reason is simple, but crucial. Writers and readers of systematic reviews will be aware of just how widely clinical studies can differ in their results and conclusions, even when they examine exactly the same question. By picking and choosing which studies to cite in the introduction, an author can easily make the case for a new study simply by selecting supportive references. Many scientists, trying to keep the introduction to a minimum, support each statement by citing whichever study is newest, best, or largest. Obviously, this approach cannot be scientific, because it is based on personal preference rather than on the totality of earlier research.

Using snowball sampling techniques and reference tracking, we have identified at least 20 relevant articles evaluating this question. Studies analysing how often scientific authors refer to the totality of earlier research have found a general lack of a systematic approach. The main conclusion is well captured by what one of the study authors, Steve Goodman, told the New York Times (17 January 2011):

“No matter how many randomized clinical trials have been done on a particular topic, about half the clinical trials cite none or only one of them. As cynical as I am about such things, I didn’t realize the situation was this bad”.

Dr Goodman was referring to a study he co-authored with Karen Robinson, published in 2011. They examined all systematic reviews of healthcare questions published in 2004 that included a meta-analysis combining 4 or more RCTs; in any such meta-analysis, the most recent trial had at least 3 earlier trials in the same area that it could have cited. Even though a great number of the included trials could have referred to 10 or more previous studies, the median number of earlier trials cited was consistently 2.

In 2005, Fergusson and colleagues published a cumulative meta-analysis showing that by 1994 there were already enough studies to conclude that aprotinin reduces bleeding during cardiac surgery. Still, in the following decade more than 4000 patients were enrolled in unnecessary RCTs comparing aprotinin with placebo. Since roughly half of these patients were randomised to placebo, at least 2000 of them did not receive a potentially life-saving medication, even though the evidence for its benefit had been sufficient since 1994.
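
To make the logic of a cumulative meta-analysis concrete, here is a minimal sketch in Python. It uses invented trial numbers rather than Fergusson's actual data and assumes a simple fixed-effect inverse-variance model. Each time a new trial appears, the evidence to date is re-pooled; the point at which the pooled 95% confidence interval first excludes “no effect” marks the moment further placebo-controlled trials stopped being needed to answer the question.

```python
import math

# Hypothetical aprotinin-vs-placebo trials, ordered by year.
# Each tuple is (year, log odds ratio for bleeding, standard error);
# negative values favour aprotinin. The numbers are invented for
# illustration (they are not Fergusson's data), chosen so that the
# evidence becomes conclusive by 1994, echoing his finding.
trials = [
    (1988, -0.40, 0.50),
    (1990, -0.35, 0.45),
    (1992, -0.45, 0.40),
    (1994, -0.50, 0.30),
    (1996, -0.45, 0.35),
]

def pool(subset):
    """Fixed-effect inverse-variance pooling: estimate and 95% CI."""
    weights = [1.0 / se ** 2 for _, _, se in subset]
    estimate = sum(w * lor for w, (_, lor, _) in zip(weights, subset)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return estimate, estimate - 1.96 * pooled_se, estimate + 1.96 * pooled_se

# A cumulative meta-analysis re-pools the evidence each time a new
# trial is published, rather than looking at each trial in isolation.
for i in range(1, len(trials) + 1):
    year = trials[i - 1][0]
    est, low, high = pool(trials[:i])
    verdict = "  <- pooled CI now excludes no effect" if high < 0 else ""
    print(f"{year}: logOR {est:+.2f} (95% CI {low:+.2f} to {high:+.2f}){verdict}")
```

Run as written, the pooled interval first excludes “no effect” at the 1994 step, even though no single trial is conclusive on its own; on this toy data, every placebo-controlled trial added after that point addresses a question that has already been answered.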

In a series of studies published between 1998 and 2010, Clarke, Chalmers, and colleagues repeatedly showed that reports of RCTs published in the month of May in five of the highest-ranking medical journals (JAMA, BMJ, NEJM, the Lancet, and Annals of Internal Medicine) almost never set their findings in the context of a systematic review of other relevant evidence (1-4).

Thus, while many people assume that all research is evidence-based, the evidence clearly indicates that this is not the case.

The solution

To address this issue, an international group of researchers established the Evidence-Based Research Network (EBRNetwork) in Bergen in December 2014. The aim of the EBRNetwork is to reduce waste in research by promoting “no new studies without prior systematic review of existing evidence” and “efficient production, updating and dissemination of systematic reviews.”

Visit the Network’s website to join and sign up for its newsletter.

Hans Lund is Associate Professor at the University of Southern Denmark and Professor at Bergen University College, Norway. He is chairman of the Evidence-Based Research Network and drummer in a rock band.

References:
1. Clarke M, Hopewell S, Chalmers I. Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting. Lancet. 2010;376(9734):20-1.
2. Clarke M, Hopewell S, Chalmers I. Reports of clinical trials should begin and end with up-to-date systematic reviews of other relevant evidence: a status report. J R Soc Med. 2007;100(4):187-90.
3. Clarke M, Alderson P, Chalmers I. Discussion sections in reports of controlled trials published in general medical journals. JAMA. 2002;287(21):2799-801.
4. Clarke M, Chalmers I. Discussion sections in reports of controlled trials published in general medical journals: islands in search of continents? JAMA. 1998;280(3):280-2.