Lessons from a network meta-analysis of biologics in rheumatoid arthritis 

The aim of our recent research project was to compare the benefits and harms of biologics for the treatment of patients with rheumatoid arthritis. Given the large number of available treatment options, this required a network meta-analysis of randomised controlled trials (RCTs). In this type of analysis, multiple treatments are compared by combining direct comparisons within RCTs with indirect comparisons across RCTs that share a common comparator. For the indirect comparisons to be valid, the data must meet certain requirements: the included studies must be similar (e.g. with regard to patient populations), the results of the individual studies must be homogeneous, and the direct and indirect comparisons in the network must be consistent.
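As a minimal illustration of the basic building block (not the specific model used in our paper), consider two biologics A and B that have each been compared with placebo P in separate RCTs, with effects expressed, for example, as log odds ratios. The adjusted indirect comparison combines the trial-level estimates as sketched below; consistency can only be checked where a direct A-versus-B estimate also exists.

\[
\hat{d}^{\,\text{ind}}_{AB} = \hat{d}_{AP} - \hat{d}_{BP},
\qquad
\operatorname{SE}\bigl(\hat{d}^{\,\text{ind}}_{AB}\bigr)
  = \sqrt{\operatorname{SE}\bigl(\hat{d}_{AP}\bigr)^{2} + \operatorname{SE}\bigl(\hat{d}_{BP}\bigr)^{2}},
\qquad
\hat{\omega}_{AB} = \hat{d}^{\,\text{dir}}_{AB} - \hat{d}^{\,\text{ind}}_{AB} \approx 0 \ \text{under consistency.}
\]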

Our information retrieval identified 118 RCTs, 45 of which investigated the clinical question presented in our paper, i.e. treatment of patients with rheumatoid arthritis after methotrexate failure with biologics in combination with methotrexate. However, most of these studies were placebo-controlled, and only four provided direct comparisons between two of the various biologic treatment options. Only three studies reported outcomes of long-term treatment (> 1 year). The placebo-controlled studies showed high withdrawal rates that were imbalanced between treatment arms; we tried to mitigate this by including only data from the first six months of treatment. The lack of direct comparisons meant that we could not check consistency for most of the comparisons analysed. These methodological weaknesses of the study pool, and the consequent poor evidence base, hampered the network meta-analysis and were not eliminated by it.

We have thus learned that this method does not “repair” a poor evidence base. Like conventional meta-analysis, network meta-analysis requires high-quality evidence in order to be a valuable decision-making tool, and it does not relieve us of the obligation to generate robust evidence. If the RCTs to be used in a network meta-analysis are appropriately planned, however, we can answer more questions than with individual RCTs comparing two treatment options alone. This requires coordinated efforts to generate sets of RCTs with sufficiently similar design features and relevant direct comparisons.

An important lesson from this project was the relevance of targeted re-analysis of individual patient data to properly inform the network meta-analysis. Firstly, when analysing the study pool we found that many studies included insufficiently similar patient populations, for example with regard to pre-treatment with biologics or disease duration. Combining these studies in a network meta-analysis would have left us wondering whether differences between treatments were caused by the biologics or were artefacts of including heterogeneous patient populations. Our approach to this problem was re-analysis of the study data: using individual patient data, well-defined, sufficiently similar patient sub-populations could be extracted from the studies. Secondly, changes in the definitions of key outcomes (remission and low disease activity) in recent years left us with a study pool in which most of the studies had not analysed the data according to the current outcome definitions. This problem was solved by re-analysing the individual patient data according to the new definitions. These re-analyses were prepared and provided by the sponsors of the included studies. So beyond the results on biologics in rheumatoid arthritis, we learnt that if we want to make full use of the network meta-analysis method, we need routine access to individual patient data or to re-analyses of them.
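To make this concrete, the sketch below shows, in Python, the kind of re-analysis that individual patient data make possible: restricting trials to a comparable sub-population and re-deriving remission under a current Boolean-style definition. The column names, the biologic-naive filter and the exact thresholds are illustrative assumptions, not the actual variables or rules applied in our project.

# Illustrative sketch only: how individual patient data (IPD) allow outcomes to be
# re-derived under a current remission definition and restricted to a comparable
# sub-population. Column names and cut-offs are hypothetical examples.
import pandas as pd

def reanalyse(ipd: pd.DataFrame) -> pd.DataFrame:
    """Restrict to a biologic-naive sub-population and re-derive remission."""
    # 1. Harmonise the population: e.g. keep only patients without prior biologic
    #    treatment, so that trials contribute sufficiently similar populations.
    subset = ipd[~ipd["prior_biologic"].astype(bool)].copy()

    # 2. Re-derive the outcome under a Boolean-style remission definition
    #    (tender and swollen joint counts <= 1, CRP <= 1 mg/dL, patient global <= 1
    #    on a 0-10 scale), regardless of how the original publication defined remission.
    subset["remission"] = (
        (subset["tender_joint_count"] <= 1)
        & (subset["swollen_joint_count"] <= 1)
        & (subset["crp_mg_dl"] <= 1)
        & (subset["patient_global_0_10"] <= 1)
    )

    # 3. Aggregate to arm-level event counts that can feed a network meta-analysis.
    return (
        subset.groupby(["trial_id", "arm"])["remission"]
        .agg(events="sum", n="count")
        .reset_index()
    )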

Informative network meta-analysis requires high-quality RCTs with active controls that directly compare relevant treatment options. It also requires coordinated study planning beyond individual studies to ensure similar populations and data sets, for example based on core outcome sets. The analyses of the individual studies should also be streamlined; even more important, however, is the routine availability of individual patient data to enable tailored re-analysis for the research question of interest, which seems to be a highly efficient use of existing data. Given the human and economic resources required for clinical trials, we should maximise the knowledge gained from the data by making them available for further research. Our project has shown that this is meaningful and possible. We owe this to the patients who participated in the studies and to the patients, clinicians and other stakeholders who need reliable information for decision making.

Beate Wieseler, head of department, Drug Assessment Department, Institute for Quality and Efficiency in Health Care, Cologne, Germany

Kirsten Janke, researcher, Drug Assessment Department, Institute for Quality and Efficiency in Health Care, Cologne, Germany

Competing interests: Please see research paper