
Gary Collins: Opening up multivariable prediction models

3 Aug, 11 | by BMJ Group

Consensus-based guidelines for transparent reporting

Prediction models can provide reliable estimates of a patient’s risk (or probability) of having a specific underlying condition or of developing some condition in the future. Prediction models have consistently outperformed estimates made by individual doctors. Familiar examples include the Framingham Risk Score for cardiovascular disease, the APACHE score for intensive care units, and the Ottawa ankle rule. There are also many lesser known models in use that perhaps shouldn’t be, owing to poor design and (unintentionally) misleading performance data. Meanwhile, other published models, some of which may be clinically valuable, remain largely unknown and are never used in clinical practice.
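To make concrete what such a model computes, here is a minimal sketch of a logistic-regression-style risk score, the form taken by many published prediction models. The predictors and coefficients below are entirely hypothetical and are not taken from Framingham, APACHE, or any real model.

```python
import math

def predicted_risk(age, systolic_bp, smoker):
    """Toy logistic prediction model: converts a linear combination of
    predictors into a probability between 0 and 1.
    All coefficients are illustrative only, not from any published model."""
    linear_predictor = (
        -5.0              # intercept (hypothetical)
        + 0.04 * age      # per year of age (hypothetical)
        + 0.01 * systolic_bp
        + 0.5 * (1 if smoker else 0)
    )
    # Inverse logit transforms the score into a probability
    return 1.0 / (1.0 + math.exp(-linear_predictor))

# A 60 year old smoker with systolic BP of 140 mmHg
risk = predicted_risk(age=60, systolic_bp=140, smoker=True)
print(f"Predicted risk: {risk:.2f}")  # → Predicted risk: 0.33
```

The point of the guidance discussed in this post is that, for a real model, readers should be able to find every coefficient, predictor definition, and the intercept in the publication, so that the model can actually be applied and checked.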

Only with a full and clear description of the development and validation process can we begin to judge the potential value of a particular model. You may think that this all appears very obvious and that full and transparent reporting is embedded in good research, but reviews (including our own in cancer and diabetes) have found the quality of reporting to be disappointingly poor, with many key details omitted and many irrelevant and potentially misleading data included. A minimal set of reporting requirements for prediction models is long overdue. How users of clinical prediction models have decided which one to use over another when, more often than not, crucial information has not been reported is a mystery.
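One of the key performance measures that validation reports should include is discrimination, usually summarised by the concordance (c) statistic: the proportion of event/non-event pairs in which the model assigned the higher risk to the patient who had the event. A minimal sketch of that calculation (the pairwise definition, not an optimised implementation):

```python
def c_statistic(predicted_risks, outcomes):
    """Concordance (c) statistic / area under the ROC curve.
    Counts, over all pairs of one event (outcome 1) and one non-event
    (outcome 0), how often the event got the higher predicted risk;
    ties count as half a concordant pair."""
    events = [r for r, y in zip(predicted_risks, outcomes) if y == 1]
    non_events = [r for r, y in zip(predicted_risks, outcomes) if y == 0]
    concordant = 0.0
    for e in events:
        for n in non_events:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(events) * len(non_events))

# Perfect discrimination: every event outranks every non-event
print(c_statistic([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # → 1.0
```

A c statistic of 0.5 means the model discriminates no better than chance. Discrimination alone is not enough, however; a transparent report also needs calibration and the model specification itself, which is exactly the kind of detail the reviews mentioned above found missing.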

As a critical part of a new endeavour to improve the reporting of multivariable prediction models, we convened a 3 day meeting at Pembroke College in Oxford (27 to 29 June) to develop a checklist of a minimal set of items that are essential to report. We invited a further 20 participants including methodologists (statisticians, epidemiologists), clinicians, and journal editors from around the world (UK, US, Canada, Australia, the Netherlands, and Germany) to develop a consensus on the key items to report.

In preparation for the meeting, we conducted a systematic review of the literature to identify any existing guidance and relevant articles that explicitly or implicitly discuss aspects of reporting. After numerous discussions the steering committee produced a long list of 76 candidate items, roughly grouped into 32 aspects of model development and validation. We sent this list out to the 20 additional participants prior to the meeting for their initial impressions and to survey how important they deemed each individual item. As well as giving the participants an initial look at the items up for discussion, this enabled them to suggest items we may have overlooked.

The meeting began on Monday afternoon (27th June) with four talks to set the scene from Doug Altman (Scope of the guidance), Hans Reitsma (diagnostic versus prognostic models), Karel Moons (model development versus model validation), and finally myself (initial long list of items: origin, empirical evidence, and initial feedback from the survey).  After a drinks reception amongst the memorabilia of Roger Bannister (a former Master of the college) and dinner in the impressive Harry Potter-esque college dining hall we were all set for Day 2. 

The whole of Tuesday (28th June) was spent on the daunting task of ploughing through the entire list of 76 items. Deputy BMJ editor Trish Groves raised some concerns after the first session, seeming a little sceptical and feeling we could struggle to get through the long list. We ploughed on nonetheless, and after extensive discussions several items were deleted or merged, successfully reducing the list by approximately 30 to 35 items.

On the final day, day 3 (Wednesday 29th June), we heard talks and perspectives from clinicians and editors. Ian Stiell (University of Ottawa) gave us a clinician’s point of view based on his extensive experience of developing and validating prediction models. André Knottnerus (University of Maastricht, Journal of Clinical Epidemiology), despite only arriving at the college at 4am on the Wednesday morning, concluded the 3 day meeting by presenting a joint perspective from the viewpoints of a physician and a journal editor.

Building on the discussions during the meeting and taking into consideration the views expressed by the participants, we will meet later this summer to work on the next draft of the checklist and begin drafting the manuscripts introducing and describing the checklist. Based on experience from other reporting guidelines it may well take up to two years to complete this work. All we need now is an acronym …

Gary Collins on behalf of the steering committee (Gary Collins and Doug Altman, Oxford; Karel Moons and Hans Reitsma, Utrecht). Gary Collins is a Senior Medical Statistician at the Centre for Statistics in Medicine, University of Oxford.

The meeting was funded by an NIHR Senior Investigators Award, Cancer Research UK, and the Netherlands Organization for Scientific Research (ZONMW 918.10.615 and 91208004).

  • PedroBrasil

    Is anyone able to point to a reference on quality assessment or risk of bias assessment for systematic reviews of clinical prediction models? Something like QUADAS-2 for DTA or the Cochrane risk of bias assessment for trials?
