Why not look at what you already know?

A little while ago we blogged on the surprisingly varied methods folk use to pick how big an effect needs to be in order to be ‘clinically relevant’. A further paper on this theme has emerged that takes up a slightly different aspect of the challenge of getting the numbers right before doing a trial.

On the basics front, before you know how many people will be needed for a trial, you need to know:

  • How big an effect you might see
  • How varied the effect is between people
  • What size of effect is going to be ‘clinically relevant’ (i.e. the level above which you want to prove the intervention’s effect lies)
  • What chance of making the wrong call (“It works!” when it doesn’t, or vice versa) you are prepared to accept
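
Putting those four ingredients together is mechanical once you have them. Here’s a minimal sketch in Python (all the numbers are invented, purely for illustration) of the standard two-arm, continuous-outcome sample size formula:

```python
from scipy.stats import norm

# Invented numbers: we expect a mean difference of 5 units (delta), a
# between-person SD of 12 (sigma), and we accept a 5% two-sided
# false-positive rate (alpha) while demanding 90% power.
delta, sigma = 5.0, 12.0
alpha, power = 0.05, 0.90

z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
z_beta = norm.ppf(power)           # value needed to hit the target power

# Standard formula: n per arm = 2 * (z_alpha + z_beta)^2 * (sigma / delta)^2
n_per_arm = 2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2
print(f"about {n_per_arm:.0f} people per arm")  # roughly 121 per arm
```

The hard part, and the point of the paper, is where those input numbers come from in the first place.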

It may be rather surprising to find that, until very recently, there hasn’t been a really well developed way of using systematic review / meta-analysis methodology to capture the stuff we already know before moving onwards to find out more, that is, when stepping from phase II trials (how-toxic-is-this-and-does-it-make-markers/images-better?) to phase III trials (are-there-fewer-dead-people?). But now there is.

A group in Birmingham (UK) have looked at the problem in great detail. Using an example from thrombolytic therapy, they have captured six key things to think about while you’re doing your systematic review and the meta-analysis that will feed your power calculation:

Framework
Use a binomial logistic regression model: this avoids the "zero event" problems that plague small studies
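
To make that concrete, here’s a minimal sketch (not the paper’s exact model) of a binomial logistic regression meta-analysis in Python using statsmodels, on invented counts that include a zero-event arm. Because the model works on the raw event counts, no "add 0.5" continuity correction is needed:

```python
import numpy as np
import statsmodels.api as sm

# Invented phase II counts: (treat_events, treat_n, ctrl_events, ctrl_n).
# Study 2's zero-event treatment arm would break a naive log odds ratio.
studies = [(3, 40, 7, 38), (0, 25, 4, 27), (5, 60, 11, 55)]

rows, counts = [], []
for i, (te, tn, ce, cn) in enumerate(studies):
    for treated, events, n in [(1, te, tn), (0, ce, cn)]:
        # Columns: treatment indicator, then one intercept dummy per study.
        rows.append([treated] + [1.0 if j == i else 0.0
                                 for j in range(len(studies))])
        counts.append([events, n - events])  # (events, non-events) pair

# Exact binomial likelihood on the counts: a zero-event arm simply
# contributes its information, rather than needing a fudge factor.
fit = sm.GLM(np.asarray(counts), np.asarray(rows),
             family=sm.families.Binomial()).fit()
print("pooled log odds ratio for treatment:", round(fit.params[0], 3))
```

This sketch pools with a common treatment effect for brevity; the paper’s recommendation layers the random effects discussed next on top of the same binomial core.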
Choice of model
Decide between random (almost always) and fixed effects meta-analysis based on good clinical thinking ... "same drug / dose / outcome / duration? ... if 'No' then it's random"
Heterogeneity
Following on from this, expect and incorporate heterogeneity in treatment effects by estimating the between-study variance, which widens the resulting intervals
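
As a rough illustration of what "estimating the between-study variance" means in practice, here is the classic DerSimonian–Laird moment estimator in Python (the study results are invented; the paper itself estimates this quantity within its Bayesian model):

```python
import numpy as np

# Invented study-level log odds ratios and their within-study variances.
y = np.array([-0.80, 0.10, -0.70, 0.30])
v = np.array([0.09, 0.16, 0.12, 0.20])

w = 1 / v
mu_fixed = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
Q = np.sum(w * (y - mu_fixed) ** 2)           # Cochran's Q statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)       # between-study variance, >= 0

# Random-effects weights add tau^2 to every study's variance, so all the
# downstream intervals get wider when the studies genuinely disagree.
w_re = 1 / (v + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
print(f"tau^2 = {tau2:.3f}, pooled effect = {mu_re:.3f}")
```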
Uncertainty
Use a Bayesian framework to model parameter uncertainty (confidence intervals around the effect size) and to bring in external evidence (such as the between-study differences), BUT sensitivity analysis to the choice of prior distributions is required. (OK - so this step is a bit too complex for a mini-blog to cover fully just yet...)
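
For the curious, though, here is a minimal sketch of that sensitivity analysis using PyMC, with invented study estimates and a simplified normal likelihood rather than the paper’s full binomial model. The point is simply to fit the same model twice under different priors for the between-study spread and see whether the answer moves:

```python
import numpy as np
import pymc as pm

y = np.array([-0.80, 0.10, -0.70, 0.30])    # invented log odds ratios
se = np.array([0.30, 0.40, 0.35, 0.45])     # their standard errors

# Same random-effects model, two different priors on the between-study SD.
# If the posterior for mu shifts noticeably, the data have not overwhelmed
# the prior, and that fragility should be reported.
for tau_scale in (0.5, 2.0):
    with pm.Model():
        mu = pm.Normal("mu", 0.0, 2.0)         # average treatment effect
        tau = pm.HalfNormal("tau", tau_scale)  # between-study SD (varied)
        theta = pm.Normal("theta", mu, tau, shape=len(y))
        pm.Normal("obs", theta, se, observed=y)
        idata = pm.sample(1000, tune=1000, progressbar=False)
    post = idata.posterior["mu"].values.ravel()
    print(f"HalfNormal({tau_scale}) prior: mu = {post.mean():.2f} "
          f"(sd {post.std():.2f})")
```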
Prediction intervals
Report 95% prediction intervals 'cause you want to know the range of potential answers from new trials in different populations, such as your subsequent phase III study.
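
One common frequentist recipe for that interval widens the usual confidence interval by the between-study variance and swaps in a t distribution. A tiny sketch, reusing the invented numbers from the heterogeneity example above:

```python
import numpy as np
from scipy.stats import t

# Pooled random-effects estimate, its standard error, the between-study
# variance tau^2, and the number of studies (all invented, as before).
mu_re, se_mu, tau2, k = -0.34, 0.27, 0.16, 4

# 95% prediction interval: t with k - 2 degrees of freedom, and tau^2
# added to the variance, because a NEW trial draws a new true effect.
half_width = t.ppf(0.975, df=k - 2) * np.sqrt(tau2 + se_mu ** 2)
print(f"95% prediction interval: ({mu_re - half_width:.2f}, "
      f"{mu_re + half_width:.2f})")
```

Notice how much wider this is than the confidence interval around the average: that width is the honest forecast for your phase III study.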
Bias
Consider using sceptical prior distributions for the treatment effect if you think the phase II trials may be biased.
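
"Sceptical" here just means a prior centred on no effect with a deliberately tight spread. With a normal likelihood the arithmetic is simple enough to do by hand, as this sketch (invented numbers again) shows:

```python
import numpy as np

y_hat, se = -0.60, 0.25            # optimistic-looking phase II pooled log OR
prior_mean, prior_sd = 0.0, 0.20   # sceptic: big effects are implausible

# Conjugate normal update: precision-weighted average of data and prior.
post_var = 1 / (1 / se ** 2 + 1 / prior_sd ** 2)
post_mean = post_var * (y_hat / se ** 2 + prior_mean / prior_sd ** 2)
print(f"posterior: {post_mean:.2f} (sd {np.sqrt(post_var):.2f})")
```

The "too good to be true" estimate gets pulled towards zero, which is exactly the behaviour you want if the phase II evidence may be flattering itself.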

This all makes tonnes of sense, even if it is written in statistics:

  1. The phase II studies are going to be done in very diverse populations, and the ‘average’ answer might well belie a truth that sits differently for different groups – so use a random effects model and report prediction intervals
  2. Small phase II studies might well be optimised to collect and report results only when things go well (we know that publication bias is a massive problem and that not all findings are shared openly), so account for this
  3. Small studies might end up with 0 events. And we know that 0 is NOT ‘doesn’t happen’. So use the right maths to model it with binomial approaches.

What I’d love to see now is a bunch of folk looking back to ask: if we had done phase II meta-analysis like this, would we have undertaken some phase III trials with a more honest appreciation of how likely they were to be correct? And even better, to hear how new phase III trials are actually using this approach to take it all onwards!

– Archi
