Clinical trials in sports physiotherapy. Building on 5 decades of research to produce even better trials: a critical review and tips for improvements

By Steven J Kamper1,2, Anne M Moseley1 and Mark R Elkins3

1 The George Institute for Global Health, University of Sydney, Australia

2 The EMGO+ Institute, VU University Medical Centre, Amsterdam, The Netherlands

3 Department of Respiratory Medicine, Royal Prince Alfred Hospital, Sydney, Australia

Introduction

The last few decades have seen an enormous shift in the practice of healthcare, with widespread acceptance of the evidence-based practice paradigm.[1] This change has been accompanied by increases in the volume of clinical research [2] and refinement of our understanding of how research should be conducted and reported.[3] Concurrent with these changes, longitudinal analyses have identified improvements in the conduct and reporting of various types of research in many disciplines of healthcare.[3-7] However, not all such analyses have identified improvements.[8]

A recent assessment of reports of randomised trials of physiotherapy interventions[9] allows trial reports in sports physiotherapy to be compared with those from other clinical subdisciplines of physiotherapy, such as musculoskeletal, neurology or cardiorespiratory. Compared with all other physiotherapy trial reports, trials that enrolled sporting participants reported fewer of the key design features that reflect rigorous methods and allow readers to interpret the research fully. Therefore, the purpose of this article is to highlight the methodological features that are often lacking in sports physiotherapy trial reports, explain their significance and offer suggestions for improving the conduct and reporting of future trials.

Issues in sports physiotherapy trials

Moseley et al [9] examined the conduct and reporting of almost 15,000 trial reports that were fully indexed on the Physiotherapy Evidence Database (PEDro). When the trial reports were evaluated using items on the PEDro scale,[10] sports physiotherapy trials performed less well on several items in comparison to other subdisciplines. The results are summarised in Table 1 along with those for musculoskeletal trials and all trials, for the purposes of comparison.

Table 1. Studies meeting PEDro criteria by subdiscipline of physiotherapy.

PEDro item                                                 Sports   Musculoskeletal   All
eligibility criteria and source specified                   61%         75%           73%
random allocation                                           94%         94%           95%
concealed allocation                                        15%         27%           22%
baseline comparability                                      52%         68%           68%
blinding of participants                                    10%         13%            8%
blinding of therapists                                       4%          4%            2%
blinding of assessors                                       28%         40%           32%
more than 85% follow-up                                     52%         60%           57%
intention-to-treat analysis                                 14%         26%           22%
reporting of between-group statistical comparisons          93%         92%           93%
reporting of point measures and measures of variability     84%         85%           87%
Total number of studies                                    743        4,290        14,910

Figures represent the percentage of indexed studies scored as “Yes” to the corresponding quality item. All trial reports indexed up until 2011 were included in the survey. Data from Moseley et al (2013).

In the following items, sports physiotherapy trial reports rated substantially lower than trial reports from other subdisciplines. For each item we provide a short explanation and recommendations for improvement.

Eligibility criteria and source of participants

This item concerns a relatively simple issue of reporting clarity. The note for assessment of this PEDro item states that it is satisfied if the report describes the source of subjects and a list of criteria used to determine who was eligible to participate in the study. The population of interest and the specific eligibility criteria for entry into the study should have been established as part of formulating the research question. Clearly stating this information is critical to interpreting the study findings in terms of their generalisability. Achieving this item should be straightforward for researchers: it is a matter of specifying the eligibility criteria in the study protocol and registration, then stating them explicitly in the published report of the trial, along with where and how the study participants were recruited.

Concealed allocation

Allocation concealment means that the researcher determining whether a potential participant should be included in the study is unaware of the group to which the participant would be assigned. If the researcher determining inclusion knows whether each upcoming allocation is to the treatment or the control group, this could bias the decision to include or exclude some potential participants. There is empirical evidence that trials that do not have concealed allocation report larger between-group differences than those that do.[11] Allocation can be concealed by placing each of the randomly ordered group allocations in a sealed, opaque and consecutively numbered envelope, and only opening the envelope after a participant has been enrolled in the study and allocated a study number. These envelopes obviously need to be prepared before recruitment begins, but this is a simple process and can be done by a research collaborator at low cost. Another option is to separate inclusion from randomisation, so the two tasks are performed by different people, in different places. Some trials achieve this separation by using a telephone, email or web-based randomisation service which, while more expensive than the envelope system, is relatively inexpensive (eg, http://www.ctc.usyd.edu.au/our-research/biostatistics/randomisation.aspx could randomise 200 participants using an interactive voice response telephone system for about $2,500 AUD).
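To illustrate the preparatory step, the envelope system starts from a pre-generated allocation sequence. The Python sketch below (group labels, block size and seed are our illustrative choices, not part of any specific trial protocol) shows one common way to generate a blocked sequence that a collaborator not involved in recruitment could print and seal into consecutively numbered opaque envelopes; it is a sketch only, not a substitute for a statistician-prepared randomisation schedule.

```python
import random

def blocked_allocation_sequence(n_participants, block_size=4, seed=2024):
    """Generate a blocked randomisation sequence with equal groups per block."""
    assert block_size % 2 == 0, "block size must be even for two equal groups"
    rng = random.Random(seed)  # fixed seed so the schedule is reproducible
    sequence = []
    while len(sequence) < n_participants:
        # Each block contains equal numbers of each allocation, shuffled.
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

# A collaborator not involved in recruitment prints each allocation and seals
# it in a consecutively numbered, opaque envelope before recruitment begins.
for envelope_number, group in enumerate(blocked_allocation_sequence(8), start=1):
    print(f"Envelope {envelope_number:03d}: {group}")
```

Because the person enrolling participants never sees the sequence, each envelope is opened only after a participant has been enrolled and assigned a study number.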

Groups are similar at baseline

Groups with similar characteristics can be expected to have similar outcomes, so if only one of two similar groups receives an intervention in a trial, the difference in outcome between the groups can be attributed to that intervention. If there are differences between the groups before the intervention is applied, the effect of the intervention is less clear. Simply relying on the fact that there was random allocation to groups is not sufficient to demonstrate that the groups are similar.

As with the first item about eligibility criteria and source, this item can often be met by simply improving the clarity and completeness of the reporting of the study. The relevant information to show comparability of the groups includes baseline measures of simple demographics, any likely prognostic factors, and the outcome measures for each group. Summary data for each group can simply be presented in a table for comparison. Note that statistical comparisons of the groups at baseline are not recommended.[12, 13]
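As a minimal sketch of such a baseline table, the following Python snippet (with entirely hypothetical baseline values and variable names) computes mean (SD) per group for each baseline variable, descriptively and without significance tests:

```python
from statistics import mean, stdev

# Hypothetical baseline data for two trial arms (illustrative values only).
baseline = {
    "intervention": {"age": [24, 31, 28, 22, 35], "pain_score": [6.1, 5.4, 7.0, 6.6, 5.9]},
    "control":      {"age": [26, 29, 33, 25, 30], "pain_score": [6.3, 5.8, 6.7, 6.0, 6.4]},
}

def baseline_table(data):
    """Mean (SD) per variable per group -- descriptive only, no p-values."""
    rows = []
    variables = data[next(iter(data))].keys()
    for var in variables:
        row = {"variable": var}
        for group, values in data.items():
            row[group] = f"{mean(values[var]):.1f} ({stdev(values[var]):.1f})"
        rows.append(row)
    return rows

for row in baseline_table(baseline):
    print(row)
```

Presenting the summary this way lets readers judge comparability directly, which is what the PEDro item asks for, rather than relying on baseline p-values.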

Blinding of assessors

Blinding is often difficult to achieve in trials that evaluate the effects of complex interventions like physiotherapy. Blinding (sometimes called masking) means that the person specified (participant, therapist, assessor) does not know the group to which the participant has been allocated. While blinding participants and therapists to complex interventions can be difficult, blinding assessors (ie, the researchers that collect the outcome measures) should be possible in most circumstances. For participant-rated outcomes such as questions about symptom severity, researchers should strive to blind the investigator administering the questionnaire so that they cannot introduce bias by knowing the participant’s group allocation. Where participants cannot be blinded, researchers should also consider including observer-scored (not self-rated by the participant) outcome measures in the study so that at least these outcomes can be assessor-blinded. Examples of the latter type of measures might be performance measures, imaging findings or work status as recorded by the employer or insurance provider.

Using blinded assessors can create logistical difficulties and potentially add to the expense of a study, simply because it requires someone other than the treating clinician(s) to collect the outcome measures. Solutions to this problem might be to add the cost of blinded outcome assessment into the research budget, to trade off assessment time with other researchers or clinicians across studies, or to use research students or interns for data collection.

Adequate follow-up rates (more than 85% follow-up)

Achieving this item requires that data for the key outcomes (e.g., patellofemoral pain score) are obtained for at least 85% of the participants originally allocated to groups. This is, however, an arbitrary cutoff. The greater the proportion of missing data, the less reliable the results of a study will be. This is because participants who drop out of a study can introduce bias if they differ from those that stay in the study. A high dropout rate can reduce the effectiveness of randomisation and also affect generalisability of the findings. A further consideration is that a high dropout rate might be an indication that many patients find the rigours, expectations or demands of the intervention too difficult, which again affects generalisability.
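The arithmetic behind the cutoff is simple, and can be sketched in a few lines of Python (the function names are ours for illustration, not part of the PEDro scale):

```python
def follow_up_rate(n_randomised, n_with_outcome_data):
    """Proportion of originally randomised participants with key outcome data."""
    return n_with_outcome_data / n_randomised

def meets_pedro_follow_up(n_randomised, n_with_outcome_data, threshold=0.85):
    """True if the follow-up proportion meets the (arbitrary) 85% cutoff."""
    return follow_up_rate(n_randomised, n_with_outcome_data) >= threshold

# e.g. 120 randomised but only 100 with outcome data -> ~83%, below the cutoff
print(meets_pedro_follow_up(120, 100))  # False
```

Note that the denominator is the number originally allocated to groups, not the number who completed treatment; dropouts still count against the rate.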

Ensuring adequate follow-up rates, particularly where participants are to be followed up beyond the intervention period, is demanding and requires careful preparatory work. Strategies that can minimise the number of dropouts include: ensuring the demands of outcome assessment are not too great, in terms of the number, difficulty or length of measures; providing alternative options for data collection, for example in person or by mail, email or telephone; and ensuring there are enough researchers to collect the measures. Collecting several different forms of contact information (eg, home, work and mobile telephone numbers, email address and postal address) and supplementary contacts (eg, friends or family) is useful so that alternative points of contact are available if a participant moves or changes their email address. Providing incentives to participants can also be considered, but requires due regard to ethical considerations. It is important that participants fully understand the demands on their time that completing the follow-ups will involve, which needs to be discussed before enrolment in the study.

Intention-to-treat analysis

Almost inevitably there are protocol violations in randomised controlled trials. Protocol violations may involve participants not receiving treatment as planned (including non-compliance) or receiving treatment when they should not have. When the data are analysed, each subject should be included in the analysis as though they had received the treatment or control condition as planned. This is usually referred to as “analysis by intention to treat”.

Not performing the analysis in this way reduces the effectiveness of the randomisation process and fails to account for the likelihood that patients cannot tolerate the intervention, thereby decreasing the generalisability of the findings. Intention-to-treat analysis gives a better estimate of how an intervention is likely to work in a pragmatic setting, rather than an idealised situation where everyone receives and complies with the treatment as planned.[14] Researchers should clearly state that they intend to analyse by intention to treat in the registered trial protocol. Also, all investigators on the trial must understand that participants who do not receive their allocated treatment (whether due to non-compliance, intolerance or other reasons) should be followed up for all planned outcome measures if at all possible so that they can be included in the analysis. Obtaining statistical advice, either by contracting a statistician or by incorporating a statistician in the research team for the trial, may also be beneficial.
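The distinction can be sketched in a few lines of Python: grouping toy participant records by allocated group (intention to treat) rather than by the treatment actually received (per protocol). The records, field names and outcome values below are hypothetical.

```python
# Toy participant records: each retains the group they were randomised to,
# even if they did not receive (or comply with) that treatment.
participants = [
    {"id": 1, "randomised_to": "treatment", "received": "treatment", "outcome": 3.0},
    {"id": 2, "randomised_to": "treatment", "received": "none",      "outcome": 5.0},
    {"id": 3, "randomised_to": "control",   "received": "control",   "outcome": 6.0},
    {"id": 4, "randomised_to": "control",   "received": "treatment", "outcome": 4.0},
]

def group_means(records, group_key):
    """Mean outcome per group, grouping by the given key."""
    totals = {}
    for r in records:
        totals.setdefault(r[group_key], []).append(r["outcome"])
    return {g: sum(v) / len(v) for g, v in totals.items()}

# Intention-to-treat: analyse by allocated group, ignoring protocol violations.
itt = group_means(participants, "randomised_to")
# Per-protocol (for contrast): grouping by treatment received can bias results.
per_protocol = group_means(participants, "received")
print("ITT:", itt)
```

The point of the sketch is that the analysis key is fixed at randomisation; following up non-compliant participants is what makes this grouping possible.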

Conclusion

Improving study methodological quality demands detailed planning and often imposes a greater burden on researchers and participants. The benefit is more reliable results that are more likely to be published in good quality journals and to influence healthcare practice and policy.

The subdiscipline of sports physiotherapy can continue to improve the evidence generated by its clinical trials by embracing the need for the most rigorous research methods. Many of the issues identified can be remedied easily. From this report, the items eligibility criteria, source of participants, and similarity of groups at baseline simply require better reporting. Initiatives such as CONSORT[3] offer useful guides in these cases. The items concealed allocation and intention-to-treat analysis mainly require careful planning. Methodical preparation and planning are assisted by trial registration and publication of a study protocol – steps that are becoming increasingly mandatory for publication of trial results.[15] In addition to careful planning and clear reporting, meeting the requirements for blinding of assessors and adequate follow-up may require greater resources. An excellent study may therefore require extra time, personnel or money which can be easily underestimated by researchers. This report serves to draw attention to design issues so that these important considerations are not overlooked.

References

  1. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: What it is and what it isn’t. Brit Med J 1996;312:71-72.
  2. Maher CG, Moseley AM, Sherrington C, et al. A description of the trials, reviews, and practice guidelines indexed in the PEDro database. Phys Ther 2008;88:1068-77.
  3. Moher D, Jones A and Lepage L. Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. J Am Med Assoc 2001;285:1992-95.
  4. Graf J, Doig GS, Cook DJ, et al. Randomized, controlled clinical trials in sepsis: Has methodological quality improved over time? Crit Care Med 2002;30:461-72.
  5. Moseley A, Elkins MR, Herbert RD, et al. Cochrane reviews used more rigorous methods than non-Cochrane reviews: survey of systematic reviews in physiotherapy. J Clin Epidemiol 2009;62:1021-30.
  6. Moseley A, Herbert RD, Maher CG, et al. Reported quality of randomized controlled trials of physiotherapy interventions has improved over time. J Clin Epidemiol 2011;64:594-601.
  7. Smidt N, Rutjes AWS, Van der Windt DAWM, et al. The quality of diagnostic accuracy studies since the STARD statement – Has it improved? Neurol 2006;67:792-97.
  8. Wilczynski NL. Quality of Reporting of Diagnostic Accuracy Studies: No Change Since STARD Statement Publication—Before-and-after Study 1. Radiol 2008;248:817-23.
  9. Moseley AM, Elkins MR, Janer-Duncan L, et al. The quality of reports of randomized controlled trials varies between subdisciplines of physiotherapy. Physiother Can 2013; accepted 24 June.
  10. Maher CG, Sherrington C, Herbert RD, et al. Reliability of the PEDro scale for rating quality of randomized controlled trials. Phys Ther 2003;83:713-21.
  11. Schulz KF, Chalmers I, Hayes RJ, et al. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. J Am Med Assoc 1995;273:408-12.
  12. Pocock SJ, Assmann SE, Enos LE, et al. Subgroup analysis, covariate adjustment and baseline comparisons in clinical trial reporting: current practice and problems. Stat Med 2002;21:2917-30.
  13. Herbert RD. Randomisation in clinical trials. Aust J Physiother 2005;51:58.
  14. Hollis S and Campbell F. What is meant by intention to treat analysis? Survey of published randomised controlled trials. Brit Med J 1999;319:670-74.
  15. Costa LOP, Lin CWC, Grossi DB, et al. Clinical trial registration in physiotherapy journals: recommendations from the International Society of Physiotherapy Journal Editors. J Physiother 2012;58:211-3.

*******************************************************

For all correspondence and reprints:

Steven Kamper, The George Institute for Global Health, University of Sydney

E-mail: skamper@george.org.au
