Should we be more sceptical when interpreting injury studies?


By Eirik Halvorsen Wik @eirikwik & Ken Quarrie @KenQuarrie

Collecting injury data can be frustrating. Injury incidence and distributions look clean in the final form of a journal article, but so many factors can affect these metrics that you may question, at times, whether they are worth reporting at all. The challenges involved in putting together a reliable and valid injury database are not obvious, and they are often not appreciated by readers who have not been involved in collecting this type of data. We would therefore like to take this opportunity to highlight a few factors that are worth keeping in mind when you read and interpret your next injury study.

The hidden limitations behind the injury rates

Injury surveillance programs provide the foundation for epidemiological research. They identify risk factors and provide estimates of the effectiveness of preventative interventions in sporting contexts (1). Whether the study aims to explore the most common injuries in high school basketball or the effect of implementing a warm-up program in team handball, practitioners, clinicians and researchers alike place considerable trust in their findings, and the conclusions inform decisions and may change practice.

Over the last three decades, scientists have debated the best theoretical concepts and practical approaches for collecting and reporting injuries. In an attempt to align and unify injury studies, consensus statements have been published to guide researchers on definitions, collection procedures and reporting, and in this way make injury studies more comparable. Researchers have also examined how different methodological approaches affect results. For example, different injury definitions lead to different outcomes (2). A broad injury definition (e.g. any physical complaint) will naturally capture more problems than a narrow definition (e.g. missed match), although the latter is considered more time-efficient and reliable (3). Along the same lines, medical staff have been shown not to capture all injuries, as demonstrated in studies comparing this approach with reporting by the athlete (4, 5, 6), technical delegates (6), parents (7) or coaches (8). The overlap between injuries reported through the different approaches is surprisingly small, and the less severe the injuries, the harder it appears to be to obtain consistent results.

The level of investment in the data collection affects the outcomes

Our understanding of which factors affect injury rates, and of the strengths and limitations of different approaches, is improving, yet plenty of issues remain unsolved. One of these is the differences that can occur within a single surveillance program, where the injury definition and collection method are assumed to be consistent. Imagine that you are collecting injury data from different age groups over several seasons in a football academy, and that for each squad a different physiotherapist is responsible for recording all injuries leading to a medical examination. The team physiotherapist may change each year, and their motivation for following the pre-defined guidelines properly may differ. After all, this represents an added administrative workload that not only takes time but may also seem pointless to some, if they never see any immediate results and it does not affect their day-to-day practice. Still, they complete the task. Maybe because it is part of their job description and they are pushed to do so by their supervisor? Or perhaps because they are genuinely interested in understanding injury trends in the players they are treating and are working on a research project that relies on these data?

The scenario described above was the background for a recently published article in the Scandinavian Journal of Medicine & Science in Sports, in which the researchers examined whether injury rates were affected when some data recorders were more invested in the project than others (9). A team physiotherapist relying on the data for research purposes reported 8.8 times more non-time-loss injuries than non-invested physiotherapists working without an invested supervisor. Even among physiotherapists who were not themselves invested in the project, having an invested supervisor led to a 2.5 times greater incidence. Clearly, relying on these incidence rates to assess the effect of a new training regimen or to implement a prevention program would be close to useless with variations of this magnitude! For time-loss injuries, however, the incidence was similar across groups, and based on this study, comparative injury reports should probably be limited to time-loss injuries when multiple recorders are involved.
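To make the size of these differences concrete, here is a minimal sketch of how an injury incidence (injuries per 1000 hours of exposure) and a rate ratio between two recorders might be calculated. The injury counts and exposure hours below are hypothetical, chosen only so that the ratio matches the 8.8 reported above; they are not values taken from the study (9).

```python
# Minimal sketch: injury incidence per 1000 hours and a rate ratio between recorders.
# All numbers below are hypothetical and only illustrate how such a ratio is derived.

def incidence_per_1000h(injuries: int, exposure_hours: float) -> float:
    """Injuries per 1000 hours of training and match exposure."""
    return injuries / exposure_hours * 1000

# Hypothetical season: identical exposure, very different non-time-loss injury counts
invested = incidence_per_1000h(injuries=44, exposure_hours=5000)      # research-invested recorder
non_invested = incidence_per_1000h(injuries=5, exposure_hours=5000)   # non-invested recorder

rate_ratio = invested / non_invested
print(f"Invested: {invested:.1f} per 1000 h, non-invested: {non_invested:.1f} per 1000 h")
print(f"Rate ratio: {rate_ratio:.1f}")  # 8.8 with these hypothetical counts
```

With identical exposure, the rate ratio reduces to the ratio of injury counts, which is why differences in what recorders choose to register translate directly into differences in reported incidence.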

Researchers have warned about this problem for a long time, and it has been argued that narrow injury definitions should be applied because data collectors may interpret differently what counts as a recordable injury (10). The findings also echo observations from surveillance projects elsewhere. A 1987 study by Roux and colleagues (11) found evidence of under-reporting related to how closely the researchers were involved with the schools from which the data were being collected. Concussions were substantially under-reported in schools monitored by correspondence (27 concussions amongst 20 schools over the playing season; 1.4 per school) compared with those directly monitored by the researchers conducting the study (34 concussions amongst 6 schools; 5.7 per school).
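As a quick check of the Roux et al. figures quoted above, the per-school concussion rates and their ratio follow from a few lines of arithmetic (only the numbers given in the text are used):

```python
# Concussions per school under the two monitoring approaches (Roux et al., 1987)
correspondence = {"concussions": 27, "schools": 20}  # schools monitored by correspondence
direct = {"concussions": 34, "schools": 6}           # schools monitored directly by the researchers

rate_corr = correspondence["concussions"] / correspondence["schools"]  # ~1.4 per school
rate_direct = direct["concussions"] / direct["schools"]                # ~5.7 per school

print(f"Correspondence: {rate_corr:.1f} concussions per school")
print(f"Direct monitoring: {rate_direct:.1f} concussions per school")
print(f"Ratio: {rate_direct / rate_corr:.1f}x")  # roughly four times as many recorded per school
```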

So, should we be more sceptical when interpreting injury studies?

The simple answer is yes; clinicians, practitioners and researchers should be more sceptical when interpreting the results of injury surveillance studies. As early as 1997, Meeuwisse & Love (12) highlighted that estimating the direction and extent of bias from under-reporting is important if researchers and clinicians are to interpret findings in light of the methodology applied. Ideally, everyone reading an injury study should have a basic understanding of how and why methodological variations lead to differences in the reported injury outcomes.

Researchers should also be required to provide more detailed information on the context in which the injury surveillance was performed when publishing in major sports medicine journals. This could involve more rigorous reporting standards, such as stating the number of recorders involved or even declaring a recorder's reliance on the data for an academic degree as a conflict of interest. Regardless, readers should not automatically take reported injury rates to represent the true injury situation. A level of scepticism is required, and understanding the limitations behind the injury rates will ultimately enhance your ability to judge whether the results of a study are comparable to your own context and how much weight to place on the findings.

***

Dr Ken Quarrie @KenQuarrie is the Chief Scientist for New Zealand Rugby. Since the early 1990s, he has studied factors related to injury risk and player performance, and has implemented injury prevention initiatives within rugby union.

Eirik Halvorsen Wik @eirikwik is a PhD student at Aspetar in Doha, Qatar. He focuses on injuries in youth athletes at the Aspire Academy, with a specific interest in training load, growth and maturation, and injury epidemiology. He holds an MSc in Exercise Physiology from the Norwegian School of Sport Sciences.

References

  1. van Mechelen W, Hlobil H, Kemper HC. Incidence, severity, aetiology and prevention of sports injuries. A review of concepts. Sports Med. 1992;14(2):82-99.
  2. Bahr R. No injuries, but plenty of pain? On the methodology for recording overuse symptoms in sports. Br J Sports Med. 2009;43(13):966-72.
  3. Clarsen B, Bahr R. Matching the choice of injury/illness definition to study setting, purpose and design: one size does not fit all! Br J Sports Med. 2014;48(7):510-2.
  4. Bjorneboe J, Florenes TW, Bahr R, Andersen TE. Injury surveillance in male professional football; is medical staff reporting complete and accurate? Scand J Med Sci Sports. 2011;21(5):713-20.
  5. Nilstad A, Bahr R, Andersen TE. Text messaging as a new method for injury registration in sports: a methodological study in elite female football. Scand J Med Sci Sports. 2014;24(1):243-9.
  6. Florenes TW, Nordsletten L, Heir S, Bahr R. Recording injuries among World Cup skiers and snowboarders: a methodological study. Scand J Med Sci Sports. 2011;21(2):196-205.
  7. Schiff MA, Mack CD, Polissar NL, Levy MR, Dow SP, O’Kane JW. Soccer injuries in female youth players: comparison of injury surveillance by certified athletic trainers and internet. J Athl Train. 2010;45(3):238-42.
  8. Yard EE, Collins CL, Comstock RD. A comparison of high school sports injury surveillance data reporting by certified athletic trainers and coaches. J Athl Train. 2009;44(6):645-52.
  9. Wik EH, Materne O, Chamari K, Duque JDP, Horobeanu C, Salcinovic B, Bahr R, Johnson A. Involving research-invested clinicians in data collection affects injury incidence in youth football. Scand J Med Sci Sports. 2019.
  10. Orchard J, Hoskins W. For debate: consensus injury definitions in team sports should focus on missed playing time. Clin J Sport Med. 2007;17(3):192-6.
  11. Roux CE, Goedeke R, Visser GR, van Zyl WA, Noakes TD. The epidemiology of schoolboy rugby injuries. S Afr Med J. 1987;71(5):307-13.
  12. Meeuwisse WH, Love EJ. Athletic injury reporting. Development of universal systems. Sports Med. 1997;24(3):184-204.
