

Pondering the peanutabout…

5 Jan, 17 | by Bridie Scott-Parker

I read the StreetsBlogUSA post Study: Diagonal Intersections are Especially Dangerous for Cyclists today with great interest, for a number of reasons that I thought I would share with you.

Firstly, there is no doubt that cyclists are a vulnerable road user group, and that particular segments of road are more problematic for cyclists. The research cited in the post pertains to an Injury Prevention publication which examined, in depth, police reports of 300 car–cyclist crashes in the New York City area, as well as the templates police use to record crash-pertinent information across the US. Innovative research which approaches a known problem from novel perspectives helps to provide additional pieces for the jigsaw puzzle that we seek to solve, and this research was an intriguing read indeed.

Secondly, the research revealed that some road configurations appeared to increase crash risk (i.e., we want to reconfigure these roads), and that the safest option in the most problematic circumstances was to separate the motor vehicle from the vulnerable cyclist. The ‘solution’ for cyclist safety can be a highly contentious issue, particularly here in Australia, where the motor vehicle has traditionally – through necessity – dominated our vast landscape, and where, as health and other benefits become apparent, cycling is gaining traction. Indeed, Cadel Evans, arguably Australia’s most celebrated cyclist, has tried to bring clarity to this divisive issue, stating that

“I don’t think we should separate the two, because most people who ride a bike also have a car. In the end, they’re public roads for everyone. It’s a privilege to use roads, not a right.

We have to respect everyone who’s using them, whether they’re driving a car, bus, tractor or truck, or riding a bike or are a pedestrian. We have to respect each other’s privilege and safety.”

in response to the question “What do you say to drivers who think cyclists don’t belong on the road?”

Thirdly, the innovative solution of the peanutabout speaks to ideas beyond the cyclists themselves – this is consistent with systems thinking, which argues that safety (in this case, cyclist safety) emerges from a complex web of actions and interactions among a breadth of stakeholders who play a role in the larger safety system (e.g., in the case of my own research interests, an application of systems thinking to young driver road safety). Given that we are more than halfway through the Decade of Action for Road Safety, and that in Australia the road toll returned to an upward trajectory in 2016 after many years of decline, such innovative thinking is critical.

Fourthly, the researchers noted that the templates used by police to record crash-pertinent information did not provide adequate details regarding the crash circumstances. Unfortunately this is not an uncommon problem, and again one that I have come across in my own research endeavours. If we are to effectively prevent injury, we need as much contextual and other information as possible regarding the incident that contributed to the injury.

Fifthly, while the peanutabout appears to be an ideal solution to the critical issues identified for the area noted, I am mindful that drivers do not always ‘cope well’ with complex infrastructure such as roundabouts. As a researcher within the realm of young driver road safety, and the mother of a teen with a learner licence (which requires full supervision whenever she is behind the wheel), I often hear Learner drivers tell me that they ‘freak out’ when they come to a roundabout and it is not actually round! According to Learners, roundabouts must be round, while oval roundabouts and others shaped as a parallelogram should be called something different. Hmmmm, on reflection, maybe Learners will be okay with a ‘peanutabout’…

Finally, I paused to reflect on the safety implications for motorcyclists – another vulnerable road user group. While traversing a roundabout on his Harley Davidson last year, a colleague was driven over by a texting driver behind the wheel of a 4WD, who reported that she had checked the roundabout for vehicles before entering, and that she did not see – or hear – my colleague already on the roundabout (and thus with right of way) until her front right tyre was on top of his leg and his motorbike. Thankfully he has managed to retain his leg; however, he has had multiple operations, requires additional surgery, will be scarred for life, and will never walk without support again. My colleague is the first to acknowledge that motorcyclists sometimes deliberately place themselves in danger through their riding behaviours – himself included – however, we both eagerly await any intervention that will increase motorcycle safety when traversing complex infrastructure such as roundabouts.

Quantifying the burden of injury in ‘data-poor’ settings: a local-need-driven approach?

12 Oct, 16 | by Brian Johnston

Editor’s Note: earlier this year the journal published injury data from the Global Burden of Disease project. In an accompanying editorial I noted that many of the regional or sub-national estimates were “derived from aggregation and extrapolation of limited primary sources” and yet could “become the basis for policy or programming at an intensely local level.”

I saw this as a challenge to researchers, a call to “crowd source” burden of disease data from the subregions and subpopulations unrepresented, or simply estimated, in the global aggregate. If we identified those needs and provided resources for good data collection, data management and data reporting, the information collected would be immediately useful at the global scale and – one hopes – at the local level too.

Dr. Safa Abdalla, a member of our editorial board, approaches that suggestion with some caution and – in this guest post – draws distinctions between the needs and experience of researchers and public health professionals in “data-rich” and “data poor” environments. – Brian Johnston (Editor-in-Chief)

 

Some parts of the world, typically in the low- and middle-income country classification range, lack solid basic information about the frequency and distribution of injuries in their populations. That is not to say that they lack the sources or the capacity to measure them, but in those same places the public health practice machinery has been occupied (not entirely unduly, of course) with a cluster of conditions, like communicable diseases, that international actors have been investing heavily to tackle. In such an environment, local objective assessments of all potentially impactful conditions may not have been deemed necessary. As a result, priority setting has been skewed towards those conditions of historical focus, without heavy reliance on local epidemiological evidence.
The very first global burden of disease and injury assessment and subsequent versions have highlighted the need to consider the burden of all realistically possible conditions that affect human health – including injuries – in a way that allows objective comparisons and consequently objective priority setting. Arguably, data from so-called ‘data-poor’ countries have not always been sufficient and/or accessible enough to feed into these global-level estimation projects, and data gaps were filled with an assortment of methods that continue to evolve to date, probably at a rate that surpasses the rate of improvement in the quantity and quality of data from those countries.
The burden of disease assessment methodology is very demanding, not only computationally but in terms of data input, requiring epidemiological estimates at the very granular level of disease and injury sequelae, and synthesizing those into a range of novel summary measures (Disability-adjusted life years for example). Yet, incidence, prevalence and mortality of any condition at a broader level are key inputs for country- or locality-level policy development and health service planning and monitoring. It is in measuring those epidemiological quantities that the value of country-level estimation in data-poor settings lies, without necessarily delving into the complexities (and relatively unnecessary luxury for the time-being) of summary measure calculation. In addition, country-level assessments can uncover gaps in data systems that, when addressed, can create a seamless flow of better quality data for local decision making.
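To make the summary-measure arithmetic concrete, here is a deliberately simplified sketch of the disability-adjusted life year (DALY) calculation mentioned above. All numbers are hypothetical, the function names are mine, and real GBD calculations are far more involved (granular sequelae, and in some versions discounting and age weighting):

```python
def years_of_life_lost(deaths, std_life_expectancy_at_death):
    """YLL: deaths multiplied by standard life expectancy at age of death."""
    return deaths * std_life_expectancy_at_death

def years_lived_with_disability(incident_cases, disability_weight, avg_duration_years):
    """YLD: incident cases x disability weight (0-1) x average duration."""
    return incident_cases * disability_weight * avg_duration_years

def daly(yll, yld):
    """DALYs combine fatal (YLL) and non-fatal (YLD) burden."""
    return yll + yld

# Hypothetical example: 100 road-traffic deaths at a mean remaining life
# expectancy of 40 years, plus 1,000 non-fatal injuries with a disability
# weight of 0.2 lasting 5 years on average.
yll = years_of_life_lost(100, 40)                 # 4000 years
yld = years_lived_with_disability(1000, 0.2, 5)   # about 1000 years
print(daly(yll, yld))
```

As the paragraph above argues, a local team may not need this last summary step at all: the incidence, prevalence and mortality inputs are themselves the quantities most useful for local planning.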
But with whom does the onus of carrying out such local-level estimation reside? Undeniably, global estimation efforts have produced country-specific estimates, stimulated country data hunts that fed data into their machinery and, in a few ‘data-rich’ countries, facilitated full burden of disease and injury assessments too. However, to date, injury burden estimates for the vast majority of ‘data-poor’ countries come from indirect estimation in these global projects. One can argue that, alternatively, an approach driven by the need for public health action (be it strategy updating or service development) would be the most beneficial for producing estimates for those very countries at national, sub-national or subgroup levels. This approach entails that a local team of researchers, public health practitioners and other stakeholders evaluate all their data sources, use them in a simple and transparent fashion to develop the best estimates that fit their purpose, and take action based on the estimates and other relevant input, while also identifying the data gaps and working on filling them. Arguably, informing local public health action should take priority over informing the global view, but global burden estimation efforts can still (and must) benefit from the products of this process. However, the process needs to be driven by local demand for estimates and not by the need to fill gaps for the global estimates. It should also be led, undertaken and owned by local teams of public health practitioners, analysts and researchers. The reason for this is that assessing and using health data are basic public health functions that all public health practitioners and analysts in any country should be capable of carrying out.
Relying on external support from ‘global project’ teams to develop country estimates denies public health practitioners and researchers in those ‘data-poor’ countries the opportunity to hone their skills in public health data assessments and epidemiological estimation. It also denies them ownership of any subsequent efforts to improve data availability via epidemiological studies or administrative data collection.
This approach need not be limited to injury burden assessment, but it is much more needed for the latter. This is mainly because injuries in many low- and middle-income countries have been neglected for so long that epidemiological assessments of other conditions traditionally associated with those countries are likely more abundant. Hopefully, as more and more country teams assess, use and improve their own injury data sources, this reality will eventually change.

Safa Abdalla
drsafa@yahoo.com
twitter: @Safa12233

Neuromuscular control program prevents lower limb injuries in men’s community Australian Football

23 Mar, 16 | by Angy El-Khatib

Injury researchers commonly study elite athletes because they participate in athletics year-round and thus have an increased chance of sustaining an injury. However, most athletes participate at the recreational or community level. (According to the NCAA, only 1.9% of American high school soccer players become professional players!)

Understanding that there is a difference between the physical profile of an elite player and a community player is imperative for making recommendations for injury risk factor management. The latest publication by Finch et al. focuses on this matter.

In the current issue of Injury Prevention, Finch et al. provide more evidence for targeted neuromuscular control exercise programs for decreasing knee injuries and lower limb injuries (LLI). The randomized controlled trial (RCT) evaluated 18 non-elite male community Australian football clubs, with data from more than 1,564 people. As profiled in the study, individuals who participated in the neuromuscular control intervention had a reduced rate of LLI compared with control players.
The intervention was implemented as a “warm-up” prior to training. The program was based on the Preventing Australian Football Injuries through eXercise (PAFIX) study; the control group participated in a “sham” program that included similar exercises. Although not in the published article, I was curious to know what PAFIX training fully entailed. The PAFIX training manuals include a detailed look at the neuromuscular exercises implemented, including a variety of plyometric training, stability and balance exercises, and change-of-direction drills.

Despite no statistically significant findings, this “analysis indicates that clinically relevant reduced knee injury and LLI rates can be achieved through targeted exercise training programmes in men’s community AF” (Australian Football).
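The distinction between a clinically relevant reduction and statistical significance can be made concrete with an incidence rate ratio and its confidence interval. The sketch below uses entirely hypothetical numbers (not taken from the Finch et al. trial) and a standard log-scale approximation:

```python
import math

def incidence_rate_ratio(events_a, exposure_a, events_b, exposure_b, z=1.96):
    """Rate ratio (arm A vs arm B) with an approximate 95% CI computed on
    the log scale. exposure_* is player-time at risk (e.g. training hours)."""
    rr = (events_a / exposure_a) / (events_b / exposure_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, (lower, upper)

# Hypothetical example: 40 lower limb injuries over 10,000 exposure-hours in
# the intervention arm vs 60 over 10,000 hours in the control arm.
rr, ci = incidence_rate_ratio(40, 10_000, 60, 10_000)
print(round(rr, 2), tuple(round(x, 2) for x in ci))
# rate ratio ~0.67 with a 95% CI of roughly (0.45, 0.99): a one-third
# reduction that may matter clinically even though the CI nearly reaches 1.
```

With smaller event counts the same point estimate would come with a CI crossing 1, which is exactly the "clinically relevant but not statistically significant" pattern the quote describes.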

This finding struck me as particularly important because of the vital role of community sport and recreation programs in providing nonelite athletes with the opportunity to gain the physical literacy skills needed to benefit from participation in sport and physical activity.

I look forward to more injury research which could potentially be generalized for nonelite, athletic communities.

I love a sunburnt country

10 Mar, 16 | by Bridie Scott-Parker

I received an email this week from a friend and colleague, alerting me to a report recently released by the Royal Flying Doctor Service:  The Royal Flying Doctor Service: Responding to injuries in remote and rural Australia.

The report covers falls, burns, poisonings, transport accidents, workplace injuries, drownings, self-harm and assault, finding that Australians living in remote and very remote areas are:

  • Almost twice as likely as city residents to sustain an injury, and 2.2 times more likely to be hospitalised for an injury;
  • Four times more likely to die from a transport related injury than major city residents;
  • 3.8 times (remote) and 4.2 times (very remote) more likely to die from assault than major city residents; and
  • 1.7 times (remote) and 1.8 times (very remote) more likely to die from suicide than major city residents.

Injuries are also a leading cause of death and hospitalisation among children (more children die from injuries (36%) than from cancer (19%) and diseases of the nervous system (11%) combined), Indigenous Australians, and agricultural workers.

While, as an injury prevention researcher, I encourage you all to become familiar with the report and its findings, the email sparked two memories for me. The first was a conversation with US colleagues after I invited them to visit Australia as we worked collaboratively. If you search the internet, you will find that many animals here might try to kill you. We have crocodiles, irukandji jellyfish, snakes, spiders, and my colleagues could name many more animals-of-death. Having lived in Australia my whole life, I reassured them that the likelihood of them meeting an untimely demise during their trip was pretty low, and the good news is they went home in one piece.

The second memory – sparked almost instantaneously – was a flashback to my childhood. During primary school we learnt the most wonderful poem, My Country, by Dorothea Mackellar, by rote. This stanza in particular has always remained with me:

I love a sunburnt country,
A land of sweeping plains,
Of ragged mountain ranges,
Of droughts and flooding rains.
I love her far horizons,
I love her jewel sea,
Her beauty and her terror –
The wide brown land for me!

Despite the beauty of the poem, and how much it resonates with me, growing up in the country can be dangerous for many reasons, including the fact that medical assistance is not always close by.

p values misused

8 Mar, 16 | by Barry Pless

Don’t ask me why but I follow Retraction Watch faithfully. Recently there was a posting about p values I thought would be of interest to our readers and contributors. Here it is verbatim.

“We’re using a common statistical test all wrong. Statisticians want to fix that.

After reading too many papers that either are not reproducible or contain statistical errors (or both), the American Statistical Association (ASA) has been roused to action. Today the group released six principles for the use and interpretation of p values. P-values are used to search for differences between groups or treatments, to evaluate relationships between variables of interest, and for many other purposes. But the ASA says they are widely misused. Here are the six principles from the ASA statement:

1. P-values can indicate how incompatible the data are with a specified statistical model.
2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
4. Proper inference requires full reporting and transparency.
5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
We spoke with Ron Wasserstein, ASA’s executive director, about the new principles.

Retraction Watch: Why release these “six principles” now? What about this moment in research history made this a particularly pertinent problem?

Ron Wasserstein: We were inspired to act because of the growing recognition of a reproducibility crisis in science (see, for example, the National Academy of Sciences recent report) and a tendency to blame statistical methods for the problem. The fact that editors of a scholarly journal – Basic and Applied Social Psychology — were so frustrated with research that misused and misinterpreted p-values that they decided to ban them in 2015 confirmed that a crisis of confidence was at hand, and we could no longer stand idly by.

Retraction Watch: Some of the principles seem straightforward, but I was curious about #2 – I often hear people describe the purpose of a p value as a way to estimate the probability the data were produced by random chance alone. Why is that a false belief?

Ron Wasserstein: Let’s think about what that statement would mean for a simplistic example. Suppose a new treatment for a serious disease is alleged to work better than the current treatment. We test the claim by matching 5 pairs of similarly ill patients and randomly assigning one to the current and one to the new treatment in each pair. The null hypothesis is that the new treatment and the old each have a 50-50 chance of producing the better outcome for any pair. If that’s true, the probability the new treatment will win for all five pairs is (½)^5 = 1/32, or about 0.03. If the data show that the new treatment does produce a better outcome for all 5 pairs, the p-value is 0.03. It represents the probability of that result, under the assumption that the new and old treatments are equally likely to win. It is not the probability the new treatment and the old treatment are equally likely to win.
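Wasserstein's arithmetic can be checked directly. The sketch below (the function name is mine, not the ASA's) computes the one-sided binomial p-value for the matched-pairs example: the probability, under the null, of the new treatment winning at least as many pairs as observed:

```python
from math import comb

def binomial_p_value(successes, trials, p_null=0.5):
    """One-sided p-value: probability of at least `successes` wins in
    `trials` pairs, assuming each treatment is equally likely to win."""
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(successes, trials + 1))

# New treatment wins in all 5 matched pairs:
print(binomial_p_value(5, 5))  # 0.03125, i.e. (1/2)**5 = 1/32
```

Note that this number is computed assuming the null hypothesis is true, which is exactly why it cannot also be the probability that the null hypothesis is true.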

This is perhaps subtle, but it is not quibbling. It is a most basic logical fallacy to conclude something is true that you had to assume to be true in order to reach that conclusion. If you fall for that fallacy, then you will conclude there is only a 3% chance that the treatments are equally likely to produce the better outcome, and assign a 97% chance that the new treatment is better. You will have committed, as Vizzini says in “The Princess Bride,” a classic (and serious) blunder.

Retraction Watch: What are the biggest mistakes you see researchers make when using and interpreting p values?

Ron Wasserstein: There are several misinterpretations that are prevalent and problematic. The one I just mentioned is common. Another frequent misinterpretation is concluding that a null hypothesis is true because a computed p-value is large. There are other common misinterpretations as well. However, what concerns us even more are the misuses, particularly the misuse of statistical significance as an arbiter of scientific validity. Such misuse contributes to poor decision making and lack of reproducibility, and ultimately erodes not only the advance of science but also public confidence in science.

Retraction Watch: Do some fields publish more mistakes than others?

Ron Wasserstein: As far as I know, that question hasn’t been studied. My sense is that all scientific fields have glaring examples of mistakes, and all fields have beautiful examples of statistics done well. However, in general, the fields in which it is easiest to misuse p-values and statistical significance are those which have a lot of studies with multiple measurements on each participant or experimental unit. Such research presents the opportunity to p-hack your way to findings that likely have no scientific merit.

Retraction Watch: Can you elaborate on #4: “Proper inference requires full reporting and transparency”?

Ron Wasserstein: There is a lot to this, of course, but in short, from a statistical standpoint this means to keep track of and report all the decisions you made about your data, including the design and execution of the data collection and everything you did with that data during the data analysis process. Did you average across groups or combine groups in some way? Did you use the data to determine which variables to examine or control, or which data to include or exclude in the final analysis? How are missing observations handled? Did you add and drop variables until your regression models and coefficients passed a bright-line level of significance? Those decisions, and any other decisions you made about statistical analysis based on the data itself, need to be accounted for.

Retraction Watch: You note in a press release accompanying the ASA statement that you’re hoping research moves into a “post p<0.05” era – what do you mean by that? And if we don’t use p values, what do we use instead?

Ron Wasserstein: In the post p<0.05 era, scientific argumentation is not based on whether a p-value is small enough or not. Attention is paid to effect sizes and confidence intervals. Evidence is thought of as being continuous rather than some sort of dichotomy. (As a start to that thinking, if p-values are reported, we would see their numeric value rather than an inequality (p=.0168 rather than p<0.05)). All of the assumptions made that contribute information to inference should be examined, including the choices made regarding which data is analyzed and how. In the post p<0.05 era, sound statistical analysis will still be important, but no single numerical value, and certainly not the p-value, will substitute for thoughtful statistical and scientific reasoning.
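As a minimal illustration of the "effect sizes and confidence intervals" reporting style described above, here is a sketch using made-up recovery-time data. It uses a normal (z) approximation for brevity; a real analysis would use a t-distribution and check its assumptions:

```python
import math
import statistics as stats

def mean_diff_ci(sample_a, sample_b, z=1.96):
    """Point estimate and approximate 95% CI for the difference in means,
    using a normal approximation with independent-sample standard errors."""
    diff = stats.mean(sample_a) - stats.mean(sample_b)
    se = math.sqrt(stats.variance(sample_a) / len(sample_a)
                   + stats.variance(sample_b) / len(sample_b))
    return diff, (diff - z * se, diff + z * se)

# Hypothetical recovery times (days) under the new vs current treatment:
new = [11, 9, 10, 12, 8, 10, 9, 11]
current = [13, 12, 14, 11, 13, 12, 15, 12]
diff, ci = mean_diff_ci(new, current)
print(round(diff, 1), tuple(round(x, 1) for x in ci))
```

Reporting "about 2.75 days faster, 95% CI roughly 1.5 to 4 days" tells a reader both how big the effect is and how precisely it was estimated, which a bare p<0.05 cannot.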

Retraction Watch: Anything else you’d like to add?

Ron Wasserstein: If the statement succeeds in its purpose, we will know it because journals will stop using statistical significance to determine whether to accept an article. Instead, journals will be accepting papers based on clear and detailed description of the study design, execution, and analysis, having conclusions that are based on valid statistical interpretations and scientific arguments, and reported transparently and thoroughly enough to be rigorously scrutinized by others. I think this is what journal editors want to do, and some already do, but others are captivated by the seeming simplicity of statistical significance.

Pless note: I would be interested if any readers disagree. Please outline your views in 20 words or less. (Just kidding)

Latest from Injury Prevention
