Jonathan A Michaels: Bridging the gap between academics and practitioners

During my career as a clinical academic I have seen considerable changes to the clinical, academic, and financial structures within the NHS, associated with the introduction of evidence-based practice and elaborate systems for evaluating and making recommendations on the use of healthcare technologies. [1] Whilst the improved use of research evidence and explicit consideration of the risks and benefits of new treatments are to be commended, some of these changes may have disincentivised research amongst “non-academic” clinicians and created an increasing rift between academic experts and practitioners working at the coalface.

In the late 1980s the concept of a hierarchy of evidence was introduced, often presented as a pyramid with systematic reviews of randomised controlled trials (RCTs) at the pinnacle. [2,3] Whilst RCTs may be the most reliable means of obtaining numerical estimates of treatment effects, they have well documented limitations, and the desire to provide clear results with minimum trial size and cost may result in the selection of outcomes, comparators, and populations that limit their applicability. [4]

Examination of any health technology appraisal will show that most parameter estimates are not available from RCTs, including many key outcomes, such as quality of life and estimates of the incidence, costs, and disutility associated with adverse events. Whilst these parameter estimates are necessary to inform appraisals, they are not sufficient. Every decision requires a host of qualitative choices: framing the decision problem, categorising health states, selecting comparators and outcomes, resolving structural issues of modelling and curve fitting, and making the more fundamental value judgements that underpin choices about which technologies are developed, researched, and appraised.

In La Distinction, Pierre Bourdieu introduces the idea that symbolic power is created through having control “…over the classificatory schemes and systems which are the basis of the representations of the groups.” [5] Vandana Shiva described a form of “violence against the subject of knowledge … perpetrated socially through the sharp divide between the expert and the non-expert,” and Thomas Teo refers to the “epistemological violence” that may occur when the researcher is responsible not only for the analysis of data, but also for the value judgements and interpretations that are presented as “knowledge.” [6,7]

In recent years hierarchies of evidence, rather than simply representing measures of reliability of numerical parameter estimates, have become the de facto measure of the value and status of research methods and of the researchers who undertake them, creating an academic elite of “experts” largely drawn from clinical trialists working within a rationalist, positivist model.

Such experts have become the arbiters of research “value,” determining funding through grant giving bodies, advisory boards, and the editorial boards that select the publications and citations upon which state funding for academic institutions may depend. It is also these experts who may be called upon to populate the numerous aspects of any decision problem that require expert opinion or value judgements. Even where systems for lay representation are in place, it may be difficult or impossible for a layperson or non-academic practitioner to have sufficient understanding of the appraisal process and complex economic models to generate the required information. This may serve to disenfranchise those closest to the decision, putting the onus for value judgements on a select group with a specific perspective and potential conflicts of academic or economic interests.

It is thus incumbent on those who make research funding decisions and develop the methods of technology appraisal to find techniques that encourage true participation in decision-making, rather than risking the epistemological violence that may occur through placing power in the hands of a small academic elite, who may come to see the wider public, non-academic practitioners and patients as “other.”

References:

1. National Institute for Health and Care Excellence. Guide to the processes of technology appraisal. Process and methods [PMG19] 2014.
2. Sackett DL. Rules of evidence and clinical recommendations on the use of antithrombotic agents. Chest 1989; 95(2 Suppl): 2S-4S.
3. Dartmouth Biomedical Libraries. Evidence-Based Medicine (EBM) Resources.
4. Higgins JPT, Altman DG, Gøtzsche PC, et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ 2011; 343.
5. Bourdieu P. Distinction: a social critique of the judgement of taste. Cambridge, Mass.: Harvard University Press, 1984. (at p467)
6. Shiva V. The violence of reductionist science. Alternatives 1987; 12(2): 243-61.
7. Teo T. From speculation to epistemological violence in psychology: a critical-hermeneutic reconstruction. Theory & Psychology 2008; 18(1): 47-67.

Jonathan A Michaels, professor of clinical decision science, School of Health and Related Research, University of Sheffield.

Competing interests:  I have no current personal or financial interests that are directly relevant to this work. However, over the past 3 years, I have carried out research as an external consultant through a company, of which I am a Director, Michaels Consulting Ltd. This includes work as Principal Investigator on a Research Programme funded by NIHR (PGfAR – RP-PG-1210-12009) and some advisory board meetings and telephone interviews for industry regarding matters relating to technology appraisal. The matters dealt with in this paper are personal views related to my experience as a Clinical Academic in the NHS and my involvement with NICE and HTA processes and do not relate to these interests.