Pritpal S Tamber: Soft-wiring knowledge

Knowing when and how to apply established knowledge in practice is difficult. A recent article in The Lancet shows why.

The thickness of the inner walls of the carotid artery is associated with cardiovascular disease; so much so that the American College of Cardiology (ACC) and the American Heart Association (AHA) support the use of a single measurement to risk-stratify patients at intermediate risk. With such heavyweight organisations backing the measurement, you’d be forgiven for thinking this to be established knowledge that should be applied to all processes within cardiology clinics. It’s easy to start imagining the uses: measure everyone’s carotid artery thickness, plot them on a graph, and start proactively intervening with those at the far end.

However, the research that backs the use of this measurement isn’t that strong. In fact, the guidelines describe it only as a “reasonable” test (a grading based on the rigour of the research that underpins it). The article published in The Lancet finds no association between the rate at which the walls thicken and subsequent cardiovascular events.

Does this mean we abandon the test? If only life were that simple. Another study published in 2011 showed that the increasing thickness of the carotid artery wall was strongly associated with having a stroke. Clearly the indicative value of the thickness of the walls of the carotid artery is an area of knowledge that’s still in development. Rather unhelpfully – although perhaps realistically – an associated comment in the journal concludes that “clinicians could continue to use a single measurement…if required” (italics mine). What do you do with a “could”?

These articles suggest that the value of carotid artery thickness as a measurement is yet to be fully established. But cardiovascular disease is a major source of morbidity and mortality worldwide, so anything we can do to help clinicians focus their efforts can only be a good thing. Perhaps, then, there is a case for cautiously using the knowledge, with that caution reducing as the evidence becomes clearer. Certainly the ACC and AHA seem to think so.

Problems arise, however, when seemingly established knowledge is hard-wired into systems that support the delivery of care. As doubts grow as to the value of specific clinical markers, organisations need to be able to pull knowledge out of their systems, or at least moderate how that knowledge is used. In essence, they have to soft-wire their systems.

But how do you soft-wire a system?

There is no easy answer but in my experience there is a general principle: accept that there is no such thing as established knowledge and build systems that are sensitive to change and able to respond.

This is not easy, I realise, but all too often I have come across organisations and processes that claim to be aligned with “the evidence” (whatever that means) yet have no clear mechanism for staying abreast of changes. I have also rarely seen the kind of clinical governance that is needed to decide whether, and how, changes in knowledge should change processes.

At best, this is irresponsible; at worst it’s dangerous.

Pritpal S Tamber is the director of Optimising Clinical Knowledge Ltd, a consultancy that helps organisations improve how they use established clinical knowledge. He was previously the medical director of Map of Medicine Ltd, a company that creates clinical pathways to help health communities design services. He was the editorial director for medicine for BioMed Central Ltd and he was also the managing director of Medicine Reports Ltd. He has twice been an editor at the BMJ, the first time as the student editor of the Student BMJ.

Competing interests: None, although I provide consultancy to organisations to help them make better use of established knowledge, an example of which is in this blog post.

 

  • Martin Mcshane

    Pritpal, I enjoyed your thinking.
    In Lincolnshire we have a multiprofessional forum (PACEF – or, as I like to call it, a ‘mini-me NICE’) which regularly reviews the evidence we apply in our guidelines and policies. Is this the sort of governance you are seeking? It is good, but you have made me think it may not be good enough. What rigour do we need to put in place to address what you have highlighted?

  • Hi Martin, 

    Thanks for the question. What you’ve described sounds good, and certainly better than what I have seen in most places. I see two dimensions to this issue: how knowledge is embedded into systems, and how the knowledge landscape is monitored.

    Once your mini-me NICE has agreed what knowledge to embed locally, it needs to ensure there is a clear audit trail of where that knowledge is being used and who has the right to change it. The latter is essential; having named individuals whose job it is to ‘own’ content is a powerful way to ensure that people (have to) care (building it into job descriptions helps).

    Who the right owners are will depend on what ‘embed’ means. If it means creating printed information using a local printer, it can be anyone (responsible). But if it means adding encoded knowledge to electronic systems, then the owners need to understand the process for changing it when required.

    Staying abreast of the knowledge landscape can be difficult. The mini-me NICE should be clear on what knowledge it monitors. My view is that it should be the so-called secondary literature – systematic reviews and guidelines. It should also be clear what quality threshold to employ (there are validated tools for assessing the quality of most article types). The mini-me NICE could relatively quickly agree which knowledge sources it is checking.

    The tough part is knowing how often to check. Some areas of healthcare have a regular churn of knowledge, while others are more static. This is something else that the mini-me NICE could agree on. In my experience, there is a lot of churn in high-volume specialties, such as cardiology, but less in others. The general rule of thumb amongst knowledge nerds (like me) is that you should do a good scan of the knowledge landscape every 18 months or so, although there is little ‘evidence’ to say this is right, as such.

    With a thorough approach to how and where knowledge has been embedded into local systems, and an agreed approach to how the knowledge landscape is scanned and responded to, your mini-me NICE can operate safe in the knowledge that what’s being used in care – whether at the coal-face or at a population level – is aligned with the best evidence at all times.

    I hope that makes sense. 

    Pritpal

  • Martin Mcshane

    Hmm, we might be good – but we’re not that good!!
    Food for thought…
    Thanks
    Martin

  • Martin, 

    If this is something you’re looking into further you may find this article of value:

    Governance for clinical decision support: case studies and recommended practices from leading institutions
    http://www.ncbi.nlm.nih.gov/pubmed/21252052 

    Pritpal