
Quality Improvement

Look not for the fleck in your brother’s eye, but the gorilla in your own…

25 Jan, 16 | by Toby Hillman


Gorilla

Approaching clinical exams such as MRCP PACES is an anxious time for medical graduates. One is expected to ‘perform’ under pressure, wary of the need to elicit signs leading to potentially outlandish diagnoses. The breadth of knowledge and skills required – confidently identifying CMV retinitis at one station, then handling a complicated communication scenario, then picking up a subtle fasciculation at the next – is quite a task. It is also a task asked of graduate trainees in almost all specialties: the clinical portion of any membership exam is a vital stepping stone on the route to full qualification and independent practice.

I was teaching some PACES candidates this week, and played my usual game with them: what can I pick up, simply by observing the patient and watching the candidate’s examination, that they miss? This isn’t just a mean trick – it helps me concentrate on what they are doing, and in turn helps to identify additional signs that might have been missed completely, be unknown, or simply passed off as unimportant. The gems this week included a white plaster over the bridge of the nose of a gentleman with COPD – which prompted a further inspection of the surroundings, and the tell-tale NIV mask and tubing just poking out from behind a bedside cabinet. The second was a white sheet of A4 stuck at eye level behind another patient’s head, with the letters NBM written very large in green marker pen.

In both cases these clues to the wider diagnosis were staring the candidates in the face. However, it was only when brought to the fore that their implications for the clinical context were appreciated. So I finished the teaching session having had my fun, and the pupils might have learned a bit more about the value of careful observation and how it can influence clinical reasoning. It was only when I got home and read this recently published paper by Dr Welsby on the neurophysiology of failed visual perception that I started to consider this interaction a little more objectively, and to ask how its lessons could be applied in other spheres.

The paper is one of those analyses of physiology and its application to everyday life that make medical education and medical practice so enjoyable. Dr Welsby takes three eye problems and seven brain problems, and presents them in a way that highlights why clinical experience – the act of examining patients, and the slow acquisition of the lived experience of using and applying knowledge over time – is so important in medical education. He also suggests several reasons why trainees today aren’t afforded the same opportunities to develop this experience as he was.

The paper also offers lessons for more experienced clinicians, and could perhaps be used to highlight errors of clinical understanding on a much wider scale.

Essentially, the data our brains work with are flawed – and to compensate, our brains make things up, or miss the obvious completely because we were concentrating on something else. The paper links to two videos which are well worth looking up – this one is my favourite. The video is a perfect demonstration of how easy it is to miss vital information; applied to the situations we work in daily, the surprise is not that we sometimes get diagnoses wrong, but that we ever reach them at all.

As one climbs the slippery pole of the medical hierarchy, it would be as well to reflect further on Dr Welsby’s observations. Clinical experience can make what seems impossible to a first-year graduate second nature to a fourth-year registrar. That experience allows senior clinicians to spend time thinking and working on other problems – but still with the same eyes and the same brains. Indeed, it is often successful clinicians who are chosen to lead on projects far from the clinical environment, projects that demand a somewhat different form of observation and synthesis of information.

As more and more clinicians take on leadership and managerial roles, those lessons learned at the bedside should not be forgotten. If the data from our health systems are flawed, the decisions we take to modify, ‘improve’ and reform those systems will be as flawed as the conclusions reached by a brain compensating for the incomplete information fed to it by the eyes.

Leaders from the medical profession have a duty both to remain patient with students who miss the ‘glaringly obvious’ and to remain vigilant for the gorillas hiding in plain sight, no matter where they find themselves.


If a job’s worth doing…

13 Jul, 15 | by Toby Hillman

Cryptic clothing label

Image via WM Jas on Flickr

Competency-based curricula have largely replaced purely knowledge-based curricula in medical education. As assessment of competency has become a seemingly endless task, participants in medical education have often complained that learning and development have been reduced to a series of hoops to jump through or, even worse, a series of boxes to tick.

The development of clinical governance frameworks in the late 1990s formalised the involvement of trainee physicians in clinical audit. Audit became mandated and, as such, became a box to tick: if one could not demonstrate an audit of some description (any, really), one could not progress.

As such, clinical audit is one of the more reviled duties undertaken by trainees (in their own time), as very often the information ‘uncovered’ is simply an explicit statement of an open secret. The time taken to prove an acknowledged reality is usually resented by the auditor, and the recipients of the news that their practice falls below expected standards aren’t usually overjoyed. The result of such projects is commonly a list of recommendations, presented in the last week of an attachment by a junior member of the team, that will be agreed by all but actioned by no one. (Only around 5% of audits ever make any difference to practice.)

Quality improvement projects have been lauded by many (me included) as an answer to the problems of clinical audit: the burden of data required to make changes is lighter, the measurements and standards can be set by the instigators and can be flexible enough to actually be achieved, and the change process is embedded as a primary aim within the most common methodologies.

Having been adopted into many curricula, quality improvement is now suffering many of the same problems as clinical audit. The projects are usually carried out in trainees’ own time, yet are a mandated part of training – leading to resentment. The subjects tackled tend to be huge (‘We need a new IT system – the current one is not fit for purpose’) or focused on another team’s practice (‘The radiology department need to be quicker at doing the tests we ask for…’). The doctors participating in a QI project often arrive with a solution in mind (‘We’ll just get a bit of data, do what they did at my last hospital, and then we’ll show an improvement’) without really understanding the problem in its current context.

Sadly the result is that some of the most powerful tools for driving change within organisations have been reduced to a ‘tick’ on an assessment sheet, and are completed as last-minute efforts to scrape through the next annual progression check.

This does not mean that audits are inherently useless, or that QI projects should be abandoned as a tool for engaging junior doctors in understanding how to improve clinical practice. What it means is that, if a job is worth doing, it is worth doing properly…

To do a job properly, one must know what is required, and what the best tools for the job are. Not everything can be part of a QI project, and not everything needs auditing. A paper republished in this month’s PMJ is an excellent exploration of the different ways in which changes can be evaluated; its lessons can be reverse-engineered, letting potential change agents recognise when they are setting off down the wrong road. It also reminds us that there are more options for change efforts than the simple ‘before and after’ audit, or the use of multiple PDSA cycles.

Audit and QI are not the only areas where the adage of ‘doing a job properly’ applies. As I discussed recently, all of the assessments we use to monitor competency are well intended and, when used enthusiastically and correctly, can uncover unexpected learning from even the most mundane of clinical encounters. If something has been ‘reduced to a tick-box’, then someone once thought that box was worth ticking. By taking the time to understand the theory and background behind the box, we might find ourselves using the tools available to us properly, and learning something in the process.


The beauty of the written word?

21 Apr, 15 | by Toby Hillman

New Font "Doctor's Handwriting"

Of the essential skills for doctors, writing has to be up there as one of the most important. Doctors’ handwriting has been the butt of many jokes over the years – justifiably – and written prescriptions remain a significant source of error in hospitals up and down the land.

The medical notes are another area where the handwriting of doctors is often held up to scrutiny. In days gone by, the registrar was the doctor who registered the opinion of the consulting physician; before that, notes were kept mainly for the interest and records of the physician themselves, helping in the discovery of syndromes and new disease entities through meticulous observation and the synthesis of the salient features of cases over time.

And now – what are the medical notes? They are no longer the preserve of physicians, with entries from the whole MDT contributing to the story of admission, recovery, discharge planning, and plans for the future. But the written history remains a key part of every patient’s journey. It is often in those initial observations and descriptions of symptoms and signs that the keys to a diagnosis can be found.

Changes in medical training models, staffing models, and working conditions have also increased the importance of the initial history, and of the medical notes as a tool for communicating the key aspects of a case to the team that follows. Given that a patient may be cared for by a different doctor on almost every day of their admission, written notes are more than ever the most reliable repository of clinical information.

For such a key part of the medical system, one might expect a rigorous training scheme – with competencies, certificates, seminars and reflective practice – to ensure that all the appropriate details are recorded, communicated and understood by those who come along afterwards to care for the patient and to understand the complexities that make up every patient and their story. Unfortunately there is little work in the literature that supports the training of medical students in written communication.

A study published online for the PMJ looked at the development and evaluation of a scheme that aimed to tackle the lack of formal training in constructing written histories, and to support trainers in the evaluation of medical students’ efforts at a ‘clerking’.

The authors developed a study with three arms: one of standard practice, one with additional training for students in communication, and a final arm combining that training with training for residents on how to give feedback using the RIME (Reporter, Interpreter, Manager, Educator) tool. The combined intervention showed positive results, with statistically significant improvement in clerking scores between the start and the end of the study. There was also a correlation between good handwriting and the overall quality of the histories – a correlation that could have been one of the key messages of the paper.

In addition, the authors’ approach – not simply ‘educating’ students, but working to create an environment in which useful feedback is given using a consistent and routinely applied tool – is a good lesson for anyone trying to improve educational interventions, and an important message of this paper.

However, I think we need to look a little more critically at what we as a profession are trying to achieve with the education we offer our students.

I often think that we are training the doctors of the future for the hospitals of yesterday – and we know that a significant proportion of what we teach our students will be wrong by the time they enter the workplace.

So when we look at the direction of travel in the medical world with regard to written notes, perhaps we need to take a leap forward in what we are teaching our students.

A few years ago patients were hardly ever copied into their clinic letters; now it is accepted as the norm. This increasing access to medical records is gathering pace in a far more profound way in the US through the open notes movement, where physicians and patients share near-equal access to the medical notes – changing the balance of consultations and written records, and ensuring that patients can play a far more active role in the management of their illnesses.

The other transformation is from handwriting to typing and, in the near future, to voice recognition. Electronic health records are also transforming the skills required to be a physician. No longer are a trusty fountain pen and a good dictation speed the mark of a skilled communicator in the written form. Now we shall have to be proficient at forms of communication that are more immediate, more direct, and perhaps more open to misinterpretation (tone is notoriously difficult to convey in quick electronic communiqués).

Training medical students in how to construct and communicate a history is vital, but we must keep in mind how our workplaces are changing, and how our communication is no longer necessarily directed to other medical professionals, but in fact towards the subject of the note. This will require not only the skills encapsulated in the Reporter, Interpreter, Manager and Educator framework, but also those that ensure a doctor can be a partner in managing a chronic disease.


Still only human

13 Feb, 15 | by Toby Hillman

A perfect specimen?

There is something different about medics. We stand out at university – often forming a clique that others find difficult to fathom, break into, or tolerate. We strive to be different in many ways: we learn a huge range of facts and figures, along with new languages (we are taught about everything from the arachnoid mater to xanthelasma, via dysdiadochokinesia) and new ways of behaving – “Hello, my name is… I’d like to examine your chest, if I may?”

This difference has been reinforced over centuries, helped along by the formation of royal colleges and, more recently, by real successes in actually curing some diseases and managing others, so that hospitals are no longer feared as places of death but seen instead as places of relative safety for those needing their services.

I think that this paper in the January edition of the PMJ may help to take us back to our roots a little. The paper is a quality improvement report looking at the impact of a mnemonic device on the completeness of information recorded in the notes of a paediatric department. The problem was that documentation was of a poor standard, impairing the investigation of complaints and incidents. The solution used an acrostic to help junior doctors record the important aspects of care encompassed within the post-take round.

Results were impressive, showing an increase in the completeness of the notes in areas that had previously been neglected, including parental concerns, fluid prescriptions, nursing concerns, and investigations. Understandably there was less of an increase in areas that had previously been well documented: the final plan, vital signs, presenting problems, and examination findings.
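As a rough illustration of how this kind of completeness measure might be calculated – a minimal sketch only; the field names and figures below are hypothetical, not taken from the paper:

```python
# Hypothetical sketch: measuring documentation completeness per item,
# before and after introducing an acrostic checklist.
# Field names and data are invented for illustration.

def completeness(notes):
    """Percentage of notes documenting each item."""
    items = notes[0].keys()
    return {item: 100 * sum(note[item] for note in notes) / len(notes)
            for item in items}

notes_before = [
    {"parental_concerns": False, "fluids": False, "nursing_concerns": True, "plan": True},
    {"parental_concerns": False, "fluids": True, "nursing_concerns": False, "plan": True},
    {"parental_concerns": True, "fluids": False, "nursing_concerns": False, "plan": True},
]
notes_after = [
    {"parental_concerns": True, "fluids": True, "nursing_concerns": True, "plan": True},
    {"parental_concerns": True, "fluids": True, "nursing_concerns": False, "plan": True},
    {"parental_concerns": True, "fluids": False, "nursing_concerns": True, "plan": True},
]

print(completeness(notes_before))  # parental_concerns ~33%, plan 100%
print(completeness(notes_after))   # parental_concerns 100%, plan 100%
```

The pattern mirrors the paper’s findings: the already well-documented items have little headroom, so the gains show up in the previously neglected fields.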

So we can see that, in a time-pressured, complex situation, the junior members of a team are better able to record relevant information when following a set pattern of recall and recording for each case. This is not a Nobel-worthy discovery, but it is an important contribution to the ongoing realisation in our profession that there are tools and techniques we can use to enhance our practice, and to improve the safety and outcomes of the processes we use in our daily work.

Many of the ‘new’ ideas in healthcare – Lean, Six Sigma, crisis resource management, human factors training, pit-stop handovers, checklists and so on – have origins outside medicine, in other high-risk, high-reliability, or high-value organisations. The impact of these ideas can be significant, and in some cases hospitals have been impressed enough to adopt philosophies from industry wholesale – notably the Royal Bolton Hospital. The medical profession itself, though, is usually rather more reluctant to adopt these concepts and apply them in practice.

The resistance to checklists, communication methods like SBAR, and other tools that seem to constrain clinical autonomy provides an interesting point to consider.  Is there something inherently wrong in encouraging medics to communicate or work in standardised ways?

Well, no. The ALS algorithm – much maligned by those who have to repeatedly take assessments and refresher courses using the same stock phrases, and act out scenarios that have an uncanny knack of ending in a cardiac arrest – has had great success. Indeed, of the teams that work in any hospital, the arrest team is one of the most efficient in terms of understanding a common purpose, using a common language, and following a set pattern of actions. This process even works across language barriers, as Dr Davies showed in this article.

And yet, there is always something uncomfortable about being asked to write / think / talk / communicate in a particular way as a medic.  Is this because we are somehow different from those other human beings working in complex, challenging environments?

My feeling is that perhaps we aren’t entirely to blame for our reluctance to adopt these ‘new’ ways of working. The hubris required to enter chaotic, painful, emotional situations, take control, and decide on a decisive course of action within a very short space of time is bred into us from the point at which we decide to become doctors. As I said at the start – we medics are different, and have been since we started on our journey to the positions we now hold.

And therein lies the rub. When it comes down to it, we aren’t really different from those we try to guide through the challenges of illnesses acute, long-term and terminal. We have the same brains, the same cognitive biases and the same susceptibility to distraction. So next time you are asked to follow an acrostic, use a checklist, or submit to a protocol, before rejecting the concept out of hand, consider whether the tool really isn’t fit for the job – or whether you need to follow the advice of Naomi Campbell: don’t believe your own hype.

What do all those numbers really mean, doc?

15 Jun, 14 | by Toby Hillman


What is ‘normal’

Go into hospital nowadays, and you will do well to escape without having a blood test of some sort.  Very often these are routine tests, which give doctors an overview of the state of play. There might be a few wayward figures here or there – but the doctors will ignore them, or explain them away as part of the normal variation of homeostasis.

In the PMJ this month the spotlight turns to one biomarker that is commonly requested when patients are admitted to hospital. Indeed, troponin is one test which I regularly see used completely out of context, providing information that is often difficult to assimilate into the clinical picture. The paper – an analysis of more than 11,000 admissions to a large medical facility in Dublin, Ireland – examined troponin results for all admissions under the medical (but not cardiology) service from January 2011 to October 2012.

Now, troponin is a test that has undergone a change over the time it has been available to clinicians in everyday practice. I can remember taking serial CKs in patients with suspected myocardial ischaemia, and my joy when troponin became available for use in my potential CCU patients. I can also remember the many patients admitted to hospital for 12 hours just to see what their troponin would be – a clear case of a biomarker dictating practice, rather than being a tool for me to use. And I have many memories of strained conversations with colleagues about the meaning of a mildly raised troponin requested as part of a bundle of tests at the point of admission, without any real thought having been given to how one might interpret the result.

These strained conversations have altered in tone over the years, as the blind faith in troponin as an indicator of ischaemic heart disease – faith that accompanied the hype when the test was first released – has been eroded by the realisation that troponin is nowhere near as specific as we were once led to believe. Interpretation now requires quite a lot of Bayesian reasoning to clear the waters.
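To make that Bayesian point concrete, here is a minimal sketch of how a ‘positive’ result shifts the probability of disease. The sensitivity, specificity and pre-test probabilities are illustrative assumptions, not figures from the paper:

```python
# Hypothetical worked example: post-test probability after a positive test.
# Sensitivity, specificity and pre-test probabilities are illustrative only.

def post_test_probability(pre_test, sensitivity, specificity):
    """Probability of disease given a positive result (Bayes' theorem)."""
    true_positives = pre_test * sensitivity
    false_positives = (1 - pre_test) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# The same 'positive' result means very different things in different contexts:
for pre_test in (0.05, 0.30, 0.70):
    p = post_test_probability(pre_test, sensitivity=0.90, specificity=0.80)
    print(f"pre-test {pre_test:.0%} -> post-test {p:.0%}")
```

With a low pre-test probability, even a ‘positive’ result leaves the diagnosis in real doubt – which is exactly the problem with requesting the test without a clinical question in mind.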

The article looking at troponin tests on the acute medical take makes fascinating reading, and helps to provide some data on a not uncommon problem: “Well, what do I do with this result now?”

The answer, in the case of an unexpectedly elevated troponin, is to consider the overall clinical context and to attempt to understand where the physiological stress has arisen from, as this study shows a significant association between elevated troponin and mortality:

[Figure: exponential relationship between high-sensitivity troponin assay (hsTnT) results and in-hospital mortality.]

So – a helpful paper looking at a common clinical scenario, and providing a fairly robust argument for how to approach the problem.

But one of the most fascinating parts of this analysis is the determination of what is ‘normal’ – and why we love to have such binary answers to complex questions.

The manufacturers of the assay employed recommend a cut-off of 14 ng/L for the upper limit of the normal range. But, given that the test isn’t as specific for myocardial injury as they would like, a figure of ≥53 ng/L should be used to indicate myocardial ischaemia. For the purposes of the published study, <25 ng/L was used as the cut-off for normal, and ≥25 ng/L as ‘positive’.
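Laid out as code, the competing cut-offs look something like this – a sketch only; the thresholds are those quoted above, but the function itself is not from the paper:

```python
# Sketch of the competing hsTnT cut-offs discussed above (ng/L):
# 14 - manufacturer's upper limit of normal
# 25 - the study's 'positive' cut-off
# 53 - suggested threshold for myocardial ischaemia

def classify_troponin(hstnt: float) -> str:
    if hstnt < 14:
        return "within manufacturer's normal range"
    if hstnt < 25:
        return "above normal range, but 'negative' by the study's cut-off"
    if hstnt < 53:
        return "'positive' by the study's cut-off, below the ischaemia threshold"
    return "suggestive of myocardial ischaemia (>=53 ng/L)"

for value in (10, 20, 40, 80):
    print(value, "->", classify_troponin(value))
```

Three different binary verdicts from one sliding scale – which rather makes the paper’s point.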

The persistence of a desire to sort into ‘normal’ and ‘abnormal’ a test result that, as the outcome of this large observational study indicates, is a sliding scale of physiological stress rather than a marker of any specific disease process (the study effectively excluded cardiac disorders as the presenting complaint) betrays a huge cognitive bias that we all carry around with us. Essentially we like to make judgements based on prior experience, heuristics, and easily interpreted chunks of information – what Daniel Kahneman would call a ‘System 1’ or ‘fast’ process. We do this regularly, with a high degree of accuracy, on the acute take.

What this paper could be seen to do is boil a clinical problem down into another readily available answer that can be applied in everyday practice. To me, it is a reminder of the blind faith I used to have in a test that I – and its manufacturers – understood poorly, a faith that drove clinical protocols and pathways rather than my applying some critical thinking to my actions and their results, and using the test to its best effect. I wonder how many more biomarkers we will see undergoing this sort of evolution.

It’s not about the form… it’s the human touch

20 Oct, 13 | by Toby Hillman

A ‘typical’ request form?

There are several problems which rear their ugly heads every few months or years in healthcare, and yet seem impossible to crack.

In the main they pass by, unnoticed by the great and the good, and not usually causing discernible problems for patients. But time taken to gather phlebotomy equipment, delays in prescribing ‘TTAs’, and the ordering of too many tests are all a waste of resource.

Waste is the enemy of efficiency in any system, and the 7 wastes:

  • transportation
  • inventory
  • motion
  • waiting
  • over-processing
  • over-production
  • defects in work performed

are the target of many improvement projects (especially those relying on lean thinking).

One such improvement project has shown a successful, and sustained reduction in the waste of excessive laboratory tests.


The paper reports the process undertaken to introduce a change in the practices of an emergency department through a forcing function: junior staff could order certain tests only once a senior had approved the request. The largest excesses in requests were targeted, and significant changes over time were achieved.
The report puts some of the change into context, but I wonder whether a follow-up qualitative study might be needed to really evaluate what changed.

I may be wrong, but this intervention doesn’t seem like one imposed rigidly from above; instead it was developed in collaboration with the key clinical decision makers in the department, and with an eye on what would actually work on the ground – in *their* department.

And this is the messy bit.

For senior clinicians, and managers who see the headline: ‘Change in form reduces tests, saves $$$‘ there could be a shock coming.

Firstly, the change took time to bed down – see the histograms for the weeks after the intervention – so no quick fix.

Secondly, don’t kid yourself that it was the change in the form that made the difference – it was a shared vision for change among senior, middle and (probably) junior grade doctors. After signing up to a shared goal, there was a change in working practices, backed up by a staffing and service delivery model (note the absence of a 4-hour target, and the ED’s retention of responsibility for short-stay patients) which encouraged dialogue between seniors and trainees. Moreover – and crucially, in my view – the change opened up the possibility of real-time, on-the-job training.

Each request for an additional test seems likely to have been a discussion point, and trainees benefitted from a culture of learning within the department.

So, could this be reproduced in the UK? In some departments, I’m sure; in others, no way.

In trying to replicate such successes we should concentrate not on the mechanics of the intervention, but on the human factors and cultural context. Work on those, alongside such innovations, and you stand a much better chance of success.

For more information on human factors (if you’ve not heard the phrase, look it up – it could change how you view your world and where you work) see the Clinical Human Factors Group, and an inspirational video on human factors in patient safety from the incredible Martin Bromiley:

http://www.youtube.com/watch?v=JzlvgtPIof4
