
Medical Education

All in a day’s work

20 Oct, 16 | by Toby Hillman


Becoming a doctor is a long and arduous process. It involves many years of study and more of practice. It is inconceivable that this process leaves those who go through it untouched. This process is called professional socialisation. It confers values and behaviours on its participants, and these help to mark our profession out from other groups in society.

The following reflection is from Dr Ciara Deall, a trainee plastic surgeon, recalling events which took place on a flight to North America, and in which her training allowed her to offer a stranger comfort, despite being off duty – a state that perhaps is never truly realised by those whose vocation is the practice of medicine.

We had cleared the west coast of Ireland and I was beginning to relax on flight AA365 heading for New York and a weeklong, intensive microsurgery course. Just time to let go of a non-stop week of on-call mayhem and enjoy some inflight entertainment to help wind down.

The intercom interrupted abruptly: “Hi, this is the chief steward, will any medically trained passengers please make themselves known to the crew; we have an emergency.” Almost without thinking I found myself standing up and telling a stewardess I was a doctor, before wondering what I might be letting myself in for – a stroke, anaphylaxis, heart attack, choking? Was I the only one?

The 19-year-old girl was doubled up in agony, clutching her stomach, clearly very frightened and panicky. “Hi, I’m a doctor.” She was French and couldn’t understand much English. However, her GCS was 15, pulse and respiratory rate were raised but in range, she was not breathless and, on eyeballing her from the aisle, she was in pain but not acutely deteriorating.

The stewardess asked if there was anything I needed. “An interpreter please.” Not quite what she had been expecting, but after another intercom request, the perfect match was found and I made rapid progress in establishing my patient wasn’t pregnant, had no fevers, no urinary symptoms or diarrhoea, but had been out the night before eating too many different foods and drinking too much alcohol with subsequent vomiting episodes. Her pain was 4-5/10, crampy in nature and relieved by lying down. On abdominal examination she had very mild generalised tenderness, but a completely soft abdomen with no guarding or rigidity; bowel sounds present.

Her panic was subsiding fast with my apparent calmness as I completed the full history and examination. I was offered an astonishing state-of-the-art medical kit and, pointing to an endless array of emergency drugs, including adrenaline, atropine and morphine, the stewardess invited me to help myself to whatever I wanted! I felt almost guilty in using only the sphygmomanometer and some mild pain relief, explaining that the other drugs could severely harm or even kill her!

My patient settled to rest lying down, with water to hand for her dehydration. I promised to be back in 15 minutes. The crew were effusive in their gratitude and what it meant to them to have an ‘expert’ on hand. They recounted some past horror stories where no one had volunteered. Unwittingly I had calmed their nerves as well.

Back in my seat I reflected for a while on my encounter and realised the potential vulnerability of tens of thousands of long haul travellers daily and their attending cabin crew. Crossing immense oceans, a truly sick person could be many hours away from trained medical staff and properly equipped facilities, unless there happened to be a willing, qualified passenger on board; clearly a gamble that is a daily occurrence. I was glad of my ATLS training, recognising it could be called on at any time, anywhere, even at altitude.

Furthermore, it was a reminder of the unique (and privileged) position that doctors have, where particularly in emergency situations, complete strangers are willing to put their absolute trust in us. Even when we least expect it, the way we conduct ourselves and the skills we deploy can have a profound effect on those around us, for both patient and onlookers. No one cared whether I was a junior doctor or not. At 38000 feet I was valued for my willingness to offer and use my expertise. It was a sobering, almost humbling thought and without overstating it, I reminded myself that we are never completely ‘off duty’.

My patient slept. On waking she smiled feeling much improved and couldn’t thank me enough. Approaching New York, the stewardess asked if I had space in my carry-on for a bottle of their best champagne. I did!

At the end of the flight I accompanied my French charge off the plane. Another fascinating day in the life of a junior doctor.

In the land of the blind…

13 Jun, 16 | by Toby Hillman


Leadership is one of those areas of medical training that is increasing in prevalence, and the number of schemes to ensure that medical leaders are available within the workforce is ever expanding.

Some in our profession feel that the ‘leaders’ who are ‘trained’ seem to have few leadership qualities, and even less legitimacy to lead their colleagues than those who possess a ‘natural’ flair for leadership. (COI: I have been a leadership fellow in the past)

There is one very well defined team, though, in which very clear leadership is absolutely required, and in which even the most junior member of the team can display leadership, clarity of thought, and situational awareness – the cardiac arrest.

With the adoption of international algorithms, regular training days, a huge manual, rigorous testing of candidates, and mandatory updates, advanced life support has to be one of the most directive environments in which we find ourselves at work. So leadership is required within the cardiac arrest team: to ensure that the team is working to time, maintaining compressions, and giving drugs when required – and most importantly, to review progress, determine measures of success or failure, and sadly – most often – to ‘call it’ when an attempt has failed. Leadership skills, then, would appear to be a necessary attribute of anyone on the cardiac arrest team.

A couple of recent papers published online in the PMJ raise separate but linked questions about leadership in this most stressful of situations.

A paper on leadership at cardiac arrests helpfully documents data that is a bit of a wake up call for those who ‘lead’ them.

Dr Robinson and colleagues studied the perceptions of leadership and team working among members of a cardiac arrest team. They surveyed a range of members of the crash team at an NHS Trust in London that covered two acute hospital sites. Admirably, the survey included wider members of the crash team too – healthcare assistants and nurses, as well as those who carry the crash bleep (pager).

The message I took from the data was that the leaders (SpRs / senior residents usually lead cardiac arrests in UK hospitals) thought that leadership at the cardiac arrest was good in 90% of cases, whereas the ‘followers’ (nurses) only thought that there was good leadership 28% of the time. And perhaps best of all, 100% of the SpRs strongly agreed that they were confident in leading a cardiac arrest response.

In this cohort, around 40% of all groups of respondents said they had experienced a debrief at any arrest they had attended.

The second paper looks to provide an answer to the questions posed by the first, through the use of a debriefing tool, considering the cardiac arrest response to be a missed learning opportunity. The authors again surveyed their cardiac arrest responders and found that only about 30% had ever experienced a debrief following a cardiac arrest at their centre. However, there was a great appetite for the opportunity to debrief in a structured way, using a tool which singles out leadership in particular as a domain of interest (93%).

I think that these two papers demonstrate that, although leadership remains one of those areas which induces feelings of revulsion amongst those who have experienced terrible role models, it is one of those skills which, instead of being inherent amongst the medical profession, requires practice.

What is worse is that those who occupy leadership positions by virtue of their grade of training appear to be mistaken as to their effectiveness, and demonstrate misplaced confidence in their abilities.

Whilst I have been fortunate enough to have had the opportunity to participate in a leadership programme, I don’t think I would anoint myself as the next great thing in the medical profession. However, the training I went through did teach me a lot about the capacity people have for self-deception, and the importance of truthful feedback from colleagues (see this blog from a while back).

I have doubts about the enthusiasm of crash teams to use a debriefing tool in the immediate aftermath of a cardiac arrest response, but these two studies have gone some way to reassuring me that there has been a shift in the culture of the medical profession to even be studying such subjects. Long may it continue.




Did you choose them, or did they choose you?

24 Feb, 16 | by Toby Hillman

Specialty choice algorithm via @FizzyMcFizz


Medical stereotypes are well known, ranging from the hippy-esque GP to the man-mountain of an orthopaedic surgeon, via the suave and sophisticated plastic surgeon. I’m not entirely sure what the stereotype of a chest physician is, but I would be grateful if you could let me know…

These stereotypes, and perceptions of who goes into which specialty are deep-seated, with some of the negative associations between specialty choice and types of doctor being identified early in medical studies, and seemingly perpetuated by senior staff.  So what makes one choose a particular specialty?  It might be something to do with the types of patients being cared for, the opportunities for research, the work patterns, the remuneration, intensity of on-call, or it may be influenced by our personality.

A study published online recently by the PMJ tried to examine the contribution of personality to specialty choice in doctors working in Sweden.  The paper describes the results of a survey of Swedish medical graduates in 2013.  The Big Five Inventory was used to quantify personality traits, method of entry into medical school was also recorded, along with a number of other questions about lifestyle, economic status, involvement in research and a basic enquiry about the need for mental health treatment within the past 12 months.

The results of the study seemed to confirm the stereotypes of different specialties to a certain degree, with surgeons being more likely to score highly on conscientiousness and lower on agreeableness than other specialty groups, and psychiatrists being more open to new experiences than the other specialty groupings. Psychiatrists were also more likely to have required treatment for mental illness in the previous 12 months (57%) than their colleagues in other specialties (GPs 42%, hospital service specialties 26%, and surgeons and internal medicine specialists 25% each).

The authors recognise that personality is not the sole reason for a choice of specialty, but the differences in traits between the groups of specialists suggest some role for personality in determining the ultimate choice of career path. The authors considered the possibility of a reverse association between personality and specialty choice, in that the culture of a specialty’s working environment may change the doctor’s personality, leading to the observed differences. However, this seems less likely given the usual assumption that personality is fairly fundamental and fixed over a lifetime.

As I read the paper, I thought back to my own career choice – and why I followed the path taken.  It is perhaps a little too personal to go into all of the reasoning behind my career choices here, but my career aspirations definitely changed over time.  I left medical school with thoughts of being a Trauma and Orthopaedic Surgeon (for those who know me, this may come as a shock) and I then moved through a phase wishing to be an Emergency Physician, and ultimately chose Respiratory Medicine.  At each point, there were multiple factors at play, but I certainly remember feeling more accepted in some student attachments and working environments than others.  This feeling of being ‘adopted’ into firms whilst a student, and being allowed to ‘join’ the firm once I was a doctor, I think had more of an influence than I appreciated at the time.

I therefore wonder if choice of specialty isn’t an expression of pure agency on the part of the trainee, but in fact, the other way around.  How much are students and junior colleagues ‘chosen’ by a specialty?

Lave and Wenger’s work on legitimate peripheral participation described how junior members of a community of practice become accepted and involved in the work of that community. My feeling is that perhaps this is at play within the hidden curriculum at medical school, and our own choices about career path may be more influenced by others’ choices to accept us wholeheartedly into a community, or merely tolerate our presence as a fleeting member of a workforce.

In this way, personality groupings are perpetuated within the medical profession, and our stereotypes continue to live on.  If we are to facilitate the emergence of a truly diverse workforce that is happy and productive, we should not necessarily seek to eliminate these stereotypes, or encourage trainees to follow specific career paths simply based on how we interpret their personality.  Instead we should explore with trainees what draws them to a particular field of practice, and help them to see past the ‘image’ of a specialty, and make perhaps a more informed choice, taking into account how they might fit in with a particular medical tribe.




Look not for the fleck in your brother’s eye, but the gorilla in your own…

25 Jan, 16 | by Toby Hillman


Teaching for medical graduates approaching clinical exams such as the MRCP PACES exam is an anxious time.  One is expected to ‘perform’ under pressure, wary of the need to elicit signs leading to potentially outlandish diagnoses.  The breadth of knowledge and skills required to confidently identify CMV retinitis at one station, followed by a complicated communication scenario, with a subtle fasciculation to pick up on at the next is quite a task.  It is also a task that is asked of graduate trainees in almost all specialties – the clinical portion of any membership exam is a vital stepping stone on the route to full qualification and independent practice.

I was teaching some PACES candidates this week, and played my usual game with them: what can I tell by observing a patient and watching their examination that they miss? This isn’t just a mean trick – I find it helps me to concentrate on what they are doing and, in turn, helps to identify additional signs that might have been missed completely, be unknown, or simply passed off as unimportant. The first gem this week was the white plaster over the bridge of the nose of a gentleman with COPD, which led to a further inspection of the surroundings and the tell-tale NIV mask and tubing just poking out behind a bedside cabinet. The second was the white sheet of A4 stuck at eye level behind another patient’s head with the very large letters NBM written in green marker pen.

In both cases these clues to the wider diagnosis were staring the candidates in the face. However, it was only when brought to the fore that their implications for the clinical context were appreciated. So I finished the teaching session having had my fun, and the pupils might have learned a bit more about the value of careful observation, and how this can influence clinical reasoning. It was only when I got home and read this recently published paper by Dr Welsby on the neurophysiology of failed visual perceptions that I started to consider this interaction a little more objectively, and how the lessons from it could be applied in other spheres.

The paper is one of those analyses of physiology and its application to everyday life that makes medical education and medical practice so enjoyable.  Dr Welsby has taken 3 eye problems, and 7 brain problems, and presented them in such a way as to highlight why clinical experience – the act of examining patients, and the slow acquisition of the lived experience of using and applying knowledge over time – is so important in medical education – and suggests several reasons why he feels trainees today aren’t afforded the same opportunities to develop this experience as he was.

The paper can also give lessons for the more experienced clinicians, and perhaps could be used to highlight errors of clinical understanding on a much wider scale.

Essentially, the data our brains work with is flawed – and to compensate – our brains make it up, or completely miss the obvious because we were concentrating on something else.  The paper has links to two videos which are well worth looking up – this one is my favourite.  The video is a perfect demonstration of how easy it is to miss vital information, and when we apply this to the situations we work in daily – it is more impressive that we ever reach diagnoses, rather than that we sometimes get them wrong.

As one climbs the slippery pole of the medical hierarchy, it would be as well to reflect on Dr Welsby’s observations further. Clinical experience can make what seems impossible to a first-year graduate second nature to the fourth-year registrar. The development of this experience allows senior clinicians to spend time thinking and working on other problems – but still with the same eyes and the same brains. Indeed, it is often successful clinicians who are chosen to lead on projects far from the clinical environment, ones that demand a somewhat different form of observation and synthesis of information.

As more and more clinicians are becoming involved in leadership positions, and managerial roles – those lessons learned at the bedside should not be forgotten.  If the data from our health systems is flawed – the decisions we take to modify, ‘improve’ and reform them will be as flawed as those conclusions reached by a brain compensating for the incomplete information fed to it by the eyes.

Leaders from the medical profession have a duty to both remain patient with their students who miss the ‘glaringly obvious’ but must also remain vigilant for the gorillas hiding in plain sight no matter where they find themselves.



Three pipe chest pain…

14 Dec, 15 | by Toby Hillman


Medicine is no longer quite so full of time to ponder as it once seems to have been.  Rumination and consideration have taken a back seat to efficiency.  Protocols and pathways seem to be the order of the day, and once a patient is on a pathway, it can be very difficult to get them out of the diagnostic rut they have found themselves in, which more often than not is a medical cul de sac.

A paper in the PMJ on the clinical and diagnostic findings in patients with elevated CSF bilirubin set me off thinking about these dead-ends.

The paper takes a fine-toothed comb to the cases of patients who underwent CSF bilirubin analysis as part of their assessment for headache over the course of a decade at two hospitals in Northern Ireland. The paper explores some of the ins and outs of CSF analysis for possible aneurysmal SAH and gives some helpful insights. One curiosity that stood out was the 13 patients who underwent CSF bilirubin testing despite a complete lack of any history of headache (not even simply not recorded, as far as the presented data suggest). I suspect that this was over-eager requesting because CSF had been obtained, and all the boxes got ticked. As far as this paper is concerned, though, this practice diminishes the specificity of the test and as such erodes its positive predictive value.
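The link between indiscriminate requesting and positive predictive value is simple Bayes' theorem, and worth making concrete. A minimal sketch follows; the sensitivity, specificity, and pre-test probability figures are invented purely for illustration and do not come from the paper.

```python
# How indiscriminate testing erodes positive predictive value (PPV).
# All figures below are invented for illustration.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Same test characteristics, two different pre-test probabilities:
print(round(ppv(0.95, 0.95, 0.10), 3))  # plausible SAH suspicion -> 0.679
print(round(ppv(0.95, 0.95, 0.01), 3))  # "all the boxes got ticked" -> 0.161
```

The test itself is unchanged between the two lines; only the population it is applied to differs, which is why ticking the CSF bilirubin box for every sample obtained weakens the meaning of a positive result.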

However, my interest was piqued by the natural use of a term that will be well understood by medics who work in acute medical units, and seems to have become part of our everyday clinical language – the “CT negative headache.”  This terminology has cousins that are probably more often heard, but are just as beguiling in their simplicity and ease of use, but troubling in terms of their complete lack of detail.  These terms can be sprinkled liberally onto the discharge summary – neatly encapsulating the battery of tests that a patient was subjected to – resulting in normal findings (or non-significant ones at least) but sadly they entirely miss the point.

Pathways are designed with an end diagnosis in mind, and if a patient flows along the pathway, ticking the boxes as they go, or being forced to occupy them (the crime of Procrustes), then they may usefully end up with the correct treatment, given in a standardised way with utmost efficiency. However, there are few pathways with “diagnostic uncertainty” as the start point. There are even fewer that allow one to consider all of the alternative diagnoses (the CSF paper above reminds us that there are over 100 causes of sudden and severe headache described) that might contribute to the clinical conundrum facing us.

As such, if your patient comes to hospital nowadays with chest pain, they may well go home with a diagnosis of chest pain (troponin negative). This has not necessarily helped many of the players in this scene. If the patient’s main concern was specifically that they were having a heart attack, this could be reassuring. However, if I was the GP who had asked for the opinion of their local specialist service, I might feel a little short-changed. Negative diagnoses do not contribute a great deal to a positive outcome. Instead, it might be more helpful for the patient to go home with at least a list of possible or probable alternatives – costochondritis, or oesophageal spasm, or dyspepsia, or my personal favourite when I see it, the slipped rib syndrome.

Negative diagnoses are undoubtedly here to stay – it is just too easy to be able to exclude the killer diagnoses, assure yourself and your patient that they are safe, and then send them on their way. However, as educators and as clinicians we must ensure that our adherence to guidelines, protocols and pathways does not allow our curiosity to atrophy, nor, through our own acceptance of the negative diagnosis, let this practice be seen as the norm.

Sherlock Holmes used to rate problems by the number of bowls of tobacco required to think them through – in the world of multi-morbidity there are plenty of three pipe problems to be faced.  And whilst I don’t lament the passing of the ward smoking room, I think there is definitely something to be said for bringing back the art of the positive diagnosis, even if it requires a little rumination, and wandering from a well-marked pathway.


Aiming for ‘normal’

14 Nov, 15 | by Toby Hillman

Don Quixote via scriptingnews

Normal ranges are papered to almost every clinical medical student’s lavatory door or fridge, inside the cover of every notebook on the wards, accompanying every result on the EHR – everywhere we are told confidently what normal is. But as this paper studying the laboratory findings of several thousand inpatients at a hospital in North London highlights, ‘normal’ is not as clear cut as it may initially seem.

A paper from the hospitals looked at in this study was the subject of a previous blog, which highlighted the variation in practice and often poor implementation of investigations into the cause of low sodium values in patients acutely admitted to the three hospitals involved.

This paper has taken a signal from a previous one and has now produced data that questions the validity of the 135-145 range for serum sodium.

The authors noted during their previous studies that many of the patients acutely admitted to the hospital had low sodium results, whilst a cohort of patients from care homes had higher values, and seemed to be dehydrated. The mortality for patients being admitted rose with increasing sodium concentrations – but the break-point in the graph was within the normal range. So we have a population whose results don’t fit the ‘normal’ range, and a ‘normal’ range that seems associated with increasing mortality:


[Figure: Locally estimated regression (locally weighted scatter plot smoother, LOWESS) plot of serum sodium against mortality for inpatients aged under 65 and 65 and older.]
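For the curious, the LOWESS smoothing behind a plot like the one captioned above can be sketched in a few lines: at each point, fit a straight line to the nearest neighbours, weighted by a tricube kernel (this is a single pass, without the robustness iterations of the full algorithm). The data below are synthetic, invented purely to illustrate the break-point idea, and bear no relation to the paper's figures.

```python
import random

def lowess_smooth(x, y, frac=0.3):
    """One-pass LOWESS: tricube-weighted local linear fit at each x."""
    n = len(x)
    k = max(2, int(frac * n))                  # neighbourhood size
    order = sorted(range(n), key=lambda i: x[i])
    xs = [x[i] for i in order]
    ys = [y[i] for i in order]
    fitted = []
    for xi in xs:
        dists = [abs(xj - xi) for xj in xs]
        nearest = sorted(range(n), key=lambda j: dists[j])[:k]
        h = max(dists[j] for j in nearest) or 1e-12
        w = [(1 - (dists[j] / h) ** 3) ** 3 for j in nearest]
        # weighted least-squares line through the neighbourhood
        sw = sum(w)
        swx = sum(wj * xs[j] for wj, j in zip(w, nearest))
        swy = sum(wj * ys[j] for wj, j in zip(w, nearest))
        swxx = sum(wj * xs[j] ** 2 for wj, j in zip(w, nearest))
        swxy = sum(wj * xs[j] * ys[j] for wj, j in zip(w, nearest))
        denom = sw * swxx - swx ** 2
        if abs(denom) < 1e-12:
            fitted.append(swy / sw)            # degenerate: weighted mean
        else:
            slope = (sw * swxy - swx * swy) / denom
            intercept = (swy - slope * swx) / sw
            fitted.append(intercept + slope * xi)
    return xs, fitted

# Synthetic U-shaped "risk" curve with its minimum inside the normal range
random.seed(0)
sodium = [random.uniform(120, 150) for _ in range(300)]
risk = [0.02 * (s - 138) ** 2 + random.gauss(0, 0.5) for s in sodium]
xs, smooth = lowess_smooth(sodium, risk, frac=0.3)
print(round(xs[smooth.index(min(smooth))]))    # break-point, near 138
```

The smoother makes no assumption about the shape of the relationship, which is exactly why it can reveal a break-point sitting inside, rather than at the edge of, the quoted normal range.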


Clearly these retrospective observational studies shouldn’t have lab managers running around redefining normality, nor encourage us all to drive our patients’ sodium to the lower half of normal in an attempt to save lives…

BUT – and it is a big but that deserves capital letters – we do need to work out who defined normality. Thankfully Prof McKee and his colleagues have done a bit of digging for us and give a potted history of the normal range for sodium measurement. It turns out that this range – embedded in millions of memories the world over – is actually based on comparatively few data points: the first papers used about a hundred healthy volunteers and flame photometry, a technology that has largely been superseded by more accurate methods. The subsequent studies they refer to used up to 1,000 measurements (often in multiple sub-groups) from which they drew their conclusions.
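The conventional construction of such a range is worth spelling out: sample healthy volunteers and quote the mean plus or minus 1.96 standard deviations, so that roughly 95% of the healthy population falls inside. A sketch with simulated values (not real data, and assuming the textbook normal-distribution approach):

```python
# Deriving a 95% reference interval the textbook way: mean +/- 1.96 SD
# of a healthy-volunteer sample. With only ~100 volunteers, as in the
# early sodium studies, the interval inherits all their sampling noise.
# The values below are simulated, not real measurements.
import random
import statistics

random.seed(1)
volunteers = [random.gauss(140, 2.5) for _ in range(100)]  # mmol/L

mean = statistics.mean(volunteers)
sd = statistics.stdev(volunteers)
low, high = mean - 1.96 * sd, mean + 1.96 * sd
print(f"reference interval: {low:.1f}-{high:.1f} mmol/L")
```

Re-run this with a different seed and the quoted limits shift by a millimole or so either way, which is one reason a range fixed decades ago from a hundred volunteers deserves periodic re-examination.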

How can this be? Surely we don’t just take decades old evidence and allow it to heavily influence our treatment plans, delay discharges and so on?

In this case the answer seems to be… yes.  However, this is not the only sphere of medicine where old data continues to heavily influence current practice.

Oxygen is one of the most commonly administered, but not prescribed, drugs in the formulary. In COPD it is one of the few drugs that has evidence for influencing mortality, rather than simply altering a trajectory of decline…

And the evidence for this? It is predominantly based on an MRC-funded study from the late 1970s that included 87 patients. That evidence was enough to change practice, and alter lives I am sure, but it probably would not stand up to scrutiny as the basis of a major shift in practice nowadays. The linked paper on sodium measurements, for example, looks at more than 100,000 samples, and trials of therapy in COPD looking to demonstrate a mortality benefit now need to enrol thousands of patients (the TORCH trial enrolled 6,200).

So what is truly normal? Are any of our favourite ‘common sense’ treatments justified in modern medicine? Do we do anything right in our everyday practice?

Clearly yes – there have been huge improvements in survival from many diseases over the decades, and common medical practices are clearly successful at identifying pathology, seeking out the underlying disease, and then targeting that. However, when confidently stating that something is the correct strategy to pursue, we should also be mindful that our convictions might just be based on less than solid ground. And this uncertainty is at the heart of a healthy academic examination of our medical practice on a daily basis.

We should not be paralysed by doubt, but we should have a healthy degree of scepticism when appraising both existing practices (the PANTHER IPF trial is perhaps one of the most significant turnarounds of recommended practice triggered by high quality trial evidence) and when new technology comes along (see this blog on troponins in acute medicine.)

So next time you are on a ward round, and find yourself struggling to guide a patient towards ‘normal’ for a biochemical test, or some other finding that we all ‘know’ to be true – you should perhaps make a mental note and work out from the evidence if all we are doing is tilting at windmills, because that is what we have always done, or if there is a genuine reason to strive for that particular outcome.

Wait – did I just hear a zebra going past?

13 Oct, 15 | by Toby Hillman

‘Making a zebra’ by Jurvetson on Flickr (cc 2.0)

There is an often-quoted medical witticism that originated in 1940s Maryland:

‘When you hear hoofbeats behind you, don’t expect to see a zebra’  

Suffice to say, there aren’t many zebras in Maryland…

In the rough and tumble of acute medical admissions, there are an increasing number of horses in the herd to contend with, and often they come in fairly sizeable herds – multimorbidity is now the norm, and single organ pathology increasingly rare.  Among the horses though, there are occasional zebras.

A paper in the PMJ published online recently explores the features of one of these zebras.  The paper looks at the current state of knowledge about non-convulsive status epilepticus (NCSE).

Non-convulsive status epilepticus is one of those pathologies that sets the mind to thinking – the very name seems a little contradictory.  However, it is a very real pathology and can be incredibly disabling.  As the authors point out, this is a disease that is tricky for many reasons – not least that there is no accepted definition of what constitutes NCSE, and to make a confident diagnosis, one probably requires access to EEG monitoring and a specialist neurological opinion.  So not easy then, for the layman to identify and manage. The incidence of NCSE though, means that those dealing with acutely unwell patients on the medical take ought to be aware of NCSE as a differential diagnosis, and when it would be appropriate to take a second look at the source of all those hoofbeats.

Risk factors for NCSE in the elderly include being female, having a history of epilepsy, neurological injury (e.g. stroke), recent withdrawal of long-term benzodiazepines, and some characteristic clinical signs. The suggested investigation at this point is a thorough drug history, review of metabolic derangement, and then progression to an EEG if one is available in a timely fashion. The interpretation of the EEG is somewhat beset by pitfalls, but remains the most objective way to reach a conclusion in a tricky situation.

All this is very well, but the ‘half empty’ reader may feel that the paper suggests that this problem, that could affect up to 43 patients per 100,000 is bound to go unrecognised, and therefore untreated as it is poorly defined, and difficult to diagnose.  To assume that because a condition is a challenge to diagnose and manage, the generalist can simply file under ‘too difficult’ would be a shame, and a failing.

The authors use a fantastic phrase that I hope will resonate with jobbing clinicians – ultimately clinical judgement rather than exact criteria is key.

Clinical judgement is one of those qualities one is asked to assess in trainees – a quality that has been lauded and viewed with suspicion over the years, but remains central to clinical practice.  To me, clinical judgement is the synthesis of knowledge about both the patient being considered, their symptoms, signs, and preferences, along with knowledge of up-to-date evidence of therapeutic strategies to formulate a management plan that provides the best outcome – as defined by the needs of the patient.

In the world of multi-morbidity, clinical judgement and one’s ability to interpret available evidence in the context of the patient in front of you is the key clinical skill that can be lost by slavish adherence to criteria, scoring systems, and guidelines.  As the practice of medicine develops, the nuances of how to apply clinical judgement will change, but ultimately this quality continues to be a defining feature of the medical profession.  To maintain a high standard of clinical judgement, one must continue learning – especially about zebras – it would be a shame not to recognise one when it gallops up behind you.


A disease by any other name…

17 Aug, 15 | by Toby Hillman

Single Rose by Thor


As a UK medical graduate, working in a London Hospital, it is fair to say that my CV doesn’t contain a huge diversity of workplaces, or populations served.  However, it is striking how many different levels of health literacy I encounter within the working week.

I have had conversations with patients to correct the perception that oxygen delivered by face mask was being introduced directly into a vein, and also had conversations with patients about the finer points of pulmonary vascular autoregulation, as applied to their own illness.

Given the range of knowledge and experience of patients is so wide, it is essential to be able to evaluate this as part of a consultation.  There is little point launching into an explanation of why a certain treatment is being recommended or discussed if my patient remains completely mystified by what I think might be wrong with them.  However, my getting to meet a patient might well rely on their ability to interpret their own symptoms, and seek help for them.

A paper in the current issue of the PMJ explores this in a setting so far removed from my own that I thought I might not find a great deal relevant to my own practice.  I was pleasantly surprised to be proved wrong on a few counts.

The study is a qualitative exploration of glaucoma awareness and access to healthcare at a clinic in Northern Tanzania (Open access).

The first lesson I took was that qualitative research of this sort is hugely valid, and absolutely required, even in situations where one might think that discovering the opinions and feelings of patients sits lower down the research priorities than achieving wider-ranging public health successes.  The paper reveals some of the reasons why patients have presented late to the clinic with symptoms that, one feels, could have been noted a little sooner…

“sometimes my wife asked why are you going to off the road”

The paper is rich with the difficulties encountered in accessing healthcare for glaucoma, and the reasons for late presentation start to become clear.  There are the expected problems of cost, distances to travel (151.5 km on average!) and knowledge of the disease process itself, but the interviews revealed a wealth of other information pointing to ways in which this service could improve – through better health education, changes to operational policies to smooth the running of clinics for those who had travelled furthest, and utilising patients to spread information about a modifiable cause of blindness (a massive economic burden on family and community, especially in poor, rural areas).

The other key point I took from this, which has resonance in all healthcare settings, was the use of language, and its impact on health literacy and efficacy.  Swahili is the main language of Tanzania, and there is no direct translation of ‘glaucoma’ into Swahili.  The word is translated in different forms – contributing to the confusion of patients.

This is not a problem unique to non-English speakers, though.  ‘Medicalese’ is a language we all use – it is often a matter of shame amongst the medical profession to admit that one doesn’t know the precise meaning of a medical term, and as such, we can use language as a tool to exclude others (intentionally or otherwise) from our conversations.  We do the same with patients – the art of the medical synonym is well practised on the wards… ‘malignant, neoplastic, mitotic…’ and when we simplify into ‘lay terms’ we can cause just as much confusion: ‘water on the lungs’ – pulmonary oedema? pleural effusion?

The use of language is to me one of the key aspects of communication that can influence the ability of patients to hear about, understand, process the implications of, and work with the possible solutions to their problems.  There have definitely been times when my choice of words was below par, and a less favourable outcome followed.  Language used in consultations is also key to establishing and maintaining the relationship between physician and patient.

The linked paper shows just why a clear and unambiguous explanation of medical terms is so important.  Good and poor language have wide-ranging effects, from initial access to healthcare, through understanding and using treatments appropriately, to informal health promotion and education within a community.

Whilst I am lucky not to have to tackle consultations in Swahili myself, I think it is right that we remind ourselves regularly of how foreign ‘medicalese’ is from the vernacular, and consciously tackle the use of sloppy terms that often only increase the confusion they attempt to dissipate.



If a job’s worth doing…

13 Jul, 15 | by Toby Hillman

Cryptic clothing label

Image via WM Jas on Flickr

Competency based curricula have largely replaced purely knowledge-based curricula in medical education.  As assessment of competency has become a seemingly endless task, the participants in medical education have often complained that learning and development has been reduced to a series of hoops to jump through or, even worse, a series of boxes to tick.

The development of clinical governance frameworks in the late 1990s formalised the involvement of trainee physicians in the process of clinical audit.  Audit became mandated, and as such, became a box to tick.  If one could not demonstrate an audit of some description (any really) then one could not progress.

As such, clinical audit is one of the more reviled duties undertaken by trainees (in their own time), as very often the information ‘uncovered’ is simply an explicit statement of an open secret.  The time taken to prove an acknowledged reality is usually resented by the auditor, and the recipients of the news that their practice falls below expected standards aren’t usually overjoyed.  The result of such projects is commonly a list of recommendations, presented in the last week of an attachment by a junior member of the team, that will be agreed by all, but actioned by no-one.  (Only around 5% of audits ever make any difference to practice.)

Quality Improvement projects have been lauded by many (me included) as an answer to the problems with clinical audit:  the burden of data required to make changes is less, the measurements and standards can be set by the instigators, and can be flexible enough to actually be achieved, and the change process is embedded as a primary aim within the most common methodologies employed.

Having been adopted into many curricula, quality improvement is now suffering many of the same problems as clinical audit.  The projects are usually carried out in trainees’ own time, but are a mandated part of training – leading to resentment.  The subjects tackled tend to be huge (‘We need a new IT system – the current one is not fit for purpose’) or focused on another team’s practice (‘The radiology department need to be quicker at doing the tests we ask for…’).  The doctors participating in a QI project often arrive with a solution in mind (‘We will just get a bit of data – do what they did at my last hospital – and then we’ll show an improvement’) without really understanding the problem in its current context.

Sadly the result is that some of the most powerful tools for driving change within organisations have been reduced to a ‘tick’ on an assessment sheet, and are done as last-minute efforts, to scrape through the next annual progression check.

This does not mean that audits are inherently useless, or that QI projects should be abandoned as a tool for engaging junior doctors in understanding how to improve clinical practice.  What it means is that, if a job is worth doing, it is worth doing properly…

To do a job properly, one must know what is required, and what the best tools for the job are.  Not everything can be part of a QI project, and not everything needs auditing.  A paper republished in this month’s PMJ is an excellent exploration of the different ways in which changes can be evaluated, and this can be reverse-engineered, allowing potential change agents to know if they are setting off down the wrong road.  It also reminds us that there are more options for change efforts available than the simple ‘before and after’ audit, or the use of multiple PDSA cycles.

Audit and QI are not the only area where the adage of ‘doing a job properly’ applies – as I discussed recently, all of the assessments we use to monitor competency are well intended, and when used enthusiastically and correctly, can uncover unexpected learning from even the most mundane of clinical encounters.  It is probably true that if something has been ‘reduced to a tick-box’ then someone thought that box was worth ticking at one point.  By taking the time to understand the theory and background to where the box came from, we might find ourselves using the tools available to us properly, and learning something in the process.


I am conflicted…are you?

12 Jun, 15 | by Toby Hillman

via Tambako on Flickr


I am conflicted… and it is down to a couple of papers in this May’s PMJ that look at the development of a new tool for assessing the performance of trainees in a key medical task.

Most nights – or at least two a week – I spend a portion of my evening logging into the e-portfolio system for medical trainees, trying to fill in several online forms to reflect the practice and learning of doctors I have worked with over the past few weeks.

There is an array of choices to make, and choosing the right assessment for each task can be a bit difficult – you must know your SLE from your WPBA, your Mini-CEX (pronounced ‘kehks’ to avoid worrying conversations) from your DOPS, and woe betide anyone who mistakes their MCR for an MSF, or a CBD for an ACAT.  By the way, none of these is made up.

I find it difficult to make time in the day to fill these forms in with their subject sitting alongside me, but I do try to build at least one or two learning points into each form to make it more useful than just a tick in a box on a virtual piece of paper.

The conflict I have is that these forms often feel like soul-less, mechanistic hoops that trainees simply have to plough through to enable progression to the next level in the platform game that is a training career in medicine in the UK. Some days I would like nothing more than to ditch the whole enterprise, and head back to the good old days where apprentice medics would work alongside me, learn by osmosis and through trial and error.

However, there are other days when the format of an assessment, or the very fact that a trainee has requested one, provides the opportunity to frame a discussion around an event, an experience, or an interaction that requires more attention – where real learning can take place during a discourse about what went well, what went less than ideally, and what could be improved in someone’s future practice.  At these times, I am grateful that I don’t have to make up an assessment on the spot, and that there is a framework to formulate my feedback, provide a breakdown of areas to concentrate on, and give direction on where to find help and resources to improve.

The papers that have provoked my feelings of conflict look at a project in the West Midlands to develop a tool for assessing trainees’ performance in conducting ward rounds in the paediatric department.  One describes the creation of the tool, and the other looks at its reliability and practical use.

The end product is a multi-source feedback tool that does what it says on the tin, and reliably so.  It has similarities to other assessments already in use, but crucially focusses on a narrow, but important and ubiquitous part of medical practice – the ward round.

The development of the tool started in response to a realisation that ward rounding is an essential skill, and yet is not usually assessed formally in training.  It is one of those tasks or set-piece rituals that is learned by osmosis.  I think there are other areas that are similarly neglected too… responding to conflict within the MDT, responding to angry patients or complaints, effective handover between shifts, debriefing after significant events – or even after every shift, chairing meetings, reporting to a committee, and so on…

Should we, therefore, have tools for each of these areas, with specific numbers required by trainees in each post, to demonstrate competence?  I can imagine the response if this suggestion were taken up wholeheartedly for each vital part of a consultant’s job that is not at present explicitly covered in a WPBA (workplace-based assessment).

So no, if we don’t want to be over-burdened by assessments, and end up with a fully tick-boxed CV, we should therefore rely on the education methods of old… in those halcyon days of yore when registrars still knew everything, and would fledge into consultant form without having had to get anything ‘signed off’ on an e-portfolio, but would be vouched for in references and conversations over sherry.

Clearly neither of these scenarios could be considered perfect, but where do we draw the line?  As with targets in all industries – what gets measured gets done, but what gets measured is not always what ought to be measured.

As we become slightly more reductionist in our thinking about medical education, we risk hitting the target but missing the point as we try to encompass all that is important about being a senior clinician in formalised assessments.  Yet I am also convinced that training in the good old days probably wouldn’t be up to the job of training senior physicians and surgeons for the modern world of healthcare – so I remain conflicted…

The tool the authors have developed looks promising, and I intend to use it to help registrars start thinking more objectively about how they conduct their ward rounds – and to improve my own practice – but I can’t help thinking that I might just miss something else if I only stick to the tools available to me in the e-portfolio.
