
Research in Practice

In the land of the blind…

13 Jun, 16 | by Toby Hillman


Leadership is one of those areas of medical training that is increasing in prevalence, and the number of schemes to ensure that medical leaders are available within the workforce is ever expanding.

Some in our profession feel that the ‘leaders’ who are ‘trained’ seem to have few leadership qualities, and even less legitimacy to lead their colleagues than those who possess ‘natural’ flair for leadership. (COI: I have been a leadership fellow in the past.)

There is one very well defined team, though, in which very clear leadership is absolutely required, and in which even the most junior member of the team can display leadership, clarity of thought, and situational awareness – the cardiac arrest.

With the adoption of international algorithms, regular training days, a huge manual, rigorous testing of candidates, and mandatory updates – advanced life support has to be one of the most directive environments in which we find ourselves at work.  So leadership is required within the cardiac arrest team, to ensure that the team is working to time, maintaining compressions, and giving drugs when required – and most importantly, to review progress, determine measures of success or failure, and sadly – most often – to ‘call it’ when an attempt has failed.  Leadership skills, then, would appear to be a necessary attribute of anyone on the cardiac arrest team.

A couple of recent papers published online in the PMJ raise separate but linked questions about leadership in this most stressful of situations.

A paper on leadership at cardiac arrests helpfully documents data that should be a bit of a wake-up call for those who ‘lead’ them.

Dr Robinson and colleagues studied the perceptions of leadership and team working among members of a cardiac arrest team.  They surveyed a range of members of the crash team at an NHS Trust in London that covered two acute hospital sites.  Admirably, the survey included wider members of the crash team too – healthcare assistants and nurses, as well as those who carry the crash bleep (pager).

The message I took from the data was that the leaders (SpRs / senior residents usually lead cardiac arrests in UK hospitals) thought that leadership at the cardiac arrest was good in 90% of cases, whereas the ‘followers’ (nurses) only thought that there was good leadership 28% of the time.  And perhaps best of all, 100% of the SpRs strongly agreed that they were confident in leading a cardiac arrest response.

In this cohort, around 40% of all groups of respondents said they had experienced a debrief at any arrest they had attended.

The second paper looks to provide an answer to the questions posed by the first, through the use of a debriefing tool – considering the cardiac arrest response to be a missed learning opportunity.  The authors again surveyed their cardiac arrest responders, and found that only about 30% had ever experienced a debrief following a cardiac arrest at their centre.  However, there was a great appetite (93%) for the opportunity to debrief in a structured way – using a tool which singles out leadership in particular as a domain of interest.

I think that these two papers demonstrate that, although leadership remains one of those areas which induces feelings of revulsion amongst those who have experienced terrible role models, it is one of those skills which, instead of being inherent amongst the medical profession, requires practice.

What is worse is that those who occupy leadership positions by virtue of their grade of training appear to be mistaken as to their effectiveness, and demonstrate misplaced confidence in their abilities.

Whilst I have been fortunate enough to have had the opportunity to participate in a leadership programme, I don’t think I would anoint myself as the next great thing in the medical profession. However, the training I went through did teach me a lot about the capacity people have for self-deception, and the importance of truthful feedback from colleagues (see this blog from a while back).

I have doubts about the enthusiasm of crash teams to use a debriefing tool in the immediate aftermath of a cardiac arrest response, but these two studies have gone some way to reassuring me that there has been a shift in the culture of the medical profession to even be studying such subjects.  Long may it continue.
Hidden in plain sight.

5 Apr, 16 | by Toby Hillman

 

Hooded Grasshopper by J.M.Garg – Own work, CC BY 3.0

Patients do not come with diagnoses attached to their foreheads.  If only they did,  huge numbers of hospital visits and admissions could be avoided.

To overcome the ever increasing number of potential diagnoses, and the rising tide of illness encountered by our ageing populations, we rely ever more heavily on investigations to guide us to the likely diagnosis, and thereafter, management.

But what if the tests don’t tell you what you wanted to hear? What if the clinical picture says one thing, but the tests say another?  Usually this scenario starts a ‘merry’-go-round for the patient concerned.  Oligo-organists (specialists, in normal terminology) become increasingly irate with each other, sending a patient on a wild goose chase from clinic to clinic, trying as hard as they can to reassure the poor patient that there is nothing wrong with their X, and that it must be the Y-ologists who hold the key to unlocking their symptoms and making that breakthrough in management.

Heart failure is one of those areas where patients can go for some time before a diagnosis is firmly settled on.  Patients don’t go to their physician complaining of heart failure.  Instead they complain of breathlessness.  It is telling that there are two distinct rating scales for dyspnoea in common usage (the MRC scale for COPD and the NYHA class for heart failure) – it is a symptom that has become divided by a common language.

Patients with heart failure are not helped by the way in which we as a profession have been guilty of listening to ourselves, and our tests, rather than our patients.  The seemingly contradictory Heart Failure with Preserved Ejection Fraction (HFpEF) is an entity that has been hotly contested, but looks to become the predominant mode of heart failure.  A review recently published online in the PMJ into the pathophysiology and treatment of HFpEF shows just how far we have come over the last 20-30 years in understanding that such a disease even exists, and that it can be characterised using an imaging modality that was once used to cast doubt on a clinical diagnosis of heart failure.  However, despite this increased understanding, we are only just getting to know which treatments might be beneficial, or harmful, for a growing cohort of patients.

As I read the review, along with bewilderment at the detail that can be obtained from a non-invasive bedside test, I was struck by the journey that HFpEF has come on in the time that I have been training and practising medicine.

I clearly recall times when I was told that I was plainly wrong when a patient with the clinical syndrome of heart failure was given a clean bill of health by an echocardiogram – causing me to doubt my skills and insight.  And yet now, we discover that by examining the heart with a different mindset, very detailed pictures of the diastolic function of a heart can be estimated, allowing patients to be treated in a more refined manner.

In addition, the review brought home the absurdity of relying solely on a single test to determine the diagnosis of a clinical syndrome.  The review outlines the risk factors for HFpEF – it is a familiar roll call of the diseases of age and lifestyle.  So the test we used to think of as the gold standard to rule out a diagnosis has been fine-tuned, and gives a more nuanced picture; but despite advancing technology, we return to the need to treat the patient before us, and not the test result.  And in treating the person, we must treat the whole person – including their co-morbidities and risk factors, and not just the ones we happen to find interesting.

Perhaps the journey that the diagnosis and management of HFpEF has taken from seemingly outlandish diagnosis to the dominant mode of heart failure also reflects the journey that physicians must go on as they progress through training – from relative ignorance and lack of experience – to specialist knowledge and a narrowing of focus – and back again to a more generalist role, encompassing multi-morbidity and diagnostic uncertainty.

As we face an increasing burden of multi-morbidity, escalating healthcare costs, and increasing patient expectation, I don’t think it will be appropriate in the future to say – no, your lungs are OK, off you go to see the heart docs. Instead, a more generalist model of care, helping patients to navigate their multiple long term conditions to reach a balanced solution, will be the standard we aspire to.

Aiming for ‘normal’

14 Nov, 15 | by Toby Hillman

Don Quixote via scriptingnews on flickr.com

Normal ranges are papered to the lavatory door or fridge of almost every clinical medical student, printed inside the cover of every notebook on the wards, and attached to every result on the EHR – everywhere we are told confidently what normal is. But as this paper studying the laboratory findings of several thousand inpatients at a hospital in North London highlights – ‘normal’ is not as clear cut as it may initially seem.

A paper from the hospitals looked at in this study was the subject of a previous blog, which highlighted the variation in practice and often poor implementation of investigations into the cause of low sodium values in patients acutely admitted to the three hospitals involved.

This paper has taken a signal from a previous one and has now produced data that questions the validity of the 135-145 range for serum sodium.

The authors noted during their previous studies that many of the patients acutely admitted to the hospital had low sodium results, whilst a cohort of patients from care homes had higher values, and seemed to be dehydrated.  The mortality for patients being admitted rose with increasing sodium concentrations – but the break-point in the graph was within the normal range. So we have a population whose results don’t fit the ‘normal’ range, and a ‘normal’ range that seems associated with increasing mortality:

Locally estimated regression (locally weighted scatter plot smoother, LOWESS) plot of serum sodium against mortality for inpatients aged under 65 and 65 and older.

Clearly these retrospective observational studies shouldn’t have lab managers running around redefining normality, encouraging us all to drive our patients’ sodium to the lower half of normal in an attempt to save lives…

BUT – and it is a big but that deserves capital letters – we do need to work out who defined normality.  Thankfully Prof McKee and his colleagues have done a bit of digging for us and give a potted history of the normal range for sodium measurement. And it turns out that this range – embedded in millions of memories the world over – is actually based on comparatively few data points: the first papers used about a hundred healthy volunteers, measured with flame photometry – a technology that has been largely superseded by more accurate methods.  The subsequent studies they refer to used up to a thousand measurements (often in multiple sub-groups) from which they drew their conclusions.

How can this be? Surely we don’t just take decades old evidence and allow it to heavily influence our treatment plans, delay discharges and so on?

In this case the answer seems to be… yes.  However, this is not the only sphere of medicine where old data continues to heavily influence current practice.

Oxygen is one of the most commonly administered, but not prescribed, drugs in the formulary. In COPD it is one of the few drugs that has evidence for influencing mortality, rather than simply altering a trajectory of decline…

And the evidence for this? It is predominantly based on an MRC-funded study from the late 1970s that included 87 patients.  That evidence was enough to change practice, and alter lives I am sure, but it probably would not stand up to scrutiny as the basis of a major shift in practice nowadays.  The linked paper on sodium measurements, for example, looks at more than 100,000 samples, and trials of therapy in COPD looking to demonstrate a mortality benefit now need to enrol thousands of patients (the TORCH trial enrolled 6,200).

So what is truly normal? Are any of our favourite ‘common sense’ treatments justified in modern medicine? Do we do anything right in our everyday practice?

Clearly yes, there have been huge improvements in survival from many diseases over the decades, and common medical practices are clearly successful at identifying pathology, seeking out the underlying disease, and then targeting that.  However, when confidently stating that something is the correct strategy to pursue, we should also be mindful that our convictions might just be based on less than solid ground.  And this uncertainty is at the heart of a healthy academic examination of our medical practice on a daily basis.

We should not be paralysed by doubt, but we should have a healthy degree of scepticism when appraising both existing practices (the PANTHER IPF trial is perhaps one of the most significant turnarounds of recommended practice triggered by high quality trial evidence) and when new technology comes along (see this blog on troponins in acute medicine.)

So next time you are on a ward round, and find yourself struggling to guide a patient towards ‘normal’ for a biochemical test, or some other finding that we all ‘know’ to be true – you should perhaps make a mental note and work out from the evidence if all we are doing is tilting at windmills, because that is what we have always done, or if there is a genuine reason to strive for that particular outcome.

A disease by any other name…

17 Aug, 15 | by Toby Hillman

Single Rose by Thor

 

As a UK medical graduate, working in a London Hospital, it is fair to say that my CV doesn’t contain a huge diversity of workplaces, or populations served.  However, it is striking how many different levels of health literacy I encounter within the working week.

I have had conversations with patients to correct the perception that oxygen delivered by face mask was being introduced directly into a vein, and also had conversations with patients about the finer points of pulmonary vascular autoregulation, as applied to their own illness.

Given the range of knowledge and experience of patients is so wide, it is essential to be able to evaluate this as part of a consultation.  There is little point launching into an explanation of why a certain treatment is being recommended or discussed if my patient remains completely mystified by what I think might be wrong with them.  However, my getting to meet a patient might well rely on their ability to interpret their own symptoms, and seek help for them.

A paper in the current issue of the PMJ explores this in a setting so far removed from my own that I thought I might not find a great deal relevant to my own practice.  I was pleasantly surprised to be proved wrong on a few counts.

The study is a qualitative exploration of glaucoma awareness and access to healthcare at a clinic in Northern Tanzania (Open access).

The first lesson I took was that qualitative research of this sort is hugely valid, and absolutely required, even in situations where one might think that discovering the opinions, and feelings of patients may be lower down on the research priorities than achieving wider ranging public health successes.  The paper reveals some of the reasons why patients have presented late to the clinic with symptoms that, one feels, could have been noted a little sooner…

“sometimes my wife asked why are you going to off the road”

The paper is rich with the difficulties encountered in accessing healthcare for glaucoma, and the reasons for late presentations start to become clear.  There are the expected problems of cost, distances to travel (151.5km on average!) and knowledge of the disease process itself, but the interviews revealed a wealth of other information that points to ways in which this service could improve – through improved health education, changes to operational policies to smooth the running of clinics for those who had travelled furthest, and utilising patients to spread information about a modifiable cause of blindness (a massive economic burden on family and community, especially in poor, rural areas).

The other key point I took from this, that has resonance in all healthcare settings, was the use of language, and its impact on health literacy and efficacy.  Swahili is the main language of Tanzania, and there is no direct translation of ‘glaucoma’ into Swahili.  The word is translated in different forms – contributing to the confusion of patients.

This is not a problem unique to non-English speakers though.  ‘Medicalese’ is a language we all use – it is often a matter of shame amongst the medical profession to admit that one doesn’t know the precise meaning of a medical term, and as such, we can use language as a tool to exclude others (intentionally or otherwise) from our conversations.  We do the same with patients – the art of the medical synonym is well practiced on the wards… ‘malignant, neoplastic, mitotic…’ and when we simplify into ‘lay terms’ we can cause just as much confusion:  ‘water on the lungs’  – pulmonary oedema? pleural effusion?

The use of language is to me one of the key aspects of communication that can influence the ability of patients to hear about, understand, process the implications of, and work with the possible solutions to their problems.  There are definitely times when my choice of words has been below par, and a less favourable outcome has been the result.  Language used in consultations is also key to establishing and maintaining the relationship between physician and patient.

The linked paper shows just why the appropriate use of a clear and unambiguous explanation of medical terms is so important.  There are wide-ranging effects of good and poor language, from the initial access to healthcare, understanding and using treatments appropriately, and thereafter in informal health promotion and education within a community.

Whilst I am lucky not to have to tackle consultations in Swahili myself, I think it is right that we remind ourselves regularly of how foreign ‘medicalese’ is from the vernacular, and consciously tackle the use of sloppy terms that often only increase the confusion they attempt to dissipate.
Observe, record, tabulate, communicate…

31 Mar, 15 | by Toby Hillman


© CEphoto, Uwe Aranas, via Wikimedia Commons

When I was knee high to a grasshopper, I had a teacher that used to be incredibly irritating.  Instead of getting away with a lucky guess, or a grasp at a faded memory, we had to be able to ‘show our workings.’  This meant we had to understand where our answers came from, from first principles, and learning by rote wasn’t going to cut it.  At the time this was infuriating, and led to a whole load of extra work. However, now I realise that she had started me on a learning journey that continues on a daily basis.

This insistence on understanding the basis for an argument or fact has been a common feature amongst a number of my most inspiring tutors over the years since.

One particular tutor was Dr Alan Stevens. He was a pathologist at my medical school and was assigned to me in my first year as my tutor. Pathology made up quite a significant portion of the syllabus in our first years, and what a bore – hundreds of blobs of pink, blue, and occasionally fluorescent green or yellow. And all of these colours were swimming before my eyes in a lab that seemed a million miles from the wards where the ‘real’ work of a hospital was under way.

So when Dr Stevens took us out for a meal in the week before our yearly finals (another insistence that good wine and good company made for better performance than late nights cramming in an airless library – I still nearly believe this one) and he started to explain how pathology is the basis of knowledge of all disease, I was a little upset.  As with most medical students I was sure I knew best and knew what I wanted to learn so pathology remained one of those subjects that was somewhat neglected in my revision schedules.

However, once I hit the wards, I rued the day I forgot to ‘show my workings’.  As I encountered diseases I knew the names, and symptoms of, but had a sketchy understanding of the pathology or pathophysiology, I struggled from time to time with working out why a specific treatment might help, and how treatment decisions were being made.

A paper in this month’s PMJ may appear to be one of those that a casual reader would skip entirely owing to the title, or the description. A clinicopathological paper on fulminant amoebic colitis may not have immediate relevance to my work, but the paper is an example of how medical knowledge has expanded over the years;  a clinical question, borne out of experience is subjected to scientific examination and analysis, in an effort to move beyond the empirical approach to disease.

The paper looks at the clinical features, pathological findings and outcomes of patients admitted to an 1800-bed tertiary care centre in Western India who underwent colectomy and were diagnosed with amoebic colitis.  Thirty patients were included in the study, and the mortality rate was 57%.

Various features are explored – with some information flying in the face of traditional teaching.  For example, the form of necrosis encountered in the study was not that traditionally associated with the disease – and could lead to a change in practice in the path lab, potentially allowing a more rapid diagnosis. (In the study the authors found basophilic ‘dirty’ necrosis with a neutrophil-rich inflammatory exudate in the study population, vs the eosinophilic necrosis with little inflammation usually reported in textbooks.)

The authors also pose some interesting questions in their conclusion regarding their observed increase in disease incidence – relating to many of the current woes in clinical medicine.

Overuse of medication is suggested as a contributing factor to the increased incidence of amoebic colitis. The authors postulate that indiscriminate use of antacid medications may be promoting the increased incidence of amoebic colitis by allowing amoebic cysts to survive transit through the stomach.  This mirrors some of the concerns about the (over)use of PPIs promoting C. difficile infections in the UK.  In addition, lifestyle factors are suggested as contributory – a reduction in dietary fibre can increase colonic transit time, increasing opportunities for the amoebae to adhere to the bowel wall – and the organism itself may be changing in virulence.

So whilst I may not have learned a great deal that I will employ next time I am in clinic, this paper is a great example of the value of close observation over time of the population one serves, maintaining an enquiring mind about the pattern of disease encountered, and then subjecting such notions to scientific scrutiny – eliciting new knowledge, new questions for research, and returning this information to the clinical field to improve practice, and hopefully change outcomes for patients of the future. Osler would be proud.

 

 

Still only human

13 Feb, 15 | by Toby Hillman

A perfect specimen?

There is something different about medics.  We stand out at university – often forming into a clique that others find difficult to fathom, break into, or tolerate.  We strive to be different in many ways; we learn a huge range of facts and figures, along with new languages (we are taught about everything from the arachnoid mater to xanthelasma, via dysdiadochokinesia) and new ways of behaving – “Hello, my name is…. I’d like to examine your chest if I may?”

This difference has been reinforced over centuries, helped along by the formation of royal colleges, and more recently, by real successes in actually curing some diseases, and managing others so that hospitals are no longer feared as places of death, but instead as places of relative safety for those needing their services.

I think that this paper in the January edition of the PMJ may help to take us back to our roots a little.  The paper is a quality improvement report looking at the impact of a mnemonic device on the completeness of information recorded in the notes in a paediatric department.  The problem was that documentation was of a poor standard, impairing the investigation of complaints and incidents.  The solution used an acrostic to help junior doctors record the important aspects of care that are encompassed within the post-take round.

Results were impressive, showing an increase in completeness of the notes in areas that were previously neglected, including parental concerns, fluid prescriptions, nursing concerns, and investigations.  Understandably there was less increase in areas that had been previously well documented – the final plan, vital signs, presenting problems, and examination findings.

So we can see that, in a time-pressured, complex situation, the junior members of a team find that they are better able to record relevant information when following a set pattern of information recall / record for each case.  This is not perhaps a Nobel-worthy discovery, but it is an important contribution to the ongoing realisation in our profession that there are tools and techniques we can use to enhance our practice, and improve safety and outcomes of the processes we use in our daily work.

Many of the ‘new’ ideas in healthcare like LEAN, six sigma, crisis resource management, human factors training, pitstop handovers, checklists and so on have origins outside of medicine, and in other high-risk, high-reliability, or high value organisations.  The impact of these ideas though can be significant, and in some cases hospitals have been impressed enough to adopt philosophies from industry wholesale – notably the Royal Bolton Hospital.  The medical profession itself though is usually somewhat more reluctant to adopt these concepts, and apply them in practice.

The resistance to checklists, communication methods like SBAR, and other tools that seem to constrain clinical autonomy provides an interesting point to consider.  Is there something inherently wrong in encouraging medics to communicate or work in standardised ways?

Well, no. The ALS algorithm – much maligned by those who have to repeatedly take assessments and refresher courses using the same stock phrases, and act out scenarios that have an uncanny knack of ending in a cardiac arrest – has had great success.  Indeed, when you think of the teams that work in any hospital, the arrest team is one of the most efficient in terms of understanding  common purpose, using a common language, and following a set pattern of actions.  This process even works across language barriers as Dr Davies showed in this article.

And yet, there is always something uncomfortable about being asked to write / think / talk / communicate in a particular way as a medic.  Is this because we are somehow different from those other human beings working in complex, challenging environments?

My feeling is that perhaps we aren’t entirely to blame for our reluctance to adopt these ‘new’ ideas of working.  The hubris required to enter chaotic, painful, emotional situations, take control, decide on a decisive course of action, and do this within a very short space of time is bred into us from the point at which we decided to become doctors.  As I said at the start – we medics are different – and have been since we started on our journey to the positions we now hold.

And therein lies the rub. When it comes down to it, we aren’t really different from those we try to guide through the challenges of acute, long-term and terminal illness. We have the same brains, same cognitive biases and same susceptibility to distraction. So next time you are asked if you can follow an acrostic, use a checklist, or submit to a protocol – before rejecting the concepts out of hand, consider if you are doing so because the tool really isn’t fit for the job, or if you need to follow the advice of Naomi Campbell – don’t believe your own hype.

The great game…

10 Dec, 14 | by Toby Hillman

The great game… Image via wikimedia commons. CC 2.0

The PMJ editors met recently, and it was a pleasure to meet up with a range of engaged, eloquent, educated and motivated individuals who all share a passion for Postgraduate Medical Education.  It was therefore a little bit of a surprise when a reference to an article on the gamification of medical education proved to be a little contentious.

My colleagues thought that gamification was not necessarily a ‘thing’ and that for the PMJ to publish a paper with such a term in the title might be a bit wayward.  However, fears were allayed by the fact that I had heard of gamification, and in fact it is a technique in learning that has been in recognised use in other fields for really quite some time.  There is an excellent “Do Lecture” from the 2009 lecture series on the subject, and within patient education, there is quite an industry dedicated to themed ‘games’ – from nutrition to disease management for example – from Channel 4 and from SurgerySquad.

Other than the lecture above, I also heard about ‘gamification’ of learning at a Society of Acute Medicine conference where a team from the Netherlands presented their simulation game – ABCDESim.  This is a serious game that allows players to gain skills and learning around resuscitation of the acutely unwell patient.

So there are real ‘games’ and their use in education has been examined in the educational literature – highlighting the engagement with subject matter that can be achieved through games, even if the longer term benefits of gaming within education are not fully defined.

The paper that raised an eyebrow analyses the effect of not so much a ‘game’ as the application of the principles of gamification – namely:

1) voluntary participation
2) explicit rules of competition for each user
3) immediate feedback on performance
4) participation in competing teams
5) the ability to improve in terms of rank (eg being awarded a badge or prize for specified achievements)

The game was really a bank of MCQs that addressed core knowledge expected of the residents on an internal medicine residency programme. The ‘play’ element of this was in the competition associated with answering the questions and comparing oneself, or ones team to the performance of others, and the ability to see real-time positions on a leaderboard, and earn badges for good performance and answering certain numbers of questions.

The researchers found that residents did engage well with the game, and were often found to be answering questions in their own time; that some of the techniques employed to maintain motivation were well founded (eg regular state-of-play emails, personalised leaderboards highlighting potential ‘opponents’ that could be overtaken with a few more questions, and the earning of badges for good performance); and that there were qualitative and quantitative benefits – particularly with regards to retention of knowledge over time.

So it seems that millennials are open to the gamification of education.  And perhaps millennials are going to be the first generation whose minds have been changed by the internet.  Research from Columbia University in 2011 indicated that there could be a preference to recall where to find information, rather than actually retain the factual content.  This combination presents medical educators with an intriguing challenge – our younger colleagues are happy to engage with technology in novel ways to improve their education, but that very engagement with technology might be eroding what have been seen as key attributes of effective clinicians in the past.

However, how new are these features really?  The gamification of medical knowledge is hardly new.  Although the rules weren’t exactly software-derived, or universally applied, I can still recall my housemates jousting with medical facts as we approached finals – indeed, the only reason I recall the fluorescence of amyloid being apple green after staining with Congo red is down to a housemate trying to ‘psych out the opposition’ on the morning of a medical finals paper.  The stimulus to learning that such ‘games’ provided probably contributed to my success, and to a certain extent still does.  An older example is the teaching ward round, when the consultant questions students in turn to tease out facts in ever increasing detail – ultimately reaching the registrar, who traditionally answered with aplomb.

And the other feature of millennial learning – the ability to find knowledge, rather than retain or analyse it?  As we are now deep into Advent, it is perhaps appropriate to turn to the motto of the King William’s College Christmas Quiz:

Scire ubi aliquid invenire possis, ea demum maxima pars eruditionis est

“To know where you can find anything is, after all, the greatest part of erudition”

So the features of learning elicited in this study are certainly worth noting, and employing them to maintain interest and enhance postgraduate education for the emerging generation of clinicians is important.  But we shouldn’t be fooled into thinking that learning itself, or the competitive nature of learners, has changed too much – history teaches us that medics have always been competitive, and when it comes to knowledge-seeking, our forefathers already knew that knowing everything wasn’t the be-all and end-all – knowing where to find out was almost as important.

Uncomfortable truths.

2 Nov, 14 | by Toby Hillman

Simulation is an educational tool that is almost ubiquitous in postgraduate medical training – with diverse examples of implementation – ranging from video recording of consultations with actors, to full immersion scenarios allowing trainees to test their skills and mettle in managing medical emergencies.  Indeed, it is so established in some fields that there are contests to show off medical skills being practised under pressure to draw out lessons. SIMwars plays out at the SMACC conference each year to great fanfare.

But what if you aren’t planning to demonstrate the perfect RSI on stage, or in a video for dissemination around the world? What if you are just doing your day job – how would you feel, and would it be any use, to suddenly find Resusci Annie in the side room you really need for a patient with C. difficile, and be expected to resuscitate her from a life-threatening condition?

A paper in the current issue of the PMJ looks at just this – the perception and impact of unannounced simulation events on a labour ward in Denmark.

The research team had planned to carry out 10 in situ simulations (ISS), but only managed 5 owing to workload issues in the target department.  The response rate to questionnaires before and after the ISS events was strong.  Within the questionnaire were items concerning the experience of participating in an unannounced ISS – namely the perceived unpleasantness of taking part in an unannounced ISS, and anxiety about participation in the same.

One third of the respondents reported that, even after participating in an unannounced ISS, they found the experience stressful and unpleasant; however, 75% of them reported that participating in an ISS would prepare them better for future real-life emergencies.  Among non-participants, the corresponding figures were a third who thought the experience would be stressful and unpleasant but, interestingly, only a third who thought participating would be beneficial to them.

These results made me think about the experience of learning, and whether that experience is ever relaxing and pleasant if it is truly effective.

I can’t think of many learning environments where I have felt completely at ease that have provided me with really deep learning, food for thought, or opportunities for development.  Indeed, a great number of my most profound learning experiences – those that have taught me lessons I carry with me today – have been truly unpleasant.  These rich educational experiences tend to have involved challenge: the requirement to justify a course of action, the challenge of making a decision that is later judged through clinical outcomes, or a challenge to my strongly held beliefs, requiring an exploration of opinions, morals or prejudices.

Now, not all of these experiences have been publicly unpleasant – observed by others – but all have been relatively uncomfortable in different ways.  And perhaps this is key to deep learning: it requires examination, challenge and reflection, not just sitting passively in a lecture theatre being told facts, or actions to take in a particular scenario.

So when we look at the educational interventions employed in postgraduate medical education nowadays, have we lost a little of the challenge that used to be such a prominent part of the ‘learning by humiliation’ approach? We perhaps don’t need to return to the days of Sir Lancelot Spratt.

 

But equally we shouldn’t shy too far away from the idea that learners require a degree of challenge, discomfort, and even unpleasantness to gain insights into how their knowledge is being put into action, and it is far better to receive that challenge within the simulated environment than to have to face those challenges in real life, without the chance to re-run if things don’t go so well.


What do all those numbers really mean doc?

15 Jun, 14 | by Toby Hillman

 

What is ‘normal’?

Go into hospital nowadays, and you will do well to escape without having a blood test of some sort.  Very often these are routine tests, which give doctors an overview of the state of play. There might be a few wayward figures here or there – but the doctors will ignore them, or explain them away as part of the normal variation of homeostasis.

In the PMJ this month the spotlight turns to one biomarker that is commonly requested when patients are admitted to hospital.  Indeed, troponin is one test which I see regularly used completely out of context, providing information which is often difficult to assimilate into the clinical picture.  The paper – an analysis of >11,000 admissions to a large medical facility in Dublin, Ireland – examines troponin results for all admissions under the medical (but not cardiology) service from January 2011 to October 2012.

Now, troponin is a test that has undergone a change over the time it has been available to clinicians in everyday practice.  I can remember taking serial CKs in patients with suspected myocardial ischaemia, and my joy at the troponin becoming available for use in my potential CCU patients.  I can also remember the many patients who were admitted to hospital for 12 hours just to see what their troponin would be – a clear case of a biomarker dictating practice, rather than being a tool for me to use.  And I have many memories of strained conversations with colleagues about the meaning of a mildly raised troponin which had been requested as part of a bundle of tests at the point of admission – without any real thought being given to how one might interpret the results.

These strained conversations have altered in tone over the years, as the blind faith in the value of troponin to indicate ischaemic heart disease – which accompanied the hype of the test when it was first released – has been eroded by the realisation that troponin is nowhere near as specific as we were once led to believe, and interpretation now requires quite a lot of Bayesian reasoning to clear the waters.
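A toy calculation shows why that Bayesian reasoning matters: the same ‘positive’ result carries very different weight depending on the pre-test probability. The sensitivity, specificity and prevalence figures below are invented purely for illustration, not taken from the paper:

```python
def post_test_probability(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test (Bayes' theorem)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Chest pain clinic (high pre-test probability) vs unselected medical take
high_prior = post_test_probability(0.30, 0.95, 0.80)
low_prior = post_test_probability(0.02, 0.95, 0.80)
print(f"{high_prior:.2f}")  # 0.67: the positive test is quite informative
print(f"{low_prior:.2f}")   # 0.09: most positives are false alarms
```

The arithmetic is the same in both settings; only the prior changes, which is exactly why a troponin requested “as part of a bundle” on an unselected take is so hard to interpret.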

The article looking at troponin tests on the acute medical take makes fascinating reading, and brings some data to bear on the not uncommon problem: “well, what do I do with this result now?”

The answer, in the case of an unexpectedly elevated troponin, is to consider the overall clinical context and attempt to understand where the physiological stress has arisen, as this study shows a significant association between elevated troponin and mortality:

[Figure: exponential relationship between high-sensitivity troponin assay (hsTnT) results and in-hospital mortality]

So – a helpful paper looking at a common clinical scenario, and providing a fairly robust argument for how to approach the problem.

But one of the most fascinating parts of this analysis is the determination of what is ‘normal’ – and why we love to have such binary answers to complex questions.

The manufacturers of the assay employed recommend a cut-off of 14 ng/L for the normal range but, given that the test isn’t as specific for myocardial injury as they would like, suggest that a figure of ≥53 ng/L should be used to indicate myocardial ischaemia. For the purposes of the published study, a figure of <25 ng/L was used as the cut-off for normal, and ≥25 ng/L as ‘positive’.
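Those three thresholds can be set out as a simple sketch; the category labels are my own shorthand rather than the paper's or the manufacturer's terminology:

```python
def classify_troponin(hs_tnt_ng_per_l):
    """Bucket an hsTnT result against the cut-offs discussed above."""
    if hs_tnt_ng_per_l < 14:
        return "within manufacturer's normal range"
    if hs_tnt_ng_per_l < 25:
        return "above normal range, below study threshold"
    if hs_tnt_ng_per_l < 53:
        return "positive per study, below ischaemia threshold"
    return "suggestive of myocardial ischaemia"

for value in (10, 20, 30, 60):
    print(value, classify_troponin(value))
```

Laying the cut-offs side by side makes the point of the paragraph that follows: three different ‘binary’ lines have been drawn through what is really a continuous scale.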

This large observational study indicates that the result is a sliding scale, reflecting physiological stress rather than any specific disease process (the study effectively excluded patients whose presenting complaint was a cardiac disorder). The persistence of a desire to classify such a result into normal and abnormal categories belies a huge cognitive bias that we all carry around with us: essentially, we like to make judgements based on prior experience, heuristics, and easily interpreted chunks of information – what Daniel Kahneman would call a ‘System 1’ or ‘fast’ process. We do this regularly, with a high degree of accuracy, when on the acute take.

What this paper could be seen to do is boil down a clinical problem into another readily available answer that can be applied in everyday practice.  To me, it is a reminder of the blind faith I used to have in a test that I, and its manufacturers, understood poorly – a test that drove clinical protocols and pathways, rather than my applying some critical thinking to my actions and their results, and using the test to its best effect.  I wonder how many more biomarkers we will see undergoing this sort of evolution.

Is it all in your head? – not quite…

31 Mar, 14 | by Toby Hillman

 

A paper in the current issue of the Postgraduate Medical Journal tackles a relatively modern concern: chronic postsurgical pain.

With the advent of modern anaesthetics, and advances in surgical technique, the potential for surgical intervention to tackle disease exploded.  Indeed, there is now a whole industry based on surgically changing the way people look, which in the early days of surgery would have been almost unthinkable. For example, Samuel Pepys put off an operation for his bladder stone (which caused great pain and many infections) for many years before submitting to be cut by Thomas Hollier. [The lithotomy is now a rare beast, having been superseded by less invasive means of removing stones from the urinary tract.]

So surgery is now a much more accessible, and much safer, option for the management of disease than it once was.  However, it is not without its problems, and one which may have been under-represented for many years is that of ongoing pain.  The incidence of chronic postsurgical pain (CPSP) is quite remarkable: up to 35% of patients undergoing hernia repair report pain more than 3 months after their surgery, with higher percentages in patients undergoing cardiac or thoracic surgery, and even after cholecystectomy rates of CPSP of up to 50% have been reported.

The paper discusses the pathophysiology of pain, and strategies to reduce the likelihood of developing chronic pain. The concepts of central sensitisation, secondary hyperalgesia, wind-up potentiation and pre-emptive and preventative analgesia are of great interest.

However, as one progresses through the article, a change takes place.  One is guided into the realm of the pain clinic.  Here, it is recognised that pain is not simple, it cannot be neatly captured in a line diagram of the spinothalamic tracts, but that pain is a multi-faceted experience for each patient, that can be influenced by a whole range of factors.  The physical risk factors identified for the development of chronic postsurgical pain are important to note, including surgical technique, repeat surgery, and radiation to the surgical site, but what struck me more was the number of risk factors which could be described as relating to mental wellbeing.  Six of the listed risk factors relate to mental state.

This key component of chronic postsurgical pain is borne out by the authors as they discuss the importance of the fear-avoidance model, and how anticipation and fear have measurable influences on pain perception – confirmed through neuro-imaging studies.  These insights into the development of a chronic condition, and into how patients respond to their pain, are hugely important, and their application extends beyond chronic postsurgical pain.

One of the key interventions the authors highlight is the provision of information to patients undergoing surgery, to enable them to understand what they will experience post-operatively.  The paper referenced reports an experiment conducted about 50 years ago that examined the effect of an enthusiastic anaesthetist discussing the expected levels of post-operative pain and non-pharmacological methods of alleviating that pain, with daily reinforcement of this message.  The results are quite impressive: a reduction in narcotics required, improved comfort, and a 2.7-day reduction in length of stay.

It is on similar techniques that the enhanced recovery programmes employed by many NHS trusts are founded. Essentially, patients are encouraged to take an active role in understanding their condition, the surgery they are undergoing, and are briefed as to what is normal with regards to pain and limitation post-operatively.

The key intervention for me here is that patients are forewarned about what they are likely to experience; they are given ‘permission’ to be in pain, and to know that this is not a harbinger of doom, nor that they are doing irreparable damage to their newly fashioned wounds.  By being up-front about these experiences, fear is dissipated, patients are empowered, and outcomes tend to be better – even though the surgical technique, anaesthetic technique, post-operative pain regime and environment are all pretty much the same.  The major difference is that the patient has been offered some psychological protection, even if it is not labelled as such.

The lessons learned through the experiences of surgical patients over the years can be translated across many spheres of medicine – the marriage of body and mind is not always perfect, and yet, if we only pay attention to one side of the equation, our patients may well pay the price in the longer term.  It is a shame that the lessons published in 1964 are not more widely employed, although the tide is changing.  I am convinced that psychological interventions can play a hugely important part in enabling patients to cope with their long-term conditions, of all sorts.

Despite being the calling card of politicians recently, it really is true that there is no such thing as health without mental health.
