12 Jun, 15 | by Toby Hillman
I am conflicted… and it is down to a couple of papers in this May’s PMJ that look at the development of a new tool for assessing the performance of trainees in a key medical task.
Most nights – or at least two a week – I spend a portion of my evening logging into the e-portfolio system for medical trainees, trying to fill in several online forms to reflect the practice and learning of doctors I have worked with over the past few weeks.
There is an array of choices to make, and choosing the right assessment for each task can be a bit difficult – you must know your SLE from your WPBA, your Mini-CEX (pronounced ‘kehks’ to avoid worrying conversations) from your DOPS, and woe betide anyone who mistakes their MCR for an MSF, or a CBD for an ACAT. By the way, none of these is made up.
I find it difficult to make time in the day to fill these forms in with their subject sitting alongside me, but I do try to build at least one or two learning points into each form, to make it more useful than just a tick in a box on a virtual piece of paper.
The conflict I have is that these forms often feel like soul-less, mechanistic hoops that trainees simply have to plough through to enable progression to the next level in the platform game that is a training career in medicine in the UK. Some days I would like nothing more than to ditch the whole enterprise, and head back to the good old days where apprentice medics would work alongside me, learn by osmosis and through trial and error.
However, there are other days when the format of an assessment, or the very fact that a trainee has demanded one, provides the opportunity to frame a discussion around an event, an experience, or an interaction that requires more attention – where real learning can take place in a discourse about what went well, what went less well, and what could be improved in someone’s future practice. At these times, I am grateful that I don’t have to make up an assessment on the spot: there is a framework to formulate my feedback, provide a breakdown of areas to concentrate on, and give direction to where to find help and resources to improve.
The papers that have provoked my feelings of conflict look at a project in the West Midlands to develop a tool for assessing trainees’ performance in conducting ward rounds in the paediatric department. One describes the creation of the tool, and the other looks at the reliability and practical use of the tool.
The end product is a multi-source feedback tool that does what it says on the tin, and reliably so. It has similarities to other assessments already in use, but crucially focusses on a narrow, but important and ubiquitous part of medical practice – the ward round.
The development of the tool started in response to a realisation that ward rounding is an essential skill, and yet one that is not usually assessed formally in training. It is one of those tasks or set-piece rituals that is learned by osmosis. I think there are other areas that are similarly neglected… responding to conflict within the MDT, responding to angry patients or complaints, effective handover between shifts, debriefing after significant events – or even after every shift, chairing meetings, reporting to a committee and so on…
Should we, therefore, have tools for each of these areas, with specific numbers required by trainees in each post, to demonstrate competence? I can imagine the response if this suggestion were taken up wholeheartedly for each vital part of a consultant job that is not at present explicitly covered in a WPBA (workplace based assessment).
So no, if we don’t want to be over-burdened by assessments, and end up with a fully tick-boxed CV, we should therefore rely on the education methods of old… in those halcyon days of yore when registrars still knew everything, and would fledge into consultant form without having had to get anything ‘signed off’ on an e-portfolio, but would be vouched for in references and conversations over sherry.
Clearly neither of these scenarios could be considered perfect, but where do we draw the line? As with targets in all industries – what gets measured gets done, but what gets measured is not always what ought to be measured.
As we become slightly more reductionist in our thinking about medical education, we risk hitting the target but missing the point as we try to encompass all that is important about being a senior clinician in formalised assessments. But I am also convinced that training in the good old days probably wouldn’t be up to the job of preparing senior physicians and surgeons for the modern world of healthcare – so I remain conflicted…
The tool the authors have developed looks promising, and I intend to use it to help registrars start thinking more objectively about how they conduct their ward rounds – and to improve my own practice. But I can’t help thinking that I might just miss something else if I only stick to the tools available to me in the e-portfolio.