Richard Smith: Patients harmed by misdiagnosed preferences

Linda is 58 and has been diagnosed with breast cancer. She would have preferred not to have surgery but was convinced by her surgeon that it would be the best option. After her operation, the hospital contacted her to apologise as she had not had breast cancer. She’d been misdiagnosed. An inquiry, legal action, and compensation followed.

Susan is 78 and has also had breast cancer. She too did not want surgery but was told that it was the best treatment. Six weeks after her operation, she met a friend of the same age who had also had breast cancer. The friend had been treated with hormone therapy, having been told that she would probably die of something else before her breast cancer. Susan felt profound regret, but no action followed.

These two women have both been damaged by the health system, said Al Mulley, director of the Dartmouth Centre for Health Care Delivery Science, at a meeting at the Health Foundation last week. And, he said, misdiagnosis of preferences is everywhere. For example, three quarters of surgeons think that losing a breast is the main anxiety of women with breast cancer, but only 7% of women rank that as their main anxiety.

Later Alf Collins, clinical commissioner for long term conditions, Somerset Clinical Commissioning Group, told the meeting of a study of 89 people on a waiting list for cardiac bypass surgery that had made a big impact on him. The people had angina, and three quarters of them thought that each episode of angina was a small heart attack and that their hearts were being progressively damaged. When the real nature of angina was explained to them, with the message that steadily increasing exercise would probably reduce the angina, about half decided against surgery.

Mulley is trained as an economist as well as a physician, and pointed out that treating people in a way that goes against their preferences is a case of market failure. He works with Jack Wennberg, the “guru of variation,” at Dartmouth, and is thoroughly familiar with the huge variation in everything within all health systems. Variations that are the result of variations in patients’ preferences are acceptable, but large sums of money are being wasted on treatments that patients would not have chosen if fully informed of all options. Accurately diagnosing patients’ preferences is a matter not only for the interaction between clinicians and patients but for the whole health system. Any response must work at the individual and system level.

Diagnosing patients’ preferences is new jargon for “shared decision making,” something we’ve been hearing about for 20 years. Why, many in the audience asked, have we made so little progress? Perhaps it’s because “shared decision making” sounds like a cuddly “nice to have” rather than a “must have,” whereas misdiagnosis of preferences carries clear implications of harming patients and wasting resources.

Improved performance will depend, believes Mulley, on feedback on what is happening at both the individual and system level. CollaboRATE is a tool to be used with patients that asks just three questions:

1. How much effort was made to help you understand your health issues?

2. How much effort was made to listen to the things that matter most to you about your health issues?

3. How much effort was made to include what matters most to you in choosing what to do next?

Each question is answered on a five- or 10-point Likert scale, and the score is then adjusted to be out of 100. Unsurprisingly, scores vary widely, from four to 84. The test is practical enough to use in every clinical encounter, and the results can then be fed back to clinicians, probably showing each clinician’s score in relation to others’. Or data could be produced for whole units.
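
As a rough illustration, rescaling three item responses to a score out of 100 might look like the sketch below. This is an assumption for illustration only: the function name, the 0–9 response range, and the mean-then-rescale rule are invented here, not the published CollaboRATE scoring algorithm.

```python
def collaborate_style_score(answers, scale_max=9):
    """Rescale three item responses (each 0..scale_max) to a 0-100 score.

    Illustrative only: the real CollaboRATE scoring rules may differ.
    """
    if len(answers) != 3:
        raise ValueError("the instrument has exactly three items")
    if any(not 0 <= a <= scale_max for a in answers):
        raise ValueError("answer out of range")
    # Mean of the three items, rescaled so a perfect response scores 100.
    return round(100 * sum(answers) / (3 * scale_max))
```

For example, `collaborate_style_score([9, 9, 9])` gives 100, `[0, 0, 0]` gives 0, and a mixed response such as `[5, 6, 7]` lands at 67 — consistent with the wide spread of scores reported above.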

Option grids are another tool that can be used both with individuals and to build a database of patients’ preferences. For a range of conditions, the option grids show all the options for treating a condition, with answers for each option to questions such as “What does the treatment involve?”, “What are the immediate risks?”, and “What if the treatment doesn’t work for me?” Mulley believes that giving commissioners data on the preferences of a population of patients allows better design of the system.

The hope that emerged from the meeting was that a constant feedback of data might cause clinicians and commissioners to take patient preferences as seriously as any other part of the system.

Richard Smith was the editor of The BMJ until 2004. He is now chair of the board of trustees of icddr,b [formerly International Centre for Diarrhoeal Disease Research, Bangladesh], and chair of the board of Patients Know Best. He is also a trustee of C3 Collaborating for Health.

Competing interests: Nothing further to declare.

  • susanne stevens

    Part of an OU (Open University) Social Sciences course twenty years ago covered decision making. One technique your example reminds me of is drawing up decision trees. In a similar way, each option for each decision was recorded, but each decision was given a weight. Having it clearly drawn out on paper provides clear information about preferences and the values the person puts on each option. Trying to weigh information up ‘in the head’, especially when feeling anxious, is just about impossible, as is making lists of pros and cons, which equally misses the nuances of decision making. It is rare to hear of anybody using decision aids, though. Understanding probabilities was also a skill taught on the course, but again not generally used because of lack of confidence in understanding the calculations, or even misunderstanding the language used to describe them. If a decision aid is needed to understand the decision aid, it is not surprising many of them aren’t used – or maybe they have improved a lot since then.
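
The weighted-options approach the commenter describes can be sketched in a few lines. Everything here is invented for illustration — the criteria, weights, and scores are hypothetical, not taken from any real decision aid — but it shows the arithmetic: score each option on each criterion, multiply by the weight the patient assigns that criterion, and compare totals.

```python
def weighted_score(option_scores, weights):
    """Sum of (criterion score x patient-assigned weight) for one option."""
    return sum(option_scores[criterion] * w for criterion, w in weights.items())

# Hypothetical patient weights (sum to 1) and option scores on a 0-10 scale.
weights = {"symptom relief": 0.5, "avoiding surgery": 0.3, "short recovery": 0.2}
options = {
    "surgery":       {"symptom relief": 9, "avoiding surgery": 0, "short recovery": 3},
    "exercise plan": {"symptom relief": 6, "avoiding surgery": 10, "short recovery": 9},
}

best = max(options, key=lambda name: weighted_score(options[name], weights))
```

With these invented numbers the exercise plan scores 7.8 against 5.1 for surgery, so `best` is `"exercise plan"` — the point being that the preferred option changes with the weights, which is exactly the information a preference diagnosis is trying to capture.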