It’s the year 2020 and your hospital is at the cutting edge of technology in healthcare. It is developing particular expertise in machine learning. Staff are looking at all the functions that could be automated. They think that maybe they could automate parts of the system by which they allocate critical care beds. They put together some machine learning algorithms to work out how patients should be admitted to these beds. The machine learning is clever and gets even cleverer over time. It takes in all the data about patients admitted to critical care over the past five years and divides the patients into those who survived and those who died. It does this prospectively as well as retrospectively. By 2021, it has realised that particular characteristics mean that certain people are unlikely to survive their stay in critical care. One characteristic is advanced age. So it automatically excludes older people from critical care beds. The machine learning doesn’t tell anyone what it has done. When interrogated, it just says that it used its algorithms.
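To see how this could happen, here is a minimal, hypothetical sketch in Python. Everything in it is an assumption made for illustration: the data are synthetic, the two features (age and an acuity score) and the 0.5 risk threshold are invented, and no real hospital system is being described. The point is only that a model trained to predict death will happily learn age as a dominant signal and act on it without explaining itself.

```python
# Hypothetical sketch: synthetic data only, invented feature names and threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Two made-up features per historical admission: age and a clinical acuity score.
age = rng.uniform(20, 95, n)
acuity = rng.uniform(0, 10, n)

# Simulate five years of outcomes in which age correlates strongly with death.
p_death = 1 / (1 + np.exp(-(0.08 * (age - 70) + 0.3 * (acuity - 5))))
died = rng.random(n) < p_death

X = np.column_stack([age, acuity])
model = LogisticRegression(max_iter=1000).fit(X, died)

# The model has quietly learned that age dominates the prediction.
print("learned coefficients [age, acuity]:", model.coef_[0])

# A naive "admit only likely survivors" rule then excludes older patients,
# and nothing in the output says why.
def admit(patient_age, patient_acuity, threshold=0.5):
    p = model.predict_proba([[patient_age, patient_acuity]])[0, 1]
    return p < threshold  # admit only if predicted death risk is below threshold

print("82-year-old, moderate acuity:", admit(82, 5))  # likely False
print("45-year-old, moderate acuity:", admit(45, 5))  # likely True
```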
Machine learning seems to be everywhere these days. It is being touted as everything from a cure for cancer to a driverless lift home from the pub. It is already making inroads into healthcare. It could result in more efficient healthcare. But it could also hardwire discrimination against certain groups of people. So what can we do? One thing that we cannot do is stop machine learning. It has got up a head of steam that is fuelled by technology, funding, and ambition. Can we steer it in the right direction? Even that might be quite difficult. In the above example, you could say: well, let’s make the decision-making process transparent and then we will know what the computer is doing. But that would not be machine learning. And if we are double checking everything that the computer does, it will not lead to efficiency. You could say that we should check the data before we feed it into the computer, but I don’t think that would be machine learning either. Nor would it be efficient. The only alternative is to change the algorithm and the goal that the algorithm works towards, or to rebalance it so that everyone gets a fair chance of a critical care bed.
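One hedged sketch of what that rebalancing could look like, reusing the same invented synthetic setup as above: keep age out of the model’s inputs and train on clinical need alone. This illustrates the idea rather than a complete fairness method; in real systems, age can still leak back in through proxy variables, which is exactly why the goal of the algorithm has to be chosen deliberately.

```python
# Hedged sketch of "rebalancing": train the admission model on clinical need
# alone and keep age out of the inputs. Synthetic data; illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
age = rng.uniform(20, 95, n)
acuity = rng.uniform(0, 10, n)
p_death = 1 / (1 + np.exp(-(0.08 * (age - 70) + 0.3 * (acuity - 5))))
died = rng.random(n) < p_death

# Fit on acuity only: the model can no longer rank patients by age.
fair_model = LogisticRegression(max_iter=1000).fit(acuity.reshape(-1, 1), died)

def admit(patient_acuity, threshold=0.5):
    """Admit if predicted death risk is below the (assumed) threshold."""
    p = fair_model.predict_proba([[patient_acuity]])[0, 1]
    return p < threshold

# Two patients with the same clinical need now get the same answer,
# whatever their age.
print(admit(5.0))
```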
Machine learning may also make progress in other fields within medicine, such as clinical decision support. BMJ Best Practice is our clinical decision support tool. It supports healthcare professionals in making decisions. But the healthcare professional has the final say and makes the decision. For example, with our medical calculators, the healthcare professional enters the data; they see what calculation the calculator makes; they see the result; and they make a decision based on that result. So the calculators are helpful without taking over. But it is difficult to say what clinical decision support might look like in ten years’ time, apart from the fact that it will be different to what it is now, and that there will be more machine learning.
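As a sketch of that human-in-the-loop pattern, here is a toy calculator in the same spirit. It uses body mass index (weight in kilograms divided by height in metres squared) purely as a stand-in; it is not one of BMJ Best Practice’s calculators. The tool shows the data entered, the calculation performed, and the result, and deliberately stops there: the decision stays with the clinician.

```python
# Toy stand-in for a clinical calculator: transparent working, no decision made.
def bmi_calculator(weight_kg: float, height_m: float) -> float:
    """Return body mass index; reject implausible inputs rather than guessing."""
    if weight_kg <= 0 or height_m <= 0:
        raise ValueError("weight and height must be positive")
    return weight_kg / (height_m ** 2)

weight, height = 80.0, 1.75
bmi = bmi_calculator(weight, height)

# Show the data entered, the calculation made, and the result:
print(f"inputs: weight={weight} kg, height={height} m")
print(f"calculation: {weight} / {height}**2")
print(f"BMI = {bmi:.1f}")
# No treatment or admission decision is printed: that final step is the clinician's.
```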
Kieran Walsh is clinical director of BMJ Learning and BMJ Best Practice. He is responsible for the editorial quality of both products. He has worked in the past as a hospital doctor—specialising in care of the elderly medicine and neurology.
Competing interests: Kieran Walsh works for BMJ which produces resources in clinical decision support and medical education.