Guest Post by Nathan Hodson
Research in robotics promises to revolutionize surgery. The Da Vinci system has already brought the first fruits of the revolution into the operating theater through remote-controlled laparoscopic (or “keyhole”) surgery. New developments are going further, augmenting the human surgeon and moving toward a future with fully autonomous robotic surgeons. Through machine learning, these robotic surgeons will likely one day supersede their makers and ultimately squeeze human surgical trainees out of the operating room.
This possibility raises new questions for those building and programming healthcare robots. In their recent essay entitled “Robot Autonomy for Surgery,” Michael Yip and Nikhil Das echoed a common assumption in health robotics research: “human surgeons [will] still play a large role in ensuring the safety of the patient.” If human surgical training is impaired by robotic surgery, however—as I argue it likely will be—then this safety net would not necessarily hold.
Imagine an operating theater. The autonomous robot surgeon makes an unorthodox move. The human surgeon observer is alarmed. As the surgeon reaches to take control, the robot issues an instruction: “Step away. Based on data from every single operation performed this year, by all automated robots around the world, the approach I am taking is the best.”
Should we trust the robot? Should we doubt the human expert? Shouldn’t we play it safe—but what would that mean in this scenario? Could such a future really materialize?
This is not just sci-fi. Given the direction robotic surgery is heading, it is increasingly likely to become reality.
The Da Vinci system has become a regular feature in the operating theater, optimizing many laparoscopic procedures in gynecology, urology, and general surgery. Although it has the potential for remote control and automation, its present clinical use is limited to operation by a human surgeon in the same room. Increasingly, other robots are improving performance and outcomes by contributing to planning and decision-making as well as technical skills. The Soft Tissue Autonomous Robot (STAR) has shown that automated robots can form more reliable connections between sections of bowel than human surgeons. We must anticipate a watershed moment when robots are able to plan and perform entire operations without the input of human surgeons.
Machine learning is one precondition for such robot-led operations. Machine learning is the process whereby computers optimize their algorithms through feedback, allowing machines to perform tasks without being explicitly programmed for each one. The underlying concepts behind deep-learning neural networks have been around for many years, but have only recently come to the fore due to increasing computational power. Recent applications to healthcare have included the diagnosis of melanoma and the DREAM system for diagnosing diabetic retinopathy.
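The feedback loop at the heart of machine learning can be sketched in a few lines: rather than following hand-written rules, the machine adjusts a parameter to reduce its error on observed examples. The data, model, and learning rate below are purely illustrative, not drawn from any surgical system.

```python
# Toy illustration of learning from feedback: fit a line y = w*x to data
# by gradient descent. No task-specific rule is programmed in advance;
# the parameter w is shaped entirely by the error signal.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs; roughly y = 2x

w, lr = 0.0, 0.05  # initial guess and learning rate
for _ in range(200):
    for x, y in data:
        error = w * x - y   # feedback: how wrong is the prediction?
        w -= lr * error * x  # adjust the parameter to reduce the error

print(round(w, 1))  # converges near 2.0
```

The same principle scales from a single parameter to the millions of weights in a deep network; only the size of the model and the richness of the feedback change.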
The UC Berkeley Center for Automation and Learning for Medical Robotics (CAL-MR) is now integrating machine learning and the Da Vinci system. Given the complexity and delicacy of human soft tissue, these researchers believe that explicitly programming a robot to operate on internal organs, the model used by STAR, would be improved upon by allowing robots to learn for themselves.
Preliminary work uses Learning By Observation, which means that the robot “learns” without being programmed. Robots can identify different sensor conditions and represent them in terms of a set of parameters. Some of the necessary motions within an operation, or “surgemes,” that have so far been replicated include penetration, grasping, retraction, and cutting. Surgemes can be combined into a “finite state machine,” a model that steps through a sequence of discrete states, to execute each subtask within the operation. Reinforcement learning then suggests that the robot could refine and update the finite state machine based on feedback (for much more detail see this CAL-MR paper on learning by observation).
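The idea of chaining surgemes into a finite state machine, and nudging it with feedback, can be sketched as follows. The surgeme names come from the text; the transition table, confidence scores, and update rule are invented for illustration and are not CAL-MR's implementation.

```python
# Toy finite state machine chaining "surgemes" into a subtask, with a
# reinforcement-style update. Transitions and scores are illustrative only.
SURGEMES = ["penetrate", "grasp", "retract", "cut"]

class SurgicalFSM:
    def __init__(self):
        # Transition table: each surgeme names its successor; None = done.
        self.transitions = {
            "penetrate": "grasp",
            "grasp": "retract",
            "retract": "cut",
            "cut": None,  # terminal state: subtask complete
        }
        # A learned "confidence" per surgeme, refined from feedback.
        self.confidence = {s: 0.5 for s in SURGEMES}

    def run(self):
        """Execute the surgemes in sequence and return the trace."""
        state, trace = "penetrate", []
        while state is not None:
            trace.append(state)
            state = self.transitions[state]
        return trace

    def update(self, surgeme, reward, lr=0.1):
        """Nudge a surgeme's confidence toward the observed reward."""
        self.confidence[surgeme] += lr * (reward - self.confidence[surgeme])

fsm = SurgicalFSM()
print(fsm.run())               # ['penetrate', 'grasp', 'retract', 'cut']
fsm.update("cut", reward=1.0)  # positive feedback refines the model
```

In a real system each state would wrap a learned motion controller and transitions would depend on sensor readings, but the structure, discrete surgemes composed by a state machine and tuned by feedback, is the same.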
The education of human surgeons involves the development of muscle memory and heuristics (as well as concrete knowledge) through high-intensity exposure and experience in theater. Combining programmed anatomical knowledge with machine-learned experience reflects this pattern. Both robots and human surgeons will rely upon practical exposure in order to develop surgical expertise, raising the prospect of conflict as they vie for theater time.
What happens if a surgeon disagrees with an autonomous robot?
In the immediate future, trained surgeons will oversee even autonomous robots, but what happens when the robot takes an approach with which the surgeon disagrees? Until now it seemed likely that robots would be programmed by human surgeons, affirming the supremacy of human surgical knowledge over robot skill. The idea behind machine learning is that a robot could potentially surpass human understanding of surgery, likely by sharing data between different robots faster than humans can share skills informally or through journals.
At this point it becomes difficult for the human surgeon to justify overriding the machine. A difference of opinion would pit human intuition and knowledge against the data-driven approach of a machine, rendering a human surgeon’s intervention difficult to defend. This scenario has the potential for positive health outcomes, but it risks taking robotic surgery into an unforeseen domain where no human surgeon is in overall control, disrupting the somewhat blasé picture in much health robotics research.
Would patients consent to machine learning over human education?
The training of human surgeons, contingent upon access to human bodies, may be undermined by the presence of machine learning. By opting for robotic surgeons, patients would know that their operation was helping to improve the care of future patients, while being spared the potential indignity of showing their internal organs to another person.
In human medical education there is an unspoken exchange: (a) the patient grants the trainee surgeon access to their body and (b) the surgical team performs the operation. As an unavoidable part of the process, (a) tends not to be mentioned, even though hands-on experience is an essential and scarce resource for trainee surgeons. With machine learning in the picture, the information flow from the patient’s body to the surgeon comes into focus: would patients prefer to use their bodies to educate robots or humans? What if patients opt out altogether?
Within reason, most patients in teaching hospitals are happy to help trainees and students. Surveys of patient attitudes have revealed an awareness that such participation benefits future patients (see for example Haffling and Hakansson, or Sayed-Hassan, Bashour, and Koudsi). This altruistic motive would hold for machine learning robots, as increasing data would allow for increasing iterations and improvements to the finite state machine.
Patient surveys also show that engagement with students and trainees is valued for the human contact it offers. This sentiment may carry over to the much-poeticized physical intimacy between the surgeon and the body. It is conceivable that patients would rather be treated by a human in order to fully experience this knowledge exchange, but this may only apply to certain cases. On balance, an anaesthetized patient is unlikely to feel any loss of connection.
Conversely, the loss of this physical intimacy would actually be preferable for many patients. The same surveys confirm that patients perceive medical training as an invasion of their privacy. Data about the toughness of a person’s prostate or the resistance offered by a sigmoid colon is sensitive. Given the choice, many patients would feel less self-conscious about sharing such “carnal knowledge” with a robot than a trainee surgeon.
A final possibility is that some patients may value their privacy over the altruistic motive suggested above. When surgeons are human, the flow of anatomical knowledge to the surgeon is unavoidable. But with a robotic surgeon, the patient could choose (or pay) to delete any data obtained during the operation.
Machine learning in robotic surgeons would offer increased privacy and allow patients to know they are benefiting those who will subsequently undergo the operation. Additionally, it would facilitate entirely non-educational operations should the patient prefer. Both of these features would reduce the educational opportunities available to human surgeons.
Can human surgeons retain authority in the operating theater?
Most authors have presumed that human surgeons would function as a safety mechanism on robotic surgeons, remaining on hand to manage malfunctions or emergencies. Undoubtedly this is the immediate future of robotic surgery, but it is unlikely to be sustainable.
When robots operate they will integrate new information from the patient, and this data can be shared with other robots. The purpose is to produce robots whose results are better than those of human surgeons. With time, it is likely that they will take on the majority of operations. Human surgeons could be squeezed out of theater and trainees prevented from getting the necessary experience. These inadequately trained humans would be systematically deskilled through an absence of educational opportunities, leaving them unable to resolve an emergency and ill-equipped to disagree with the robot’s intended plan of treatment.
In this event, the best chance for human hands may come from high quality virtual reality surgical training, preserving for as long as possible the necessary skills to resolve surgical emergencies and the necessary knowledge to challenge robot-led treatment plans. While robots glean data from the anatomy of human patients, the remaining human surgeons would train on computer-generated simulations.
Perhaps there is no need for human involvement. An argument that we are safer without human decision-makers in ultimate control could be incorporated into defenses of autonomous robotics. Until then, effective means of training humans outside of the theater, such as virtual reality, are a priority if the pursuit of autonomous robotic surgeons is to retain its human safety catch.
Learning by Observation for Surgical Subtasks: Multilateral Cutting of 3D Viscoelastic and 2D Orthotropic Tissue Phantoms http://cal-mr.berkeley.edu/papers/davinci-icra-2015-v26.pdf
Robot Autonomy for Surgery https://arxiv.org/pdf/1707.03080.pdf