Two recent US initiatives, the New York Times’ rare disease column and the TBS series Chasing the Cure, point to an emerging trend in the media: the idea that medicine can crowdsource ideas to diagnose difficult cases. But can crowdsourcing actually help diagnose patients, and what are the potential pitfalls?
Reaching a correct diagnosis is the crucial aspect of any consultation, but misdiagnosis is common: some studies suggest that up to 43% of medical diagnoses are wrong. This concern was the focus of a recent report by the World Health Organization. Individual doctors may overlook something, draw the wrong conclusion, or hold cognitive biases that lead them to the wrong diagnosis. And while hospital rounds, team meetings, and sharing cases with colleagues are ways in which clinicians try to guard against this, medicine could learn from the tech world by applying the principles of “network analysis” to help solve diagnostic dilemmas.
A recent study in JAMA Network Open applied the principle of collective intelligence to see whether combining physicians’ and medical students’ diagnoses improved accuracy. The research, led by Michael Barnett of the Harvard Chan School of Public Health in collaboration with the Human Diagnosis Project, used a large data set from the project to determine diagnostic accuracy according to level of training: staff physicians, trainees (residents and fellows), and medical students. First, participants were given a structured clinical case and submitted their differential diagnosis independently. Then the researchers gathered participants into groups of between two and nine to solve cases collectively.
The researchers found that, at an individual level, trainees and staff physicians were similar in their diagnostic accuracy. But even though individual accuracy averaged only about 62.5%, it leaped to as high as 85.6% when doctors solved a diagnostic dilemma as a group. The larger the group (up to the cap of nine), the more accurate the diagnosis.
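To make the idea concrete, here is a minimal sketch of one way independent differential diagnoses could be pooled into a group answer. This is only an illustration using Borda-style rank scoring; the study’s actual aggregation method is not described here, and the clinicians and diagnoses below are hypothetical.

```python
from collections import defaultdict

def pool_differentials(differentials, list_len=3):
    """Combine ranked differential diagnoses from several clinicians
    into one consensus ranking using Borda-style scoring: a diagnosis
    ranked 1st earns list_len points, 2nd earns list_len - 1, etc."""
    scores = defaultdict(int)
    for ranked_list in differentials:
        for rank, dx in enumerate(ranked_list):
            scores[dx] += list_len - rank
    # Sort diagnoses by total score, highest first
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical clinicians' top-3 differentials for one case
clinicians = [
    ["sarcoidosis", "lymphoma", "tuberculosis"],
    ["lymphoma", "sarcoidosis", "histoplasmosis"],
    ["sarcoidosis", "tuberculosis", "lymphoma"],
]
print(pool_differentials(clinicians))
# → ['sarcoidosis', 'lymphoma', 'tuberculosis', 'histoplasmosis']
```

The intuition matches the study’s finding: a diagnosis that no single clinician is certain of can still rise to the top once several independent rankings are combined.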
The Human Diagnosis Project now incorporates elements of artificial intelligence, which aims to strengthen the impact of crowdsourcing. Several studies have found that, when used appropriately, AI has the potential to improve diagnostic accuracy, particularly in fields like radiology and pathology, and there is emerging evidence in ophthalmology.
However, an issue with crowdsourcing and sharing patient data is that it’s unclear how securely patient data are stored and whether patient privacy is protected. This concern comes up time and time again, along with questions about how commercial companies may profit from selling these data to third parties, even when presented in aggregate.
As such, while crowdsourcing may help reduce medical diagnostic error, sharing patient information widely, even with a medical group, raises important questions around patient consent and confidentiality.
The second issue involves the patient-physician relationship. So far crowdsourcing does not appear to have a negative impact in this regard: in one study, over half of patients reported benefit from crowdsourcing difficult conditions, though very few studies have explored this particular issue. It’s entirely possible that patients may want to crowdsource management options, for instance, and obtain advice that runs counter to their physicians’, which could theoretically be a source of tension.
The last issue involves consent. A survey presented at the Society of General Internal Medicine Annual Meeting in 2015 reported that 80% of patients surveyed consented to crowdsourcing, with 43% preferring verbal consent and 26% preferring written consent (31% said no consent was needed). Some medico-legal recommendations, however, do outline the potential impact on physicians who crowdsource without appropriate consent, in addition to the possible liabilities of participating in a crowdsourcing platform when their opinion ends up being incorrect. These issues have no clear answer, and we may end up in a position where patients are eager to crowdsource difficult-to-diagnose (and treat) sets of symptoms, but physicians exercise sensible caution.
It’s often said that medical information doubles every few months, and that interval is only shortening. Collectively, there’s an enormous amount of medical knowledge and experience, both locally and globally, that barely gets tapped when a new patient reaches our doors in any given hospital or clinic. Applying network intelligence to the most challenging diagnoses, as well as the illusory “easy” ones, may give patients the best of both worlds: the benefit of their doctor’s empathetic care combined with the experience and intelligence of a collective many. But the potential downsides deserve attention as well.
Amitha Kalaichandran is a physician and journalist based in Toronto, Canada. Follow her on Twitter at @DrAmithaMD.
Competing interests: None declared