Neal Maskrey: How do we become an expert?

We humans often use analogies to help us solve problems. From our memory, we identify a problem similar to—but not exactly the same as—the one we are currently faced with, and apply the previous successful approach to the new problem. It’s called analogical reasoning. Sometimes we get great results, sometimes not so much.

I recently came across a great analogy from Brian Goldman, an accident and emergency doctor in Toronto.

Let me share it with you, but here’s the caveat: it relates to baseball, and you may know nothing about baseball. So here is baseball in around 100 words.

The essence of baseball is that you have a bat, so you’re called a “batter.” A few yards away is someone else with a ball, and they’re going to pitch it at you, so they’re called the “pitcher.” Your job as the batter is, when the ball is thrown in your direction, to try to hit it into the field of play, which is an arc in front of you. If you manage to do that without the ball being caught, you stand a chance of scoring a run. Easy peasy? Not really, because the ball is thrown at 100 mph, sometimes with a spin that makes it swerve or dip.

Baseball is big, big business in the United States and Canada. Performance statistics are calculated and published daily for all aspects of the game and for individuals, teams, and special functions within each team. So here’s the point. How good is good if you’re a batter? How many times out of 10 does a good batter successfully hit the ball? To use the technical term, we want to know what a good “batting average” is.

Well, it turns out that three out of 10 is very good. That’s the same as 300 out of 1000, and batting averages are usually expressed as, for example, .300. A .300 batting average or better is very good, and you’d be richly rewarded for your season’s efforts. A .400 batting average almost never happens; if you manage it across a season you’re a sporting legend.
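For readers who like the arithmetic spelled out, here is a minimal sketch (mine, not part of the original piece) of how a batting average is calculated and written in this notation; the numbers are just the three-in-ten example above.

```python
def batting_average(hits: int, at_bats: int) -> str:
    """Return a batting average in the conventional three-decimal form, e.g. '.300'."""
    average = hits / at_bats
    # Baseball convention drops the leading zero, so 0.300 is written ".300".
    return f"{average:.3f}".lstrip("0")

# Three hits in every ten attempts, i.e. 300 in 1000.
print(batting_average(300, 1000))  # .300
print(batting_average(400, 1000))  # .400 -- the almost-never-happens season
```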

On a plane I have a keen interest in the pilot’s “safe landing average,” and am reassured that it’s also of great interest to the pilot. Eating out, it would occasionally be reassuring to be able to access the restaurant’s “no food poisoning average.”

So, as a doctor—and here’s the point of the analogy—what, for example, is my “safe prescription writing” average? We know from research that the average batting average for consultants is .941 for individual prescriptions. For GPs, it’s .951; one in 550 prescriptions is associated with a serious error. Doctors in training score .916 for individual prescriptions. So, better than a baseball batter. But it was drilled into all of us that we had to have a batting average as a doctor of 1.000. It’s just not acceptable to be making mistakes, is it? No one wants to be that doctor.
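To make those figures easier to compare, here is a small illustrative sketch (my own, not from the article) that converts a prescribing “batting average” into an approximate “one error in every N prescriptions” figure. Note that the one-in-550 number quoted above refers specifically to serious errors, which is a different measure.

```python
def errors_per_n(batting_average: float) -> int:
    """Convert a prescribing 'batting average' into roughly one error in every N prescriptions."""
    error_rate = 1.0 - batting_average
    return round(1.0 / error_rate)

# Averages quoted in the text (errors of any kind, per individual prescription).
for role, average in [("consultants", 0.941), ("GPs", 0.951), ("doctors in training", 0.916)]:
    print(f"{role}: {average:.3f} is roughly one error in every {errors_per_n(average)} prescriptions")
```

On those figures, even the best group is making an error of some kind roughly once in every 20 prescriptions, which is the gap between a very good batting average and the 1.000 we were all told to expect of ourselves.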

The batting average is a useful analogy. Our aim, like the batter’s, is expertise. We’re going to make some mistakes because we’re human, and we have to manage that, but that theme is for another day, as is the theme of constructing meaningful healthcare performance statistics for individuals. Back to the point: we should progress during our training years so we become . . . well . . . experts. So where on the training curriculum does it tell you how experts become expert? Because that’s the aim, right? And it would be really good to know how to get there and—indeed—maintain that expertise. Except no one tells you. It’s one of the great omissions; largely you’re left to “catch on,” presumably by osmosis.

So here’s how it happens. Humans remember patterns really well and often use them successfully to make fast, good decisions. If we don’t have a pattern to use, we have to work out what to do, and that takes time and effort. This dual process, rational choice theory is now reasonably well known, thanks especially to the work of Daniel Kahneman. Sometimes we can use shortcuts called heuristics to work out a way to solve a problem—rules of thumb which, if available, can work as well as (or better than) more laborious and time consuming approaches.

Experts are better than novices at solving problems because they have increased their knowledge in a specific domain. That’s pretty obvious. But the important difference is that their knowledge is organised, so that solutions can be constructed or retrieved quickly.

Novices notice the surface features of problems (how much chest pain), but experts are able to use stored relational features as well (is there nausea, vomiting, or sweating; is the patient a smoker; is it someone with diabetes; is the pain radiation atypical; and so on). And because they have a three dimensional picture—including some (often unconscious) Bayesian assessment of prevalence—alongside the inevitable pattern recognition, they are in the right place at the right time, and they also (usually unconsciously) more often assess and weigh factors that are absent as well as those that are present.
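That “Bayesian assessment of prevalence” can be made concrete with a worked example. The sketch below is purely illustrative and uses made-up numbers rather than clinical data: the same symptom pattern yields very different probabilities depending on how common the condition is in the population the doctor usually sees.

```python
def posterior(prevalence: float, p_pattern_given_disease: float, p_pattern_given_no_disease: float) -> float:
    """Bayes' theorem: probability of the condition given that the symptom pattern is present."""
    p_pattern = (p_pattern_given_disease * prevalence
                 + p_pattern_given_no_disease * (1.0 - prevalence))
    return p_pattern_given_disease * prevalence / p_pattern

# Hypothetical, purely illustrative numbers: the pattern is seen in 80% of patients
# with the condition and in 5% of patients without it.
for prevalence in (0.01, 0.20):
    print(f"prevalence {prevalence:.0%}: probability given the pattern = {posterior(prevalence, 0.80, 0.05):.0%}")
```

With these made-up figures the identical presentation implies roughly a 14% probability where the condition is rare and an 80% probability where it is common, which is the kind of weighting experts appear to apply without ever writing the sums down.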

As a result, experts deliver better and quicker problem solving than novices. But because all this is tacit (and because doctors don’t explicitly study their own thought processes in a problem solving situation), experts usually find it impossible to describe adequately how they became expert, and they find it difficult to give novices the help they need in their battle to acquire true expertise.

It’s not just in medicine that this ability to rapidly scan a whole bunch of information and sort the key, meaningful relational features is an element of expertise. The novice footballer can control and kick a ball, run hard, and tackle well . . . but watches and follows where the ball is. The expert footballer has the same technical skills, but also watches the players without the ball, and moves to negate their ability to receive a pass in a dangerous position. Chess grandmasters have the same ability: they see more strategic options, and assess what’s not there as well as what is.

Such is analogical reasoning. Across the field of cognitive psychology there are plenty of examples showing that one of the simplest and most important things we can do is to explain to decision makers how they and others make decisions, whatever decision making approach is being deployed. Insight into what’s going on, while it’s going on, is truly useful. No longer do footballers, golfers, baseball players, or airline pilots get sent out to mindlessly practise without analysis; they get detailed feedback, including being able to work out what they are thinking while performing the required task. In turn, they control their responses better, improving their performance. Sports coaches know this and use it a lot, so why don’t we?

The dual process, rational choice approach to decision making and hypothesis testing are probably the best known decision making approaches. I’ve just illustrated analogical reasoning, and I reckon there’s still logic, game theory, causality, moral reasoning, insight, genius, and problem solving itself to go.

If we want doctors with batting averages as close to 1.000 as possible, might it be time to recognise the utility of the evidence for improving thinking and deciding? And wouldn’t it then be worth thinking about getting it on the curriculum? But it’s your decision.

Neal Maskrey’s early career was as a GP before spending seven years as a medical manager and part time GP. After 12 years as a director of the National Prescribing Centre and programme director at NICE, he is now honorary professor of evidence informed decision making at Keele University, and consultant clinical adviser in the Medicines and Prescribing Centre, NICE.

Competing interests: I declare that I have read and understood the BMJ policy on declaration of interests and I hereby declare the following interest: Employed part time by the National Institute for Health and Care Excellence.