Richard Smith: The “micro-macro problem” and the difficulty of using evidence to make policy

Doctors commonly complain that they consider evidence before they treat a patient, but that politicians and policy makers don’t use the same rigour when making changes to health services. Indeed, Margaret McCartney—GP, BMJ columnist, and now stand-up comedian—calls for such rigour in her show at the Edinburgh Fringe Festival: “What do we want: evidence based policy making? When do we want it? After systematic review and independent cost-effectiveness analysis.” But McCartney and others (including me at times) are falling into a common philosophical trap, failing to recognise the “micro-macro problem.”

I’d never heard of the micro-macro problem until I read Everything is Obvious: How Common Sense Fails by the sociologist Duncan J Watts. Watts alludes to the problem at the beginning of the book (without using the term micro-macro problem) by describing a physicist’s scorn for sociologists not being able to find the general rules for their discipline that Newton found for physics, rules that can predict with great accuracy the movements of planets, atoms, and the tides. The physicist saw this as an intellectual failure, when it’s actually a reflection of human systems being much more complex and unpredictable than the movement of planets.

In retrospect, I’ve encountered the micro-macro problem when studying economics, which is divided into micro-economics and macro-economics. Micro-economics was developed by Adam Smith and, although based on a set of near unbelievable assumptions (not least that we all behave rationally), does not do badly at predicting and describing what individuals and firms will do in different sets of circumstances. But micro-economics is no help at predicting what will happen with whole economies, and even macro-economics does very poorly, as the Queen recognised when she asked a group of economists why none of them had predicted the financial crash of 2008.

The micro-macro problem runs through the whole of science. We can’t explain the behaviour and functioning of a living cell by reference to amino acids and other chemicals, or the workings of the brain in terms of neurones. As Watts points out, science has sidestepped the problem by dividing itself into physics, chemistry, biology, and so on, with each discipline having its “own set of facts, laws, and regularities.”

Evidence based medicine has severe limitations even within medicine, as all doctors recognise. It’s at its simplest when dealing with drugs. Randomised trials can be done to a high standard testing a drug against a placebo or another drug: nothing is different but the drug. (Even this isolation is impossible with diet, where increasing or decreasing one component will inevitably mean other changes.) The drug trials can be incorporated into a systematic review, and we have high quality evidence. But we still have problems, such as the trials being conducted in ideal conditions, usually in patients with only one condition, and bias in which trials are published. Doctors struggle when they try to fit the evidence to patients with multiple conditions and their own idiosyncrasies.

The complexity problem increases considerably when we move to health system interventions such as guidelines, training, or use of information technology: new methods, like cluster randomised trials, are needed. But now it becomes harder to maintain both internal and external validity, and we may test simplified interventions that fail to recognise the complexity of the system. Those trying to test interventions for obesity have recognised this problem.

But once you move to whole health systems and changes such as the commissioner-provider split or integrating health and social care, then you have moved up an order of complexity and the micro-macro problem kicks in: you cannot use the methods of evidence based medicine. But this is not to argue that evidence is irrelevant. You can gather evidence from other geographies and disciplines and from history. But such evidence doesn’t allow for confident conclusions. Evidence should be incorporated into decision making, but it’s only one component in decision making, as it is, indeed, in evidence based medicine.

I’ve heard many talks on evidence in policy making, and the conclusion is always that evidence plays a part but that it’s small and that the link with what policies are adopted is often obscure. Making policy is fundamentally different from treating patients.

Richard Smith was the editor of The BMJ until 2004. 

Competing interests: None declared.