If every media report of a cure for cancer were true, we would live forever. But the media like a headline health story, and we cannot really blame the journalists. It is largely the fault of epidemiologists, according to Joe McLaughlin (International Epidemiology Institute, Maryland, USA), who laments the change in culture. He feels that epidemiology has lost its way; experts talk up their findings, shamelessly court the media, and have lost the objectivity of their science. This was how he set the scene at a meeting of epidemiologists and editors at the Royal College of Physicians on Sept 24 and 25, convened by Gerard Swaen of the Dow Chemical Company, under the aegis of ECETOC (European Centre for Ecotoxicology and Toxicology of Chemicals, Brussels), to discuss the potential for a register of observational epidemiology studies.
To say that epidemiology has lost its way is, perhaps, a little unfair. The signposts are clear and researchers know the waypoints on their research trajectory: these are, unambiguously, peer reviewed publications and grant income. They pave the road to a successful career, ensuring that junior researchers chase the elusive P value and the resulting publication, departments advertise their proven expertise, and universities embrace the kudos, publicity, and income. Joe called for epidemiologists to return to dispassionate science: disinterest, rather than vested interest. In particular, he felt that researchers should not be advocates; those with a "mission" should seek their career in politics or religion.
There are some problems with observational epidemiology. Industry researchers hope for negative findings when looking at harms, while clinical epidemiologists search for statistically significant positive findings that might be publishable. Each can be seduced by the temptations of multiple baseline measurements, subgroup analyses, arbitrary cut-off points, selective outcome reporting, and so on. There is implied misconduct, although, to be fair, sometimes a false positive result reflects variation rather than bias. It can also be difficult for journals to weigh up the relevance of a reported finding when authors talk up the importance of their work. And the research agenda may become database driven, rather than testing a clinically important a priori hypothesis.
Rules have evolved for searching the literature while preparing systematic reviews. Jos Kleijnen (Kleijnen Systematic Reviews Ltd, Netherlands) reminded us why the literature may not reflect all the research available. He listed some additional reasons, which I hope are less likely: that publication in peer reviewed journals is a lottery, that publication may be at the whim of an editor, or that there may be unseen peer reviewer bias. Not all factors leading to failure to publish are quite so disquieting. Simple things happen: researchers may have changed institution, moved house, lost the randomisation code or, perhaps, forgotten about those fieldwork questionnaires languishing underneath a flowerpot in someone else's office.
For all the above reasons, there was general agreement that a register of observational epidemiological studies may have a place. But everyone recognised the potential limitations. Pat Buffler (University of California, Berkeley) and the breakout group chairs did well to focus discussion and come up with some of the key questions.
What problems can a register fix? If there is a registry, what should it look like, how could we design it to be useful, and how will it be used? Should all observational studies be registered and, if not, which ones? How should it be structured? Should we extend the current models and, if not, what would it take to create another?
There were no clear solutions, and this dialogue is just beginning. Some form of registration is likely in the future: there are, for example, already almost 13 000 observational studies registered on ClinicalTrials.gov, and the Human Genome Epidemiology Network (HuGENet) already shows potential. But there are other unknowns, and this discussion pivots on our traditional model of data collection, in which cohort studies are owned by research groups or institutions. Who really owns these data, and who should be permitted to set the research agenda? It is difficult to support restrictive practice when most clinical research is publicly funded and, ultimately, the data belong to patients. If unlimited data sharing becomes widespread, or even mandatory, we may be talking about an entirely new game.
Domhnall MacAuley is primary care editor, BMJ