The latest chapter in the sad saga of the Wakefield et al paper on the MMR vaccine raises some difficult questions about access to individual patient data. It is possible that the apparent discrepancies between the patient records and the publication might have come to light a whole lot sooner, perhaps even before publication, if Wakefield et al’s* raw data had been available for public scrutiny. (*I persist in including the rather cumbersome et al, because it is important to remember that there were co-authors, and I will return to this point later.)
Perhaps journals should demand that authors make their raw data available on a public website if they want to publish their findings? This might seem a logical way of preventing fraud and misleading reporting, but it’s fraught with problems. For a start, the data on which the Wakefield et al paper was based came from a small case series of children with a relatively uncommon presentation. Such data, as Brian Deer’s investigation has shown, are extremely hard to anonymise. You might argue that most papers won’t be investigated by a tenacious journalist for several years, so this is a bad example, but I still cannot see how the data (essentially case notes, full of personal details) could have been made publicly available without breaching the traditional confidentiality between doctors and patients.
So if we can’t make individual data available from small case series, perhaps we can safely do it for large clinical trials? Apparently not: in countries such as the US, where electoral registers are freely available, experts say it’s possible to identify individuals by cross-referencing supposedly anonymised records against such registers, armed only with a couple of details such as age and location.
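To see why so little is needed, it helps to spell out the attack the experts are describing: a simple linkage, joining a “de-identified” dataset to a public register on the attributes they share. The sketch below is purely illustrative; the records, field names, and the pandas-based join are my own invention, not anything from an actual trial or register.

```python
# Illustrative linkage attack: re-identifying "anonymised" records by
# joining them to a public register on shared quasi-identifiers.
# All data below are invented; the field names are hypothetical.
import pandas as pd

# A de-identified extract from a trial: no names, but it keeps age and
# town because each looks harmless in isolation.
trial = pd.DataFrame({
    "participant_id": ["P-031", "P-047", "P-112"],
    "age":            [34, 67, 34],
    "town":           ["Ashford", "Ashford", "Rugby"],
    "diagnosis":      ["condition A", "condition B", "condition C"],
})

# A freely available register (e.g. an electoral roll) listing real
# names alongside the same two attributes.
register = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Patel", "D. Brown"],
    "age":  [34, 67, 34, 51],
    "town": ["Ashford", "Ashford", "Rugby", "Leeds"],
})

# The "attack" is nothing more than an inner join on age + town.
linked = trial.merge(register, on=["age", "town"])

# Any participant matching exactly one register entry is re-identified.
match_counts = linked.groupby("participant_id")["name"].count()
unique = match_counts[match_counts == 1].index
print(linked[linked["participant_id"].isin(unique)])
```

In this toy example every participant matches exactly one register entry, so all three are named using nothing but age and town; real datasets carrying dates of birth or postcodes narrow the matches even faster.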
If public access to individual patient data is fraught with problems, perhaps we could entrust the data to just a few trustworthy experts? For instance, we could ask peer reviewers to look over the raw data. But that’s not workable either. Some studies involve millions of data points, and most reviewers have neither the expertise, nor the time, nor probably the computing capacity, to tackle them; neither do journals. At COPE (the Committee on Publication Ethics) we plan to revise our flowchart on handling suspected fabrication by removing the suggestion that editors might request raw data. Experience has taught us that this is simply not feasible: journal offices are not equipped for this type of forensic examination of data, which can probably only be handled by largish research institutions.
My only shred of hope lies with the “et als” – it might be possible to frighten co-authors into taking proper responsibility and never risking putting their names to a paper whose underlying data they haven’t examined closely. Perhaps the Wakefield et al case will help to get this message across. Brian Deer quotes John Walker-Smith as saying he had “trusted” Wakefield – perhaps he shouldn’t have done. Perhaps we need to encourage distrust, or at least healthy scepticism, among co-authors. I’m not sure how this can be achieved, but maybe journals could make a start by emphasising the serious responsibility of co-authorship and by holding all authors to account if problems emerge in their publications. My father taught me never to sign anything I hadn’t read or couldn’t understand – maybe I’ll add another clause and resolve never to sign anything unless I’ve seen the raw data.
Liz Wager PhD is a freelance medical writer, editor, and trainer. She is the current chair of the Committee on Publication Ethics (COPE).