I became interested in acute kidney injury (AKI) during my fellowship on the renal service at the Hospital of the University of Pennsylvania (HUP). HUP is a busy, urban hospital, and its renal consult service was a machine, with multiple fellows cruising around the 700+ bed institution, through the general wards, multiple intensive care units, and a trauma-heavy emergency department. Such was the pace of consults that we barely stopped, often passing each other in the hallway with a silent nod or high five and moving on. There was too much to do.
But one thing we’d stop to commiserate on was a sense that, if only we had been called earlier, we might have had more of a chance to change the outcome, to stave off dialysis, or in some cases, death. Why was that kidney-toxic medication given for so long AFTER the patient had AKI? Why weren’t the electrolyte issues dealt with? Why did no one notice the proteinuria and hematuria on hospital admission?
Those anecdotes led me to believe that providers were simply missing the diagnosis of AKI, and research would bear that out. Multiple studies from multiple institutions have found that providers don’t appear to recognize AKI at its onset, and often fail to enact simple best practices that should improve outcomes.
The advent of the electronic health record (EHR) enabled an innovation in this space: the AKI e-alert. We were worried providers might miss AKI; now we could just tell them. After many years of research and design, we integrated a pop-up alert into the EHR that would inform the provider that the patient had AKI, give some salient clinical information, and link to an order set that could be used to help with diagnosis.
We were fully aware that pop-up alerts are not often embraced by providers. As such, we needed to show that such an intrusive intervention could actually improve patient outcomes. In our opinion, there was no better way to do this than in a randomized clinical trial, but this presented a problem: how do you handle the control group? We realized early on that we would need to request a waiver of informed consent from the institutional review board. Such a waiver rests on three principal criteria:
1) The intervention does not infringe on the rights or welfare of the patient. We could think of no reason why letting a provider know that their patient had AKI would interfere with their rights or welfare.
2) The study could not be feasibly conducted with consent. We clearly couldn’t enroll people in a study of AKI and then tell them, if they were randomized to usual care, not to mention it to their providers.
3) The intervention must be no more than minimal risk. A purely informational alert, we reasoned, must be minimal risk; it merely aggregates data that are already present. In fact, we even ensured that the elements of our AKI order set were minimal risk (no fluid boluses here, just low-risk suggestions like a urinalysis and following fluid input/output numbers).
After close discussion with the Institutional Review Board (IRB), and with support from the Yale Interdisciplinary Center for Bioethics, the trial was approved. An interim analysis at 50% recruitment (prior to the enrollment of two non-teaching hospitals) showed no significant benefit (or harm) of the alert, and the trial continued to completion.
It was upon inspection of the final results that we got worried. Two of our six medical centers, the two non-teaching hospitals, had worse results in the alert arm than in the usual care arm. Specifically, the mortality rate was higher in the alert arm.
Needless to say, we immediately began a deep dive into the data: first confirming that we had not somehow flipped our randomization variable and ensuring balance between the randomization groups, then going deeper to evaluate potential mediators of the effect. Could patients in the alert arm have received inappropriate fluid resuscitation? No sign of that. Did they have a lower rate of key diagnostic tests (in a misguided attempt to avoid contrast, perhaps)? No signal. In the end, frustratingly, we could not attribute the harm to any specific process.
We informed the IRB and the study hospitals, which launched their own investigations. We manually adjudicated every study death in those hospitals looking for a common theme and found nothing. Patients died for the reasons many patients die: heart failure, malignancy, sepsis. Aside from the greater number of deaths in the alert arm, there was no sign that any mechanism of death was distributed unevenly between the groups.
We were left with an unsettling situation. The outcome of this work demonstrated the need for this type of research. Alerts are increasingly marketed to hospitals and adopted with the intent to improve care. But do they improve care? Indeed, are they at the very least benign? We accept that the impact of alerts needs to be studied, but how? We had described this study as minimal risk because, we thought, it must be. But our data suggest that assumption may not have been accurate in all cases. To be sure, the effects we saw may be due to the vicissitudes of chance; the analysis of non-teaching hospitals was a subgroup analysis, after all. But what if we are seeing an example of heterogeneity of treatment effect, a phenomenon whereby the impact of an intervention differs among groups owing to a variety of factors upstream and downstream of the intervention, many of which may not be easily measured?
Perhaps the most salient question: how do we move forward? What needs to be in place to allow these studies to be conducted under a waiver of informed consent? What monitoring and requirements will make them minimal risk? And if they are not minimal risk, and no waiver can be granted, can they feasibly be done at all? It is strange to think that we may be in a situation where an e-alert can be used clinically without any real oversight, but not studied rigorously. The IRB has worked with us to form a pathway to allow studies of this type to move forward with monitoring and oversight. We continue to work with our colleagues in bioethics and with our IRB to determine how to complete this critical work while protecting human research participants and improving care.
F Perry Wilson, associate professor, Department of Medicine, Section of Nephrology, Yale University School of Medicine. Twitter: @fperrywilson
Competing interests: Please see research paper