By Dr. Geoffrey Modest
Another really interesting study was published on the benefit of placebo, this time in decreasing chronic low back pain (see doi: 10.1097/j.pain.0000000000000700 ). This study was remarkable in that both the clinicians and the patients were aware that the intervention was placebo vs their current care.
- 83 patients (of 243 screened) with chronic low back pain (LBP) of at least 3 months' duration were randomized to open-label placebo (OLP) vs treatment as usual (TAU) for 3 weeks. The TAU group was told that they would be offered the opportunity to take the placebo pills after the 3-week intervention.
- Exclusion criteria included taking opioids
- Mean age 44, 71% women, 74% employed, 87% used pain meds in past week (76% NSAIDs/analgesics, 22% antidepressants, 15% benzos, 40% adjuvants of cyclobenzaprine/gabapentin/thiocolchicoside, 14% complementary medicine).
- Pain severity was assessed on three 0-10 scales: maximum pain, minimum pain, and usual pain. Baseline mean pain overall was 1.8 out of 10, and mean disability was 5 out of 10. All patients were seen by a board-certified pain specialist.
- All patients were given a 15-minute script stating the following: the placebo effect can be powerful; the body can respond to the placebo effect, like Pavlov’s dog; a positive attitude can be helpful but is not necessary; and it is crucial to take all of the pills. Those randomized to the placebo pills were given a bottle of orange gelatin capsules filled with cellulose, labeled “Placebo pills. Take 2 pills twice a day”. Those in the control group were reminded of the importance of being in a control group, and that they too could get placebo after 3 weeks.
- Those in the placebo group were also asked what they thought of taking placebo, whether they expected it to work, and what they thought was in the placebo pills. Those in the control group were asked whether they were disappointed not to be in the placebo group, and what they thought about the study.
- Outcomes were measured at baseline, 11 days and 21 days
- OLP elicited greater pain reduction on all 3 scales (p<0.001 for the composite), with moderate to large effect sizes. A 30% reduction in pain is considered a clinically significant change; this threshold was met on both the usual and maximal pain scales in the placebo group, vs 9% and 16% reductions in the TAU group. Minimum pain decreased by 16% with OLP but increased by 25% with TAU
- OLP also reduced disability (a secondary outcome) vs TAU (p<0.001) with a large effect size (a 29% reduction vs 0.02% in the TAU group)
- Adverse effects were essentially nonexistent
- Of the 33 respondents in the OLP group, 30 reported that the placebo “was not an active substance”, 3 stated it was a “pain killer” since it worked so well. 21 said that they were skeptical that the placebo would work, 9 thought it would. Most respondents in the TAU group were not disappointed that they were not on placebo, since they “knew” they would have it later.
- And, 17 patients in the TAU group requested the placebo at the end of the study
- Chronic LBP is remarkably common and causes more disability than any other medical condition worldwide: in the US it is ranked third in all diseases by disability-adjusted life-years. As a reference point on the degree of pain reduction in the above study, NSAIDs do reduce chronic LBP vs placebo, though the net benefit is less than one point on the 0-10 point scale.
- There have been a few similar open-label placebo studies, all small, but showing some efficacy for placebo in depression, ADHD, and irritable bowel syndrome
This article was quite remarkable, and brought up several interesting issues:
- There seemed to be pretty equivalent encouragement and interaction by the investigators for both the OLP and TAU groups, making it unlikely that the positive placebo effect was simply increased contact and empathy from the clinicians
- There have been many neuroimaging studies which have found that placebo leads to changes similar to those found by medical analgesics, in the same specific areas of the brain and with changes in relevant neurotransmitters.
- BUT, although these patients had chronic LBP, their overall pain scores were relatively low, their disability score was mid-range, and they were not on opioids. That is, these patients had fairly mild chronic pain symptomatology, and OLP efficacy may not generalize to patients with much more severe pain and/or on opioids. This was also a very short (3-week) exploratory study, further limiting interpretation of the durability of the effect in actual chronic pain patients. Also, the individuals who volunteered for a study advertised as a “novel mind-body clinical study” may not reflect well the general population of patients with chronic LBP (though only 14% were actually using complementary treatments, and most of the patients were highly skeptical of any benefit from placebo).
- We know that in many large randomized controlled trials, along with the frequent 20-30% treatment efficacy in the placebo wing, there are also fairly frequent adverse events with placebo (sometimes rivaling the number of adverse events in the active treatment wing). Conversely, too much discussion of a medication's potential adverse effects can increase the likelihood that the patient will experience an adverse event (i.e., the “nocebo effect”, see http://blogs.bmj.com/bmjebmspotlight/2013/11/25/primary-care-corner-with-dr-geoffrey-modest-nocebo/ ). This supports open-label, transparent prescribing of placebos. I would like to emphasize this pretty striking finding: patients on blinded placebo in controlled studies typically report lots of adverse effects attributed to the placebo, yet there were essentially none in this open-label placebo study despite the significant benefit.
- A prior blog was on a candesartan study (the CHARM trial) finding that candesartan was indeed superior to placebo in patients with heart failure and reduced ejection fraction. But if one looked only at patients who were adherent in taking their meds (either candesartan or placebo) there was in fact no difference in outcomes (i.e., efficacy was from med taking and not whether it was an active med or placebo). see http://blogs.bmj.com/bmjebmspotlight/2016/04/14/primary-care-corner-with-geoffrey-modest-md-another-blog-on-the-power-of-placebos/
- An interesting side-line to the placebo effect is that there seem to be some significant genetic influences/determinants to the placebo effect (see http://blogs.bmj.com/bmjebmspotlight/2015/05/07/primary-care-corner-with-geoffrey-modest-md-placebo-genetics-and-the-placebome/ )
- So, perhaps one advantage of the open-label approach is that patients knowingly taking placebo, most of whom were initially skeptical of any potential benefit, had therapeutic benefit with essentially no adverse effects.
This study also brought up several bigger issues about prescribing placebos:
- It effectively circumvented ethical concerns about “deceiving patients” (their terminology) by giving them open-label placebo: patients were aware that they were taking placebo, and the majority confirmed that they knew the placebo was not an active substance. I should comment that the issue of “deceit”, to me, is a bit overstated and carries too strong a negative connotation. Given that our goal is to help patients, is it really deceitful if I don't give all of the information to a patient? I would argue that all clinicians give partial and subjective information almost all of the time:
- Our understanding of the best approach to our individual patient is fundamentally subjective, since reading the medical literature is quite complex and the literature itself is quite incomplete: different studies of the same treatment often have conflicting results, in part because of differing methodologies, in part from different inclusion/exclusion criteria, and in part simply because people are very complex biological systems. Attempts to reduce the person in front of us to the average study participant (e.g., mean age 51, 45% female, 83% white, 14% with diabetes, 18% on a statin, and without renal insufficiency…) may be meaningless for treating my 73-year-old Ethiopian woman with renal insufficiency, etc. (i.e., I am extraordinarily likely to be prescribing an effectively “untested medication” for this particular woman)
- In addition to the often complex medical conditions, different therapies, etc., of our specific patient, there is the added complexity of the multitude of psychosocial factors that affect disease prevalence, severity, response to treatment, etc. It is really difficult to factor these into interpreting the studies, and by and large they are not even included in the studies. We know from the limited data, for example, that depressed people do less well, and that patients without adequate housing/food or with highly stressful lives may not do so well (e.g., see http://blogs.bmj.com/bmjebmspotlight/2016/08/24/primary-care-corner-with-geoffrey-modest-md-neighborhood-deprivation-and-diabetes-risk/ , which shows that these psychosocial/environmental factors influence the development of diabetes). These psychosocial factors are important but rarely part of RCTs, again challenging our application of most current studies to our individual patient.
- These last 2 points clearly affect our ability as clinicians to interpret the medical literature and apply it “objectively” to patients. Added to this is the fact that we humans are really a subjective lot. As clinicians, we are influenced disproportionately by our empirical/anecdotal experience (with more recent experiences weighing more heavily), and the most recent medical article on a problem seems to hold disproportionate sway over prior studies. We are also influenced by our medical model of the disease process, leading us to accept some studies' conclusions over others; that model is determined by our own accepted medical culture (and many of my previous blogs display the inaccuracies of our held medical models). And we are trying to discuss the complexity of these medical issues across the complexity of who the patient is (their education, cultural background, health beliefs, psychological state, etc.), all of which affect their ability to understand and interpret what we say.
- And, besides, we have gone through years of medical training and, in many cases, years of experience. The thought that we can break down complex medical concepts and pathophysiology to explain what we think is going on and what we suggest doing, all in the context of a 20-minute session with the patient (while often dealing with many other issues in the same clinical encounter), reflects mystical thinking.
- So, I think that we as clinicians are always trying to figure out what we think is best for the patient. And, in most cases, that is what the patient wants and expects. We process lots of data and provide the best advice we can, in the context of, and modified by, who the patient sitting in front of us is. Is it deceitful that we are doing this? If we give this patient a not-so-likely-to-work therapy that might help them? We often give medications that we think are pretty harmless and have little data to support them, but are worth trying (e.g., simethicone for gas). And they often work… Is that deceitful? More deceitful than giving a truly innocuous placebo?
- I should add to the above points, that I am not a therapeutic nihilist. I do try to figure out what is best for my patients, and do prescribe lots of drugs (though I am increasingly skeptical of new drugs overall, especially if there are old tried-and-true ones which work quite well; I have learned over and over that apparently promising new drugs often fall flat on their faces over time)
- My untested clinical sense is that my level of enthusiasm about a treatment does affect the results (perhaps by enhancing the placebo effect). I learned this a while ago when first prescribing antidepressants (tricyclics, prior to the advent of SSRIs). The results were somewhat muted initially, but as soon as one patient responded dramatically, I became much more convinced of the utility of the meds and therefore more persuasive in prescribing them, and subsequent patients seemed to respond much more frequently to these same medications.
- Another correlate of the above is that perhaps we should be interpreting studies a little differently. For complex biopsychosocial medical problems (e.g. chronic LBP), there really are very few nonpharmacologic interventions which have been rigorously tested. Exercise is one of the best evaluated and seems to be quite helpful. But the small studies finding that acupuncture is no better than sham acupuncture in a structured RCT may not mean that acupuncture doesn’t work. It may mean that it really does work, but in a way different from what we thought we understood of how it would work (maybe there does not need to be specific “acupuncture points”, or maybe there are more “points” than we know, and in either case the “sham acupuncture” is actually “real acupuncture”). Maybe we are just involving the patient in a treatment that they feel will really help, and this empowerment of the patient is what is working. Maybe it is just the placebo effect (though perhaps some types of placebos work better in some people than other types, e.g. meds vs physical interventions). But many people do get relief. Again, we should not dismiss a therapy just because it does not stand up to our “scientific rigor”. Maybe we are not asking the right question. Maybe getting benefit from any intervention (including placebo) is really the goal.
- I do realize this is a long blog with lots of general random thoughts on the interpretation/application of the medical literature, but I think this study does bring many of these issues and assumptions to the fore.
- So, this study reinforced that there is a clinically important placebo effect (in fact, the Institute of Medicine in 2011 noted that “placebo [could] conceivably be a form of treatment of pain, especially in light of the shortcoming of other modalities or benefits they bring in their right”). As part of patient care, I think that we should figure out creative ways to integrate this powerful placebo effect into routine care: perhaps best as open-label placebo, perhaps as just another prescription, or perhaps differently from one patient to another. So, to rephrase what pretty much all editorialists say almost all of the time about studies: we need more studies, longer-term and with more types of patients. In the case of LBP, that means looking at the long-term efficacy/adverse effects of OLPs, and even comparative studies assessing OLPs vs disguised placebos prescribed as active medication.