

Tracking Guidelines’ Errors

19 Dec, 16 | by Kelly Horwood, BMJ


Guest Blog Post
Authors: Primiano Iannone, MD, Monica Minardi, MD, James Doyle, MD
Institution: Emergency Department, Ospedale del Tigullio, Lavagna, Genova, Italy
Email: p.iannone@live.com

Perspective: Wrong guidelines: why and how often they occur

Methods: Wrong guidelines: how to detect them and what to do in the case of flawed recommendations

Although most physicians use guidelines to offer their patients the best options of care, and although guidelines are considered evidence-based tools, they often suffer from serious flaws that make them untrustworthy.

We identified three categories of guideline untrustworthiness: 1) method related, when incorrect methods have been used (including inadequate management of conflicts of interest and panel composition); 2) content related, when there is a discrepancy between recommendations and the primary evidence to which they refer; and 3) outcome related, when outcomes diverge from those expected from following the recommendation. We considered the quality of primary evidence against the trustworthiness of guidelines, and identified the need to set a trustworthiness threshold to be reached before adopting a recommendation as true, depending on the quality of the guideline and the amount of evidence available. Furthermore, we searched for the possible causes of guideline untrustworthiness not only among the traditional factors (conflicts of interest, poor methods, panels that do not represent all stakeholders, lack of external and independent assessment of recommendations) but also with regard to the “waste” of biomedical research, as depicted by Sir Iain Chalmers, which raises concerns about the relevance of clinical research and its coherence with existing knowledge. We also considered the failure of guidelines to address public health outcomes.

Ultimately, we offered a “safety bundle” to help users navigate guidelines with confidence, since current quality assessment tools (the AGREE, GIN, and IOM instruments) and guideline repositories and databases do not provide a quality rating that reliably discriminates between right and wrong guidelines.

We identified and collected a substantial number of guidelines that were untrustworthy in their methods or content, or that showed evidence of unexpected outcomes. We hope readers will find this approach valuable in raising awareness of flaws and errors, in discussing guideline trustworthiness, and hence in cautiously interpreting recommendations.


Recruiting for a new EBM Editor – 2017

13 Dec, 16 | by Kelly Horwood, BMJ

BMJ is looking for the next Editor(s)-in-Chief who can continue to shape Evidence-Based Medicine into a resource that offers the most up-to-date, clinically relevant, evidence-based content.

Read the full advert here >>

The candidate is an active scientist who can demonstrate critical appraisal skills and an awareness of trends and hot topics in current clinical research, and who will act as an ambassador, actively promoting and strengthening the journal and upholding the highest ethical standards of professional practice.

As Editor of Evidence-Based Medicine you will benefit from the following:

  • Competitive Annual Honorarium
  • Free subscription to EBM for you and your Editorial Team
  • Access to exclusive BMJ content, including The BMJ and BMJ Learning
  • Full training and support, from publishing processes to social media
  • The chance to shape the field and play a role in the development of the specialty
  • Have a positive effect on the careers of fellow scientists
  • Interact with the latest research from scientists based all over the world

International and joint applications are welcomed. Interviews will be held at BMA House. Term of office is 5 years; the role will take up in total one day a week. Contact Kelly Horwood for more information and to apply with a CV: khorwood@bmj.com.

Application deadline: 9th January 2017

Primary Care Corner with Geoffrey Modest MD: Tylenol in Pregnancy, and Differing Interpretations of Serious Women Vs. Men

9 Nov, 16 | by EBM

By Dr. Geoffrey Modest

The NY Times had a couple of interesting articles recently:

  1. They had an opinion piece on the potential adverse effects of acetaminophen in pregnancy (see http://www.nytimes.com/2016/09/25/opinion/sunday/the-trouble-with-tylenol-and-pregnancy.html), which was prompted by a recent article in JAMA Pediatrics (see doi:10.1001/jamapediatrics.2016.1775 ), but they also make the following points (I added some of the scientific details and provided references to some relevant studies):
  • Experiments show that acetaminophen impedes our ability to empathize (e.g., see Mischkowski D. Soc Cogn Affect Neurosci. 2016; 11: 1345)
  • It suppresses the immune response after vaccination, e.g. with combo of pneumococcal and DTaP vaccines (see Prymula R. Lancet 2009; 374(9698): 1339). One study found that in patients infected with rhinovirus, acetaminophen suppressed serum neutralizing antibody response and increased symptoms (see Graham NM J Infect Dis 1990; 162(6):1277)
  • There has been speculation that by depleting glutathione (an antioxidant), acetaminophen could increase lung inflammation and the incidence of asthma (a Norwegian study found that prenatal acetaminophen increased 7 yo kids’ risk of asthma by 13% and postnatal exposure by 29% –see Magnus MC. Int J Epidemiol 2016 — doi:10.1093/ije/dyv366)
  • And, the study below…
    • 7796 mothers from the prospective birth cohort study ALSPAC (Avon Longitudinal Study of Parents and Children) in Bristol, England,  enrolled in 1991
    • Maternal age 29, gestational age 39 weeks, birth wt 3.4 kg, maternal prepregnancy BMI 23, 16% low SES/42% intermediate/42% high, 93% no maternal psychiatric illness, 82% never smoked during pregnancy, 45% never drank during pregnancy
    • Assessed acetaminophen use when 18 and 32 weeks pregnant, and then again postnatally when the kid was 61 months old; then assessed the kid’s development at 7 yo
    • Results:
      • Overall, 5% of the kids had behavioral problems, at mean age of 79 months
      • Acetaminophen use at 18 weeks, associated with behavioral issues in kids:
        • 20% increased risk of conduct problems, RR 1.20 (1.06-1.37)
        • 23% increased risk of hyperactivity symptoms, RR 1.23 (1.08-1.39)
      • Acetaminophen use at 32 weeks, behavioral issues in kids:
        • 46% increased risk of SDQ difficulties, RR 1.46 (1.21-1.77) [SDQ, Strengths and Difficulties Questionnaire, is validated test of  behavioral issues in kids, including emotional symptoms, conduct problems, hyperactivity symptoms, peer relationships, pro-social behaviors]
        • 29% increased risk of emotional symptoms, RR 1.29 (1.09-1.53)
        • 42% increased risk of conduct problems, RR 1.42 (1.25-1.62)
        • 31% increased risk of hyperactivity symptoms, RR 1.31 (1.16-1.49)
      • Postnatal acetaminophen use: NO significant increase in behavioral issues in kids; also none with partner’s use at 61 months postnatal visit (97% of the partners stated they were the biological fathers)
      • None of these increases in developmental issues in kids changed when controlling for postnatal acetaminophen use
      • There was no significant relationship between maternal ADHD polygenic risk scores (a composite score of molecular risk factors for ADHD from available genotype data. see paper for details) and maternal prenatal acetaminophen use at 18 or 32 weeks, or postnatally

Commentary:

  • >50% of pregnant women in the US use acetaminophen (50-60% in the EU)
  • Animal studies suggest that acetaminophen is hardly benign: if given to mice during neonatal brain development, cognitive function is affected, as are levels of BDNF (brain-derived neurotrophic factor). There are also endocrine effects: long-term acetaminophen is associated with increased risk of cryptorchidism
  • Proposed mechanism?? Some thoughts: acetaminophen does cross the placenta, and animal studies have found that the fetus can produce toxic metabolites of acetaminophen; acetaminophen reduces serum antioxidants and could thereby increase oxidant stresses.
  • A Danish National Birth Cohort study found increased ADHD when acetaminophen is used during pregnancy, as did a New Zealand study. And a large Norwegian study found that kids exposed prenatally to acetaminophen for >28 days had poorer gross motor development, communication, externalizing behavior, and higher activity levels. Those exposed for <28 days had poorer gross motor outcomes, though less so than with greater exposure
  • This current study tried to control somewhat for confounding (i.e., looking at use in mothers when not pregnant, to see if there was an underlying non-pregnancy related condition which led to increased acetaminophen use as well as to neurodevelopmental problems with the kid; assessing partner use as a ??marker of social/familial stressors)
  • The association of neurodevelopmental issues in kids was more pronounced when acetaminophen was taken in the 3rd trimester, in concordance with other studies.
  • Unfortunately no data available on why women were taking the acetaminophen, only their usage
  • So, this is a tough one. Given the adverse effects of NSAIDs, most women are steered towards acetaminophen for pain relief. But it is hard to come to definitive conclusions based on the above data, and it would be pretty unethical to do a real RCT in which some women are randomized to acetaminophen and others to placebo; so we may never know for sure. Based on the above, it certainly makes sense to avoid acetaminophen for minor indications. And it makes sense to use nonpharmacologic remedies to the extent possible (e.g. massage therapy for pain, yoga)


  2. A rather striking “gray matter” piece by Lisa Feldman Barrett, a psychology professor at Northeastern University, who has explored the interpretation of similar facial expressions by men and women (see http://www.nytimes.com/2016/09/25/opinion/sunday/hillary-clintons-angry-face.html).

Details:

  • The head of the Republican National Committee tweeted after a speech by Hillary Clinton that she was “angry and defensive the entire time — no smile and uncomfortable”, which brought up the issue: are women who are serious viewed differently from men??
  • A study by the author (Dr Feldman Barrett) in the journal Emotion in 2009 (see doi:10.1037/a0016821): participants were shown photos of male and female faces (e.g. with smiles, frowns, widened eyes) and asked why they thought the face was that way. The participants viewed the women as being emotional (an internal expression), whereas the men were felt to be reacting to an external situation (they are “just having a bad day”)
  • A further study used the same computer-generated androgynous faces, but with gender-typical hair. The participants were again more likely to attribute the exact same facial expression as from an internal, emotional cause in those photos with female-looking hair, and an external situational cause in those with male-looking hair
  • This discrepancy might be at least partially responsible for some documented issues:
    • Women who visit the ER for chest pain and shortness of breath are more likely than men to be dismissed as having anxiety, and perhaps this is part of the reason women end up dying more frequently from heart attacks
    • When a woman “violates her emotion stereotype” she is more likely to be seen as less likable and less trustworthy. One example is in court when women accuse men of rape or domestic violence. If a woman expresses grief on the witness stand, the judge is more likely to give the perpetrator a harsher sentence. If she expresses anger (i.e. violating the stereotype of being passive, helpless), the perpetrator tends to get a lighter sentence
  • And, as the author concludes, though Hillary is generally seen as a more credible candidate, when she acts “presidential” she is interpreted as being harsh and cold. Trump, on the other hand, can make outrageous comments about immigrants and have his anger interpreted as situational (“he is just upset about terrorism”)

Reporting and appraising research: a cautionary tale

3 Oct, 16 | by Kelly Horwood, BMJ

Substituting various fats for carbohydrates or saturated fat: an uncertain recipe missing quantitative context and a cautionary example of reporting and appraising research

Guest Blog Post
Author: Martin Mayer, MS, PA-C
Institution: Department of Physician Assistant Studies, East Carolina University
Email: mayerm@ecu.edu

Broadly speaking, science is a way of thinking that involves asking answerable questions about phenomena and then systematically and impartially pursuing means to reduce uncertainty about the answer as much as possible. During the pursuit, findings must always be appropriately contextualized to avoid inaccurate, disproportionate, or otherwise mistaken interpretations, as such mistaken interpretations run contrary to the raison d’être of scientific inquiry. Unfortunately, confusion about and mistaken or overreaching interpretations of research abound.

A recently-published article investigating various patterns of fat intake on total and cause-specific mortality1 speaks to the above and will add tangibility to the above considerations; it therefore serves as an instructive example to be considered in some detail, but the concepts considered herein are certainly more broadly applicable.

NUTRITIONAL RESEARCH AND BASIC PRINCIPLES OF RESEARCH METHODOLOGY

Nutritional studies are often plagued by methodologic shortcomings that preclude strong knowledge statements and contribute to implausible results.2,3 Perhaps most bothersome is the lack of methodologic rigor required to start making causal inferences about dietary patterns or interventions, and better designs do seem feasible with proper design and sufficient infrastructural support (including, importantly, funding).2,3

There have been reproachful whispers of “methodolatry” with respect to appraisal of research, and some champion observational data as reflecting “real-world” data; nevertheless, well-designed, well-executed randomized controlled trials (RCTs) are undoubtedly the most reliable method to assess interventional effects or cause-and-effect relationships. Due to inherent methodological limitations, observational data are typically unable or less able to provide such insight, though Hill’s classic criteria offer foundational considerations for the degree to which observational data can begin to facilitate or permit causal inferences.4,5 For instance, there will never be an RCT of smoking and lung cancer, but observational data make this causal link abundantly clear; however, such instances of observational data clearly demonstrating a causal relationship are decidedly uncommon.

Still, good observational data do have value, and to the extent people blindly view the RCT as a sacred cow of epistemology (e.g., not applying the same degree of critical appraisal to RCTs as one would observational studies, failing to consider a given RCT within the broader context of what is known about the topic at hand [greatly simplified, this latter concept forms the basis for the Bayesian notion of priors]), the reproachful whispers of “methodolatry” have considerable credence, elevating them to appropriate admonitions.

WANG AND COLLEAGUES’ STUDY

Wang and colleagues recently published an investigation of intake of specific types of fat and possible associations with total and cause-specific mortality; specifically, they investigated quintiles of intake for specific types of fat and isocaloric substitution of specific types of fat for either carbohydrates or saturated fat at certain levels of energy intake.1 Theirs is among the most recent of many similar studies investigating dietary patterns and patient-relevant outcomes.6-10

Wang and colleagues’ data come from two large and well-known prospective cohort studies: the Nurses’ Health Study (NHS) and the Health Professionals Follow-up Study (HPFS). Follow-up for both cohorts via biennial postal questionnaires exceeds 90% of potential person-time. Wang and colleagues excluded those who did not report information on fat intake, those who reported what they considered to be implausible energy intakes (men, <800 or >4,200 kcal/day; women, <600 or >3,500 kcal/d), and those with a history of diabetes, cardiovascular disease, or cancer. The final sample for analysis had 83,349 women and 42,884 men and amounted to 3,439,954 person-years of follow-up. Dietary intake was assessed with a semiquantitative food frequency questionnaire (SFFQ); the SFFQ asks how often, on average, the respondent consumed a specified portion of food during the preceding year. For all but one survey used in Wang and colleagues’ analysis, this was done for 116 to 150 foods. Wang and colleagues also collected detailed information on the type of fat or oil used when preparing food as well as the brand or type of margarines used. A total of nine SFFQ assessments from the NHS and seven SFFQ assessments from the HPFS were included in their analysis.

PROBLEMS WITH PRESENTATION AND INTERPRETATION – A CAUTIONARY EXAMPLE

The large sample and long and fairly complete follow-up are obvious strengths of Wang and colleagues’ study, but sample size and follow-up duration and completeness are not themselves sufficient qualities to establish the reliability or meaningfulness of research; indeed, their study still suffers from typical and important weaknesses inherent to cohort data and questionnaire-based nutritional studies. For instance, the observational design with use of SFFQs in populations that offer only questionable generalizability (e.g., exclusively health care professionals with noteworthy exclusion criteria) leaves much to be desired, and to the extent one might be inclined to point to the frequency with which food surveys or similar methods of dietary assessment are used in nutritional research, this ultimately does nothing to lessen the marked uncertainties and methodological weaknesses inherent in such a strategy. Prevalence is not and never will be a per se indicator of rigor or acceptability, and to argue otherwise is tantamount to argumentum ad populum (argument from popularity). While Wang and colleagues acknowledge weaknesses in their study to a certain extent in their discussion, they still make overly-assertive statements (e.g., “Our analyses provide strong evidence that using PUFAs [polyunsaturated fatty acids] and/or MUFAs [monounsaturated fatty acids] as the replacement nutrients for SFAs [saturated fatty acids] can confer substantial health benefits”1(p1142)) even though they later state “causality cannot be established” 1(p1143) by their study.

Although Wang and colleagues make efforts to provide evidential context for their findings via their discussion of related literature, another prominent weakness of their article is failure to provide appropriate quantitative context for their findings even if one theoretically accepts their findings as being likely reflective of an underlying truth (which must in reality be decided only after careful critical appraisal). This becomes even more problematic due to their repeated statements of “substantial” findings, sometimes also erroneously using causal phrasing (e.g., “can confer substantial health benefits”1(p1143)). Unfortunately, they only report associated relative metrics, which precludes a straightforward quantitative evaluation of their findings and even lends to an exaggerated sense of the findings.

ADDING QUANTITATIVE CONTEXT WHEN WHAT IS PROVIDED IS NOT SATISFACTORY

It is imperative to seek satisfactory appreciation of the quantitative implications of research findings, perhaps particularly when the research does not readily lend itself to such. When it is possible to construct or otherwise establish a reasonable quantitative framework, one can then use this framework as a thought experiment of sorts to help gauge the potential meaning of given findings under the (potentially strong) assumption that the research actually reflects an underlying truth. One can then subjectively levy any weaknesses in the methodology against this “best-case-scenario” framework in an attempt to form a judicious appreciation for the research findings.

Using the total number of deaths and person-years of follow-up in the individual cohorts and pooled dataset, one can derive baseline estimates for rate of death (supplemental file). With these baseline estimates, one can use the hazard ratios from the rightmost column of Tables 2 and 3 in Wang and colleagues’ study to estimate the associated risk of death with isocaloric substitution of a particular fat for total carbohydrates at a particular percentage of energy intake (supplemental file).11 One can use Figure 2 in Wang and colleagues’ study to do the same for substitution of a particular non-saturated fat for saturated fat (supplemental file). Finally, one can then derive absolute risk differences between baseline risk estimates and the dietary-substitution-adjusted risk estimates (supplemental file).
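
The approach just described can be sketched in a few lines. Under a proportional-hazards assumption (the basis of Altman and Andersen's method, reference 11), a baseline survival S0(t) combined with a hazard ratio h yields an adjusted survival S0(t)^h, and the difference between the two gives an absolute risk estimate. The inputs below are hypothetical placeholders (the death count in particular is illustrative, not taken from Wang and colleagues' article); the point is only to show the mechanics and the relative-versus-absolute contrast:

```python
import math

def absolute_risk_difference(deaths, person_years, hazard_ratio, horizon_years):
    """Absolute risk difference at a time horizon, from a baseline event rate
    and a hazard ratio, assuming proportional hazards and a constant rate."""
    rate = deaths / person_years                # baseline death rate per person-year
    s0 = math.exp(-rate * horizon_years)        # baseline survival at the horizon
    s1 = s0 ** hazard_ratio                     # survival under the substitution
    return (1 - s1) - (1 - s0)                  # risk difference (negative = benefit)

# Hypothetical inputs: 33,000 deaths over 3,439,954 person-years (the pooled
# follow-up Wang and colleagues report; the death count is illustrative),
# HR 0.89 for an isocaloric substitution, 10-year horizon.
ard = absolute_risk_difference(33_000, 3_439_954, 0.89, 10)
print(f"{ard:+.4f}")   # prints -0.0096
```

An 11% relative reduction thus corresponds here to an absolute change of roughly one percentage point over ten years, which illustrates why relative metrics in isolation can exaggerate the apparent size of a finding.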

It is not clear why Wang and colleagues did not provide such estimates, and further data or analysis from the authors’ dataset might allow for better or additional estimates than those outlined above and in the supplemental file; in the absence of such, however, it remains important to consider what the reported data might mean on an individual level, and the above approach is certainly reasonable.

CONCLUSION

Wang and colleagues’ study ultimately leaves much to be desired, but one should remember they conducted their study in an attempt to help clarify existing uncertainty surrounding this topic, and other research echoes the sentiment of uncertainty.7 So, although sometimes a reflexively-offered sentiment, further – and better – research does seem indicated. Additionally, relative metrics are most useful when appropriately applied to corresponding baseline absolute risks; relative metrics in isolation are considerably less informative and can contribute to distorted appraisal of research findings. This can be readily appreciated via the supplemental file or by simply considering, for instance, the difference between 0.5% and 0.25%: an absolute difference of 0.25 percentage points, but a relative difference of 50%. Relative metrics might also convey important information when pursuing a population-level appreciation of research findings. While this is certainly not irrelevant, clinicians and patients ultimately care most about applying research on an individual level. The additional quantification of Wang and colleagues’ data shows how this can be estimated (at least in the setting of hazard ratios) when estimates of absolute differences are not provided. With specific regard to Wang and colleagues’ study, the estimation of absolute risk differences suggests much less “substantial” findings than their article implies even if one thought their findings were valid; and when one further considers the notable weaknesses in their study, the absolute risk differences seem even less “substantial”.

The considerations herein, although important, are but a whisper amidst a roaring literature pertaining to the execution, translation, and application of medical research. Nevertheless, this writing hopefully makes clear the importance of researchers maintaining the utmost care when reporting research, always providing balanced and objective qualitative and quantitative context for their findings; similarly, readers must maintain an exquisitely judicious approach to the appraisal, synthesis, translation, and application of research.

References

  1. Wang DD, Li Y, Chiuve SE, et al. Association of specific dietary fats with total and cause-specific mortality. JAMA Intern Med. 2016;176(8):1134-1145. doi:10.1001/jamainternmed.2016.2417. Epub 2016 Jul 5.
  2. Nissen SE. U.S. dietary guidelines: An evidence-free zone. Ann Intern Med. 2016 Apr 19;164(8):558-559. doi: 10.7326/M16-0035. Epub 2016 Jan 19.
  3. Ioannidis JP. Implausible results in human nutrition research. BMJ. 2013 Nov 14;347:f6698. doi: 10.1136/bmj.f6698.
  4. Hill AB. The environment and disease: association or causation? Proc R Soc Med. 1965;58(5):295-300. PMCID: PMC1898525.
  5. Lucas RM, McMichael AJ. Association or causation: evaluating links between “environment and disease”. Bull World Health Organ. 2005 Oct; 83(10):792-795. PMID: 16283057. PMCID: PMC2626424.
  6. Chowdhury R, Warnakula S, Kunutsor S, et al. Association of dietary, circulating, and supplement fatty acids with coronary risk: a systematic review and meta-analysis. Ann Intern Med. 2014;160(6):398-406. doi: 10.7326/M13-1788.
  7. de Souza RJ, Mente A, Maroleanu A, et al. Intake of saturated and trans unsaturated fatty acids and risk of all cause mortality, cardiovascular disease, and type 2 diabetes: systematic review and meta-analysis of observational studies. BMJ. 2015;351:h3978. doi: 10.1136/bmj.h3978.
  8. Farvid MS, Ding M, Pan A, et al. Dietary linoleic acid and risk of coronary heart disease: a systematic review and meta-analysis of prospective cohort studies. Circulation. 2014;130(18):1568-1578. Epub 2014 Aug 26. doi: 10.1161/CIRCULATIONAHA.114.010236.
  9. Jakobsen MU, O’Reilly EJ, Heitmann BL, et al. Major types of dietary fat and risk of coronary heart disease: a pooled analysis of 11 cohort studies. Am J Clin Nutr. 2009;89(5):1425-1432. doi: 10.3945/ajcn.2008.27124.
  10. Mozaffarian D, Micha R, Wallace S. Effects on coronary heart disease of increasing polyunsaturated fat in place of saturated fat: a systematic review and meta-analysis of randomized controlled trials. PLoS Med. 2010;7(3):e1000252. doi: 10.1371/journal.pmed.1000252.
  11. Altman DG, Andersen PK. Calculating the number needed to treat for trials where the outcome is time to an event. BMJ. 1999 Dec 4;319(7223):1492-1495. PMID: 10582940. PMCID: PMC1117211.

Primary Care Corner with Geoffrey Modest MD: New CDC Recommendations for Opiate Prescribing

23 Dec, 15 | by EBM

By Dr. Geoffrey Modest

The CDC just came out with draft guidelines for prescribing opiates for chronic pain (see http://www.regulations.gov/#!documentDetail;D=CDC-2015-0112-0002). These draft recommendations include the following regarding when to initiate or continue prescribing opiates.

  1. Nonpharmacologic therapy and nonopioid meds are preferred for chronic pain. There are no data supporting chronic opiates, so they are hard to recommend given their known risks, except for this little caveat: “no study of opioid therapy versus placebo, no opioid therapy, or nonopioid therapy for chronic pain evaluated long-term (>1 year) outcomes related to pain, function, or quality of life. Most placebo-controlled randomized trials were <= 6 weeks in duration”. So, no data really. There is a comment that it’s okay for end-of-life care (commenting that “evidence of long-term opioid therapy for chronic pain outside of end-of-life care remains limited”), which does suggest there may be benefit (and I’m not sure what the real difference in subjective pain is, comparing those at end-of-life and those not). My point is that there are basically no data, that in my experience there are patients with really bad chronic pain who pretty clearly benefit from opiates and sometimes higher doses, and this puts us providers in a bind. There is no question to me that trying nonpharmacologic therapy is really important (PT, weight loss in those with knee pain etc., massage/manipulation, psych therapy and esp CBT, exercise, and combinations of these). And non-opioid therapies often help (acetaminophen, etc., though I am concerned that prolonged NSAID use has its very real problems for the GI tract and heart especially, and significant mortality); I should add that some of these drugs (e.g. salsalate) are off the Medicare-approved list for unknown reasons, are benign, and sometimes work well. And local steroid injections often give reasonably long-term relief in my experience (e.g. joint injections, trigger point injections), as do other pain meds such as tricyclics, anticonvulsants (pregabalin, gabapentin, carbamazepine), and SNRIs.
I would further reinforce avoiding opiates unless really needed: the above often work, and it is clear that opioids do have significant harms (abuse/overdoses, MIs, car accidents…)
  2. Before starting opioids for chronic pain, prescribers should establish realistic treatment goals with patients in terms of pain and function. This applies to pain lasting >3 months or past the time of normal tissue healing
  3. There should be periodic discussions with patients taking opioids about the risks and realistic benefits of continued use, as well as the patient and provider responsibilities for managing therapy. This includes safety issues, which might be uncovered by looking at the prescription drug monitoring program (PDMP). Again, the issue is: unknown benefit (i.e., not studied, though the patient may attest to the benefit) but clear risks
  4. When starting opioids, use immediate-release ones, not the extended-release ones. (The latter have likely increased potential for overdose, and, as I have mentioned in earlier blogs, there really are no data showing that long-acting ones are better, either more effective or safer.) If you decide to switch from a short-acting to a long-acting formulation and are also switching opioids, remember that there is incomplete cross-tolerance, so the dose of the long-acting med should be reduced. Also, given the above, they recommend NOT giving long-acting along with short-acting opiates (this is pretty different from the old model: give long-acting to get a steady state of opiates, then short-acting for “breakthrough” pain). Also, given the [not-so-scientific] data finding more deaths with methadone, that should not be the first agent to use for a long-acting one.
  5. Use the lowest effective dose of opiates. And especially if increasing the dose to >50 morphine milligram equivalents (MME)/day. And “generally avoid increasing the dosage to >=90 MME/day”. One interesting contradiction is that methadone maintenance programs often have people above 100mg methadone/day (that is, >300 MME) for the longterm. In fact I have a chronic pain patient who is in a methadone program and on 70mg for the past many years. Given the presumed benefit of TID dosing of methadone for chronic pain, I appealed to the Medicaid program in Massachusetts so I could give him 70mg of methadone in divided doses at the health center but was unable to get approval for more than 60mg, the Medicaid max. However, I was told I could give him 60mg of methadone and an almost unlimited amount of oxycodone along with it….. The above dosage restriction, as pointed out in prior blogs, comes from ecological data showing that those on higher doses (e.g. >100 MME/day) have higher risk of overdoses and deaths. But, again, there are NO (as in zero) randomized controlled studies looking at the benefits of higher vs lower doses. And I certainly have some chronic pain patients who are on high doses for a long time and who are very willing to take risks in order to get “better pain relief and function”, from their perspective (part of the issue may be genetic variants in mu receptors – see past blogs as listed at the end). Also, we should consider giving naloxone kits to patients on opioids in case there is an overdose, esp if they are on higher doses of opiates.
  6. Chronic opiate use begins with acute pain therapy. So, for acute pain, we should also give the lowest dose possible and shortest duration of immediate-release opiates (and ERs should not blithely prescribe opiates). And “3 or fewer days usually will be sufficient for most nontraumatic pain not related to major surgery”. The 3-day limit is largely “expert opinion”, though there was a study in patients with acute low back pain showing that there was usually a significant decrease in pain by the 4th day. And another, which I blogged on recently (see blogs below), did not find that opiates were in fact better than nonopiates for acute low back pain.
  7. Evaluate benefits and harms within 1-4 weeks of starting opiates and at least every 3 months thereafter. There may be utility in using validated scales to assess function, pain control and quality of life (e.g. the PEG scale: Pain average, interference with Enjoyment of life, and interference with General activity). The recommended rate of tapering doses is not clear: some suggest rapid tapers over 2-3 weeks in those with severe adverse events (e.g. overdose), while others recommend slower tapers at 10%/week.
  8. Before starting and periodically thereafter, evaluate risk factors for opiate-related harms.
  • Patients with sleep-disordered breathing: the issue is opiate-related respiratory depression; those with moderate-to-severe sleep-disordered breathing should probably not receive opiates
  • Pregnant women: avoid initiating opiates during pregnancy, since they are associated with stillbirth, poor fetal growth, pre-term delivery, neonatal opioid withdrawal syndrome and birth defects. And for those pregnant and on opiates chronically, be careful about tapering (risks to patient and fetus if patient goes into withdrawal). Also a potential issue with breast-feeding: neonatal toxicity and death have been reported when mothers take codeine
  • Patients with renal or hepatic insufficiency — use more caution and increased monitoring, given decreased processing/clearing of drugs
  • >65 yo: opiates may be more dangerous, given reduced renal function. Also, more opiate-related confusion.
  • Mental health issues: untreated depression could lead to overdoses (suicide, or confusion). Anxiety treated with benzos adds toxicity when given together with opiates. And, though not mentioned in the recommendations, those under stress or not sleeping well experience more pain (i.e., best to try to help with underlying issues here)
  • Patients with substance use disorders — illicit drugs and alcohol increase likelihood of opioid-related overdose deaths
  • Again, consider giving naloxone to those who are at higher risk of overdose
  9. Review the PDMP to see if the patient is receiving high dose opiates or other meds that put him/her at higher risk. This should be done at least every 3 months. (Though I would add that there are a few problems here: important people involved in prescribing and monitoring those on opiates are not allowed access, at least in Massachusetts, such as nurses and nurse practitioners/physician assistants; the pharmacy data are not updated as quickly as they should be; navigating the website is not easy, and one has to click on the same patient many times if they list different addresses; it is hard to get data on patients who go to other states for opiates or get them through the VA system; and it really takes a lot of time doing so in a busy primary care session, hence the benefit of giving nurses access.)
  10. Check a urine drug screen prior to starting opiates, and "consider" doing one at least annually thereafter. These are important for a variety of reasons, including patient safety.
  11. Avoid prescribing opiates to patients on benzodiazepines. This is based on a lot of observational data, and as pointed out in some prior blogs, those on benzos by themselves may have underlying psych conditions with significant associated mortality. But opiates and benzos in combination are likely to produce more respiratory depression. In stopping the benzos, it is very important to taper slowly (e.g. decrease the dose by not more than 25% every 1-2 weeks).
  12. Arrange treatment for patients with opioid use disorder (e.g. with methadone or buprenorphine). I believe that all of us who prescribe buprenorphine are very impressed with the results in the majority of patients; I really feel it is one of the few interventions I do which truly gives patients back their lives. And, as with the PDMP, I can see no reason why nurse practitioners/physician assistants/medical residents should not be able to prescribe buprenorphine, both because it is so effective in so many people, and because these providers are allowed to prescribe much more potentially dangerous meds anyway (oxycontin, methadone, etc.).
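To make the dose-threshold and taper arithmetic above concrete, here is a rough sketch. The conversion factors (especially for methadone, whose MME conversion is dose-dependent) and the schedule parameters are illustrative assumptions only, not dosing guidance:

```python
# Rough sketch of the MME thresholds and the ~10%/week taper discussed above.
# Conversion factors are crude illustrative approximations (methadone in
# particular converts at a dose-dependent rate); this is NOT dosing guidance.

MME_PER_MG = {"morphine": 1.0, "oxycodone": 1.5, "hydrocodone": 1.0, "methadone": 3.0}

def daily_mme(drug, mg_per_day):
    """Approximate morphine milligram equivalents per day."""
    return MME_PER_MG[drug] * mg_per_day

def taper_schedule(start_mg, weekly_fraction=0.10, floor_mg=5):
    """Dose at each week of a fixed-percentage taper, down to a floor dose."""
    doses, dose = [], start_mg
    while dose > floor_mg:
        doses.append(round(dose, 1))
        dose *= (1 - weekly_fraction)
    doses.append(floor_mg)
    return doses

# e.g. 100 mg/day of methadone is roughly >300 MME/day under this crude factor
print(daily_mme("methadone", 100))   # 300.0
print(taper_schedule(90)[:4])        # [90, 81.0, 72.9, 65.6]
```

Note that a fixed-percentage taper never reaches zero on its own, hence the explicit floor dose before stopping.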

So, my bottom line: there is no doubt that opioids are dangerous, both to the patient and to society (through diversion, availability on the streets, overdoses, crime). And this danger is very likely increased with higher doses of opiates, or their combination with other meds (e.g. benzos). But there really are very few scientific data to inform these guidelines, making it hard for those of us in the trenches to accept the "expert opinion" when we have patients in front of us with inadequately treated chronic pain. And, I think, pain is pain, whether it is in cancer patients, those at the end of life, or those who fall off a ladder. So, I am a strong advocate for pretty much all of the above guidelines: especially trying to avoid opiates whenever possible, using adjunctive therapies including injections, trying to avoid benzos, giving the lowest opiate dose possible, and educating patients on risks and benefits (emphasizing that the benefit of opiates is rarely complete or near-complete pain relief). And I have even had several patients come off chronic opiates, some having been on them for years. But there is no question that in our practice this issue of treating chronic pain is remarkably common and remarkably difficult (and remarkably hard to address in the context of a brief primary care visit, where we also deal with depression/psych issues, hypertension, homelessness or other profound social issues, diabetes, illicit drug use, domestic violence, ……..)

 

For other blogs:

http://blogs.bmj.com/ebm/2015/11/10/primary-care-corner-with-geoffrey-modest-md-prescribed-opioids-and-future-prescription-opioid-misuse-in-teens/ shows that teens given legit prescribed opiates are more likely to misuse opiates later in life

http://blogs.bmj.com/ebm/2015/11/06/primary-care-corner-with-geoffrey-modest-md-opiates-for-acute-low-back-pain/ finding unclear benefit of giving opiates

http://blogs.bmj.com/ebm/2015/06/17/primary-care-corner-with-geoffrey-modest-md-mass-med-society-opioid-prescription-guidelines/ which includes many of my comments about the lack of studies on opiates and the risks of developing strict guidelines in their absence (many more comments than above)

http://blogs.bmj.com/ebm/2015/03/16/primary-care-corner-with-geoffrey-modest-md-feel-good-gene/​ which looks at some genetic variants (e.g. in the mu receptor) and their effects on individual’s drug use

Primary Care Corner with Geoffrey Modest MD: On-demand HIV Pre-exposure Prophylaxis

18 Dec, 15 | by EBM

By Dr. Geoffrey Modest

A recent trial looked at the effect of on-demand short courses of TDF/FTC (tenofovir/emtricitabine) in preventing HIV transmission in men at high risk (see N Engl J Med 2015;373:2237-46). There was a prior blog on pre-exposure prophylaxis from the PROUD trial, with continuous use of TDF/FTC, and included a brief report on the IPERGAY trial prior to publication — see http://blogs.bmj.com/ebm/2015/10/05/primary-care-corner-with-geoffrey-modest-md-hiv-pre-exposure-prophylaxis/​ . The full report on IPERGAY was just released, and I think it is so important that it is worth reviewing in some detail.

  • 400 men without HIV infection but who had a history of at least 2 encounters of unprotected anal sex with men ​within the past 6 months. Those with hepatitis B or C were excluded, as well as those with eGFR <60ml/min, ALT > 2.5 times normal, and glycosuria or proteinuria >1+ on urine dipstick.
  • Intervention: those randomized to active drug took TDF/FTC (300mg/200mg), 2 pills 2-24 hours prior to sex, then a third pill 24 hours later and a fourth 24 hours after the third, vs placebo on the same schedule. In cases of multiple sexual exposures, patients were instructed to take 1 pill/day until the last exposure, then the 2 post-exposure pills. If there was another exposure within 1 week of completing the previous 4-pill course, the patient took only 1 initial pill prior to that exposure. All participants received condoms and risk-reduction counseling from a peer community member
  • Demographics: mean age 35, 90% white, 73% not in a monogamous relationship, 8% in a relationship with an HIV-1-positive partner, 72% with postsecondary education, 45% used recreational drugs, and 23% had >5 alcoholic drinks/d. There were 5 enrollment sites in France and one in Montreal; participants were followed a median of 9.3 months
  • Results:
    • 56 of the patients received post-exposure prophylaxis (31 in TDF/FTC group, 25 in placebo)
    • Median of 15 pills taken/month in each group
    • ​Adherence — of 113 patients tested: 86% had TDF and 82% FTC in their blood, consistent with taking the med within the past week. TDF and FTC were found in 8 people on placebo, 3 of whom were on postexposure prophylaxis. Overall adherence: 28% did not take TDF/FTC or placebo; 29% took suboptimal doses, and 43% took the assigned drug correctly
    • No difference in sexual practices from before to during the study, though the placebo group did have a significant decrease in the number of sexual partners
    • 40% had a new sexually-transmitted infection during the study (20% got chlamydia, 22% gonorrhea, 10% syphilis, 1% hep C), confirming continued high-risk sexual practices
    • ​16 HIV infections happened after enrollment: 2 in the TDF/FTC group (0.91/100 person-yrs) and 14 in the placebo group (6.60/100 person-yrs), with relative risk reduction of 86% (36-97%, p=0.002)
    • The 2 HIV infections in the TDF/FTC​ group were in men nonadherent to pre-exposure prophylaxis (pill counts showed no meds taken and no meds were detected in the serum at the time of HIV diagnosis)
    • Adverse events: no significant difference in the frequency of grade 3 or 4 adverse events (which were pretty much nonexistent), though there were more GI adverse events, especially nausea (8% on TDF/FTC vs 1% on placebo) and abdominal pain (7% vs 1%). There was a transient decrease in eGFR to <60 in 2 people on TDF/FTC
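The headline 86% efficacy figure can be reproduced directly from the reported incidence rates; this is a quick sanity check, not a reanalysis of the trial data:

```python
# Sanity-check the IPERGAY headline number from the reported incidence rates.
tdf_ftc_rate = 0.91   # HIV infections per 100 person-years, TDF/FTC arm
placebo_rate = 6.60   # HIV infections per 100 person-years, placebo arm

relative_risk = tdf_ftc_rate / placebo_rate
relative_risk_reduction = 1 - relative_risk
print(f"RRR = {relative_risk_reduction:.0%}")   # RRR = 86%, matching the paper
```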

So, remarkably similar results to the PROUD trial with daily TDF/FTC. A few points:

  • These results are also similar to the iPrex trial (see Lancet Infect Dis 2014; 14: 820), which in their open-label extension found that those who had intracellular TDF levels consistent with taking at least 4 tablets/week had no incident HIV infections, very similar to the number of pills in IPERGAY (median of 15 pills/month), and adding to the validity of the IPERGAY results.
  • IPERGAY was a really short-term study, which may not only overestimate effectiveness in clinical practice (over time, patients may become weary of taking meds each time they have sex) but also underestimate adverse effects (too short a med exposure to see chronic renal disease, for example). It also raises the question of whether we need to monitor for adverse effects with intermittent dosing of TDF, and how to do so (e.g., see the blog http://blogs.bmj.com/ebm/2015/07/23/primary-care-corner-with-geoffrey-modest-md-tenofovir-nephrotoxicity/ on how to monitor TDF renal toxicity in those on daily TDF)
  • But it seems from all of the pre-exposure prophylaxis studies, that TDF/FTC works, and if you look at the subset of patients who actually take the drug, it essentially always works.
  • Overall, in these studies, there were not a lot of HIV infections, but it would be really interesting to know if even this regimen works in patients who have TDF- or FTC-resistant viruses, given that in some areas resistance is pretty common (i.e., it would be useful to have a trial in a community of people with frequent HIV resistance mutations). It may well be that people need only a single active agent, so that the combo TDF/FTC would still work for many people in communities with resistant HIV infection. For example, in a South African study of women using tenofovir vaginal gel, there was a 54% lower rate of HIV acquisition in those who were highly adherent to the treatment (see Science 2010;329(5996):1168) – i.e. using TDF only
  • It would be really useful to know if starting the TDF/FTC right after unprotected sex worked, since my guess is that adherence would be better than anticipating sex and remembering to take the pills at least 2 hours before.
  • Also, these studies with TDF/FTC were done in predominantly white, educated MSM communities, so would be useful to see how this on-demand approach works in poorer communities, in communities where the predominant spread is heterosexual or through drugs, and in minority communities. ​

_______________________

I will add a comment from Jon Pincus about a blog sent out on the new release of meds with TAF (tenofovir alafenamide, which has less renal and bone toxicity, and is as effective as TDF) — see http://blogs.bmj.com/ebm/2015/11/17/primary-care-corner-with-geoffrey-modest-md-new-hiv-1-drug-approved-by-fda/

 

So I know you don’t buy into big pharma conspiracies but……..

 

Odd that Gilead is releasing the combo pill long before the individual TAF or TAF/FTC pill.  TAF I believe is due for an FDA vote in April 2016.  

 

Coincidence that TDF patent is set to expire I believe in 2017 and that Gilead’s blockbuster single pill HIV drug, Atripla, was just bumped off its pedestal in the last Guidelines and is losing market share to triumeq [that is: abacavir/dolutegravir/lamivudine; and for the last guidelines, see http://blogs.bmj.com/ebm/2015/04/17/primary-care-corner-with-geoffrey-modest-md-updated-hiv-guidelines-2015/​ for details]

 

All just coincidence I’m sure……

Primary Care Corner with Geoffrey Modest MD: Uric Acid Lowering Cardiovasc Benefit, And An Evolutionary Perspective

3 Nov, 15 | by EBM

By Dr. Geoffrey Modest

  1. There have been a slew of articles over the years either implicating or exonerating uric acid as a cardiovascular risk factor. Some studies have found it to be an independent risk factor, and some have suggested that its role was through its association with metabolic syndrome/diabetes, obesity, hypertension, endothelial dysfunction, oxidative stress and/or low-level chronic inflammation. A recent observational study from Taiwan strongly supports a more direct relationship between uric acid and cardiovascular disease (CVD), with amelioration by uric acid lowering agents (see J Rheumatol 2015; 42: 1694)​.
  • Results:
    • Patients with gout not treated with ULT vs controls had increased CVD mortality [HR 2.43 (1.33-4.45)] and all-cause mortality [HR 1.45 (1.05-2.00)]
    • Patients with gout treated with ULT vs those with gout not treated with ULT had strikingly decreased CVD mortality [HR 0.29 (0.11-0.80)] and all-cause mortality [HR 0.47 (0.29-0.79)]
    • Overall, there was no significant difference between survival in patients with gout but on ULT, and the reference group without gout
    • ​Results were independent of whether allopurinol or benzbromarone was used
  • Assessed:
    • Mortality rates compared 1189 patients with gout who did not receive ULT vs those with neither gout nor ULT (matched for age, sex)
    • Mortality rates compared 764 patients with gout who received ULT vs 764 patients with gout who did not receive ULT
  • Details:
    • 40,623 adults (mean age 50, 62% male, BP 128/76, chol/HDL=200/44, eGFR 75, BMI 24, 10% daily drinkers, 10% daily smokers) in prospective case-matched cohort study, followed 6.5 years
    • Gout was treated with uric acid lowering therapy (ULT) with either allopurinol or benzbromarone (a potent uricosuric med which was withdrawn from the market because of serious hepatotoxicity, though was still used in some countries – esp. in Asia where HLA-B*5801 haplotypes are common, which are associated with allopurinol hypersensitivity reactions)
    • Baseline uric acid levels were 8.1 in those with gout on ULT, 6.5 in those with gout not on ULT, and 5.7 in the non-gout reference patients. It is not clear what uric acid level was achieved on meds (the article and the supplementary tables are remarkably opaque on the on-therapy uric acid numbers).

2. An interesting article was just published in Scientific American on “The Fat Gene” which implicates uric acid in obesity, diabetes, hypertension… (see http://www.nature.com/scientificamerican/journal/v313/n4/full/scientificamerican1015-64.html​). Their argument is as follows:

  • Over the past 50 years, the “thrifty gene” hypothesis has been in and out of vogue, the hypothesis being that in times of food shortage there was a selection bias to a genetic variant which made the body more efficient in handling food, with increased storage of food as fat to be used in times of scarcity, but then leading to obesity in our modern era of high caloric processed food being plentiful and a more sedentary lifestyle. However, there was a lack of evidence of prolonged periods of human starvation
  • But, more recent data has suggested that in fact there were significant changes in food availability and starvation of great apes in global cooling periods around 10-20 million years ago
  • These authors hypothesize that the genetic change in great apes and humans was a nonfunctional mutation of the uricase enzyme, which seems to have occurred at about this same time period (and suggests that great apes and humans had a common ancestor)
  • Consistent with this, humans (even those with non-Western habits) and great apes do have higher uric acid levels than other animals, though uric acid levels are much higher in humans with Western diets and sedentary habits
  • Early animal studies (rats) found that blocking uricase activity led to hypertension, and lowering the uric acid decreased the blood pressure
  • There are also several human studies. Uric acid levels are high in obese adolescents with newly diagnosed hypertension (90% in one study). In a double-blind, placebo-controlled crossover trial, 30 individuals with new hypertension and uric acid >6 mg/dl were treated with allopurinol 200mg bid vs placebo for 4 weeks and had significant decreases in blood pressure, with 20 of the 30 achieving normal blood pressure on allopurinol vs 1 on placebo (see JAMA 2008; 300(8): 924). These same authors wrote a longer treatise in NEJM, presenting lots of data suggesting that hyperuricemia is a cause of hypertension: animal studies finding that uric acid can cause microvascular renal disease independent of hypertension; hyperuricemia preceding and seeming to be related to development of the metabolic syndrome (animal studies show that decreasing uric acid levels can prevent or reverse the metabolic syndrome); and, in humans, part of the cardioprotection of losartan in the LIFE study and atorvastatin in the GREACE study being related to their ability to lower uric acid levels (see N Engl J Med 2008; 359: 1811 for a pretty exhaustive/exhausting review of the data).
  • Fructose, largely from fruits and honey (and now often from high-fructose corn syrup!!) does a few things:
    • It is the only sugar which is metabolized to form uric acid
    • In animals, it is associated with increased appetite and fat accumulation (fructose blunts the effects of leptin, a hormone which decreases appetite)
  • So, the proposed mechanism evolutionarily is: eating fructose, mostly from fruits (which many hibernating animals do prior to hibernation) leads to more fat deposition, and in animals with non-functioning uricase (which they think is the “Fat Gene”), leads to higher uric acid levels. These increasing uric acid levels provide an evolutionary advantage in times of starvation (adding to the effect of the fructose itself), resulting in an insulin resistant state, which decreases immediate energy production and leads to more accumulated fat for future energy, and also by increasing the blood sugar, giving the brain more glucose for its function in times of food scarcity. And there are some animal data suggesting that the main culprit may be the uric acid produced by the fructose: giving the animals a high-fructose diet and also allopurinol blocks many of the features of the metabolic syndrome.​

So, it seems from the above studies that there really is a plausible role of uric acid per se playing a major part in our current prevalent issues of obesity, metabolic syndrome/diabetes, hypertension, and cardiovascular disease. Of concern, average daily intake of fructose has doubled in past 30 years, with adolescents consuming 73 grams/d (12% of their caloric intake). There is a linear trend of increasing fructose consumption and decreasing HDL levels and increasing triglycerides. Small studies and the above Taiwan observational study support the efficacy of lowering uric acid levels. But, it really would be helpful to have a large clinical trial testing the clinical efficacy of lowering uric acid levels.  Then we might target lowering uric acid levels themselves, even without gout or the various uric acid-related nephropathies. The first approach would be to limit fructose and especially high-fructose corn syrup from the diet. My experience over the past couple of years confirms that just by eliminating sodas (high in high-fructose corn syrup), there is a significant decrease in serum uric acid levels (I just saw a patient whose uric acid went from 8.5 to 6.9 by eliminating sodas). And, in general promoting a healthy lifestyle with more exercise and more freshly prepared foods.

Primary Care Corner with Geoffrey Modest MD: USPSTF Guidelines on Blood Pressure Screening

2 Nov, 15 | by EBM

By Dr. Geoffrey Modest

The USPSTF just came out with their final recommendations about screening for high blood pressure in adults (see doi:10.7326/M15-2223), an issue not addressed in JNC8 (Eighth Joint National Committee).

Recommendations:

  • The current recommendation, unlike the previous USPSTF ones, assessed the diagnostic accuracy of different blood pressure measurement protocols
  • Grade A recommendation was given to “screen for high blood pressure; obtain measurements outside of the clinical setting for diagnostic confirmation (my emphasis)”
  • Perform a risk assessment for developing hypertension: those at highest risk are those with high-normal blood pressure (130-139/85-89), those who are overweight or obese, and African-Americans
  • Screening tests: office BP measurement done by manual or automated sphygmomanometer. Make sure proper protocol is used: use the mean of 2 measurements when patient is seated, allow for >= 5 minutes between entry into the office and blood pressure measurement (my emphasis), use the right size cuff, place patient’s arm at the level of the right atrium. Multiple measurements are most predictive of high blood pressure. “ambulatory and home blood pressure monitoring can be used to confirm a diagnosis of hypertension after initial screening”
  • Screening interval: adults >40yo should be screened annually, those 18-39 with blood pressure <130/85 and no other risk factors can be rescreened every 3-5 years
  • Treatment and interventions: for nonblack patients, initially use thiazide diuretic, calcium-channel blocker, ACE-I or ARB. For black patients, use thiazide or calcium-channel blocker. Initial or add-on treatment for patients with chronic kidney disease consists of ACE-I or ARB (but not both)
  • Balance of benefits and harms: net benefit of screening is substantial

Several Points:

  • The main issue I see regularly and repeatedly with the diagnosis and treatment of hypertension, by far, is that we rely on the blood pressure readings right after the patient is brought into the examining room, whether done by a medical assistant, nurse, or provider. In my experience I find very dramatic differences when the patient sits for 5 minutes in a quiet room (i.e., there are huge discrepancies between my own measurement right away, which can be 30+mm Hg higher than when I recheck 5 minutes later, perhaps because the patient is deconditioned and is walking to the exam room, or they are anxious about the exam, etc…). I also reinforce to the patient that if home-based blood pressure monitoring (HBPM) is being done, I have the patient bring the cuff to my office to make sure it is accurate vs my manual measurement and I suggest that the patient should sit down and relax several minutes before checking their pressure (I also suggest that the patient wait quietly a few minutes if they go to a pharmacy to have the blood pressure checked).
  • “USPSTF found convincing evidence that ABPM is the best method for diagnosing hypertension” noting that 15-30% of patients diagnosed with hypertension have lower blood pressure outside the office — see first figure below. (Of note, the accuracy of office-based blood pressure measurements does increase by averaging many different  measurements)
  • The evidence is less substantial for home-based monitoring but confirms that “HBPM may be acceptable”.  So, ABPM is the “reference standard” and HBPM is “an alternative method of confirmation if ABPM is not available”. See the 3 figures below, which provide evidence for their recommendations regarding ABPM and HBPM
  • For treatment goals, the USPSTF states that a target of 150/90 be used for people >60yo, and a goal of 140mm systolic be used in those <60yo (similar to the JNC 8 recommendations), though they mention the new SPRINT trial but are withholding incorporating it until it is published (for the SPRINT trial, see http://blogs.bmj.com/ebm/2015/09/28/primary-care-corner-with-geoffrey-modest-md-aggressive-blood-pressure-management/ )
  • BUT, though they embrace ABPM/HBPM (which I really support), there are several significant lacunae in their recommendations, from my perspective
  • Though ABPM/HBPM is an important diagnostic confirmation of hypertension, it is only useful in those patients with nearly normal blood pressures (i.e., the higher the office-based blood pressure, the less helpful ABPM is; there is no need to do an ABPM if someone comes into the office at 230/130….)
  • And, related to that, for some reason they delete the comment in their draft recommendations that there should be immediate treatment for some people (e.g. BP>180/110)
  • They do not even mention lifestyle changes in their treatment section, but jump right into meds
  • The treatment recommendations also do not even mention diabetics (seems like the treatment recommendations are really an afterthought to their primary task of screening, and are not very complete)
  • I will refer you again to the NICE recommendations from 2011, which are quite extensive and, I think, really very thoughtful
  • See prior blogs for a review of ambulatory blood pressure monitoring (ABPM) and the draft USPSTF recommendations  from early this year, which includes some of the NICE guideline recommendations (http://blogs.bmj.com/ebm/2015/01/15/primary-care-corner-with-geoffrey-modest-md-uspstf-recs-on-ambulatory-blood-pressure-monitoring/​ ) and of the JNC8 recommendations (see http://blogs.bmj.com/ebm/2013/12/22/primary-care-corner-with-dr-geoffrey-modest-jnc-hypertension-guidelines-simple-goals/​ )
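To make the measurement protocol and the age-dependent treatment targets above concrete, here is a minimal sketch; the function names are my own, and pairing the under-60 systolic goal of 140 with a diastolic of 90 is my assumption (the USPSTF statement specifies only the systolic goal for that group):

```python
# Sketch of the screening protocol and age-dependent targets discussed above.
# Function names/structure are illustrative, not from the USPSTF document.

def office_bp(readings):
    """Mean of seated measurements taken after >=5 minutes of rest,
    per the USPSTF protocol of averaging 2 measurements."""
    systolics = [s for s, d in readings]
    diastolics = [d for s, d in readings]
    return (sum(systolics) / len(readings), sum(diastolics) / len(readings))

def above_target(systolic, diastolic, age):
    """JNC8-style targets: 150/90 if >60 yo, else 140/90 (diastolic
    pairing for <60 yo is an assumption)."""
    sys_target = 150 if age > 60 else 140
    return systolic >= sys_target or diastolic >= 90

sbp, dbp = office_bp([(148, 88), (142, 84)])   # mean of 2 seated readings
print(sbp, dbp)                       # 145.0 86.0
print(above_target(sbp, dbp, age=55))  # True  (over the 140 systolic goal)
print(above_target(sbp, dbp, age=65))  # False (under the 150/90 goal)
```

Note how the same mean reading is above target for a 55-year-old but not for a 65-year-old, which is exactly why the age cutoff matters clinically.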

Proportion of elevated office blood pressure readings that are confirmed as hypertension by ABPM or HBPM


Risk of cardiovascular outcomes and death: 24-h ambulatory monitoring of systolic blood pressure, adjusted for office blood pressure.


Risk of cardiovascular outcomes and death: home monitoring of systolic blood pressure, adjusted for office blood pressure.​


 

Primary Care Corner with Geoffrey Modest MD: Heart Failure Outcome and CHADS-VASc Risk Score, Even if Not in Afib

24 Sep, 15 | by EBM

By Dr. Geoffrey Modest 

The CHA2DS2-VASc score is perhaps the best metric for predicting thromboembolic complications in patients with atrial fibrillation. This study assessed this tool for a variety of clinical outcomes in patients with heart failure, both with and without atrial fibrillation (see doi:10.1001/jama.2015.10725).

Details:

  • Danish registry study of 42,987 patients (all >50yo, mean age 75) with incident heart failure (HF), not on anticoagulation, of whom 21.9% had concomitant atrial fibrillation (afib), from 2000-2012
  • Assessed relation between CHA2DS2-VASc score and ischemic stroke, thromboembolism (TE) and death within 1 year of HF diagnosis

Results:

  • In patients without afib, the risk of ischemic stroke was 3.1% (n=977), of TE 9.9% (n=3178), and of death 21.8% (n=6956), with stratification by CHA2DS2-VASc score (maximum score 9)
    • Ischemic stroke: by CHA2DS2-VASc​ score of 1 through 6, the one year absolute risks were:
      • With afib: 4.5%,  3.7%, 3.2%, 4.3%, 5.6%, 8.4%
      • Without afib: 1.5%, 1.5%, 2.0%, 3.0%, 3.7%, 7%
    • All-cause death:
      • With afib: 19.8%, 19.5%, 26.1%, 35.1%, 37.7%, 45.5%
      • Without afib: 7.6%, 8.3%, 17.8%, 25.6%, 27.9%, 35.0%
    • At CHA2DS2-VASc score​>=4, absolute risk of TE was high regardless of presence of afib (e.g. for score of 4, 9.7% and 8.2% for those without and with afib)
  • The negative predictive value for ischemic stroke at 1 year post HF diagnosis was 92% (91-93%) in those with afib and 91% (88-95%) in those without afib

So, this study found that those with HF but without afib are at high risk of ischemic stroke, TE and death; that the CHA2DS2-VASc score was helpful in stratifying these patients and had a moderately high negative predictive value at 1 year post HF diagnosis; and that those with CHA2DS2-VASc score >=4 had a high absolute risk of TE (even higher in those without afib than in those with afib, though it seems they only excluded those on anticoagulation prior to the HF diagnosis). On subgroup analysis, there was no association between female sex and increased risk of ischemic stroke in patients either with or without afib (actually, of the individual components of the CHA2DS2-VASc score as noted below, female sex was somewhat protective in the group without afib and was not associated with ischemic stroke in those with afib; so there seem to be differences depending on the individual components of the score).

In general, in patients with afib, a stroke risk of >1%/yr is typically used as the cutpoint in identifying benefit from anticoagulation (i.e., tends to outweigh risks); in this Danish study the risk of ischemic stroke in those without afib was approx 1.5%/yr with CHA2DS2-VASc score>1. However, it is important to comment that it is not clear what the cutpoint should be in those without afib, though there are other studies showing that those with HF without afib are at increased risk of stroke and TE, and that these clinical events are decreased with warfarin therapy.

One clear concern is that this study does not have data on the LV ejection fraction (EF). Are the ones with terrible EFs the ones who get TE? And does the CHA2DS2-VASc score, which it seems would correlate mostly with vascular risk, just pick out those with ischemic cardiomyopathy/low EF (i.e., are those with low EFs, who are more likely to have embolic events because of LV clots and/or stasis, being identified by the CHA2DS2-VASc score, when really just the EF itself is important)? There are some studies in the literature suggesting that in those with definite HF (recent decompensation requiring hospitalization), HF itself was a significant independent risk factor for stroke/systemic embolism irrespective of LV systolic function, with an overall rate of stroke of 1.5-2.4%/year, perhaps related to the finding that those with HF but without afib have higher levels of pro-coagulants and pro-inflammatory factors such as elevated b-thromboglobulin, thrombin-antithrombin III complexes, and D-dimers (see Clin Ther. 2014; 36: 1135-44). Other studies have also found that the CHA2DS2-VASc score predicted clinical events even in patients without HF: in a 4.1-year study of 20,970 patients discharged with a diagnosis of acute coronary syndrome without known afib in a Canadian registry, 453 (2.2%) had a stroke or TIA, with an annual incidence >=1% in those with CHA2DS2-VASc score >=4 (e.g., see Heart 2014; 100: 1524-30).

Another concern is that those with HF and high ​CHA2DS2-VASc score but without afib on initial evaluation may actually have intermittent afib leading to the adverse clinical events. For example, identifying those with intermittent afib by an event monitor might find those at high risk for TE, allowing for targeted anticoagulant therapy.

So, bottom line: HF is a bad disease, with 45-60% 5-yr mortality. This Danish study is an observational one with a limited database (it does not have the ejection fraction, or know if the patient smoked, or drank alcohol, or…). It seems to me that, given the high incidence of HF and its high mortality, there really should be a randomized controlled trial of anticoagulation vs not in those with HF and no evident afib. And, perhaps as part of this study, it would also be useful to use event monitors to identify those with HF and intermittent afib, to see if they might be the patients who really benefit from anticoagulant therapy.

Here is the CHA2DS2-VASc scoring system:

  • C (congestive heart failure/LV dysfunction): 1 point
  • H (hypertension): 1 point
  • A2 (age >=75): 2 points
  • D (diabetes mellitus): 1 point
  • S2 (prior stroke/TIA/thromboembolism): 2 points
  • V (vascular disease: prior MI, PAD, aortic plaque): 1 point
  • A (age 65-74): 1 point
  • Sc (sex category, female): 1 point

Maximum score is 9.
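To make the arithmetic of the score concrete, here is a minimal sketch of how the components sum; the function and argument names are ours, for illustration only, not from any clinical software:

```python
def cha2ds2_vasc(chf, htn, age, diabetes, stroke_tia_te, vascular, female):
    """CHA2DS2-VASc stroke-risk score (range 0-9); illustrative sketch."""
    score = 0
    score += 1 if chf else 0            # C: congestive heart failure / LV dysfunction
    score += 1 if htn else 0            # H: hypertension
    if age >= 75:                       # A2: age >=75 gets 2 points...
        score += 2
    elif age >= 65:                     # ...A: age 65-74 gets 1 point
        score += 1
    score += 1 if diabetes else 0       # D: diabetes mellitus
    score += 2 if stroke_tia_te else 0  # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular else 0       # V: vascular disease (MI, PAD, aortic plaque)
    score += 1 if female else 0         # Sc: sex category (female)
    return score

# Example: a 76-year-old woman with HF and hypertension scores 1+1+2+1 = 5
print(cha2ds2_vasc(chf=True, htn=True, age=76, diabetes=False,
                   stroke_tia_te=False, vascular=False, female=True))  # → 5
```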

Primary Care Corner with Geoffrey Modest MD: Antibiotic Overprescribing

3 Aug, 15 | by EBM

By: Dr. Geoffrey Modest

An array of recent articles highlighted the issue of antibiotic overuse (and the increasing potential for antibiotic resistance).

Background: the CDC in 2013 released a report detailing the burden of antibiotic resistance: 2 million antibiotic-resistant illnesses and 23,000 deaths yearly in the US.

  1. CDC researchers published a study of outpatient prescriptions dispensed in 2011, using the IMS Health Xponent database, which contains >70% of all outpatient prescriptions in the US, including all payers, from community pharmacies and nongovernmental mail service pharmacies (see Clinical Infectious Diseases 2015;60(9):1308-16). >60% of antibiotic expenditures are in the outpatient setting, and 58% of all outpatient antibiotic prescriptions are for respiratory infections that are predominantly viral. A total of 262.5 million courses of outpatient antibiotics were prescribed in 2011: an astounding 842 prescriptions per 1000 persons in that year.
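As a quick back-of-envelope check, the reported total and rate are mutually consistent; the US population figure below is implied by the two numbers, not stated in the study:

```python
# Sanity-check the 2011 outpatient prescribing figures quoted above.
courses = 262.5e6        # total outpatient antibiotic courses dispensed in 2011
rate_per_1000 = 842      # prescriptions per 1000 persons per year
implied_population = courses / rate_per_1000 * 1000
print(round(implied_population / 1e6, 1))  # → 311.8 (million), in line with the 2011 US population
```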


Results:

  • Antibiotic prescriptions by class: penicillins 60.3 million, macrolides 59.1 million, cephalosporins 35.6 million, quinolones 27.6 million, β-lactams with increased activity 21.6 million, tetracyclines 21.1 million, trimethoprim/sulfa 20.3 million
  • Top five agents: azithromycin 54.1 million, amoxicillin 52.9 million, amoxicillin/clavulanate 21.2 million, ciprofloxacin 20.9 million, cephalexin 20.0 million
  • By gender (rate per 1000 persons): female 990, male 672
  • Census region (rate per 1000 persons):  south 931, midwest 897, northeast 848, west 647
  • Age (rate per 1000 persons): under 3yo — 1287, 3-9yo — 1018, 10-19yo — 691, 20-39yo — 685, 40-64yo — 790, >65yo — 1048
  • By provider type (rate per 1000 persons): family practice 667, pediatrics 598,  emergency medicine 427, internal medicine 383
  • By county-level demographics: adjusted odds ratio of 0.6 in counties with the highest % of residents with 4 years of college, 0.5 for the highest 1/3 of per capita income, 1.7 for those with more obese adults
  2. A retrospective cross-sectional review of all patients seen in the VA system between 2005 and 2012 for acute respiratory infections (ARIs) (see Ann Intern Med. 2015;163(2):73-80). Background:
  • The VA network includes 6.5-8.5 million veterans seen yearly at 1700 clinics and 152 hospitals, with approximately 13 million primary care visits/yr, all using the same electronic medical record.
  • 1.045 million patients were included (85.8% men; median age 61; 98% without fever; 62.5% seen by an MD and 24.5% by a midlevel provider; median provider age 50; 72.4% seen in a primary care clinic, 30.1% in a community-based outpatient clinic, and 22.9% in the ER; 19.6% from the western US, 28.4% central, 35.6% south, 16.5% northeast), with diagnoses of nasopharyngitis, pharyngitis, sinusitis, acute bronchitis, upper respiratory infection, and others (laryngitis, tonsillitis), excluding those with diagnoses of pneumonia, influenza, or urinary tract infection, or with serious comorbidities (HIV, neoplasia, diabetes, chronic lung disease, end-stage renal disease, transplantation, other immunocompromise).

Results:

  • Overall increase in use of antibiotics from 67.5% in 2005 to 69.2% in 2012 (p<0.001).
  • Increase in macrolide prescriptions from 36.8% to 47.0% over the same period (p<0.001), with decreases in penicillins (36.0% to 32.1%; p<0.001) and fluoroquinolones (15.0% to 12.7%; p<0.001).
  • Antibiotics were prescribed for 68.4% of ARIs overall: 86% of those with sinusitis, 85% with bronchitis, and 78% of those with T >102°F. Antibiotics were given in 75% of urgent care visits, slightly more often by midlevel providers than MDs (70% vs 68%), and slightly more often in VA clinics than community-based ones (70% vs 64%).
  • Macrolides were prescribed for 51% of bronchitis and 49% of upper respiratory infection visits (macrolides are not recommended as first-line therapy for either pharyngitis or sinusitis, and there is increasing macrolide-resistant pneumococcal disease as well as potential cardiotoxicity).
  • The greatest variability in prescribing was by provider (rather than temperature, setting type, or geographic region), with 10% of providers prescribing antibiotics in >95% of all ARI visits and 10% in <40% of these visits.
  3. The MMWR just published a report on the knowledge and attitudes of adult patients and health care providers regarding antibiotic usage (see http://www.cdc.gov/mmwr/preview/mmwrhtml/mm6428a5.htm?s_cid=mm6428a5_w). They surveyed 4701 US consumers in 2012 (response rate 86%), 4420 consumers in 2013 (response rate 79%), 2609 Hispanic consumers (response rate 38%), and 3149 health care providers (response rate 48%). Results:

For consumers overall:

  • 17% felt that when they have a cold, they should take antibiotics to prevent getting sicker
  • 25% felt that when they have a cold, antibiotics help them get better more quickly
  • Approx 20% thought that antibiotics had common side-effects (nausea/vomiting, diarrhea, headache, rash), and only 16% thought antibiotics had none of these adverse effects
  • 20% had taken antibiotics from sources other than clinics/providers (mostly left-over ones, some from family members, some from neighborhood grocery stores)
  • Only 26% expected an antibiotic when they saw a provider for a cough or cold. 35% expected suggestions for symptom relief. 42% just wanted to make sure they had nothing more serious going on
  • Hispanic consumers differed in a few areas: more likely to think antibiotics helped (around 45%), more likely to get antibiotics from outside a clinic/provider (54% of Hispanics overall, especially leftover antibiotics and ones from the neighborhood grocery store, the local “bodega”), and more of them expected antibiotics to be prescribed (41%)

For health care providers:

  • 54% thought parents/patients expected an antibiotic
  • 77% thought patients wanted symptom relief, and 72% thought they wanted reassurance that nothing more serious was going on
  • In terms of deterrents to prescribing antibiotics, 94% were concerned about antibiotic resistance and 71% about adverse effects. Also, 58% were concerned about killing “good bacteria”

For prior blogs on antibiotic overprescribing, see http://blogs.bmj.com/ebm/2015/01/25/primary-care-corner-with-geoffrey-modest-md-antibiotic-overprescribing/, which assesses 2 studies: one found large-scale antibiotic overprescribing for kids with pharyngitis; the other, looking at antibiotics given to adults with respiratory infections in the Partners system in Boston, found overall overprescribing, with more prescriptions later in a clinic session, suggesting provider fatigue. For a rather sobering blog on the recent WHO report on worldwide antibiotic resistance, see http://blogs.bmj.com/ebm/2014/07/11/primary-care-corner-with-geoffrey-modest-md-whos-remarkable-scary-report/

One positive development is that there have been major gains in decreasing the use of antibiotics in chickens (the use of antibiotics in animals increases animal size and profit, but at the considerable expense of increasing antibiotic-resistant bacteria; in 2011, 29.9 million pounds of antibiotics were sold in the US for meat/poultry vs 7.7 million pounds for people). Tyson just announced that it would eliminate routine use of antibiotics within 2 years. McDonald's (which uses lots of Tyson chicken) and Chipotle Mexican Grill are eliminating chickens raised with antibiotics, and Perdue and Pilgrim's Pride are decreasing antibiotic usage.

The first 2 studies above confirm that we as providers overall seem to be increasing the use of antibiotics for non-indicated reasons, and we are using more broad-spectrum antibiotics, which creates more widespread antibiotic resistance. And it seems that most people with a cough/cold are not expecting antibiotics. In this regard, it is pretty striking that health care providers lag behind patients in how they view antibiotic prescribing for predominantly viral illnesses, with more than twice as many providers thinking parents/patients expect antibiotics as actually do! My own experience is that in the vast majority of cases, confidently telling the patient "the good news is that you have a viral infection, which will get better on its own, and antibiotics will not help" really works. And my sense over time is that there are many fewer patients expecting antibiotics, or unsatisfied with that statement.
