
Primary Care Corner with Geoffrey Modest MD: DVT recurrence in unprovoked DVTs — HERDOO2 tool

24 Apr, 17 | by gmodest

​by Dr Geoffrey Modest

One perplexing issue in primary care is the appropriate duration of anticoagulation for people with unprovoked venous thromboses. A recent international study found that a specific clinical decision rule was effective in predicting recurrent DVT in women and could permit individualizing therapy (see )


— 2747 participants with a 1st unprovoked venous thromboembolism, VTE (either DVT with a noncompressible segment in the popliteal vein or more proximal leg veins and/or documented pulmonary embolism) who had completed 5 to 12 months of short-term anticoagulant treatment were followed prospectively from 44 healthcare centers in 7 countries (from North America, Europe, India, Australia), from 2008 to 2015.

— Mean age 54, 84% white, 75% on vitamin K antagonists for anticoagulation, VTE event was isolated DVT 41%/isolated PE 40%/DVT and PE 21%

— They used the HERDOO2 clinical decision rule: Hyperpigmentation, Edema, or Redness in either leg; D-dimer level ≥ 250 µg/L; Obesity with BMI ≥ 30; or Older age ≥ 65. D-dimer levels were drawn during anticoagulant treatment.

— Of these components: 24% had hyperpigmentation, edema, or redness of the leg; 50% had D-dimer ≥250 µg/L; 32% were ≥65 yo; 43% had BMI ≥30.

— Low risk patients (women with HERDOO2 score ≤1) were to discontinue anticoagulants (and almost all did); for high-risk women and for men, the decision was left to the discretion of the clinicians and patients

— primary outcome was an adjudicated symptomatic major VTE
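The HERDOO2 rule described above is just a four-item point count; as a minimal sketch (the function names and argument names are my own, the thresholds are those stated in the study):

```python
def herdoo2_score(her_signs: bool, d_dimer_ug_per_l: float,
                  bmi: float, age_years: float) -> int:
    """HERDOO2: one point each for Hyperpigmentation/Edema/Redness (HER)
    in either leg, D-dimer >= 250 ug/L (drawn during anticoagulant
    treatment), Obesity (BMI >= 30), and Older age (>= 65)."""
    score = 0
    if her_signs:
        score += 1
    if d_dimer_ug_per_l >= 250:
        score += 1
    if bmi >= 30:
        score += 1
    if age_years >= 65:
        score += 1
    return score


def low_risk(sex: str, score: int) -> bool:
    # Per the study, only women with a score <= 1 are classified low
    # risk; men are not risk-stratified by the rule.
    return sex == "female" and score <= 1
```

So a 54-year-old woman with no leg signs, a D-dimer of 100 µg/L, and a BMI of 24 scores 0 and would be classified low risk.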


— of 1213 women, 631 (51.3%) were classified as low risk

— 17 who discontinued anticoagulants developed a recurrent VTE during 564 patient years of follow-up (3.0% per patient year)

— of 323 high risk women and men who discontinued anticoagulants, 25 had VTE during 309 patient years of follow-up (8.1% per patient year).

– 7.4% per patient year in high-risk women and 8.4% per patient year in high-risk men.

— of 1802 high risk women and men who continued anticoagulants, 28 had recurrent VTE during 1758 patient years of follow-up (1.6% per patient year)
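The rates quoted above are simple events-per-patient-year calculations; as a sketch, using the study's figures:

```python
def rate_per_100_patient_years(events: int, patient_years: float) -> float:
    """Incidence rate expressed as % per patient-year."""
    return 100.0 * events / patient_years


# Figures from the study:
low_risk_stopped = rate_per_100_patient_years(17, 564)      # ~3.0% per patient year
high_risk_stopped = rate_per_100_patient_years(25, 309)     # ~8.1% per patient year
high_risk_continued = rate_per_100_patient_years(28, 1758)  # ~1.6% per patient year
```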

— secondary outcomes:

–1 recurrent PE death (in a high-risk person who continued anticoagulation); the rate of major bleeding was nonsignificant among those who stopped anticoagulation, and was 1.2% per patient year in men and high-risk women who continued oral anticoagulants. 2 major bleeds were fatal.

–subgroup analyses: in women <50 yo (n=429) rate of recurrent VTE was 2.0% (not related to estrogen use) vs 5.7% in those >50 yo. No difference by country, type of index VTE, or type of anticoagulation


— patients with provoked VTE, such as after surgical procedure, have a 1% chance of VTE recurrence, whereas those with unprovoked VTE have a 10% chance in the 1st year after stopping short-term anticoagulants, 5% in the subsequent year, and 30% at 8 years. 3.6% of recurrent VTEs are fatal. Oral anticoagulation reduces the risk of recurrent VTE by 80-90%.

— The International Society on Thrombosis and Haemostasis suggests that it is safe to discontinue anticoagulants if the risk of recurrent VTE is <5% at one year after discontinuing treatment (with an upper bound of the 95% confidence interval being <8%).

— The HERDOO2 clinical decision rule has been found to be clinically effective in discriminating low-risk versus high-risk women, though not men. This study was a large prospective management study in patients with unprovoked VTE.

— of note, over ½ of the women with unprovoked VTE in their study were low risk and could stop their anticoagulants (ie, their recurrence rate was below the 5% cutpoint noted above). So, the potential effect of this decision rule is quite high for women.

— so, where does this HERDOO2 rule come from?? A study done in 2008 (see doi:10.1503/cmaj.080493) prospectively followed 600 people with a first unprovoked VTE for 18 months, finding an overall annual recurrent DVT rate of 9.3%. They focused on the 91 patients with confirmed recurrent DVTs to assess potential risk factors, and developed the HERDOO2 clinical rule, finding annual recurrent VTEs in 1.6% of those with scores ≤1 and 14.1% in those with higher scores.

— issues about generalizability:

–this study had only an 11.6 month follow-up (and the original study was only a bit longer), and, as per the above statistics, lots of recurrent VTE events happen after the 1-year mark

–they excluded the few people with known high-risk thrombophilia (this is not routinely assessed after a first event, so it is not clear why those patients had the test done, or whether this exclusion could affect the results)

–there were few non-white patients, and the risk of thrombophilia may vary by group. There are large deficits in our knowledge here, but some data suggest that factor V Leiden and the prothrombin G20210A mutation are less common in African-Americans, while in another study of patients who had strokes, Black Africans tended to have lower levels of protein S, protein C, and antithrombin III.

–the subgroup analysis in the above study found that women >50 yo had a higher VTE recurrence rate of 5.8%; it would be good to see whether this age is a better cutpoint than the ≥65 used in the HERDOO2 algorithm

–continuing anticoagulants in the high-risk groups was left to the discretion of the clinicians/patients, so it is unclear which patients continued or discontinued the meds and how that might skew those results.

— Overall, would be great to have another study of longer duration and including a more mixed group of patients, to assess generalizability of the results

so, bottom line: this study may well have far-reaching implications, given that a large number of women (not men) might be able to stop long-term (perhaps life-long) anticoagulation for unprovoked first VTE (including PEs, where the risk of a recurrent PE is higher). And, I would add the results of this study to my general gestalt in discussing the pros and cons of stopping anticoagulation. But, to me, this is still such a difficult clinical decision, with potentially life-threatening implications either way, that there should be another confirmatory study in a more mixed population of patients.

See here for a slew of articles on VTE, with my concerns about the novel anticoagulants (NOACs)


Primary Care Corner with Geoffrey Modest MD: Antibiotics, microbiome changes and colorectal adenoma

21 Apr, 17 | by gmodest

by Dr Geoffrey Modest

There have been a few studies suggesting a relationship between the gut microbiome and colorectal cancer, as well as between antibiotic exposure and colorectal cancer. An evaluation of the Nurses’ Health Study recently confirmed prospectively that there was a dose-response relationship between women’s prior use of antibiotics and colorectal adenomas (see gutjnl-2016-313413).


— 16,642 women aged at least 60 who had at least one colonoscopy between 2004 and 2010 and had reported their antibiotic use in a 2004 questionnaire, comparing antibiotic users versus nonusers

— mean age 70, family history of cancer in 20%, diabetes in 9%, BMI 25, hormone therapy 20%, regular use of aspirin in 40%, multivitamins in 78%, 20 pack-years of smoking in those who were ever-smokers, 2.3 g of alcohol per day, 6 servings of red meat per week.


— 1195 cases of adenomas were detected

— women who used antibiotics for more than 2 months between the ages of 20 and 39 had a 36% increased risk of adenomas by multivariate analysis, OR =1.36 (1.03-1.79)

— women who used antibiotics for more than 2 months between the ages of 40 and 59 had a 69% increased risk by multivariate analysis, OR = 1.69 (1.24 – 2.31)

— there was a significant dose-response trend for antibiotic use at age 20-39 (p=0.002) and at 40-59 (p=0.001), in each case with progressively more adenomas with increasing antibiotic use, from no use to 1-14 days, to 15 days-2 months, to >2 months.

— this association was similar for low risk versus high risk adenomas (high-risk being defined as size > 1 cm, with tubulovillous/villous histology, or > 2 detected lesions), though was slightly stronger for proximal lesions.

— there was no association between antibiotic use in the prior 4 years and risk of adenoma [ie, the microbiota were not influenced by recent antibiotic usage]

— women who used antibiotics for a longer duration were overall similar to those who did not in terms of family history, personal disease/screening history, and lifestyle factors, but were more likely to regularly use menopausal hormonal therapy, aspirin, and undergo colonoscopy for symptoms rather than routine screening.


–the Nurses’ Health Study is an ongoing prospective cohort study of 121,700 US female nurses aged 30 to 55 at enrollment in 1976. The advantages of looking at this cohort are the high quality of the data collected (accurate data on an array of lifestyle issues and medical problems/medications, as well as specifically on prior intermittent antibiotic use many years beforehand) and the long-term follow-up

— the presumed mechanism for a relationship between antibiotics and colorectal adenomas is through the effect of antibiotics on the microbiota. For unclear reasons antibiotics may induce either temporary, quasi-stable states, or alternative stable states. The specific microbiota changes associated with colon cancer include depletion of Bacteroides, Firmicutes (Clostridia), and Proteobacteria (Enterobacteriaceae) and enrichment of Fusobacteria.

— of course, though this was a really good prospective study following lots of items (a rather long questionnaire….), there could well be unaccounted-for differences between the antibiotic users and nonusers which could explain the microbiome differences as well as the increase in adenomas. The noted differences between these groups (eg, using postmenopausal hormones, aspirin, having nonscreening colonoscopies) were accounted for, but were there other issues? were there differences in psychosocial issues between the groups? were those on these meds and getting antibiotics more anxious or stressed out (and there is some evidence that increased cortisol levels, often found with stress, can effect changes in the microbiome)? Were these women on the above meds also taking other unassessed meds that could affect the microbiome and adenoma rate (and perhaps leading to the long-term changes in the microbiome)? As with all observational studies, one cannot attribute causality to an association.

–so, I bring this up mostly because this study has a great database, and long-term follow-up, and reinforces many of the articles brought up before regarding the effects of microbiota changes and human disease. And, it provides us with an even stronger imperative to try to decrease antibiotic use, except when clearly indicated. 

See here for an array of articles on the microbiome, including mechanism by which microbiota changes might lead to a variety of diseases including NAFLD, cancer, diabetes, metabolic syndrome, heart disease….  ​

See here for another array of articles, but dealing with the consequences of overuse of antibiotics in humans and livestock and microbial resistance

Primary Care Corner with Geoffrey Modest MD: 23andMe genetic analysis approved for direct advertising

20 Apr, 17 | by gmodest

 by Dr Geoffrey Modest

The FDA just approved direct-to-consumer marketing for genetic risk information (23andMe Personal Genome Service Genetic Health Risk) for 10 conditions, though noting that “the tests cannot determine a person’s overall risk of developing a disease or condition … there are many factors that contribute to the development of a health condition, including environmental and lifestyle factors.” This approved test involves saliva samples, assessing more than 500,000 genetic variants associated with increased risk of: Parkinson’s disease, late-onset Alzheimer’s, Celiac disease, Alpha-1 antitrypsin deficiency, Early-onset primary dystonia, Factor XI deficiency, Gaucher disease type 1, Glucose-6-phosphate dehydrogenase deficiency, Hereditary hemochromatosis, Hereditary thrombophilia (see )

The FDA reviewed the data for 23andMe through a premarket review pathway for low-to-moderate risk devices, with expectations about assuring test accuracy, reliability and clinical relevance, and also to make sure the results are clearly understandable by consumers. But the FDA now intends to exempt further tests added by 23andMe from premarket review, and may well exempt other genetic testing companies after they submit their first premarket notification. These exemptions “would allow other, similar tests to enter the market as quickly as possible and in the least burdensome way”. [and, I might add, this is before confirmation of Trump’s pro-industry FDA nominee Scott Gottlieb, who has “received millions of dollars from various investment and pharmaceutical firms” per Bloomberg Technology…..]

Statnews had a really impressive review of the 23andMe test, at a cost of $199, revealing many of its limitations (see ). For example, they note that having the specific variant tested for Parkinson’s disease increases one’s risk 3-fold, from a baseline of 0.3% to 1%…. Or, that those with Apo ℇ4 alleles may not get Alzheimer’s, and those without it may (the frequency of the Apo ℇ4 allele varies by ethnicity, 15% in Caucasians, 25% in African-Americans; the presence of one allele increases the risk of Alzheimer’s by 2-3 fold, and two alleles by 8-12 fold). So, the presence of a genetic variant, whether the Parkinson’s gene or a single Apo ℇ4 allele, still makes the development of the disease unlikely (and actually rare, in the Parkinson’s case). And still about 10% or so of those who are homozygous for Apo ℇ4 do not get dementia.


–the big issues here, to me, are:

–these tests may well have pretty low sensitivity and specificity, as well as low positive predictive value

–patients may have trouble understanding the wording: 3x higher incidence of Parkinson’s sounds like a lot, but the actual 1% incidence not so much. Can be very confusing

–and, there are real concerns about the psychological effects of finding out one has a somewhat higher likelihood of a bad disease for which there is no current treatment. Will there be more depression, anxiety, decreased social cohesion/more isolation, hopelessness/even suicide?

–focusing on the genes undercuts the very important role of environmental/lifestyle factors: it really reinforces the conceptual deterministic framework that one’s future is set by one’s genes, undercutting the oftentimes dominant message that our environment and lifestyle are really important

–and it reinforces the conception that technology is the answer to our ills…
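The relative-vs-absolute risk point above (a "3x higher" Parkinson's risk that is still only ~1% absolute) can be made concrete with a one-line conversion:

```python
def absolute_risk(baseline_risk: float, relative_risk: float) -> float:
    """Convert a relative risk into an absolute risk, given the baseline
    risk in the general population (both expressed as fractions)."""
    return baseline_risk * relative_risk


# Parkinson's example from the Statnews review: a 3-fold increase from a
# 0.3% baseline still leaves the absolute risk under 1%.
parkinsons = absolute_risk(0.003, 3)  # 0.009, i.e. ~1%
```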



Some recent articles on dementia are tangentially related to the above.

–The WHO reported that dementia deaths have increased, unseating AIDS as one of the top killers in the world (see ), and taking over the number 7 slot of the top 10 causes of death. And, as per this article in Bloomberg News, about 100 experimental treatments for dementia have failed to make matters better. Part of the issue causing the “elevation” of dementia is the aging population and probably that it is more often diagnosed now. But, so far, drugs do not seem to be the answer

–in this light, and complementing the above point that genes often do not play a decisive role, there was a recent study finding that lower adherence to a Mediterranean diet was associated with more significant loss of brain volume (see Luciano M. Neurology 2017;88:1).

–Background: increased adherence to Mediterranean diet (lots of fruits, veges, legumes, cereals, olive oil as primary fat, moderate consumption of fish, low to moderate intake of dairy and wine, and low intake of red meat and poultry) is associated with less inflammation, better cognitive function, and lower risk of Parkinson’s and Alzheimer’s, as well as cardiovascular and cancer mortality. And cross-sectional studies have found higher consumption of components of the Mediterranean diet are associated with larger MRI-based brain volumes and cortical thickness. Higher fish and lower meat intake seemed to be the most important players.

–The current study was a prospective one of 562 Scottish men and women, assessing diet and brain structural changes from age 73 to 76

–50% female, 30% Apo ℇ4 positive, 4% diabetic/38% hypertensive/22% cardiovascular disease/BMI 28

–baseline cognitive ability: Mini-Mental Status Exam 29 (30=max, so no significant baseline dementia), and they assessed reading ability and general cognitive ability which relates to IQ (no comment on the scales they used or their validity). Diet was assessed only at baseline, age 70.

–change in brain structure from age 73 to 76:

–total brain volume: decreased 19 ml (from 990), gray matter volume decreased 9 ml (from 465), mean cortical thickness decreased 0.05 mm (from 3.11 mm)


–the group with highest adherence to Mediterranean diet had more carriers of Apo ℇ4 alleles (reason for this unclear in this healthy population who did not have underlying dementia), yet had greater total brain volume and gray matter volume at age 76

–in the fully adjusted model (controlling for those factors found in prior studies related to Mediterranean diet and brain MRI measures: age, sex, education, BMI, diabetes, general cognitive ability, MMSE), there was a significant association between Mediterranean diet components and total brain volume change between ages 73 and 76 (p=0.04), and the presence of the Apo ℇ4 genotype did not change this. Fish and meat consumption were not found to be the drivers of this association. [perhaps it is a different combination, or even all of the components together: parsing out specific components may be a tad reductionist and undercut potential interactions between the individual components. Better to eat well overall]


–so, there was a significant association between the diet and brain volume changes over this 3-year period

–and, the effect size of the Mediterranean diet on brain volume was substantial: half the size of that due to normal aging

​–of course, this was not a randomized controlled trial, so there could well be confounders (do those choosing to adhere to a more Mediterranean-type diet do other, unmeasured healthful things that may really be the ones that decrease cognitive decline, such as exercise??)

but, all in all, this study supports the concept of environmental/lifestyle factors being really important in the development of Alzheimer’s/cognitive decline, that this appeared to be independent of the known genetic risk factor of Apo ℇ4, and adds to the argument against a genetic-determinant view of the development of this important condition (as is conceptually promoted by 23andMe etc)


Primary Care Corner with Geoffrey Modest MD: The elusive search for afib in stroke patients; and an app

19 Apr, 17 | by gmodest

​​​​by Dr Geoffrey Modest

Atrial fibrillation is an important risk factor for recurrent ischemic strokes, but may be hard to diagnose in those presenting in sinus rhythm. A reasonably large German study found that prolonged Holter monitoring picked up many more cases of atrial fibrillation than standard monitoring, the Find-AFRANDOMISED trial (see Wachter R. Lancet Neurol 2017; 16: 282–90).


–398 patients were recruited from 2013-2014 in 4 German centers, all with acute ischemic stroke and symptoms for 7 days or less, aged 60 years or older, in sinus rhythm and no history of atrial fibrillation (AF).

— Mean age 73, 40% women, 80% hypertension/27% diabetes/41% hyperlipidemia/18% current smoker/29% previous smokers/20% previous ischemic stroke/8% previous TIA/5% heart failure/10% MI/15% CAD/7% with ejection fraction <50%

— lacunar lesion on brain imaging found in 40%, cardioembolism 20%/small vessel disease 30%/stroke of unknown cause 50%, mean CHA2DS2-VASC score 4.8 (most in the 4-6 range), mean CHADS2 score 3.5 (50% in the 4-6 range). 197 patients were classified as having cryptogenic stroke; 201 as non-cryptogenic, mostly small vessel occlusion (118 pts) and cardioembolic stroke (75 pts)

— Those with severe ipsilateral carotid or intracranial artery stenosis were excluded

— patients were randomized into standard monitoring (at least 24 hours of rhythm monitoring: 188 of 198 patients had stroke unit telemetry for a median duration of 73 hours, and 149 of the 198 patients received additional Holter monitoring for a median of 24 hours) versus 10-day Holter monitoring at baseline, at 3 months, and at 6 months of follow-up. The initial Holter was done at a median of 3.5 days after symptom onset

— primary endpoint was the occurrence of atrial fibrillation or atrial flutter (lasting 30 seconds or longer) within 6 months after randomization and before stroke recurrence.

— secondary endpoints included: the detection of AF within 12 months, recurrence of stroke, systemic embolism or death within 12 months.


— after 6 months, 13.5% were found to have atrial fibrillation in the enhanced monitoring group versus 4.5% in the standard group, absolute difference 9.0%, p=0.002, number needed to screen=11
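The number needed to screen follows directly from the absolute difference in detection rates; a minimal sketch using the study's figures:

```python
def number_needed_to_screen(rate_enhanced: float, rate_standard: float) -> float:
    """NNS = 1 / absolute difference in detection rates (as fractions)."""
    absolute_difference = rate_enhanced - rate_standard
    return 1.0 / absolute_difference


# 13.5% vs 4.5% detection: 1/0.09, i.e. roughly 11 patients need the
# enhanced monitoring to find one additional case of AF.
nns = number_needed_to_screen(0.135, 0.045)
```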

— no patient with detected atrial fibrillation had a recurrent stroke or systemic embolization before the detection of atrial fibrillation within 6 months [by the way, this and another recent study I saw challenged the prior conventional wisdom that recurrent strokes were much more common within the first week or two after the initial event]

— one of 27 patients in the enhanced monitoring group had atrial flutter

— the median duration of the longest AF episode during Holter monitoring was 5 hours, though one third lasted more than 24 hours and slightly less than one third < 6 minutes, and the number of episodes of atrial fibrillation detected ranged from 1 to 12

— review of their graph shows that the 1st 10 day Holter monitor picked up 18 patients, about ½  were picked up in the 1st 5 days; the 2nd  10-day monitor picked up an additional 6 with 2 picked up in the 1st 5 days; and the 3rd picked up one on the 8th day

— oral anticoagulation was given to all of the 39 patients who developed AF, more in the intervention group since more AF was picked up there

–clinical sequelae were found in 8 patients in the intervention group (5 recurrent strokes and 3 TIAs) and 14 in the control group (9 recurrent strokes and 5 TIAs), for rates of 3.7% vs 5.4%, nonsignificant (though this trial was underpowered for clinical outcomes, this finding does mirror that of the CRYSTAL-AF trial, which used an implantable cardiac monitor to pick up AF, finding 21% fewer events after 12 months). No cases of systemic embolization. No difference in picking up AF by age, sex, CHADS2, NIH Stroke Scale, symptoms at admission, or whether the stroke was considered “cryptogenic”


— The rationale for looking aggressively for atrial fibrillation is that strokes from AF can be more severe, there is a high risk of recurrent strokes, and the detection of AF really changes therapy from antiplatelet drugs to oral anticoagulants, the latter decreasing the risk of recurrent strokes by 60 to 70%.  Since there are significant adverse events associated with these anticoagulants, it seems that their indications need to be pretty clear.

— The European Society of Cardiology recommends at least 72 hours of monitoring, and also gives a Class IIa recommendation for implantable cardiac monitors (see Eur Heart J 2016; 37: 2893–962)

— Review of the timing of AF pickups in the above study found that most (18/25, 72%) happened in the first 10-day cycle, with pickups reasonably evenly spread throughout the 10-day period; 6/25 (24%) were picked up in the second 10-day monitoring, again spread throughout the 10-day period; and one (4%) was near the end of the third 10-day period. This suggests to me that the monitoring should be for the entire 10-day periods, and that it is unlikely that a 4th 10-day period would be useful. The researchers in the above study suggested 7-10 days of monitoring within the first few days of symptom onset, and then repeating if higher risk (repeated cryptogenic strokes or embolic stroke of unknown source, frequent supraventricular ectopies, elevated natriuretic peptides, left atrial enlargement, or reduced atrial contractility).

–Holter monitoring has the advantage of being cheap, noninvasive, available, and able to be done within days of a cerebrovascular event.

so, very interesting study finding that a significant number of patients having a stroke do in fact have AF on monitoring, and the more monitoring, the higher the pickup rate. But it is hard to come to firm conclusions without a larger study powered sufficiently to assess clinical outcomes in order to see if AF pickup and treatment mattered (eg, is AF causative, or is it an innocent bystander, which we know is common as age increases? and we also know that strokes themselves can cause cardiac arrhythmias, so which came first?) The other issues a larger trial could assess include:

​– what defines risky AF: eg, do really short episodes of AF matter (and what length does seem to matter?), and is this age-dependent?

— is there a number of AF episodes per 10-day monitoring that increase risk of stroke/TIA (and does that number vary depending on the length of AF episodes)? and, is this age-dependent?

— at what age should we do more aggressive monitoring (and should there be scaled amounts of monitoring based on different age groups, since AF is more common with increasing age)? is there an age where monitoring stops being clinically useful (either the AF doesn’t really increase risk that much, or the risks start to outweigh the benefits)?

the bottom line to me is that if we can show that picking up AF leads to improved clinical outcomes, I would support more aggressive monitoring than the recommendations of the study authors: even though there was only 1 pickup during the third 10-day period, given how devastating a recurrent stroke can be, my inkling would be to support the 3 monitoring periods.

See here which argues for enhanced screening for atrial fibrillation overall (not just in people with strokes)

and there are many blogs on atrial fibrillation treatment (type atrial fibrillation in the search window)


As an aside, there is a free app for iphones called Cardiio which displays one’s pulse (just place a finger lightly on the camera on the back of the iphone). In Europe, it is approved to diagnose AF, but the FDA has not approved it in the US at this point. But one can see one’s rhythm, and patients could be shown how to use it and assess for abnormalities which might be AF. Basically, a study found that in 1013 patients with hypertension, diabetes, and/or aged >65, the sensitivity for the full Cardiio (Cardio Rhythm) was 92.9% and the specificity was 97.7%, as compared to single-lead ECG tracings reviewed by 2 cardiologists (see Chan P-H. J Am Heart Assoc. 2016;5:e003428, or  doi: 10.1161/JAHA.116.003428), though the positive predictive value in this study was only 53.1%. I have played with the app a little and seems pretty impressive to me (ie, I can see a clear waveform, documentation of the pulse, and, at least for the few times I’ve done it, I seem to be in normal sinus rhythm. Though not sure what I’d find with three 10-day Holter monitor recordings…)
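The gap between the app's high sensitivity/specificity and its modest positive predictive value is a consequence of the low prevalence of AF in the screened population (Bayes' rule). A sketch, assuming a prevalence of roughly 2.8% (my back-calculation from the reported PPV; the study does not state it here):

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """PPV via Bayes' rule: true positives over all positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)


# Cardiio Rhythm figures: sens 92.9%, spec 97.7%; at ~2.8% prevalence
# the PPV comes out near the reported 53.1%.
ppv = positive_predictive_value(0.929, 0.977, 0.028)
```

The point for practice: even a very specific screening test generates roughly one false positive per true positive when the condition is uncommon in the screened group.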

Primary Care Corner with Geoffrey Modest MD: Hepatoma surveillance after hep c treatment

18 Apr, 17 | by gmodest

by Dr Geoffrey Modest

The American Gastroenterological Association just published a clinical practice update on the care of hepatitis C patients who achieve a sustained virologic response (SVR) after direct-acting antiviral therapy (DAA). See Their recommendations:

— Reconfirm SVR at 48 weeks post-DAA treatment. Studies have found that <1% of patients relapse after SVR at 24 weeks (SVR24, though SVR12 at 12 weeks is now more commonly checked). These are real relapses, not reinfections, and seem to be independent of viral genotype or particular type of patient. But this low rate of relapses still justifies checking [and presumably treating]. The European Assn for the Study of the Liver also recommends the 48 week SVR check.

— Continue surveillance for hepatocellular carcinoma (HCC) with liver imaging +/- serum AFP 2x/year indefinitely in all patients with stage 3 fibrosis or cirrhosis post-SVR (but not in those with stage 0-2 fibrosis). AFP screening is now considered optional or adjunctive per most current guidelines. There are HCC cases found >5 years post-SVR in patients with interferon-based regimens, so at this point there is no recommendation as to when/if we can stop. Also, there are documented cases of HCC in those with F0-F2 fibrosis, though it is unclear from these reports whether there might have been other reasons for HCC (NASH, alcohol…).  For these F0-F2 patients, they do comment that “some clinicians might choose to obtain a final ultrasound during the year after SVR following DAA therapy”. Of course, one issue here is that biopsies may miss higher fibrosis regions, and liver elastography is operator-dependent and may not correlate with the (also imperfect) biopsy. See article below for some suggestive evidence that DAA could actually increase the likelihood of HCC.

–endoscopic screening for esophagogastric varices should be done in all patients with cirrhosis, independent of SVR. And it should be repeated at 2-3 years if no varices or small varices are present initially. This can be stopped after the second screening if no varices are found and there are no risk factors for progressive cirrhosis, on an individual patient basis. They also suggest that for those with small varices on initial exam (where no treatment is necessary), no further screening is necessary if followup endoscopy after 2-3 years shows unchanged or smaller varices.

–it is okay to check fibrosis with noninvasive tools (eg liver elastography) on an individual basis, but “improved fibrosis measurements should not alter the frequency of HCC surveillance at the present time”. [so, I’m not sure why we would do this…..]

–and, patients who achieve SVR should be counseled about minimizing risk of liver injury (alcohol, fatty liver, hepatotoxins), and should be evaluated for these if serum liver enzymes are elevated. They note: “no safe limits for alcohol consumption has been established post-SVR and, therefore, avoidance of significant alcohol intake should be recommended for all patients, and complete abstinence is prudent in patients with advanced liver fibrosis or cirrhosis.” Diabetes is also a risk factor for HCC in those with hepatitis C, including those with HCC post-SVR and in non-cirrhotic patients, though there are insufficient data evaluating the benefit of diabetes control or decreasing fatty liver disease.



The data are mixed on the effect of DAA for hepatitis C on the development of HCC, though older studies did find a reduction with interferon-based therapies (decreased all-cause mortality, liver-related mortality, need for liver transplant, variceal bleeding, as well as HCC, where a pooled study found a 76% decrease). A recent letter in Gastroenterology presented the results of a retrospective study of 66 cirrhotic patients treated with DAA in 2015-6 at the University of Alabama, with SVR in 61 (92%). The above clinical guidelines cite a baseline HCC rate of 1-4%/year in those with cirrhosis. In this study, they found that 9% of patients developed de novo HCC within 6 months of DAA therapy (1/2 of whom developed HCC during DAA therapy), and another 3% had new indeterminate lesions (see ). There have been other studies finding either higher or lower incidence of post-SVR HCC; the variability of results may reflect the predominance of different genotypes in the different studies, the degree of cirrhosis/Child-Pugh class, as well as selection biases and the imaging modalities used to assess HCC (eg, ultrasound missing smaller lesions, especially in cirrhotic patients). But this and some other studies reinforce, at least for now, the need to continue surveillance for HCC in those with cirrhosis who were treated effectively with DAA. The above clinical guidelines suggest NOT doing enhanced surveillance in the immediate post-SVR period, though this study did find that 1/2 the patients with HCC developed it during therapy. Why would there be differences in HCC in those getting DAA vs the interferon-based regimens of yore?? One thought is that SVR after DAA leads to down-regulation of cytokines (including endogenous interferon) which may have anti-tumor effects.


so, these studies suggest a few conclusions:

— we should be checking SVR one year after treatment, and not just at 12 or 24 weeks

— we should continue with HCC surveillance in those with SVR for the indefinite future, per the usual 6-monthly schedule

— and, at this point, it does not seem to make sense to rely on ultrasound or liver elastography to assess regression of cirrhosis as a rationale for decreasing HCC surveillance.



Primary Care Corner with Geoffrey Modest MD: 1 shot of penicillin for early syphilis in HIV patients??

13 Apr, 17 | by gmodest

by Dr Geoffrey Modest

2 articles of note just came out on syphilis.

MMWR presented data on rates of primary and secondary syphilis in the US in 2015 (see ). The overall case rate was 7.5/100K population, nearly 4 times the lowest previously documented rate of 2.1/100K in 2000, a nadir after which the rate has increased each year, including a 22% increase during 2011-13.


— in 2015, there were 23,872 reported primary and secondary syphilis cases in the United States.

— 81.7% of male primary and secondary syphilis cases were among gay, bisexual, and other men who have sex with men (MSM)

— among the 44 states reporting information on the sex of sex partners for > 70% of male cases, the rates were:

— overall for men over 18 years old: 17.5/100K

— men who have sex only with women: 2.9/100K

— MSM: 309.0/100K, which translates to:

— 106.0 times the rate among men who have sex with women only, varying by state from 39.2-342.1 times

— 167.5 times the rate among women

— the highest rates of primary and secondary syphilis among MSM were in the South and West, the top 5 being in North Carolina (peaking at 748/100K), Mississippi, Louisiana, South Carolina, and New Mexico

—  the highest rates of primary and secondary syphilis overall were in Louisiana, California, North Carolina, Nevada, Florida, Arizona, Oregon, Maryland, Illinois, and Mississippi.

— As a point of historical reference, the lowest state specific MSM primary and secondary syphilis rate in 2015 was 73.1/100K in Alaska, surpassing the highest overall US primary and secondary syphilis rates in 1946, at 70.9/100K

— this analysis was limited by a few issues: only 44 states had the sex of sex partners reported for >70% of male cases; the number of MSM in each state was estimated based on surveys and there may be significant underestimation; and the incidence of syphilis infections may be underreported.
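The rates above are simple quotients, so the arithmetic is easy to reproduce; a minimal sketch (the ~318 million US population figure is my approximation for 2015, not a number from the MMWR report):

```python
# Reported syphilis rates are cases per 100,000 population.

def rate_per_100k(cases: int, population: int) -> float:
    """Convert a raw case count to a rate per 100,000 population."""
    return cases / population * 100_000

# 23,872 reported primary and secondary cases; assumed ~318M US population (2015)
overall = rate_per_100k(23_872, 318_000_000)   # ~7.5/100K, matching the report

# Rate ratios are quotients of the group-specific rates:
msm_vs_msw = 309.0 / 2.9                       # ~106x, matching the report
```

This also makes clear why the MSM estimates are sensitive to the denominator: the number of MSM per state was itself estimated from surveys.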



Another article looked at 1-dose versus 3-dose regimens of intramuscular benzathine penicillin for early syphilis in patients with HIV (See DOI: 10.1093/cid/ciw862).


— 64 patients were randomized to 2.4 million units (a single dose) versus 7.2 million units (2.4 million units weekly for 3 weeks) of intramuscular benzathine penicillin for early syphilis. The study ran from 2009-2013.

–mean age 35, 95% male, 84% MSM, 58% African-American/31% Hispanic/11% white, 6% primary syphilis/61% secondary/33% early latent, 59% had had syphilis before, mean CD4=388/64% on HAART, 49% of those on HAART had undetectable viral loads

— primary syphilis was defined as having compatible genital, anal, or oropharyngeal ulcers; secondary syphilis as having skin rash or mucosal lesions. Those with positive serologies (all had a positive RPR as well as the more specific TP-PA, T pallidum particle agglutination) were classified as early latent syphilis if they had a documented negative result followed by a positive within 12 months, or at least a 4-fold increase in RPR titer.

— median RPR at baseline was 1:128

— RPR and symptoms were monitored every 3 months, and treatment success was defined as at least a fourfold (2 dilution) decrease in RPR during 12 month follow-up.
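RPR titers run in doubling dilutions (1:2, 1:4, … 1:128), so a “fourfold (2 dilution) decrease” means the reciprocal titer drops by a factor of 4, ie two steps down the dilution series. A minimal sketch of that arithmetic (the function names are mine, for illustration):

```python
import math

def dilution_steps(before: int, after: int) -> float:
    """Number of doubling-dilution steps between two reciprocal RPR titers."""
    return math.log2(before / after)

def treatment_success(before: int, after: int) -> bool:
    """Success per the study definition: at least a fourfold (2-dilution) drop."""
    return before / after >= 4

# eg a baseline RPR of 1:128 falling to 1:32 is a 2-dilution (fourfold) decrease
assert dilution_steps(128, 32) == 2
assert treatment_success(128, 32)
assert not treatment_success(128, 64)  # only a twofold (1-dilution) drop
```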


— only 9 of the 64 patients had seroreversion (negative RPR after treatment), 4 in the 1-shot and 5 in the 3-shot groups

— intention to treat analysis: treatment success rate was 80% in single-dose versus 93% in 3-dose regimens, absolute difference 13%, but not statistically significant.

— Per protocol analysis: success rates were 93% with single-dose and 100% with 3-dose regimens, also not statistically significant.

— no difference by CD4 counts (< 350 vs >350), HIV viral load, use of HAART at baseline, RPR at baseline (<32 vs >=32), or syphilis stage

— only 1 of 20 (5%) patients with an undetectable HIV viral load did not achieve treatment success, whereas 8 of 44 with detectable HIV viral loads did not; a non-significant difference, though 6 of these 8 were in the 1-shot group and 2 in the 3-shot group.

— no severe reactions (eg Jarisch-Herxheimer), and none developed neurologic symptoms during the follow-up period.

— They conclude that the current CDC recommendation of a single dose of benzathine penicillin is reasonable for HIV-infected patients with early syphilis


— the historic concern here was that:

–Treponema pallidum was and is quite susceptible to penicillin

–essentially all adults with early syphilis have their RPR revert to normal after treatment with 2.4 million units of benzathine penicillin, ie a single dose

— but, in those with HIV infection, about one third do not serorevert.

–there seemed to be a higher rate of abnormal CSF findings as well as clinical neurosyphilis in HIV-positive people at earlier stages of syphilis infection (these data predated  HAART therapies).

— However, clinical failure after 1-shot treatment was actually quite rare, though I saw a report of at least one HIV-infected patient who had progression to neurosyphilis after 2.4 M units of benzathine penicillin for early syphilis infection.

–so, many of us, myself included, automatically prescribed 3 weekly doses (7.2 million units total). However, subsequent data on seroreversion were not much better with this higher dose.


–is this study generalizable? And should we just follow the CDC guidelines (ie 1-shot of 2.4 million units of benzathine penicillin)?  There are a few issues:

–really small numbers of patients overall, so the study was underpowered to detect clinically significant differences

–overall a pretty healthy group from a CD4 perspective. They did comment that of the 17 patients with CD4 <200 at the time of syphilis diagnosis, all 11 who received 1 shot and 5 of the 6 who received 3 shots had an appropriate serologic response

–but the questions remain: does this apply to those who are more immunocompromised (eg a patient with CD4=50)? or those with detectable viral loads (there was a difference noted above, though not statistically significant in this small study)? Does short-term followup of the surrogate marker of RPR titers necessarily correspond to clinical efficacy in the longer term? Could some of these patients (perhaps with lower CD4 counts) still infect others while their RPR more slowly responds? Should we look at CSF findings (perhaps a better marker of neurosyphilis) and potential long-term neurologic outcomes instead of RPR?


So, my conclusions from this study:

— the big conclusion is that syphilis is increasing in MSM around the country, suggesting that people are less protected against the spread of other sexually-transmitted infections as well (eg HIV). And the syphilis/HIV coinfection rates are quite high: reported rates of 15.8% in Los Angeles and up to 47.4% in Philadelphia. Sounds like a potential public health (and individual) disaster waiting to happen…

— in terms of the appropriate treatment for syphilis, my guess is that the CDC guidelines are reasonable. But, given the dearth of clear data, my inclination would still be to use 7.2 million units (3 shots) in those more severely immunocompromised (eg CD4 under 200 or so, and even more so with a nonsuppressed HIV viral load). It would be great to have a larger study with more varied patients (different CD4 counts, viral loads, longer-term followup including clinical outcomes, etc)

Primary Care Corner with Geoffrey Modest MD: physical activity energy expenditure, a new paradigm

12 Apr, 17 | by gmodest

by  Dr Geoffrey Modest

The Scientific American magazine just had an article which made the point that, in cross-cultural studies, human beings expend the same number of calories whether they are doing intense exercise or sitting around (see ). These anthropologists studied several human populations, though this report was largely on the Hadza people, who live in the dry savanna wilderness of northern Tanzania. In particular, they found that Hadza men, who spend days hunting and tracking game, ate and burned 2600 calories a day. Hadza women, who also did a lot of physical work, ate and burned 1900 calories a day. This is pretty much the same as adults in the US or Europe, and was independent of body size, fat percentage, or age. Similarly, research has shown that rural Nigerian women and African-American women in Chicago have similar energy expenditure, despite large differences in activity level. And a large collaborative effort on non-human primates found that captive primates living in labs or zoos expended the same number of calories as those in the wild.

They posit that there might be differences in calories spent on different activities, and that those who are sedentary may spend more energy on other things. For example, exercise reduces inflammation, so sedentary individuals may have to spend more energy reducing inflammation. In addition, women who exercise a lot may have decreased estrogen levels and fewer ovulatory cycles (ie, less energy spent there), perhaps to compensate for their increased exercise-related energy expenditure. Other studies have found that those who do long-term exercise have reduced basal metabolic rates [and, my addition, have slower heart rates, which may itself be associated with longer life], expending less energy in this manner.

A more rigorous study, published by the author of the above article, looked at total energy expenditure over a range of physical activity, finding a positive relationship between total energy expenditure and physical activity only at the lower ranges of physical activity; energy expenditure plateaued at the upper ranges (see Pontzer H. Current Biology 2016; 26: 410).


— 332 adults aged 25 to 45, 55% female, from 5 populations across Africa, the Caribbean, and North America (Ghana, South Africa, Seychelles, Jamaica, and the US)

— total energy expenditure was measured using doubly labeled water, considered the most accurate measurement (each subject ingests water labeled with stable isotopes of hydrogen and oxygen, and urinary excretion of the isotopes is measured)

— physical activity was measured by wearable accelerometers, measured in counts/minute


–there was a small increase in total energy expenditure, from a baseline of about 600 kcal/d at 0 counts/min (cpm) on the accelerometer to about 800 kcal/d at 230 cpm, with no significant further increase up to >700 cpm


— these articles are consistent with the repeated finding that exercise by itself does not lead to weight loss, though it is an important factor in maintaining weight loss in those on a diet; these data undercut an Additive model (where total energy expenditure is a simple linear function of physical activity, which determines variations in total energy expenditure) in favor of a Constrained total energy expenditure model (where the body adapts to changes in physical activity to maintain total energy expenditure within a narrow range). And perhaps there is an evolutionary aspect as to why those who exercise have to conserve energy in other ways, such as lowering basal metabolic rate or decreasing reproduction (the decrease in estrogen/ovulatory cycles in women also decreases energy expenditure on growth, perhaps with the added issue that those who exercise a lot to get food may need fewer mouths to feed to survive), etc.
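The Additive vs Constrained models can be caricatured with the numbers from the accelerometer study above; this is an illustrative sketch only (the linear form, slope, and hard 800 kcal/d plateau are my simplifications, not the paper's fitted model):

```python
def additive_tee(cpm: float, baseline: float = 600, slope: float = 200 / 230) -> float:
    """Additive model: expenditure rises linearly with activity, without bound."""
    return baseline + slope * cpm

def constrained_tee(cpm: float, baseline: float = 600,
                    slope: float = 200 / 230, plateau: float = 800) -> float:
    """Constrained model: same initial rise, but capped at a plateau."""
    return min(additive_tee(cpm, baseline, slope), plateau)

# Both models agree at low activity...
assert constrained_tee(0) == additive_tee(0) == 600
# ...but above ~230 cpm the constrained model flattens while the additive keeps climbing
assert constrained_tee(700) == 800
assert additive_tee(700) > 1200
```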

— But, of course, even if exercise does not lead to weight loss, it is important to maintain the perspective that physical activity has a multitude of important health benefits, both mental and physical, and I believe we as clinicians should be encouraging it regularly with our patients. Increasingly, we are learning of the benefits of exercise even where physicians initially felt it was potentially harmful, for example in patients with severe heart failure (and in the old days, clinicians prescribed prolonged strict bedrest post-MI, or for back pain…)

— A recent meta-analysis, for example, found that lower levels of exercise were associated with excess mortality (see ), and, as opposed to other studies, this one looked at total physical activity and not just leisure-time activity:

— a review of prospective international cohort studies published from 1981 to 2016 examining the association between total physical activity and at least one of the 5 diseases studied: 35 for breast cancer, 19 for colon cancer, 55 for diabetes, 43 for ischemic heart disease, and 26 for ischemic stroke, yielding 174 identified articles.

— Higher levels of physical activity were associated with lower risk for all of these outcomes, at 3000-4000 metabolic equivalent minutes per week (MET-min/wk).

— overall, comparing those with < 600 MET-min/wk to those with > 8000 MET-min/wk, the higher activity level was associated with:

— 14% decreased risk of breast cancer

— 12% decreased risk of colon cancer

— 28% decreased risk for diabetes

— 25% decreased risk for ischemic heart disease

— 26% decreased risk for stroke

–but for all of these, the major risk reductions occurred at lower levels of activity, with diminishing returns after 3000-4000 MET-min/wk.
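MET-minutes per week are just activity intensity (in METs) times minutes, summed over the week. The sketch below illustrates the scale of the 3000-4000 MET-min/wk threshold; the specific MET values are rough approximations of standard compendium figures, used only as examples:

```python
# Weekly MET-minutes from a few example activities (MET values approximate)
activities = [
    # (METs, minutes per session, sessions per week)
    (4.0, 30, 7),   # brisk walking ~4 METs, 30 min daily
    (8.0, 45, 3),   # running ~8 METs, 45 min, 3x/week
]

met_min_per_week = sum(mets * minutes * sessions
                       for mets, minutes, sessions in activities)
print(met_min_per_week)  # 840 + 1080 = 1920 MET-min/wk
```

Even this fairly active schedule lands around 1920 MET-min/wk, well below the 3000-4000 range; since total physical activity (not just leisure time) was counted, occupational and household activity contribute too.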

–and, another recent study (see Ekelund U. Lancet 2016; 388: 1302–10) did a meta-analysis of 13 studies with over 1 million individuals followed for 2 to 18 years, assessing sitting time, TV time and physical activity. They found that, as compared to those in the most-active quartile (35.5 MET-h/week):

— those performing < 16 MET-h/week had a 12% higher mortality rate

— those with the lowest quartile of physical activity (<2.5 MET-h/week and sitting > 8 h/ day) had mortality rates 59% higher

— And, subgroup analysis revealed that it was the exercise level that determined risk, independent of sitting time (comparing <4 h/d up to >8 h/d of sitting time).

— However, watching TV for >5 hours per day was associated with increased mortality regardless of physical activity, with noticeable increases in the group watching 3-5 hours TV/d.

–of course, these are observational studies, though large and consistent in their conclusions. But those doing less exercise may be very different from those doing more (poorer health, different social determinants of health, etc)

So, what does all of this mean? A few points:

–exercise is really important in maintaining a healthy life (as above, but also as shown in smaller randomized controlled trials: see here ​ for an array of blogs)

–the first studies suggest that there is a conservation of total energy expenditure, largely independent of how much exercise is done. This is likely evolutionarily determined. And perhaps the energy expended on exercise in part diverts energy away from bodily functions needed more by the sedentary (eg decreasing inflammation, which exercise itself helps). This finding of conservation of energy expenditure may help explain in part why increasing exercise is not by itself associated with weight loss.

–and it is interesting in the last study how TV watching seems to be a particularly bad sedentary activity, much worse than just sitting time (which would include such things as sitting at work, which might involve more thought, fidgeting, other activities than the more typically passive “vegging out” in front of the tube).

Primary Care Corner with Geoffrey Modest MD: Insulin pumps in type 1 dm, not the best solution

11 Apr, 17 | by gmodest

by Dr Geoffrey Modest

A recent trial looked at the effectiveness of insulin pump treatment versus multiple daily injections in patients with type I diabetes (see doi: 10.1136/bmj.j1285). Prior studies have suggested that pumps work better, but it may have been that those patients on pumps had received more intensive training and education than those on multiple daily injections. So, this study looked at patients given similar education, finding that the benefits of education/training outweighed the advantage of using the continuous subcutaneous insulin infusion (the pump) over multiple daily injections.



— 317 adult participants with type I diabetes, from multiple sites in the UK, were randomized to insulin pump therapy versus multiple daily injections

— Both received structured education: 267 attended one week DAFNE skills training courses (Dose Adjustment for Normal Eating), with a further visit at 6 weeks. This training stresses flexible dose adjustments according to eating, physical activity, and blood glucose level, and was slightly different for those on multiple daily injections vs pumps, to emphasize the specific use and problems with each.

— Mean age 41, 60% male, 91% white, BMI 27, mean duration of diabetes 18 years, 55% with macrovascular complications/43% retinopathy/7% neuropathy/19% nephropathy, 12% with at least one episode of severe hypoglycemia in the past year, mean hemoglobin A1c 9.1 with a range 5.7 to 16.7 and only 9% had a hemoglobin A1c < 7.5%

— Main outcome: changes in hemoglobin A1c at 2 years. Secondary outcomes included body weight, insulin dose, and episodes of moderate or severe hypoglycemia. They also looked at quality-of-life and treatment satisfaction



— Mean change in hemoglobin A1c at 2 years:

–decreased 0.85% with pump treatment

–decreased 0.42% with multiple daily injections

— with adjustment for missing values etc, the A1c difference was 0.24% between the therapies, which is neither clinically nor statistically significant (0.5% being considered clinically significant)

— on a per protocol analysis, the mean difference favoring pump treatment was 0.36%, which did have a p=0.02, still not clinically significant.

— But at 24 months, combining both treatment groups, there was a hemoglobin A1c decrease of 0.54%. In those with an A1c initially >7.5%, the A1c decrease was 0.64%. These decreases, presumably attributable to the education and training prior to beginning each of the drug regimens, were clinically significant.
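The arithmetic behind the "clinically significant" judgment is worth making explicit (a sketch using the study's figures; the 0.5% threshold is the one cited in the paper):

```python
# Mean A1c decreases at 2 years, per treatment arm
pump_drop, mdi_drop = 0.85, 0.42

unadjusted_diff = pump_drop - mdi_drop   # 0.43 percentage points
adjusted_diff = 0.24                     # after adjustment for missing values etc
clinically_significant = 0.50            # threshold used in the study

print(round(unadjusted_diff, 2), adjusted_diff < clinically_significant)
```

So even the unadjusted between-group difference falls short of the 0.5% bar, while the pooled within-group decrease of 0.54% (attributed to the structured education) clears it.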

— secondary outcomes:

— hypoglycemia: 49 episodes in 25 patients over 24 months, did not differ between groups. The incidence of severe hypoglycemia decreased by about half for both groups as compared to baseline.

— No statistically significant difference in body weight, but there was a slight increase in HDL cholesterol and decrease in total cholesterol in both groups, without a between-group difference. Insulin dose decreased with both treatments, a little more in those on pumps (by 0.07 IU/kg). No difference in the odds of proteinuria.

— Diabetic ketoacidosis: this was greater in the pump group than in the multiple daily injections group (17 versus 5 episodes); most episodes were related to infections, and 18% to technical failures in those using pumps

— psychosocial questionnaires: no difference between groups on the generic quality-of-life instrument. Both groups improved on the overall diabetes-specific quality-of-life questionnaire, with greater improvement in the pump group, though not always reaching statistical significance. Pump users showed greater improvement in treatment satisfaction, as well as more dietary freedom and less daily hassle, at both 12 and 24 months

— other findings: those on pumps had twice the number of contacts with diabetes professionals, especially during the 1st year. There also were more face-to-face contacts and of longer duration in the 2nd year of the study.



— Pumps are used less frequently in the UK, an estimated 6% of type I diabetics use pumps there versus 40% in the US (which may be related to differences in the medical cultures between the 2 countries, with us going more quickly/easily to high tech fixes).

— Pumps are clearly more expensive than multiple daily injections: the pumps cost £2500 in the UK plus an additional £1500 for consumables (cannulas, reservoirs, batteries). And this does not include the increased number of office visits noted above.


— As per the authors, “These results do not support a policy of using insulin pumps in  those with poor glycemic control until the effects of training on participants level of engagement in intensive self-management have been determined”.  I personally support a strong effort to encourage a healthier lifestyle for both type 1 and type 2 diabetics (and pretty much everyone else), for its myriad of positive health effects.  However, diabetes raises particular challenges, since dosing of insulin in particular is so dependent on consistency in diet/exercise as well as on other events that change insulin effectiveness (eg infections, which increase insulin resistance). There may certainly be some advantages of the pump in some patients, with the potential for having more variations in life (different foods, even a small piece of cake on a birthday; doing less exercise some days when not feeling well or the weather is bad; UTIs, etc) and more flexible dosing to compensate. But this study in type 1 diabetics does point out the primacy of structured education to improve glucose control, and then considering technological fixes in some cases on an individual basis. And I think the lessons are more broadly applicable to type 2’s and beyond…​

Primary Care Corner with Geoffrey Modest MD: ?Thyroid meds for subclinical hypothyroidism in older adults

10 Apr, 17 | by gmodest

​by Dr Geoffrey Modest

A randomized controlled trial assessed the effect of levothyroxine therapy in older adults with subclinical hypothyroidism, finding no clear benefit (see DOI: 10.1056/NEJMoa1603825 )


— 737 adults at least 65 years old with persistent subclinical hypothyroidism were randomized to levothyroxine (starting at 50 µg a day, or 25 µg if body weight was <50 kg or there was coronary heart disease, with subsequent dose adjustment to achieve a TSH between 0.4 and 4.6) versus placebo.

— Subclinical hypothyroidism was defined as TSH of 4.60-19.99, with a free thyroxine level that was within the normal reference range.

— Mean age 74, 54% women, 98% white, 97% in non-sheltered community housing, 50% with hypertension/15% diabetes/14% ischemic heart disease/13% osteoporosis/9% smokers, mean baseline TSH level was 6.4

— Primary outcomes were changes in the Hypothyroid Symptoms score and in the Tiredness score on the thyroid related quality-of-life questionnaire, at one year (range of each score was 0-100, with higher scores indicating more symptoms, and the minimum clinically important difference being 9 points for each scale). Baseline Hypothyroid Symptom score was 17, Tiredness score was 26

— Secondary outcomes included changes in generic health-related quality-of-life, comprehensive thyroid related quality-of-life, hand grip strength, executive cognitive function (as assessed with the letter-digit coding test which indicates the speed of processing according to the number of correct responses in matching 9 letters with 9 digits in 90 seconds), blood pressure, weight, BMI, waist circumference, activities of daily living, instrumental activities of daily living, fatal/nonfatal cardiovascular events.


— Mean TSH level decreased from 6.4 to 5.5 in the placebo group as compared to 3.6 in the levothyroxine group, with median dose of 50 µg of levothyroxine. This was achieved within 6-8 weeks after starting the medication.

— There was no difference in the mean change at one year in the Hypothyroid Symptom score (0.2 for each group).

— There was no significant difference in the change in the Tiredness score (3.2 in those on levothyroxine, 3.8 in those on placebo)

— There was no benefit in any of the secondary outcome measures

— There were extended outcomes assessed for about half the patients, at a median of 24.2 months, also finding no difference in the primary or secondary outcomes

— Adverse events: no difference



— Subclinical hypothyroidism is an important issue for a few reasons:

— It is very common, between 8 and 18% of adults > 65yo

— Thyroid hormone acts throughout the body, with receptors pretty much everywhere, affecting cognition, skeletal muscle function, the vascular tree and heart, bone, etc

— There are epidemiologic data suggesting that patients with subclinical hypothyroidism are at increased risk of coronary heart disease and perhaps heart failure. Data on total mortality are mixed.

— To me, there is a fundamental contradiction in the term “subclinical hypothyroidism”, since the normal limits of the free T4 level reflect the bell-shaped curve of community lab values, whereas TSH reflects the individual’s response to their own circulating hormone levels. And patients may not be asymptomatic (ie “subclinical”). Subclinical hypothyroidism therefore, I think, just reflects a low level of hypothyroidism, such that the depression of T4 levels still remains within the community norm but could still have effects on that individual’s body.

— Other studies have found that treating subclinical hypothyroidism has shown improvements in the Tiredness score. These studies have been small and somewhat underpowered, and often with younger patients.

— About half of the patients with subclinical hypothyroidism will progress to overt hypothyroidism with a low serum thyroxine level over 10 to 20 years, with an annual progression rate of 2 to 4%. However, some also have spontaneous recovery, less likely in those who are anti-TPO antibody positive

— Some limitations of the study which limit its generalizability:

​– The study tested a pretty uniform demographic (white people with stable housing and not a lot of medical comorbidities)

— The median achieved TSH level was 3.6, and some people believe that a more reasonable target is between 0.4 and 2.5 (i.e., it is possible that there would have been a measurable effect if they had achieved the lower and perhaps optimal TSH concentration)

— Very few people had TSH levels > 10, with a mean only slightly above the normal range, meaning that most of the patients had really mild hypothyroidism, which has a lower likelihood of progressing further or being symptomatic

— Hypothyroid symptom levels at trial entry were also quite low to begin with

— The trial was underpowered to detect an effect on cardiovascular events or mortality

— They did not measure thyroid antibody levels (which do predict to some extent which patients are more likely to progress to hypothyroidism)

— They did not find any difference in the speed of information processing, which has been found to be slowed in persons with subclinical hypothyroidism. However they did not assess other measures of cognitive function, though these are typically pretty blunt instruments (MMSE, MOCA, etc) and might not pick up very subtle though potentially important changes for the person and family/supports. But treating the subclinical hypothyroidism might still make a real difference for the individual, especially in the long-term (and a 65 year old in otherwise good health has a 20ish year life expectancy)

So, what is one to do with older patients who have subclinical hypothyroidism? The answer is not entirely clear, and this study really only adds the finding that short-term treatment of essentially asymptomatic patients with minimal laboratory abnormalities suggesting hypothyroidism does not seem to be effective. And I am particularly concerned about the potential links with atherosclerosis and cognitive decline. ​My sense is that it does seem reasonable to treat people with higher TSH levels (e.g.>10) since they have a higher likelihood of progressing to overt hypothyroidism. In those with lower TSH levels, it might be reasonable to check anti-TPO levels and treat those patients. It also might be reasonable to treat those who are more symptomatic than in this trial. But, overall, if one elects not to treat, it does make sense to follow these patients closely to see if they progress to more thyroid dysfunction. But one concern I have is that the usual symptoms of hypothyroidism, on the one hand, are pretty nonspecific, and, on the other hand, in many cases reveal themselves so slowly over time that patients may accommodate to them and not even notice them (though their treatment might still positively affect their quality-of-life).


for a review and critique of thyroid screening guidelines, see here

Primary Care Corner with Geoffrey Modest MD: Home-based CBT for low back pain

6 Apr, 17 | by gmodest

​by Dr Geoffrey Modest


As mentioned in a recent blog (see here ), the effectiveness of medications for chronic pain is somewhat limited, and more studies have been coming out about nonpharmacologic therapy, either as solo or adjunctive therapy. Cognitive behavioral therapy (CBT) has been shown to benefit patients with chronic low back pain (see blog referenced below), but patient access to such therapy may be limited. In this light, a new trial showed that home-based, telephonic therapy may be as good as in-person CBT (see doi:10.1001/jamainternmed.2017.0223).


— Details:

— a single center VA study enrolled 125 patients with chronic back pain, allocated equally to interactive voice response-based CBT (IVR-CBT) versus standard CBT

— this was a non-inferiority study, with primary outcome being change from baseline to 3 months in patient-reported Numeric Rating Scale (NRS) of pain, a scale from 0 to 10. Secondary outcomes included pain-related interference in daily activities; and emotional functioning, sleep quality, and quality of life at 3, 6, and 9 months. These were assessed by the West Haven-Yale Multidimensional Pain Inventory, and the Morris Disability Questionnaire.

— 97 men and 28 women, 65% white/26% black, mean age 60, 20% full-time employed/14% part-time/15% unemployed/29% retired, 18% disabled, 26% with a history of substance abuse, mean duration of back pain 11 years, 55% with nonspecific cause/43% with radiculopathy or spinal stenosis, 12% with opioid prescriptions at baseline; average NRS pain rating was 5.58.

— All patients received a manual specific to their intervention (CBT versus IVR-CBT), to be followed over 10 weeks. The manual included an introductory module on the rationale for CBT, 8 pain-coping skill modules, and a relapse prevention module. Both groups received IVR: 11 weeks of daily telephone calls assessing pain, sleep, step count, and pain-coping skill practice; whether patients were engaged in a progressive walking program; and whether they continued to receive care from their primary care clinician.

— In-person CBT involved weekly 30 to 40 minute treatment sessions, where the therapist reviewed the IVR reports and provided feedback during the sessions

— IVR-CBT involved receiving therapist reviews of the IVR reports in a 2 to 5 minute personalized feedback session


— Results:

— 82% completed at least 3 treatment sessions, though the IVR-CBT group attended 2.3 more sessions than in-person CBT (8.9 versus 6.6)

— NRS score: IVR-CBT decreased 0.77 points, versus a decrease of 0.84 with CBT, signifying noninferiority. Both groups had statistically significant reductions in average pain intensity at 3 and 6 months post-baseline but not after 9 months. These improvements were considered clinically meaningful changes, though of modest effect size.

— Statistically significant improvements in physical functioning, sleep quality, and quality of life at 3 months occurred in both treatment groups, with no difference between the groups.

— Post-treatment, 33% of those with standard CBT reported clinically meaningful improvement in pain intensity of at least 30% compared with 19% in those receiving IVR-CBT, not statistically significant.

— Adverse events occurred in 46 participants, mostly related to increased pain from exercise, with no difference between groups


— Commentary:

— IVR-CBT seems to offer a more accessible and lower-cost treatment option for patients with chronic low back pain, which may well apply to other types of chronic pain (there are data supporting CBT benefit for back pain, osteoarthritis, and fibromyalgia). CBT helps patients reconceptualize pain as influenced not only by biological factors but also by psychological, behavioral, and social ones. Through this process, patients learn cognitive (e.g., reframing catastrophic thoughts) and behavioral (e.g., relaxation techniques) coping skills, as elaborated in the article.

— It is notable that patients were more engaged with IVR-CBT, attending significantly more sessions than with standard in-person CBT. This suggests not just that IVR-CBT is acceptable to patients, but likely also that it decreases treatment burden and increases the accessibility and appeal of therapy.

— There are several limitations to the study, including that it was carried out in a single VA hospital with a small number of patients. Also, there was no nonintervention/placebo arm, though this concern may be less significant given that the average duration of pain was 11 years, suggesting that patients effectively act as their own controls.

— Also, it would be really interesting to know how those with a history of substance use disorder (26% in this study) or those on prescription opioids (12%) would do with IVR-CBT. The number of patients in this study was probably too small to yield meaningful insight into this.


So, this may well be a viable and accessible alternative or adjunct for chronic pain management, and may really help patients who are functionally impaired by the pain, adding to the increasing numbers of nonpharmacologic therapies for this common and difficult problem. It also adds to the impetus for us to offer these types of therapies instead of just jumping to prescribe medications.

see​ which reviews a few articles: the main one on tai chi for knee arthritis, another on mindfulness-based stress reduction for chronic pain, and another on CBT for back pain



