No, We Should Not Denounce Digoxin
The Study of the Week goes back in history today to look at the DIG trial. You might be surprised.
William Withering first described the medical use of digitalis in 1785. Wikipedia describes it as the beginning of modern therapeutics. The FDA approved digoxin in 1998. Since then there has been controversy over its use.
The only time I’ve ever been a protagonist in a debate has been in my defense of digoxin as a useful drug.
In recent years, a large number of observational studies have been published—most of which associate dig use with increased death rates. The inherent problem with these studies is that sicker patients receive the drug, and this selection bias mars interpretation of the results.
Digoxin is an inexpensive generic medicine taken once daily. Briefly, it inhibits the sodium-potassium pump (Na+/K+-ATPase), which raises intracellular sodium and, via the sodium-calcium exchanger, leads to increased calcium in the cardiac cell. This results in two effects: a stronger contraction (positive inotropy) and a slower heart rate, especially during AF.
The DIG Trial
The hypothesis of the DIG trial was that digoxin would reduce mortality vs placebo in patients with heart failure due to a reduced ejection fraction. Patients could be included if they had a LVEF < 45% though the mean LVEF of enrolled patients was 28%. The average age of patients was 63 years and most were men (78%).
The recommended dosing of digoxin came via an algorithm based on age, sex, weight and kidney function—like we do today in our heads.
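For readers who like to see the arithmetic, here is a minimal Python sketch of what such an algorithm looks like. The Cockcroft-Gault creatinine-clearance estimate is a standard formula; the dose tiers below are hypothetical placeholders invented purely for illustration, not the DIG trial's actual dosing table, and nothing here is dosing advice.

```python
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    """Estimate creatinine clearance (mL/min) via Cockcroft-Gault."""
    crcl = ((140 - age) * weight_kg) / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def illustrative_digoxin_dose(age, weight_kg, scr_mg_dl, female):
    """Pick a once-daily dose tier (mg) from estimated renal function.
    The cut-points and doses are HYPOTHETICAL placeholders, not the
    DIG trial's algorithm -- never use clinically."""
    crcl = cockcroft_gault(age, weight_kg, scr_mg_dl, female)
    if crcl < 30:
        return 0.0625   # placeholder tier
    elif crcl < 60:
        return 0.125    # placeholder tier
    else:
        return 0.25     # placeholder tier

# Example: a 63-year-old, 80 kg man with creatinine 1.0 mg/dL
print(round(cockcroft_gault(63, 80, 1.0, False), 1))  # 85.6
```

The point of the algorithm, then as now, is the same: lower renal clearance means a lower maintenance dose, because digoxin is renally cleared and has a narrow therapeutic window.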
DIG was a large trial—randomizing about 3400 patients each to the drug or placebo arm. The primary endpoint was death. Follow-up averaged about three years.
Results
There were 1181 deaths (34.8 percent) with digoxin and 1194 deaths (35.1 percent) with placebo (risk ratio when digoxin was compared with placebo, 0.99; 95 percent confidence interval, 0.91 to 1.07; P = 0.80).
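You can roughly reproduce that risk ratio from the raw counts. In the sketch below, the group sizes (3397 and 3403) are back-computed from the reported percentages, so treat them as approximations, and the interval uses a simple Wald method on the log scale, which will not exactly match the paper's published 0.91 to 1.07.

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of group 1 vs group 2 with a Wald CI on the log scale."""
    p1, p2 = a / n1, b / n2
    rr = p1 / p2
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Deaths from the trial report; group sizes approximated from the percentages.
rr, lo, hi = risk_ratio_ci(1181, 3397, 1194, 3403)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 0.99 0.93 1.06
```

With nearly 2400 deaths, the interval is tight for the same reason any estimate tightens: the standard error shrinks as event counts grow.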
Given the tight confidence intervals, long follow-up and large number of events, I might tiptoe into writing that there seems to be evidence of absence of a mortality effect from digoxin.
Now let me show you one of the least well known findings from the trial. I’ve made a slide of it. Digoxin significantly reduced hospitalizations due to heart failure (HHF), which was a secondary endpoint.
The trial was published in 1997, but 20 years on, the heart failure community now leans heavily on the ability of a new drug to reduce HHF.
Here is the slide with another comparison. I’ve added the HHF reduction seen with dapagliflozin in the DAPA-HF trial and the HHF reduction with sacubitril/valsartan seen in the PARAGON-HF trial (HFpEF).
My point is that the 28% reduction in HHF with digoxin is quite similar to the celebrated new drugs of today.
Let me show you something else. Milton Packer wrote the editorial for the DIG trial.
Here is how he described the reduction in HHF in DIG.
In the trial reported here, digoxin had no effect on mortality but reduced the risk of hospitalization by 8 percent. This reduction, though significant, is so small that physicians would avoid only 9 hospitalizations by treating 1000 patients with digoxin for one year.
He downplayed it by describing it in absolute terms. In the previous paragraph he wrote that it was hard to determine causes of death or hospitalization.
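The arithmetic behind "9 hospitalizations per 1000 patients per year" is simple to check. The baseline yearly hospitalization risk of ~11% below is an assumed, illustrative figure chosen only to show how a relative reduction translates to absolute events; it is not taken from the trial.

```python
def events_avoided_per_1000(baseline_risk, relative_reduction):
    """Absolute events avoided per 1000 patients over one time period."""
    return 1000 * baseline_risk * relative_reduction

# Illustrative: an assumed ~11% yearly hospitalization risk and the
# 8% relative reduction quoted in the editorial.
print(round(events_avoided_per_1000(0.11, 0.08)))  # 9
```

The same relative reduction looks large or small depending on the baseline risk, which is exactly why absolute framing can downplay (or inflate) a result.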
This framing is why I have argued that heart failure trialists should tell us how their drugs affect total hospitalizations. The DIG trialists gave us this information. And, indeed, digoxin did reduce total hospitalizations.
Look also at the proportion of hospitalizations. In 1997, hospitalizations for heart failure in the treatment arms were roughly 40% of total hospitalizations (26/64).
In the most recent HFpEF trial of an SGLT2i drug (dapagliflozin in the DELIVER trial), total hospitalizations were not reported. In the EMPEROR-Preserved trial of empagliflozin vs placebo in HFpEF, total hospitalizations were reduced, but the reduction did not reach statistical significance. More important, though, is that HHF now represents a much smaller proportion of total hospitalizations (16%).
This shift occurred because the modern patient with heart failure is older and has more competing causes of illness. Competing causes of ill health (co-morbidity) need to be considered when translating trial data to patients. The shift also makes the old endpoint of heart failure hospitalization less important.
Summary:
Digoxin, like any drug, has to be used carefully. It has a narrow therapeutic window. Excess levels can cause toxicity. Yet I think we’ve grown quite careful with the drug.
When I trained at Indiana in the 1990s, about every fourth unknown ECG was a dig-toxic rhythm. I have not seen one in years. I am pretty sure young doctors could not identify a dig-toxic rhythm—because it’s so rare.
My point in this post is to highlight data over dogma.
When you look at the data—not listen to dogma—digoxin compares fairly well to the new heart failure drugs—many of which also have no effect on mortality and reduce only one reason to be hospitalized.
Digoxin has been less well studied for its use in atrial fibrillation. It slows the ventricular response rate without lowering blood pressure.
I feel somewhat reassured using the drug in this setting because in sicker patients (enrolled in DIG) there was no signal of harm. Though I agree we need more data for this indication.
This posting reminds me of a woman, about age 65, once referred to me (a general internist) in desperation by her family doctor. Having enjoyed power and respect as an elementary school teacher in charge of her own classroom, she had little tolerance for doctors telling her what to do. Also, she had been unable to tolerate heart-rate-controlling drugs (various beta-blockers, calcium channel antagonists) and was unwilling to try any others.
When I met her, she could barely walk from waiting room to examining room, and turned out to be (quietly) in frank pulmonary edema with very fast heart rate and ECG proof of atrial fibrillation, but no valvular disease. At my suggestion to try digoxin, in which I thought I might interest her as an "entirely different older drug that few people still know how to use", my friendly South Africa-trained cardiologist colleague agreed to admit her to a CCU - perhaps the first time in recent memory that anyone was admitted to be "digitalized."
This treatment worked like hot damn to slow her heart rate, bringing her out of pulmonary edema and back to independent function. She was never willing to try anything else, and insisted that digoxin had saved her life (I agreed). Over some years she never lost her crustiness, but she did lose a great deal of weight, and at least 10 years later she was thriving. This was almost certainly not an accident, in my opinion.
Your detailed objective reviews of the literature are priceless. For digoxin, you observe that studies with unfavorable outcomes share common characteristics: they are prospective, the digoxin arm has sicker patients, and the dosages used were too high. The outcome, then, as expected, is sicker patients developing dig toxicity.

I have noticed this pattern, unfortunately, in our current FDA approval process. HHS terminated the FDA Unapproved Drugs Initiative in November 2020 for multiple reasons, one being the pattern of older drugs being shown in a bad light in comparison to newer patented alternatives. Indeed, digoxin would fall into this category of drugs in use prior to June 25, 1938, when Congress passed the Federal Food, Drug, and Cosmetic Act. HHS found this element of the FDA Modernization Act resulted in exclusion of generics in lieu of costly alternatives. I first noticed this pattern when a new and very expensive IV antiemetic, Zofran, was introduced and the tried-and-true antiemetic droperidol was given a black box warning for prolonged QT syndrome.

More recently, strange biases emerged with the COVID pandemic. Early in the pandemic there were two fraudulent studies published in the Lancet and NEJM. These were touted by NIH and CDC government public health officials as the end of HCQ as a treatment for COVID on the very day the articles appeared on the preprint server, with no peer review. One claimed the inexpensive medication HCQ did not demonstrate efficacy, and the other reported unexpected severe toxicities. Both studies were retracted for fraudulent data; indeed, one week of peer review revealed the studies were entirely falsified. At the time, Prodromos at U Illinois reviewed all the HCQ literature, presumably searching for any patterns of bias or, worse yet, fraud. Of the 43 published reports, 25 reported efficacy for HCQ, 15 showed no improvement, and 3 demonstrated negative efficacy.
Fortunately, no additional fraudulent papers were found. The conclusion was that HCQ was consistently effective in unbiased studies. Eleven studies treated early by design, and all demonstrated efficacy. The studies showing no benefit or negative efficacy gave treatment in doses known to be toxic and late in the disease, that is, more than 48 hours after hospital admission, with most patients already in the ICU on ventilators. Total drug acquisition costs were $20 for HCQ versus $3,100 for remdesivir.

My point here is that there may be times when unexplained biases in the literature can be explained. I now routinely analyze the potential for financial biases in the medical-industrial complex before I conduct literature reviews. Freedom of Information documents reveal that from 2010 to 2020, NIH employees received $350M in royalty payments. Remarkably, there is no requirement for disclosure revealing who paid how much to whom. We know that in this period NIH employee Fauci received 23 royalty payments, NIH Director Collins received 14, and Clifford Lane 8. We are not permitted to know how much was paid, or which companies or drugs were subject to royalty payments. I have formally recommended that all such payments be fully disclosed.

I know you work very hard on your reviews. The presence of royalty payments to key officials is pertinent to implicit biases in study design. The NIH awards $30B per year to roughly 56,000 researchers. The refusal of NIH to provide transparency on $350M in royalty payments to its employees makes me suspicious that NIH studies have a higher rate of unintentional or intentional biases in study design.