STRONG-HF – A Positive Trial that Does Not Help Clinical Medicine
The Study of the Week explores the larger question: what are trials for?
In his post yesterday, Adam discussed the STRONG-HF trial.
I will expand on it because it is a great example of a positive trial that does not add knowledge to the treatment of patients with heart failure.
The larger question is: what are trials for?
When I think about science classes in high school, I remember experiments as a way to explain nature. To discover things.
The worry I have about medical science is that a lot of our experiments produce little to no knowledge. Why is that?
Heart Failure Notes
The background of STRONG-HF concerns the treatment of patients with heart failure. Over the years, sequential trials have shown that four classes of drugs improve survival in patients with heart failure due to a weak squeeze. We call it HFrEF (heart failure with reduced ejection fraction).
The first drugs were renin-angiotensin system (RAS) blockers (ACE-I or ARB). Then investigators showed that adding beta-blockers to RAS inhibitors improved mortality. Then, in the RALES trial, mineralocorticoid receptor antagonists further lowered mortality. Most recently, the SGLT2i drugs improved outcomes in patients already on these three drugs.
Heart failure doctors want other doctors to get these four drugs going as soon as possible, and at the highest doses.
But trials are best-case scenarios. Trials select their patients, test the drugs in run-in periods, and use research coordinators to help patients navigate complex health systems. In the real world, there is none of that. So there are oodles of papers showing inadequate treatment of heart failure in real-world practice.
The Trial
STRONG-HF studied two treatment strategies after a patient was discharged from the hospital after treatment for heart failure. Nearly 90% of the patients were recruited from Africa and Russia.
One group got “usual care” and the other got high-intensity care, wherein patients received aggressive therapy even before hospital discharge and were then assessed at 1, 2, 3, and 6 weeks after discharge. Actual cardiologists performed these visits, which included up-titration of meds but also, as Adam correctly notes, MD-level assessments.
Here were the results. I know; it’s surprising:
The nearly 5-fold more frequent in-person visits led to far more patients in the high-intensity arm taking optimal doses of heart failure medications compared with those in the usual care arm.
The primary endpoint (hospital readmission for HF or death) occurred in 15.2% of patients in the high-intensity arm vs 23.1% in the usual care group (hazard ratio 0.66; 95% CI 0.50–0.86; P = 0.0021).
Readmission for heart failure drove the benefit, but all-cause death was 16% lower and cardiovascular death was 26% lower in the high-intensity arm.
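For readers who want the absolute numbers behind the hazard ratio, here is a quick back-of-envelope sketch. It uses only the event rates quoted above (15.2% vs 23.1% for the primary endpoint) and standard definitions of absolute risk reduction and number needed to treat; it is illustrative arithmetic, not a re-analysis of the trial data.

```python
# Back-of-envelope arithmetic from the reported STRONG-HF primary endpoint
# (HF readmission or death at 180 days), using the event rates quoted above.
high_intensity_rate = 0.152  # high-intensity arm
usual_care_rate = 0.231      # usual-care arm

arr = usual_care_rate - high_intensity_rate  # absolute risk reduction
nnt = 1 / arr                                # number needed to treat
rrr = arr / usual_care_rate                  # relative risk reduction

print(f"ARR: {arr:.1%}")  # 7.9%
print(f"NNT: {nnt:.0f}")  # 13 patients treated per event prevented
print(f"RRR: {rrr:.1%}")  # 34.2%
```

Note that an NNT of roughly 13 over six months is a large effect by heart failure standards, which is exactly why the interpretation of *what* drove it matters so much.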
The authors concluded:
An intensive treatment strategy of rapid up-titration of guideline-directed medication and close follow-up after an acute heart failure admission was readily accepted by patients because it reduced symptoms, improved quality of life, and reduced the risk of 180-day all-cause death or heart failure readmission compared with usual care.
My Comments:
The prominent journal Lancet published this trial. It was presented at the 2022 AHA meeting to great celebration. Most experts held this trial up as a win for aggressive up-titration of medicines.
Such an interpretation borders on the ridiculous, because there is a lot more going on here than medication prescribing alone.
Most patients were recruited from Africa and Russia. One group gets extra attention in the hospital, then 4 extra visits with a cardiologist in the first 6 weeks. The other group gets usual care. I’ve not been to Africa or Russia, but I’ve seen “usual care” in the US, and it is approximately 99% less than the high-intensity arm of this trial.
Then the group that gets nearly 5x more attention has better outcomes. How does this answer any practical question? A: It does not.
What, then, was the study for? I don’t know the answer. I am open to your ideas.
I do know that if hospital systems were incentivized to help patients with heart failure, they would improve post-discharge follow-up to something along the lines of STRONG-HF. Yet we don’t need a trial to tell us that would help.
Which brings me to the larger issue with medical science.
The best part of STRONG-HF is that I can’t find a profit-motive underlying it. That’s not the case for many trials in the cardiology (and oncology) space.
The next time you read a trial in these areas, ask yourself whether it was designed to answer a question or to garner a positive result.
A recent example:
The TRILUMINATE trial of a tricuspid valve clipping device for patients with leaky valves. I wrote about it here on Sensible Medicine.
One group got the procedure, the other got no procedure, just tablets. The primary endpoint favored the device arm, but was driven solely by quality of life measures. There were no differences in death, heart failure admissions or even walking distance.
If you are trying to answer an important question, if you are trying to learn something about a new procedure, how (or why) do you design a trial in which one group gets a procedure, the other does not, and the measured endpoint is subjective? We learn about placebo controls in medical school.
My friends, we spend a lot of time on this site dissecting the guts of trials. Yet most of the bias occurs way before the first patient is enrolled.
When I find a trial that actually tries to answer a question about nature, I will let you know. In the meantime, always think about the purpose of a trial.
A concluding note to remind everyone that Sensible Medicine remains a user-supported site. We do not have advertising support. We depend on your generosity. And it’s been great so far. Thank you. JMM
Reader Comments
Slightly off topic: in the US, one big barrier to getting heart failure patients on SGLT2is is cost. The other meds are relatively inexpensive, and I can get people on them. I’ve even failed to get insurance coverage for an SGLT2i when a patient has both HFrEF and diabetes.
One of the main reasons trials differ from real life is the relative ease of getting the pharmaceutical agents.
I’m less pessimistic than you about this trial and its actual utility in clinical practice.
The “guidelines” (which become progressively more full of bunk with every passing day) promote ‘4 pillars for everyone as soon as yesterday’ in the absence of evidence. This trial actually (indirectly) suggests that IF you can get patients quickly titrated on all 4 drugs, it does in fact provide a benefit (even if driven primarily, though not entirely, by reductions in heart failure hospitalizations). That’s more than could be said before this study (and more than could be said for the basis of the guidelines themselves). That’s a useful thing to have some confirmation for.
And to me it actually serves as a brake on the mindless enthusiasm for “all 4 drugs ASAP for everyone, and even in the water supply”… because if you can’t replicate the conditions of this trial (which most in Canada and the rest of North America cannot), then such promotion remains in the absence of evidence. I also focus on the fact that the average age was 63. It tells me that, in young HFrEF patients for whom I have the good fortune to be able to provide this degree of intensive follow-up, this rapid up-titration of the “4 pillars” is worth trying; but for most people, there is no evidence to compel me to deviate from the progressive titration style of yore.
I agree it “makes sense” that intensive follow-up care “should help.” But this trial actually proves it. That someone went out and proved a motherhood statement correct should be celebrated.