When a "negative" trial causes confusion -- back to the story of cerebral embolic protection devices
Here is a huge trial, well conducted, simple endpoint, and I don't know how to translate it. Please help. The story of PROTECT TAVI.
For me, transcatheter aortic valve implantation (TAVI) approaches a miracle. My brain wonders: how in the world does a doctor place a valve in the aorta, thread it into the stenotic valve, squish the calcified old valve, and land a new one? Watching the procedure on x-ray does not do the miracle justice.
Miracle number one is the valve stays in place. Miracle number two is that the debris-traveling-north does not cause stroke in every patient. The incidence of stroke is a lot lower than I would have thought, but it is not zero.
Boston Scientific sells a cerebral embolic protection (CEP) device that goes into the carotid arteries during TAVI. The idea is to catch debris going to the brain and save the patient from stroke.
A second large RCT, called PROTECT TAVI, was presented at the ACC meeting recently and NEJM published the manuscript.
It builds on evidence I discussed last year on Sensible Medicine, when I covered the PROTECTED TAVR trial and interviewed Dr. David Cohen regarding their observational study looking at real world data on CEP.
PROTECT TAVI enrolled 7000 patients having TAVI in 33 UK centers, more than double the 3000 studied in the PROTECTED TAVR trial, published in 2022.
The problem with PROTECTED TAVR was the wide confidence intervals, which I wrote about here. Since the UK trial enrolled more patients, you would think it has a better chance of detecting signal from noise.
The trial was simple and elegant. One group gets TAVI with the CEP and the other group gets TAVI without the CEP. The primary endpoint is clear: stroke within 3 days.
The main results were that stroke rates were nearly the same: 2.1% vs 2.2% (CEP vs no CEP). The absolute difference is 0.02 percentage points, with a 95% confidence interval from -0.68 to +0.63 percentage points. The p-value, at 0.94, approaches 1.
In relative terms, the HR is 0.99 (0.73 to 1.34). Now you see the problem, right? Even with 7000 patients randomized, contained within the 95% confidence intervals is a 27% lower rate of stroke or 34% higher rate of stroke with the device. Both of these would be considered clinically meaningful effect sizes.
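To see just how wide that interval is relative to the signal, here is a back-of-envelope check using the rounded published rates and an assumed even split of 3500 patients per arm (the paper's exact counts will differ slightly):

```python
from math import sqrt

# Assumed round numbers: ~3500 patients per arm, stroke rates of
# 2.1% (CEP) and 2.2% (no CEP).
n = 3500
p_cep, p_none = 0.021, 0.022

# Standard error of the difference between two independent proportions
se = sqrt(p_cep * (1 - p_cep) / n + p_none * (1 - p_none) / n)
half_width = 1.96 * se  # 95% CI half-width

diff = p_cep - p_none
print(f"difference: {diff * 100:+.2f} pp")
print(f"95% CI: ({(diff - half_width) * 100:.2f}, {(diff + half_width) * 100:.2f}) pp")
```

The half-width works out to about ±0.68 percentage points, in line with the published interval. With event rates this low, even 7000 patients leaves room for clinically meaningful effects in either direction.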
Darn it. This is a problem.
What about disabling stroke? Sadly, we have a similar problem. It was 1.2% vs 1.4% in the CEP vs no-CEP arm. The HR is 0.89 (0.60 to 1.31). Within those 95% confidence intervals there could be a 40% reduction of stroke vs a 31% increase with the CEP vs no-CEP choice.
NEJM editors had the authors conclude that use of the CEP device did not reduce the incidence of stroke.
Yet I don’t think this is exactly correct. Here is why.
Observational Data to the Rescue
In my column over at theHeart.org | Medscape Cardiology, I cited an observational study conducted with US registry data. The authors looked at more than 400,000 patients of whom 53,000 or about 13% received a CEP. Now you have two groups—CEP and no-CEP.
It’s not randomized, so the authors made adjustments, one particularly nifty one, called an instrumental variable analysis to attempt to get closer to randomization.
This analysis—in many more patients than in the trials—found that CEP was associated with a 10% reduction in stroke and a 13% reduction in disabling stroke. Patients with a previous stroke had greater degrees of protection.
Perhaps now you can see the problem with the wide confidence intervals in the TAVI trials. If the observational study is correct in estimating effect size to be 10%, then you would need a much larger trial to sort out signal from noise.
PROTECT TAVI was powered to find a 33% reduction in stroke. If we wanted to detect a 10% relative reduction, an online calculator reveals that you would need a trial of more than 100,000 patients.
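As a sketch of that arithmetic, here is the standard two-proportion sample-size formula with assumed inputs of 80% power, two-sided alpha of 0.05, and a 2.2% control-arm stroke rate (the column's actual calculator and inputs may differ):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients per arm to detect p1 vs p2 (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

p_control = 0.022          # assumed stroke rate without CEP
p_cep = 0.9 * p_control    # a 10% relative reduction
n = n_per_arm(p_control, p_cep)
print(f"{n} per arm, {2 * n} total")
```

That lands north of 130,000 patients total, which is why a 10% effect, if real, is essentially undetectable in a 7000-patient trial.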
Easy Conclusions
First, from a plausibility POV, you would think that CEP should work. There are pictures of debris on the device that is removed from the body. That debris would have gone to the brain and caused stroke.
Second, neither trial found a significant difference in clinical stroke. But both trials had super wide confidence intervals.
Third, a large and well-done observational study suggests that CEP confers a small difference (benefit) in stroke. And in doing so, shows why the trials were inconclusive.
Fourth, neither trial found any subgroup which might benefit more from CEP use. Though the observational study suggests that patients with previous stroke may benefit most—but that is a tenuous finding given that it’s a non-random comparison.
Harder Conclusions: What to do at the bedside?
Given this data, would you want a CEP? Would you want it for a family member? If the answer is yes, and healthcare were more like buying a TV, what price would you be willing to pay for it?
Right now, CEP adds cost to the procedure. Doctors don’t get paid more for using the device, which involves making an extra arterial puncture and taking some time manipulating the device into the carotids.
The truth is I do not know what to recommend with CEP. You could take the view that two trials are negative and the device should be removed from the market. That seems extreme.
You could also take the opposite view: if there is even a 10% benefit, we should use it in everyone, despite the negative trials. But that adds a lot of cost for minimal benefit.
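To put a rough number on "minimal benefit," here is a back-of-envelope calculation assuming the trial's ~2.2% baseline stroke rate and the observational study's 10% relative reduction:

```python
baseline = 0.022           # assumed stroke rate without CEP
relative_reduction = 0.10  # the benefit suggested by the observational study

absolute_reduction = baseline * relative_reduction
nnt = 1 / absolute_reduction  # patients treated per stroke prevented

print(f"absolute risk reduction: {absolute_reduction * 100:.2f} pp")
print(f"number needed to treat: {nnt:.0f}")
```

Roughly 450 patients would need the device, with its added cost and extra arterial puncture, to prevent one stroke.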
You could try to carefully select the patients who get CEP. But would this not be a blind guess, given the lack of subgroup findings?
I truly do not know. Do you?
I would love to hear your ideas. Comments are open to all.
My view (UK perspective): the trials showed no clear benefit and were pretty clear that any such benefit would be smallish, i.e., not an improvement by a factor of 2, and also not a low NNT (say 50 or less). Not worth making the change and not worth funding it.
[Although if private patients in private hospitals want to stump up real cash for it, after being informed of the benefits, harms, and costs, well, fine, go ahead.]
Dig into the details, figure out why it doesn't help much, redesign the gadget or procedure, and come back with a mk2 version that might achieve a useful improvement.
This is how medicine, and science, advances.
Healthcare providers need to see decent improvements before stumping up for innovative / cost-escalating treatments.
The fundamental problem is considering a relative change as clinically meaningful. Clinical decisions should be based on absolute differences. It makes no sense to state that a ~30% relative change is clinically important without reference to the absolute baseline risk.