8 Comments
Harold Lehmann

Pragmatic trials should be preceded by an EHR query, and not rely on priors from the literature alone. An argument could be made that a retrospective in silico simulated trial should precede a prospective (more expensive) trial. However, I am not sure funders are enthusiastic about "delaying" a prospective study in this way. And then the prior should be skeptical (as per Spiegelhalter). Of course, a more explicit Bayesian design is not a bad idea.
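
A minimal sketch of what a skeptical prior in Spiegelhalter's sense could look like, assuming a normal prior on the log odds ratio; the prior scale, point estimate, and standard error below are placeholders, not numbers from any actual study:

```python
# Sketch of a skeptical prior (in Spiegelhalter's sense): a normal prior on the log odds
# ratio centred at 0 (no effect), scaled so a large benefit is a priori unlikely.
# All numbers below are placeholders, not taken from the trial.
import math
from scipy.stats import norm

# Skeptical prior: mean 0, with only 5% prior probability that the OR is below 0.5
prior_mean = 0.0
prior_sd = math.log(0.5) / norm.ppf(0.05)   # ~0.42

# Hypothetical trial result on the log-OR scale (point estimate and standard error)
data_mean, data_se = math.log(0.75), 0.25

# Conjugate normal-normal update: precision-weighted average of prior and data
w_prior, w_data = 1 / prior_sd**2, 1 / data_se**2
post_mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
post_sd = (w_prior + w_data) ** -0.5

print(f"posterior median OR ~ {math.exp(post_mean):.2f}, "
      f"P(OR < 1) = {norm.cdf(0, post_mean, post_sd):.2f}")
```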

Matt Perri

Thanks for this, well done. In a previous article of this thread the issue was "bias." I think it is important to consider how bias, in either direction, can play into sample size calculations based on statistical power. One's bias is sure to play a part in the effect size parameters, the assumptions that must be made in calculating (or being able to support) an anticipated effect size. The sample size samba is a dance most researchers know well, as sketched below. As a researcher, author, and reviewer, I am puzzled by how little attention is paid to these issues. Thanks again Dr. Kaul.
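
To make the samba concrete, here is a minimal sketch of how the assumed effect size drives the computed sample size, using the standard two-proportion normal approximation; the control risk and candidate treatment risks are made-up numbers:

```python
# Sketch: required sample size for a two-proportion comparison under different assumed
# effect sizes (normal approximation, pooled variance). All risks are illustrative.
from scipy.stats import norm

alpha, power = 0.05, 0.80
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)

p_control = 0.10
for p_treat in (0.08, 0.07, 0.06):          # increasingly optimistic assumed benefit
    delta = p_control - p_treat
    p_bar = (p_control + p_treat) / 2
    n_per_arm = 2 * (z_a + z_b) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    print(f"assumed {p_treat:.0%} vs {p_control:.0%}: ~{n_per_arm:,.0f} patients per arm")
```

A slightly more optimistic assumed benefit can cut the required enrollment by a large factor, which is exactly where bias can creep in.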

Mark Buchanan

Would it be more informative to use absolute instead of relative risk reduction? The two groups were roughly 850 each. The stress test group had 5 fewer events, an absolute risk reduction of 5/850 (about 0.6%), so the NNS (number needed to stress) to prevent one event was 850/5, or 170. Half of the events were death, half MI or unstable angina. So the NNS to prevent a death was 340. Or maybe a little more or a little less, given the wide confidence intervals. If the odds are presented in these terms, how would patients decide whether to accept stress testing?
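
A quick sketch of that arithmetic, assuming roughly 850 patients per arm and 5 fewer composite events (the rough figures quoted above, not exact trial counts):

```python
# Quick check of the arithmetic above; the counts are the rough figures quoted in the
# comment, not exact trial data.
arm_size = 850            # approximate patients per arm
fewer_events = 5          # roughly 5 fewer composite events in the stress-test group

arr = fewer_events / arm_size                      # absolute risk reduction ~ 0.6%
nns = 1 / arr                                      # number needed to stress ~ 170
nns_death = 1 / ((fewer_events / 2) / arm_size)    # half the events were deaths -> ~ 340

print(f"ARR = {arr:.2%}, NNS = {nns:.0f}, NNS to prevent one death = {nns_death:.0f}")
```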

Marco Bobbio

A useful analysis, and a reminder that inconclusive results are more frequent than authors declare, since they prefer to present conclusive results. Greetings to Sanjay, Marco

norstadt

I understand the numbers, but what is a stress test? What is the biologically plausible mechanism?

Ben

It's when they have you walk on a treadmill (or similar) to see if there are signs of ischemia, and potentially do another intervention sooner rather than later.

Dan

One of the problems with an underpowered study that produces an inconclusive finding is that it makes it difficult to secure funding for similar trials of sufficient size.

Frank Harrell

Sanjay, this is well written and points out the continuing serious error made by NEJM and most other journals: using phrases like "did not improve" in the conclusion. That is inappropriate, as you discussed so well. More honest phrases would include "the money was spent" or "at the current sample size there is insufficient evidence amassed to counter the supposition of no difference in outcomes between strategies." A Bayesian analysis would give an accurate and succinct result: "The probability of clinical similarity of outcomes for the two strategies is 0.25." [One could compute the actual probability; 0.25 is a rough guess.]
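
A minimal sketch of how such a probability of clinical similarity could be computed, assuming flat beta priors on each arm's event risk; the arm sizes, event counts, and similarity margin are placeholders, not the trial's numbers or any particular preferred model:

```python
# Sketch of a "probability of clinical similarity" calculation with flat Beta(1, 1) priors
# on each arm's event risk. Arm sizes, event counts, and the similarity margin are
# placeholders, not the trial's actual numbers.
import numpy as np

rng = np.random.default_rng(0)

n_stress, events_stress = 850, 25      # hypothetical
n_control, events_control = 850, 30    # hypothetical

# Posterior for each arm's event risk is Beta(events + 1, non-events + 1)
p_stress = rng.beta(events_stress + 1, n_stress - events_stress + 1, 100_000)
p_control = rng.beta(events_control + 1, n_control - events_control + 1, 100_000)

# Call the strategies "clinically similar" if the absolute risk difference is within 1 point
margin = 0.01
prob_similar = np.mean(np.abs(p_stress - p_control) < margin)

print(f"P(|risk difference| < {margin:.0%}) = {prob_similar:.2f}")
```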
