Bruce Kallick:

You’ve completely missed my point. In each case I was asking you to consider a Gedankenexperiment that would yield exactly the same data set, but under circumstances in which the same statistical analysis would be considered entirely valid.

Druceratops:

Perhaps we are talking past each other, but the point I was trying to make is that predefining a new variable of interest will inherently change the execution of the study and the resulting data set, because you will factor the new variable into your randomization strategy / group assignment and your sample size / power calculations. Even under the artificial presumption that the pre-defined and retrospective studies generated identical data, confidence in the pre-defined study would be higher, because we would know that its design reduced the potential for confounders, whereas we would not have the same confidence in the retrospective analysis.

Bruce Kallick:

Indeed, I think we have been talking past each other, but thanks to your explanation I now do fully understand the objections made to Dr. Mandrola’s request. I had incorrectly assumed that the hypothetical alternative scenario would have yielded, if not exactly the same data set, at least one that was statistically equivalent.

Yet, I still don’t understand the objection to the authors adding the additional sentence to the conclusion. Here, I think you’ve missed my point because in the hypothetical scenario (terminating the trial at the beginning of the pandemic) there would have been no retrospective study, the data sets would indeed have been identical, and there would have been no predefined new variable of interest.

Dennis Robert:

What if they had done a post hoc power analysis for this pre-COVID subgroup which showed a positive benefit, and found that the power was adequate? I know post hoc power analysis is frowned upon by many, but in dire circumstances like these I think it is justifiable. Not sure whether the study authors reported this to the FDA or not.

Druceratops:

Power analysis is performed in the study planning and design phase in order to determine the sample size needed to evaluate the hypothesis under test. Once you have identified an effect between the treatment and control groups (or between subgroups, as in the CARDIOMEMS study), it is no longer informative. The one place I have seen post hoc power analysis applied is when a subgroup difference does not reach significance and you want to show that the study was not adequately powered for that analysis (i.e., we cannot rule out a treatment effect because the study was underpowered for that variable).
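
To make the planning-phase use concrete, here is a minimal sketch using Python's statsmodels; the effect size and power targets are illustrative assumptions, not values from the CARDIOMEMS study:

```python
# Design-phase power analysis: solve for the per-group sample size needed
# to detect an assumed standardized effect (Cohen's d = 0.3) with 80% power
# at alpha = 0.05. All numbers here are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Required sample size per group: {n_per_group:.0f}")
```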

I did some further looking into the study, and the pre-COVID analysis was actually pre-specified in an amendment to the statistical analysis plan of the protocol while the study was blinded, so this is not an example of a true post hoc analysis. This was justified on the basis that the COVID-19 pandemic was a confounding factor for the study, but the statistical criterion they used to substantiate that the pre-COVID and post-COVID cohorts were different (p = 0.15) was not as conclusive as one would like. The investigators cite that hospital visits were significantly reduced after the pandemic began compared to before. In my opinion, the lingering issue is that the study investigators have only loosely demonstrated that the pre- and post-pandemic cohorts differed in their access to medical care, but have not really demonstrated a plausible mechanism for why that would essentially eliminate the benefit of the CARDIOMEMS device. However, given that the FDA signed off on the revised pre-specified statistical analysis plan while the study was still blinded, it was probably difficult to reject the application.
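
For what it's worth, that kind of period comparison usually comes down to a treatment-by-period interaction test. A hedged sketch of what that looks like, on simulated data with hypothetical column names (the trial's actual endpoint model was more involved):

```python
# Sketch of a treatment-by-period interaction test on simulated event counts;
# a significant interaction would support the claim that the pre- and
# post-pandemic treatment effects genuinely differ. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),   # hypothetical treatment indicator
    "post_covid": rng.integers(0, 2, n),  # hypothetical period indicator
})
# Simulate counts in which the treatment benefit exists pre-COVID only.
rate = np.exp(0.5 - 0.4 * df["treatment"] * (1 - df["post_covid"]))
df["events"] = rng.poisson(rate)

model = smf.poisson("events ~ treatment * post_covid", data=df).fit(disp=False)
print(model.pvalues["treatment:post_covid"])
```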

Dennis Robert:

The power analysis done during study protocol and SAP development is different; I am speaking only about post hoc analysis. Regardless of whether it is typically done only to show that 'absence of evidence is not evidence of absence' for a subgroup effect, it can still be used to argue: 'look, we know the primary endpoint didn't show any benefit, but this subgroup had a significant effect, and our post hoc power analysis shows that the sample size for this subgroup analysis also had adequate power.' Not concrete evidence, but I would take this as positive hypothesis generation.
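
As a rough sketch of that post hoc calculation (the effect size and subgroup size below are hypothetical, not GUIDE-HF numbers):

```python
# Post hoc power: given the observed standardized effect in a subgroup and
# the subgroup's actual per-arm sample size, solve for the achieved power.
# Both inputs are hypothetical placeholders.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
achieved_power = analysis.solve_power(effect_size=0.25, nobs1=400,
                                      alpha=0.05, power=None)
print(f"Achieved power in the subgroup: {achieved_power:.2f}")
```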

Anyway, for the GUIDE-HF trial this discussion is perhaps moot, since, as you mention, the analysis was pre-specified and approved before unblinding. In that regard, I think the main article's 'how not to look at the data' framing is misleading, because the example it uses (the CARDIOMEMS GUIDE-HF trial) is not the right one.
