21 Comments

I wonder if Sensible Medicine would discuss the recently announced use and distribution of antiretroviral vaginal rings in sub-Saharan African countries. Data suggest that compliance is better with the vaginal ring than with oral antiretroviral treatment, but as best I can tell the only data currently available are on safety. I see no efficacy data for this delivery method, yet it is being touted as a game changer for HIV prevention.

The Lancet HIV, DOI: https://doi.org/10.1016/S2352-3018(23)00227-8


I can find no data which indicate that was ever the case. What is your source for your claim that before 1990 drug development was conducted primarily in universities and funded by the government?


Good one John! Keepin' 'em honest! 👏

"More events to count helps sort out signal from noise" also increases the odds of spurious "positive" statistical findings.

Back in my day, "peeking" at the data before the study was completed was a definite no-no. We can protect the validity of clinical research by (1) requiring PIs to file their research design before starting the trial, (2) prohibiting pre-completion design changes, and (3) requiring PIs to bank their research data for future review/analysis.

Standards matter.
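
The point about peeking can be made concrete with a toy simulation (hypothetical numbers, not drawn from any particular trial). Taking four unadjusted interim looks at data generated under a true null roughly doubles the false-positive rate compared with a single analysis at the end:

```python
import math
import random

random.seed(0)

def two_sided_p(z):
    # Two-sided p-value for a standard-normal test statistic.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def peeking_trial(looks=(25, 50, 75, 100)):
    # Data generated under the null: the true mean really is zero.
    data = [random.gauss(0, 1) for _ in range(max(looks))]
    # "Peek" at each interim look; declare victory at the first p < .05.
    return any(two_sided_p(sum(data[:n]) / math.sqrt(n)) < 0.05 for n in looks)

def single_look_trial(n=100):
    data = [random.gauss(0, 1) for _ in range(n)]
    return two_sided_p(sum(data) / math.sqrt(n)) < 0.05

sims = 20_000
peek_rate = sum(peeking_trial() for _ in range(sims)) / sims
single_rate = sum(single_look_trial() for _ in range(sims)) / sims
print(f"false-positive rate with 4 peeks: {peek_rate:.3f}")   # well above .05
print(f"false-positive rate, one look:    {single_rate:.3f}")  # close to .05
```

This is exactly why pre-registered designs and formal group-sequential stopping rules exist.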


One exception to that was the Lipid Research Clinics Coronary Primary Prevention Trial, published in JAMA in 1984 and heralded as providing definitive proof that cholesterol reduction was an essential element in the prevention of coronary artery disease. In the published pre-trial protocol the researchers set a higher-than-normal standard for statistical significance, stating that they would accept only p<.01 rather than the usual p<.05. They also specified a two-sided test rather than the less rigorous one-sided test. However, the final paper published in 1984 revealed that they had changed these criteria to p<.05 and used the one-sided test.

One year later JAMA published a blistering commentary on the study written by Dr. Richard Kronmal, a highly respected biostatistician at the University of Washington. It turned out that the results were not statistically significant using the standards set at the onset of the trial, but barely qualified under the revised statistical parameters. Dr. Kronmal wrote: "The critical aspect of this comment is not the p value that was set prior to the trial or the use of the one-sided test; it is that the observed beneficial effect of cholestyramine now has the characterization of 'statistically significant' (reported as p<.05) and that this is based on a change in criteria that apparently took place after analyzing the data."

I have never seen nor heard any response to this critique of this landmark study from the people who push cholesterol reduction.
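
The arithmetic behind the one-sided versus two-sided switch is worth seeing. The z value below is purely illustrative (not the trial's actual statistic), but it shows how a borderline result can pass a one-sided p<.05 threshold while failing both the two-sided test and the original p<.01 criterion:

```python
import math

def norm_cdf(z):
    # Standard-normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = 1.92  # hypothetical borderline test statistic
p_one_sided = 1 - norm_cdf(z)             # ~ .027: clears a one-sided .05 bar
p_two_sided = 2 * (1 - norm_cdf(abs(z)))  # ~ .055: fails two-sided .05, nowhere near .01
print(f"one-sided p = {p_one_sided:.4f}")
print(f"two-sided p = {p_two_sided:.4f}")
```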

author

I think they looked only at the number of events total. Interim looks blinded to treatment are fairly typical.


Apparently it's hard to get a negative trial published, so there's a push to find something positive in every trial. Beyond that, having invested in the drug trial, why would the drug company want to publish a study that says it doesn't work?

What ought to happen is that studies should proceed according to the approved research protocol, and they should be published regardless of whether they show the desired effect or not.


It seems that these days, as a matter of routine, investigators are “surprised” by lower event rates than they had banked on in their power calculations. This occurs across all domains of cardiology. At some point, that has to stop being a surprise.

Perhaps trialists should simply account for this fact, and plan to enrol more than they think they might need to. And if they happen to luck out and event rates are higher than expected, then the DSMB can step in if efficacy crosses stopping boundaries. Hopefully this will help us avoid clinically meaningless endpoints.
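A back-of-the-envelope sketch of the event-rate problem, using the standard two-proportion sample-size formula with made-up rates: if the control-arm event rate comes in at 6% instead of the 10% assumed in the power calculation, the sample needed to detect the same 25% relative risk reduction grows by well over half:

```python
import math

def n_per_arm(p_control, rel_reduction, z_alpha=1.96, z_power=1.2816):
    # Standard sample-size formula for comparing two proportions
    # (two-sided alpha = .05, 90% power by default).
    p_treat = p_control * (1 - rel_reduction)
    variance = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    return math.ceil((z_alpha + z_power) ** 2 * variance
                     / (p_control - p_treat) ** 2)

n_assumed = n_per_arm(0.10, 0.25)  # event rate assumed in the power calculation
n_actual = n_per_arm(0.06, 0.25)   # event rate actually observed in the trial
print(f"needed at a 10% control event rate: {n_assumed} per arm")
print(f"needed at a  6% control event rate: {n_actual} per arm")
```

Over-enrolling against this possibility, as suggested above, is cheap insurance compared with an underpowered trial.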


Apparently sales weren't going well, so a "study" was cooked up to generate some positive marketing. What is the biological plausibility that a drug designed to inhibit glucose reabsorption in the proximal renal tubules would have any significant benefit for patients with MI? Adding new endpoints that do have something to do with the action of the drug, and inventing a new statistical procedure to generate some positive numbers, plumbs new depths of scientific dishonesty.

By the way, all-cause mortality is the only truly hard endpoint. Cardiovascular death may be semi-hard, but a number of autopsy studies have shown that CVD as a clinical diagnosis may be inaccurate a third to half of the time. Hospitalization for heart failure may occur for a range of reasons and is, therefore, somewhat soft.


Thank you for sharing.

All your points are well taken. But if the investigators had used a 99.7% CI, this would not really have been a "positive" study.

(Of course, we would have missed an opportunity to learn about the importance of circumspection regarding "positive" results with little clinical impact, and about the problems with adding endpoints that might not really be meaningful.)
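
To illustrate the 99.7% CI point with hypothetical numbers (a hazard ratio of 0.89 with a 95% CI of 0.80 to 0.99, not the trial's actual figures): widening the interval from 95% to 99.7% is enough to pull it across 1.0.

```python
import math

hr, upper95 = 0.89, 0.99
# Back out the standard error on the log scale from the reported 95% upper bound.
se = (math.log(upper95) - math.log(hr)) / 1.96

def ci(point, se, z):
    return (math.exp(math.log(point) - z * se),
            math.exp(math.log(point) + z * se))

lo95, hi95 = ci(hr, se, 1.96)    # upper bound below 1 -> "positive"
lo997, hi997 = ci(hr, se, 2.97)  # z ~ 2.97 for 99.7% -> interval crosses 1
print(f"95%   CI: ({lo95:.2f}, {hi95:.2f})")
print(f"99.7% CI: ({lo997:.2f}, {hi997:.2f})")
```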


Will we ever get a totally independent trial that does not try to fudge the data and the outcome? In other words, a trial we could take at face value, without having to dig under the hood to discover that the truth and the results are far from what was advertised.


John-

Always enjoy your cogent analysis. It’s truly a great mini-journal club for all of us. Thank you-


"The problem was that hard clinical endpoints like death, CVD, HHF, MI, and even all-cause hospitalization were not different. ... The primary endpoint was driven by softer endpoints like new diagnoses of diabetes and weight loss—both of which are way down the hierarchical scale."

Slam dunk!


John -- Good observation.

I note that the study was funded by AstraZeneca, so my BS radar immediately starts flashing red. My take-home is that this is yet another way drug companies move the goal posts CLOSER to get a positive result from a trial.


I think what we have in this study is a good start. Studies like this one can lead to better-designed trials that tell us more. Maybe this result is bogus. Further work will tell.


1) If drug companies did not fund studies of their products, who would?

2) The results of the trial, and John’s analysis of them, are very clear. There is no evidence that the data were in any way manipulated, only that the inclusion of soft endpoints not directly related to post-MI cardiac health resulted in an entirely expected positive outcome for those endpoints only.


Well, back in the old days, before 1990, drug research was done at universities and paid for by government grants. Now we have pharma owning the research labs in many cases, as well as the marketing companies. On top of that, we have direct-to-consumer ads all over the place. I could go to Dr. Mandrola and badger him to prescribe me dapagliflozin based on nothing more than a slick commercial. If I were overweight or diabetic, he might prescribe it. But would I really have needed it? One really does have to look at the funding source as part of the big picture, because it puts everything in clear focus.


I'm pretty sure pharma has covered all the bases with generous donations and honoraria to selective researchers and university departments as well.

Dec 5, 2023·edited Dec 5, 2023

Plus, I forgot to mention, there are now paid companies who will write up your test results in a way that will make your drug look very effective. I think we're surrounded.

When I read a paper, the first things I check are the affiliation, conflict-of-interest, and funding-source sections. The info isn't always complete, but it can give an idea. Then I check prior publications by the authors to see where previous research funds came from.

I commented a couple of months ago on a YouTube video posted by an MD specializing in post-menopausal symptoms. I said the drug she was praising, a non-hormonal pill that directly targets the neurons that control body temperature, hadn't been adequately tested before its approval by the FDA. She blasted me as a misinformation spreader.

I searched PubMed and found she'd received money from the company manufacturing the drug. And the study on which the approval had been based, done by her and other researchers also funded by the same company, enrolled only 1,100 women. The goal was to reduce hot flashes and insomnia, but if you looked at the numbers, the relative risk of insomnia was something like 150% in the test versus the control group. Plus, there is a significant risk of liver damage.

The accusation of misinformation spreading is so breathtakingly hypocritical that it's almost comedic. The marketing is in full sway, the researchers are biased, and the accusations of misinformation fly at anyone who calls this out.


I admire your diligence and the depth of your research. If more people would put in just a fraction of your effort we would be a lot better off. Keep digging and keep posting the truth.


Thanks, you've been a big encouragement. Your book told me and my husband just how closely it's worth scrutinizing the sources and all the data they provide.

I've published in my own field, physics. I'm not saying people don't lie in physics too, but writing a summary and conclusion that directly contradict your actual results would be pretty unusual, and the reviewers would have called you out.
