52 Comments
Cloverleaf1234

Or a lot less despondent, perhaps, if the results from a given analysis aren't what one was hoping for?

Andy Daykin

Is it possible that some of the methods were just poor to begin with or poor choices for a particular use case and shouldn't have been used at all? I'd imagine that a poor model compared to a good one would end up skewing results.

Ben

Almost equally interesting are the comment sections, which tend to show exactly what people take from a piece and where their own biases lie. It's depressing that the general idea has taken hold that scientific results are so inviolable that any questioning is tantamount to being anti-science. So even your very scientific critique gets commandeered by those who hold a much less probable view of reality as a way of saying, "Ha, I told you so, HCQ works," or, "Ha, I told you to follow the money." Any critique is taken to mean all critiques are equally valid.

Kirsten

Thank you, wow this is so important to understand.

tracy

I'd like to see you guys address those studies done 20 years ago demonstrating that practicing doctors in hospitals could NOT properly assess the true risk implied by a 95% figure for a rare illness. I can't remember the details well enough to find said study, but maybe this request might jog your memory.
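A quick sketch of the base-rate effect this comment seems to be recalling, using assumed numbers (a 95% sensitive, 95% specific test and a 1% prevalence; the exact figures from the original study aren't given here):

```python
# Toy base-rate calculation: even a test that is 95% sensitive and 95% specific
# yields a low post-test probability when the illness is rare.
# The prevalence and test characteristics below are assumptions for illustration.

sensitivity = 0.95   # P(test positive | disease)
specificity = 0.95   # P(test negative | no disease)
prevalence  = 0.01   # assumed rarity of the illness

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive

print(f"P(test positive)           = {p_positive:.3f}")
print(f"P(disease | positive test) = {ppv:.3f}")   # about 0.16, not 0.95
```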

fischer

The term you want is "Epistemic Confidence Interval." This study highlights that it's really, really big, and in fact dwarfs what we usually call the 95% CI.

dana ericson

Thank you for this.

Dr. Howard Mann

"The size of the treatment effect in large-scale studies is very small. Indeed, it's so small that the true size of the effect is deliberately hidden by researchers and others with a vested interest in the outcome of the studies. "

From: https://www.amazon.com/Stats-fooled-statistics-based-research-medicine/dp/1907313338

The conundrum: small absolute effect sizes and the usefulness of statistical gymnastics?

zi#UL3:}]U(L6#|

The problem of researchers reporting the results of just a single analytic approach on their dataset is unfortunately almost certainly not because they only looked at the dataset with a single analytic approach.

GERRY CREAGER

Those of us who have done clinical research are well aware of the bias pitfalls you describe. Those of us who are now analyzing the published reports look critically at the methods used and are aware of most of the best practices for analysis. I've discarded a large number of COVID studies and reports because of analytical errors, and my advice to my organization has been based on my best effort to represent good-quality studies and analyses.

I'd LOVE to see a machine-learning approach to selecting the "best" analytical process for clinical research but fear we're still a bit away from that as an automated feature. Then, there's still the requirement to determine which analytical process best fits the clinical scenario. It might not be the "best" statistical process. Still, we should strive to remove investigator bias.

Tami Secor

My Grandfather was a farmer with a sixth grade education. He always used to say, “figures don’t lie, liars figure.” Wisdom comes not from degrees at Ivy League schools but from critical thinking and discernment which are unfortunately in short supply these days. Glad to see you are doing your part to bring it back to the table.

The Skeptical Cardiologist

John,

Great post. I feel that my decade spent in academia, during which I collected my own data, made my own measurements, and performed my own statistical analyses, made it crystal clear that I had immense control over the "results." The desire to make those results "positive" or significant in some way is a bias that is hard to resist and hard to measure. The years spent in research help greatly in critically reading medical science and recognizing the biases and limitations of publications.

Anthony Pearson, MD

Matt Dubuc

A very informative read. Thanks.

Ally

Lies, damn lies and statistics. We always knew this. Thanks for explaining it.

Dr. Molly Rutherford

@drjohnm you should attend our summit in Oldham County October 8th. Ryan Cole, Richard Urso, Mollie James, and I are speaking on what our profession got wrong during the pandemic and how to move forward and rebuild trust in our profession. In my opinion, it starts by divorcing all the conflicted organizations (hospitals, govt, pharma, insurance companies) in favor of Direct Care, physician-owned hospitals, and surgery centers. Time to opt out!!

Tom Hogan

How can/should researchers choose which statistical tool(s) to use?

John Mandrola

That's the question. I wonder whether it would be wise to run numerous analyses and publish them all.
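One hedged sketch of what that could look like in practice: fit several defensible specifications to the same dataset and report the full spread of treatment-effect estimates rather than a single chosen model. The dataset, column names, and choice of specifications below are hypothetical.

```python
# Sketch of a "publish every reasonable analysis" report: several plausible
# model specifications are fit to the same (hypothetical) trial dataset and
# the whole table of estimates is reported, not just the most favorable row.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")   # hypothetical dataset with a binary outcome

specifications = {
    "unadjusted":       "outcome ~ treatment",
    "age_adjusted":     "outcome ~ treatment + age",
    "fully_adjusted":   "outcome ~ treatment + age + sex + baseline_severity",
    "with_interaction": "outcome ~ treatment * sex + age",
}

rows = []
for name, formula in specifications.items():
    fit = smf.logit(formula, data=df).fit(disp=False)
    ci = fit.conf_int()
    rows.append({
        "specification": name,
        "treatment_coef": fit.params["treatment"],
        "ci_low": ci.loc["treatment", 0],
        "ci_high": ci.loc["treatment", 1],
    })

# The spread across rows is itself informative about analytic flexibility.
print(pd.DataFrame(rows))
```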

Tom Hogan

I figured that you were driving us towards that question. :)

I'm still interested in the parachute question, which is outside the scope of this article, but I discussed it in a comment to Dr. Rutherford. There might be another kind of parachute study: one where absolute risk is low but the benefit relative to control is large. I'm pondering how researcher bias might come into play in that situation and what kind of statistical analyses might be necessary.
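For the arithmetic behind that distinction, here is a toy example with assumed event rates (not from any particular study) showing how a large relative benefit can sit alongside a small absolute benefit when baseline risk is low:

```python
# Assumed event rates, purely for illustration of relative vs. absolute benefit.
control_risk = 0.020   # 2.0% event rate without treatment
treated_risk = 0.010   # 1.0% event rate with treatment

arr = control_risk - treated_risk   # absolute risk reduction
rrr = arr / control_risk            # relative risk reduction
nnt = 1 / arr                       # number needed to treat

print(f"Relative risk reduction: {rrr:.0%}")   # 50%  -- sounds dramatic
print(f"Absolute risk reduction: {arr:.1%}")   # 1.0% -- modest
print(f"Number needed to treat:  {nnt:.0f}")   # 100 patients per event avoided
```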
