6 Comments

Regarding Brian Nosek's study: an MD is an expert in only one limited field. A clinician who does medical research needs to rely on experts in data science in order to do credible science. It's a rare clinician who is also a great statistician, since statistics and data analysis seem to be the worst-taught subjects in medical school. Nosek's study confirms what many of us have long suspected: the expert statistician alone may not be enough. His proposal to be more transparent by publishing all the defensible statistical approaches (with their diverging outcomes) suggests that data science experts may need a referee as well. Otherwise, an already skeptical lay public will become frankly cynical about science in general. Data science, by seeming to support any conclusion on the same data set, will cause science to lose credibility.
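To make that concrete: the transparency proposal amounts to reporting every defensible analysis side by side instead of picking one. Here is a toy sketch in Python (all data and variable names are invented; this is not Nosek's actual protocol), showing two reasonable specifications of the same question reported together:

```python
# Toy illustration of reporting multiple defensible specifications
# side by side. Data are simulated; nothing here comes from a real study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
age = rng.normal(60, 10, n)                    # a plausible covariate
treated = rng.integers(0, 2, n).astype(float)  # hypothetical exposure
outcome = 0.3 * treated + 0.05 * age + rng.normal(0, 2, n)

# Specification 1: unadjusted comparison
m1 = sm.OLS(outcome, sm.add_constant(treated)).fit()

# Specification 2: adjusted for age
m2 = sm.OLS(outcome, sm.add_constant(np.column_stack([treated, age]))).fit()

print(f"unadjusted effect: {m1.params[1]:+.2f} (p = {m1.pvalues[1]:.3f})")
print(f"adjusted effect:   {m2.params[1]:+.2f} (p = {m2.pvalues[1]:.3f})")
```

Both specifications are defensible, and a reader who sees only one of them never learns how much the answer depends on that choice.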

My impression is that the field of statistics is just a child, whereas probability is the parent. Talk to any mathematician specializing in probability theory, and you will likely find that many statisticians don't really understand probability at a rigorous, deep, mathematical and philosophical level. It's more than just knowing to use non-parametric methods when the underlying data distribution is non-normal.
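That mechanical rule, by the way, is easy to state in code; the hard part is everything it leaves out. A minimal sketch with simulated, skewed data (none of it from any real study):

```python
# Skewed data: a t-test (which assumes roughly normal data) and a
# rank-based Mann-Whitney test can give different answers.
# Data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.lognormal(mean=0.0, sigma=1.0, size=40)  # skewed group A
b = rng.lognormal(mean=0.4, sigma=1.0, size=40)  # skewed group B

t_res = stats.ttest_ind(a, b)      # parametric, normality assumed
u_res = stats.mannwhitneyu(a, b)   # non-parametric, rank-based

print(f"t-test p = {t_res.pvalue:.3f}")
print(f"Mann-Whitney p = {u_res.pvalue:.3f}")
```

Knowing to reach for the second test is the easy part; understanding what either p-value actually means is where the probability training comes in.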

Nosek's study did adjust for level of expertise in statistics and didn't find that it changed the overall conclusion much. I wonder if it would be possible to adjust for level of expertise in probability. For instance, compare the data experts who came to statistics after learning probability with those who learned only enough probability to do statistics.

If such a study could be done, let's just hope that the probabilists don't disagree among themselves!

After all, probability experts are also human, and may show the same variable opinions on the same data set as the statisticians do. In that case, we simple-minded clinicians will have to clear two credibility hurdles: with our limited understanding of statistics, can we trust the statistical experts? And how do we choose among the probability experts when we understand even less about probability?


Something similar has happened with the study that showed a correlation between vitamin D levels and all-cause mortality based on a Mendelian randomization analysis, which rests on so many assumptions that when researchers say it is almost as good as an RCT, you wonder. The original article was Lancet Diabetes Endocrinol 2021; 9: 837–46. This week saw two letters arguing that the statistics and assumptions used were flawed, and the authors have accepted this and finally said that their study does not support a causal relationship between vitamin D and outcomes. This is basically a retraction of the results, which were initially touted in the media... not sure if the journal has subsequently published an editorial on this whole mess.
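For readers unfamiliar with the method: in its simplest form, Mendelian randomization boils down to a ratio of two regression coefficients, and the causal reading hangs entirely on assumptions the data cannot verify. A hedged sketch with made-up numbers (the summary statistics below are invented, not taken from the Lancet paper):

```python
# Simplest Mendelian randomization estimator: the Wald ratio.
# All numbers below are hypothetical, purely for illustration.
#
# The causal interpretation requires three assumptions, none of
# which the data can fully verify:
#   1. Relevance: the genetic variant truly affects the exposure.
#   2. Independence: the variant shares no confounders with the outcome.
#   3. Exclusion restriction: the variant affects the outcome ONLY
#      through the exposure (no pleiotropy).
beta_zx = 0.10  # hypothetical effect of variant on vitamin D level
beta_zy = 0.02  # hypothetical effect of variant on mortality

wald_ratio = beta_zy / beta_zx  # implied vitamin D -> mortality effect
print(f"implied causal effect: {wald_ratio:.2f}")
```

If any one of those three assumptions fails, the ratio is still computable but no longer causal, which is the kind of assumption-level flaw the rebuttal letters argued.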

I don't think they shared the data, but based on what was published, the authors of the rebuttal letters posted their comments. Also, this is UK Biobank data, so I assume the data are public... not sure.


I always learn a lot from your content, thank you!

Dec 19, 2022 · Liked by John Mandrola

Amen. Great article. Happy to see some follow-on, finally, to the generally ignored but deeply significant Nosek study.

Kudos.

Dec 19, 2022 · Liked by John Mandrola

Fascinating story - thanks for sharing and explaining. Should we be moving toward an expectation that data are shared on request? The highest-impact journals could start by requiring authors of accepted articles to put their data in a digital repository that could be opened on request or if an audit is needed. This would prevent another Surgisphere debacle and would also make the data "open source" for re-analysis.


Humility and generosity - indeed. I’m certain that those words mean nothing to current members of big pharma or the FDA, but dang it this gives me hope!
