9 Comments

Fascinating topic and discussion. After realizing all of this research is based on completely unreliable food frequency questionnaires, I have worried little about the headlines. This research adds to the multiple reasons not to trust nutritional epidemiology.

I appreciate this, but if medical research abandoned +/- 2 sigma (95% CIs) and adopted +/- 3 sigma (99.7% CIs), you wouldn't even need this novel approach by Zeraatkar et al., because all of the BS would be filtered out no matter what analytical choice was made.

Ioannidis suggested this 2 decades ago.

Zeraatkar's proposal is like epicycles trying to preserve Ptolemy, or even the epicycles of Copernicus before Kepler's ellipses. It is an attempt to preserve the status quo of normal science (which has adopted the 2 sigma approach); see Kuhn. Let's just jump ahead to the new paradigm of 3 sigmas.

Of course, 99.7% is ostensibly an arbitrary choice; 50% or 85% would be arbitrary too. But empirically, the usefulness of the 3 sigma approach is well documented in a line running from Shewhart to Deming to Wheeler (~100 years in the making).
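To make the 2-vs-3 sigma point concrete, here is a toy simulation of my own (purely illustrative; the test count and seed are arbitrary, not anything from the episode):

```python
# Toy illustration only: how often purely null "food X vs outcome Y" tests
# clear a 2-sigma vs a 3-sigma threshold. Counts and seed are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_tests = 10_000                        # imagined number of independent null associations
z = rng.standard_normal(n_tests)        # z-statistics under the null (no true effects anywhere)

two_sigma = np.mean(np.abs(z) > 1.96)   # expect roughly 5% false positives
three_sigma = np.mean(np.abs(z) > 3.0)  # expect roughly 0.27% false positives

print(f"'Significant' at 2 sigma: {two_sigma:.2%}")
print(f"'Significant' at 3 sigma: {three_sigma:.2%}")
```

Same null data, but the 3 sigma cut lets through roughly one-twentieth as many spurious "findings."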

Does medicine want to be scientific or scientistic?

John,

Thank you for interviewing this brilliant and articulate researcher. I feel overwhelmed by the constant media focus on meaningless and/or overblown observational publications (MOOPs, https://theskepticalcardiologist.com/2023/09/10/categorizing-scientific-studies-from-practice-changing-to-meaningless-moops/), particularly those in the nutritional epidemiology field.

No matter how often the limitations are reviewed/highlighted/discussed by us skeptical clinicians, journalists seize on the tantalizing findings and the public believes they must be true.

Dr. Zeraatkar's approach shows how biased and/or uninformed scientists can almost always find a significant association between food X and outcome Y.

Excellent and thoughtful work as always!

Dr. P

Fascinating stuff as always. I do think there's potential in this way of doing things. However, I do have concerns about the insistence on data analysis. At a certain point, we can't do things based on data alone. It either works or it doesn't, and that's usually an individual reaction by the patient.

Excellent stuff. Keep up the great work with Prasad and Cifu.

This is more support for how flawed nutritional research is. Years ago I stopped reading published nutritional studies altogether.

Absolutely true. When the fundamental data are unreliable (as they are in all diet/nutritional studies), the method of statistical analysis is irrelevant. The basic principles of nutrition are actually pretty simple and straightforward, but the topic seems to inspire endless interest among the general public. Just look at the annual best seller lists for "non-fiction" books over the past 50 years.

If someone could apply this methodology to observational study topics where we KNOW there is a link, that would be very interesting as a way to confirm whether the approach holds up.

Wow, what an amazing analysis and review. I love it when I come away with a new understanding of a topic AND have more questions than when I started listening. Applying a "Monte Carlo simulation" across analytic methods is clearly an interesting approach. One question I had: if the investigators chose a particular statistical method because their study "required" that methodology, wouldn't the remaining analyses lead toward inappropriate conclusions and add to the noise rather than produce a more accurate conclusion? Could your analysis lead to an improved way of approaching this research in general?
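For anyone curious what that "run every reasonable analysis" idea can look like, here is a rough toy sketch of my own. It is not Dr. Zeraatkar's actual code or data; the variable names, sample sizes, and data-generating setup are all invented for illustration:

```python
# Invented sketch of "vibration of effects": one exposure-outcome question run
# through every covariate-adjustment choice, on simulated data in which the
# exposure has NO true effect but is correlated with the confounders.
from itertools import combinations

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2_000
confounders = rng.standard_normal((n, 6))                                  # e.g., age, BMI, smoking, ...
exposure = confounders @ rng.normal(0.5, 0.2, 6) + rng.standard_normal(n)  # "food X" intake
outcome = confounders @ rng.normal(0.5, 0.2, 6) + rng.standard_normal(n)   # no direct exposure effect

estimates, n_significant = [], 0
for k in range(7):                          # adjust for 0..6 of the confounders
    for cols in combinations(range(6), k):
        X = sm.add_constant(np.column_stack([exposure] + [confounders[:, c] for c in cols]))
        fit = sm.OLS(outcome, X).fit()
        estimates.append(fit.params[1])     # coefficient on the exposure term
        n_significant += fit.pvalues[1] < 0.05

print(f"{n_significant}/{len(estimates)} specifications call the null exposure 'significant'")
print(f"exposure estimate ranges from {min(estimates):.3f} to {max(estimates):.3f}")
```

The point is only that the same null question can look "significant" or not depending on which adjustment set the analyst happens to pick, which is exactly the noise the many-analysts approach tries to expose.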
