Agree that calling a spade a spade is a good start.
Dr. JMM, both here and on TWIC, has for years used the term “non-nefarious” when analyzing many examples of questionable/bad science. And I’ve started wondering about the “non” part of those characterizations. More recently, he’s alluded to gaping methodology flaws with something akin to “those esteemed researchers ought to have known better”, particularly with some egregious examples among device trials.
I believe Dr. JMM is being far too polite. And I agree that the process is being bastardized by incentives, which themselves are being propagated by failures of the FDA and top-flight journal editors to discharge their gatekeeping fiduciary duties.
"There’s very little we can do to detect p-hacking, base rate neglect, and publication bias...."
I found a pretty clear-cut case of publication bias earlier this year. I brought it up with some doctors and researchers on Substack and a certain medical science forum that I won't name here. No one cared. Doctors are very willing to be led by the nose, citing something from JAMA uncritically, and are more prone to motivated reasoning than any group of people I've ever met. The researchers themselves are certainly aware of all the shenanigans. Let's not pretend any of this is unintentional.
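For readers who haven't seen the mechanism spelled out: publication bias inflates the literature even when nobody fabricates anything, simply because "significant" results are more likely to get published. Here is a toy simulation (mine, not from the thread or the article) in which the true effect is exactly zero, yet the published subset shows a sizable apparent effect. All numbers and the significance filter are illustrative assumptions.

```python
import random
import math
import statistics

random.seed(0)
N_STUDIES = 2000      # hypothetical number of independent studies
N_SUBJECTS = 30       # subjects per study
TRUE_EFFECT = 0.0     # the intervention genuinely does nothing

all_effects, published = [], []
for _ in range(N_STUDIES):
    # Each study measures an outcome with unit-variance noise.
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_SUBJECTS)]
    est = statistics.mean(sample)
    z = est * math.sqrt(N_SUBJECTS)   # z-statistic (sigma known = 1)
    all_effects.append(est)
    if abs(z) > 1.96:                 # "significant" -> gets published
        published.append(est)

print(f"studies run:       {N_STUDIES}")
print(f"studies published: {len(published)}")
print(f"mean |effect|, all studies:    {statistics.mean(map(abs, all_effects)):.3f}")
print(f"mean |effect|, published only: {statistics.mean(map(abs, published)):.3f}")
```

Only roughly 5% of studies clear the significance filter (as expected under a true null), but the published subset's average effect size is several times larger than the full set's, with no misconduct required anywhere in the pipeline.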
As a graybeard who has grappled with the propensity of healthcare to perpetuate error (see Cochrane's Brake: Randomized Controlled Trials and the Doctor's Pen), it seems to me that the errors mentioned are most common among young investigators who have not seen enough of the recurring patterns to include such critiques in their discussion of their own results. For me, it took until my 40s, after years of immersion in my primary research topic, to begin to ask the hard questions, and maybe another 10 years to accept that the propensity to error is nearly universal in healthcare. Perhaps we need to start earlier in our education/training to open the eyes of our students. I remember being astonished by a young Vinay Prasad's recognition of "reversals" as historical evidence of error eventually unmasked. I hold high hopes for his return to the battlefield.
One serious problem is when they conduct a study based on a specific set of parameters, then prescribe based on different parameters not studied. For example, psychiatric drugs like antidepressants and ADHD drugs are typically studied for two months but prescribed for years or even a lifetime.
The problem with this is obvious. A short-term benefit likely wears off over time as tolerance to the drug develops. Over the long term, adverse side effects from the drug can build up and get progressively worse. To make matters worse, psychiatrists often assume the adverse effects of the medications are evidence of progression in the original disorder, leading to a negative feedback loop of higher doses and additional medications.
Good one! I would also add that 4 of the 5 Cochrane risk-of-bias (quality) domains for RCTs are solely up to the researcher: randomization and allocation concealment, blinded outcome assessors, intention-to-treat analysis, and pre-registration. Missing outcome data and blinding of participants/providers are the ones the researcher has little control over.
Great read. Very thought-provoking. I had to use the dictionary. Thank you for your time.