13 Comments

Thank you for the interview.

I've long thought that if the standard of rigor for any observational study were a 99.7% confidence interval, medical research would be much better off.

The FOMO that I think underlies the use of 95% CIs is misguided. Medicine is hiding its lack of understanding of true biological mechanisms in the space between 2 sigma and 3 sigma.
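The gap between 2 sigma and 3 sigma that the comment refers to is easy to make concrete. A minimal sketch (using only the normal-distribution error function from the standard library) shows the two-sided coverage of a ±2σ versus ±3σ interval and the p-value threshold each implies:

```python
from math import erf, sqrt

# Two-sided coverage of a +/- sigma interval under a normal distribution
# is erf(sigma / sqrt(2)); the implied p-value threshold is 1 - coverage.
for sigmas in (2, 3):
    coverage = erf(sigmas / sqrt(2))
    alpha = 1 - coverage
    print(f"{sigmas} sigma: {coverage:.3%} CI, p-value threshold {alpha:.4f}")
```

Running this gives roughly 95.45% coverage (α ≈ 0.0455) at 2 sigma and 99.73% (α ≈ 0.0027) at 3 sigma, which is the "99.7%" standard proposed above.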

So glad to have the podcast back. I listen on my long commute. Really enjoyed this one.

Great topic and discussion! It's great to hear, ever more often, about nuance and clinical expertise in clinical decision-making - which does not mean abandoning EBM but actually supporting it.

John, I love this Stack. But each time you publish something to which many of us cannot or will not listen (too long, not an aural learner, hard of hearing, whatever) you lose many valuable readers and we lose the chance to get smarter. Autotranscribers are free/cheap and will do a completely adequate job of transcribing these sessions. I will continue to beg: Please post a transcript of audio/video content you publish. All of us who are missing this good stuff will be forever grateful.

I can read something so much faster than listening or watching a video. I agree with Dr. K.

It's good for clinicians to recognize that RCTs can also have issues, and it's important to know what they are. (The first link below is a good refresher, while also advocating more collaboration between disciplines, as Yeh recommended.)

One of the strategies Yeh mentioned to help clinicians decide how representative the study sample is of the population is to look at the table describing the study sample. But it seems these tables are usually just demographic info and do not include disease burden/severity, which seems to be a variable clinicians often use to decide whether to implement an intervention, as Yeh described in his example. Demographics are not always good proxies for disease burden/severity.

I agree it's important to explicitly identify the question the study will answer up front, and not stray beyond it in the conclusions. Isn't that what clinicaltrials.gov is for? Too bad so many registered trial outcomes never get reported!

Humans are highly variable, and teasing out which interventions are effective for which populations seems tricky; hopefully AI will help identify all the relevant independent variables, which can move us further down the road to precision medicine.

Given that this discussion was largely about establishing causation for interventions for which RCTs are impractical or unethical, I was surprised there was no mention of the Bradford Hill criteria (second link), even though a few of them were mentioned (association, though not its strength; consistency). Are these criteria being considered in observational research in which causation is being inferred?

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6019115/

https://ete-online.biomedcentral.com/articles/10.1186/s12982-015-0037-4

A major weakness of observational studies is that there is no clinicaltrials.gov reporting requirement nor is there a requirement for a pre-specified statistical analysis plan. Lots of investigator degrees of freedom make it possible to get a variety of answers from the same dataset.
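The point about investigator degrees of freedom can be illustrated with a toy simulation: on pure-noise data (where every null hypothesis is true), trying several unregistered analysis variants on the same dataset and reporting the best one sharply inflates the false-positive rate. All numbers and names here are hypothetical, a sketch rather than a model of any real study:

```python
import random

random.seed(0)

def null_pvalue():
    # Under the null hypothesis, a p-value is uniform on [0, 1]
    return random.random()

N_DATASETS = 10_000
K_CHOICES = 10  # ten analysis variants (subgroups, cutoffs, outcomes) per dataset

# One pre-specified analysis per dataset
false_positive_once = sum(
    null_pvalue() < 0.05 for _ in range(N_DATASETS)
) / N_DATASETS

# "Best" of K unregistered analyses per dataset
false_positive_any = sum(
    any(null_pvalue() < 0.05 for _ in range(K_CHOICES))
    for _ in range(N_DATASETS)
) / N_DATASETS

print(f"one pre-specified analysis: ~{false_positive_once:.0%} false positives")
print(f"best of {K_CHOICES} analyses tried: ~{false_positive_any:.0%} false positives")
```

With one pre-specified analysis the false-positive rate stays near the nominal 5%; with ten tries it approaches 1 − 0.95¹⁰ ≈ 40%, which is why a pre-specified statistical analysis plan matters.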

Duh! I should have known that, since it's called clinical TRIALS! I wonder if it would be possible to start something similar to clinicaltrials.gov for observational studies, in which the question(s) to be answered are explicitly stated, along with a statistical analysis plan. Less chance of cherry-picking?

The reason clinicaltrials.gov is theoretically useful is that it is required by law. So the only way to create another such database would be a similar, more expansive law.

However, even the current clinicaltrials.gov is woefully incomplete because the law is not enforced (https://jamanetwork.com/journals/jama/article-abstract/2763289).

Right, that's why I mentioned what a shame it is that so many go unreported. Unenforced laws are useless, and even when Big Pharma is fined, it doesn't stop their behavior. It would have to become a norm within the profession, such that researchers who failed to follow the protocol would lose respect and credibility. Hmmm...does that even matter anymore these days?! Journals could predicate publication of observational research on adherence to such a protocol, but what are their incentives?

Mandating pre-registration of observational studies has been discussed (though I can't remember where), and the best researchers will go along with it - but it will have less benefit for them. The not-so-great researchers will do this kicking and screaming.

Excellent guest! So clear and understandable, even for the layperson.

I was glad to see Robert's comment in the paper about the serious shortcoming of the target trial approach re: the false sense of comfort regarding confounding by indication. What I wish Robert had mentioned was increasing rigor in observational treatment comparisons by avoiding accommodation to available data, i.e., rationalizing that the available data sufficiently capture confounding. This can be done by mandating that researchers formally interview at least 10 clinical experts who are not connected to the project, asking each to list the factors she uses to select patients for therapies. These factors are pooled over experts. Then see if all the factors are present in the data, are measured with little error, and are not missing frequently. If the database is not adequate for the task, find another project.

Addendum: Now that I've finished listening to the podcast, I hear that you've emphasized data availability bias in observational studies more than you did in the paper. Nice!
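The coverage check described above (pool the selection factors named by outside experts, then ask whether each factor is present and usably complete in the candidate database) can be sketched in a few lines. Every factor name, field name, and threshold below is hypothetical, purely for illustration:

```python
# Factors pooled from (hypothetical) interviews with outside clinical experts
expert_factors = {"ejection_fraction", "frailty", "renal_function", "prior_bleeding"}

# What the candidate database actually records, with each field's missingness rate
database_fields = {
    "ejection_fraction": 0.04,  # 4% missing
    "renal_function": 0.10,
    "prior_bleeding": 0.55,     # recorded, but too incomplete to adjust on
    # "frailty" is not captured at all
}

MAX_MISSING = 0.20  # hypothetical threshold for "not missing frequently"

# The database is adequate only if every expert-named factor is present
# and below the missingness threshold
adequate = all(
    factor in database_fields and database_fields[factor] <= MAX_MISSING
    for factor in expert_factors
)

for factor in sorted(expert_factors):
    if factor not in database_fields:
        status = "absent"
    elif database_fields[factor] > MAX_MISSING:
        status = "too much missing"
    else:
        status = "ok"
    print(f"{factor}: {status}")

print("database adequate for the comparison:", adequate)
```

In this toy case the check fails on two factors, which under the rule proposed above would mean finding another project rather than rationalizing the gap.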
