13 Comments
Daniel Bruetman MD, MMM

As a 64-year-old practicing oncologist, my observation is that critical thinking is nonexistent among most young physicians. They practice by self-inflicted algorithms that have no scientific or logical basis: every conceivable test linked to a symptom is ordered in the hope that something sticks. They treat an EMR that shows labs and images, not patients. Their performance is measured in RVUs and in a delusion of cost savings from decreased LOS, not in the appropriateness of care.

esFOAMeados

Very interesting piece! Though I must point out that Bayesianism is, epistemologically, not a hypothetico-deductive model of inference and is much more closely related to inductive logic -- this makes a big difference in the philosophy of science and reasoning.

Mary S. LaMoreaux

The best doctors I know can look at a patient and come up with a probable diagnosis, which they then confirm through testing. I understand the difficulty of teaching this to young doctors, but many doctors these days test first and diagnose based on what the results tell them, with little physical exam. My best friend could feel your wrist and know that you were sick, and what you were probably sick with. Doctors do not touch patients anymore; at least this is how patients see it. I realize the doctor must get through the EHR questions before dealing with the patient, but one of the problems I see in our current medical system is the lack of emphasis on the actual physical exam.

Frank Harrell

In many ways, the teaching of probabilistic diagnosis in medical schools is treated as an opportunity to teach Bayes' rule, sensitivity, specificity, and likelihood ratios. For many diseases the pre-test probability of disease is more important than the test result, yet estimation of pre-test probability is de-emphasized. Decision making would be improved by teaching the logistic regression model instead, where multiple continuous test results can be handled easily and the pre-test variables get all the emphasis they deserve. We need to move past oversimplified diagnostic impact summaries such as sensitivity, specificity, and LR and think multivariably. And keep in mind that LRs, while more useful than sensitivity and specificity, are still oversimplifications that turn continuous test outputs into binary all-or-nothing variables.
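[Editorial aside: the likelihood-ratio form of Bayes' rule that this comment refers to can be sketched in a few lines. This is a hedged illustration with made-up sensitivity/specificity values, not code from the commenter.]

```python
# Likelihood-ratio form of Bayes' rule for a binary test result.
# sens/spec values below are illustrative, not from any real test.

def post_test_prob(pre_test_prob, sens, spec, positive=True):
    """Update a pre-test probability with a binary test result."""
    lr = sens / (1 - spec) if positive else (1 - sens) / spec
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# The same positive test (sens = spec = 0.90, so LR+ = 9) moves a 1%
# and a 50% pre-test probability to very different places, which is
# the comment's point: the pre-test probability often dominates.
print(round(post_test_prob(0.01, 0.90, 0.90), 3))  # 0.083
print(round(post_test_prob(0.50, 0.90, 0.90), 3))  # 0.9
```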

Ben Recht

This is a thought-provoking comment, but it gives me some pause. I would worry that teaching statistical models would bring confusion rather than insight to medical decision making. Bayesian reasoning in diagnostics can be derived from basic counting: it relates rates to rates, with no need for a probabilistic model of the patient population. It is a heuristic, but as statistical decision rules go it is (relatively) easy to justify and teach. Models, including logistic models, come with heavy baggage of assumptions that rarely apply.

Do you have examples in mind where logistic modeling has provided better decisions than those based on simple manipulations of rates via Bayes' rule?

Frank Harrell

I wouldn't suggest teaching statistical models in general, but instead introducing how log odds can be added to give an optimal estimate of disease probability, respecting continuous patient characteristics and multiple continuous test outputs. Bayesian reasoning based on counting requires gross oversimplifications that cloud the thinking of medical students for decades. The assumptions of logistic models are far fewer than those behind applying Bayes' rule, such as the assumed constancy of sensitivity and specificity. Logistic models do not require dichotomania, and they allow for nonlinear effects and interactions, so they are quite flexible. See lots of examples at http://hbiostat.org/bbr/dx.html
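[Editorial aside: a minimal sketch of "adding log odds." A logistic model combines continuous pre-test variables and continuous test outputs additively on the log-odds scale, with no dichotomization. The predictors and coefficients here are entirely hypothetical, not taken from any fitted clinical model.]

```python
import math

def disease_prob(age, biomarker):
    # Intercept and slopes are illustrative only; each term contributes
    # log odds, and the contributions add before converting to probability.
    log_odds = -6.0 + 0.05 * age + 1.5 * math.log1p(biomarker)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Probability rises smoothly with each continuous input; nothing is
# forced into a positive/negative dichotomy.
print(disease_prob(50, 0.0) < disease_prob(70, 0.0) < disease_prob(70, 5.0))  # True
```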

Adam Cifu, MD

So so true. I worry that, so far at least, we are incorporating more dichotomized variables into decision rules and not making use of the data we actually have. Thanks for reading and commenting. Adam

SAA

I hope medical schools incorporate sensible medicine into training!

Randall Burchell

Hence a coherent chief complaint is still more important in the note than a panoply of negative review-of-systems checkboxes that nobody really asks about anyway?

Phil Shaffer, MD

This is some excellent, interesting, and valuable work. I am stunned you couldn't find a home for it in the literature. Doubly stunned when I see the trivial nature of 80% of the material I see in the refereed literature.

[Comment deleted, Nov 30, 2022]
Jeff Cunningham

Probably a naive, non-MD question here: doesn't relying primarily on history make it more likely you will miss new or "innovative" conditions?

[Comment deleted, Nov 30, 2022]
Jeff Cunningham

Yes, I understood that you meant medical history. My question is more like looking at priors may make you miss something new. An extreme example in the other direction would be a judge not allowing prior arrest records to be heard in a criminal trial because they have a tendency to bias the jury.

[Comment deleted, Nov 30, 2022]
Jeff Cunningham

Makes sense. Thank you.
