A quantitative analysis demonstrating the superiority of positive findings over negative ones. Should be a good post for fans of JAMA's Users' Guides, our Symptom to Diagnosis, and McGee's Evidence-Based Physical Diagnosis.
As a 64-year-old practicing oncologist, my observation is that critical thinking is nonexistent among most young physicians. They practice based on self-inflicted algorithms that have no scientific or logical basis. Every conceivable test linked to a symptom is ordered in the hope that something sticks. They treat an EMR that shows labs and images, not patients. Their performance is measured in RVUs and a delusion of cost savings generated by decreased LOS, not in the appropriateness of care.
Very interesting piece! Though I must point out that Bayesianism is, epistemologically, not a hypothetico-deductive model of inference and is much more closely related to inductive logic -- this makes a big difference in the philosophy of science and reasoning.
The best doctors I know are able to look at a patient and come up with a probable diagnosis, which they then confirm through testing. I understand the difficulty of teaching this to young doctors, but many doctors these days test first and diagnose based on what the results tell them, with little physical exam. My best friend could feel your wrist and know that you were sick, and what you were probably sick with. Doctors do not touch patients anymore. At least this is how patients see it. I realize the doctor must get through the EHR questions before they can deal with the patient, but I see one of the problems in our current medical system as the lack of emphasis on the actual physical exam.
In many ways, the teaching of probabilistic diagnosis in medical schools is treated as an opportunity to teach Bayes' rule, sensitivity, specificity, and likelihood ratios. For many diseases the pre-test probability of disease is more important than the test result, yet estimation of pre-test probability is de-emphasized. Decision making would be improved by teaching the logistic regression model instead, where multiple continuous test results are easily handled and the pre-test variables get all the emphasis they deserve. We need to move past oversimplified diagnostic summaries such as sensitivity, specificity, and LRs and think multivariably. And keep in mind that LRs, while more useful than sensitivity and specificity, are still oversimplifications that turn continuous test outputs into binary all-or-nothing variables.
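The point about pre-test probability can be made concrete with the odds form of Bayes' rule that this comment alludes to. A minimal sketch (the 5%, 50%, and LR+ = 10 numbers are illustrative, not from the post):

```python
def post_test_prob(pre_test_prob, lr):
    """Odds form of Bayes' rule: post-test odds = pre-test odds * likelihood ratio."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# The same "positive" test (LR+ = 10) moves a 5% pre-test probability
# to about 34%, but a 50% pre-test probability to about 91% --
# which is why the pre-test estimate matters so much.
print(round(post_test_prob(0.05, 10), 2))  # 0.34
print(round(post_test_prob(0.50, 10), 2))  # 0.91
```

The asymmetry in the two outputs is the whole argument: the test is identical in both cases, and the answer is driven by where you started.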
This is a thought-provoking comment, but it gives me some pause. I would worry that teaching statistical models would bring only confusion, and not much insight, to medical decision making. Bayesian reasoning in diagnostics can be derived from basic counting: it relates rates to rates, with no need for a probabilistic model of the patient population. It is a heuristic, but as statistical decision rules go it is (relatively) easy to justify and teach. Models, including logistic models, come with heavy baggage of assumptions that rarely apply.
Do you have examples in mind where logistic modeling has provided better decisions than those based on simple manipulations of rates by Bayes' rule?
I wouldn't suggest teaching statistical models in general, but instead introducing how log odds can be added to give an optimum estimate of disease probability, respecting continuous patient characteristics and multiple continuous test outputs. Bayesian reasoning based on counting requires gross oversimplifications that cloud the thinking of medical students for decades. The assumptions of logistic models are far fewer than those required to apply Bayes' rule, such as the constancy of sensitivity and specificity. Logistic models do not require dichotomania and allow for nonlinear effects and interactions, so they are quite flexible. See lots of examples at http://hbiostat.org/bbr/dx.html
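The additivity of log odds this reply describes can be sketched as follows. The coefficients, predictor names, and intercept below are hypothetical, chosen only to illustrate the mechanics -- they do not come from any fitted model:

```python
import math

def logistic(log_odds):
    """Convert a log-odds value to a probability."""
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical coefficients, for illustration only.
# Each predictor contributes additively on the log-odds scale, and continuous
# inputs (age, a continuous test result) are used as-is, never dichotomized.
def disease_prob(age, test_result, intercept=-7.0, b_age=0.06, b_test=0.8):
    log_odds = intercept + b_age * age + b_test * test_result
    return logistic(log_odds)

# A higher continuous test result raises the probability smoothly,
# rather than flipping a binary "positive/negative" label.
print(disease_prob(65, 2.0) > disease_prob(65, 1.0))  # True
```

The pre-test variables (here, age) and the test output sit in the same additive equation, so neither is privileged and nothing is thresholded.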
So, so true. I worry that, so far at least, we are incorporating more dichotomized variables into decision rules rather than making use of the data we actually have. Thanks for reading and commenting. Adam
I hope medical schools incorporate sensible medicine into training!
Hence a coherent chief complaint is still more important in the note than a panoply of negative review-of-systems checkboxes that nobody really asks about anyway?
This is some excellent, interesting, and valuable work. I am stunned you couldn't find a home for it in the literature. Doubly stunned given the trivial nature of 80% of the material I see in the refereed literature.
Probably a naive, non-MD question here: doesn't relying primarily on history make it more likely you will miss new or "innovative" conditions?
Yes, I understood that you meant medical history. My question is more whether looking at priors may make you miss something new. An extreme example in the other direction would be a judge not allowing prior arrest records to be heard in a criminal trial because they tend to bias the jury.
Makes sense. Thank you.