I had been certain that all would be well in medicine in the age of AI, telling myself that the addition of AI to medical practice would be a net positive. AI would assist with some of the drudgery that is part of every doctor’s job. We would also incorporate AI into our decision making. AI would help us read X-rays or pathology specimens. AI would stimulate and refine our clinical reasoning. I foresaw a future in which AI tools built into the clinical environment would improve medicine.
Now I see a future where computers equipped with AI use doctors as tools and ultimately make us worse at our jobs and make medicine worse for patients.
I was blissfully confident that I could never be replaced by AI. My confidence was based on three facts. First, obtaining an accurate history is a particularly human endeavor. It depends on putting a person at ease, leading them to reveal the critical issues, sometimes to admit the things they do not feel comfortable revealing.1 Obtaining useful data requires filtering important from unimportant information. The doctor must be able to read a person’s mannerisms, gauge their level of anxiety, and understand how they react to symptoms. Are they a stoic or the proverbial boy who cried wolf? I could not imagine a computer ever being up to this task.
Second, if a diagnosis is not made on the history and physical, the patient must be educated about why further testing, some of it expensive, uncomfortable, or invasive, is necessary. Again, something I can’t see a computer doing.
Third, because so many of our tests are neither perfectly sensitive nor specific, it is a necessity for us to be Bayesians. Pretest probability often depends on the subtleties of that initial history and physical. In the last two weeks, for instance, I saw two patients who presented with chest pain. One was a man; one was a woman; both were 55 years old with similar comorbidities. Risk calculators put the man at somewhat higher risk of having unstable angina than the woman. I admitted the woman from clinic and sent the man home with no further testing. (Both have done well, the woman after a percutaneous coronary intervention, the man after a few days without contact with the medical system.) How could a computer possibly match these clinical skills, learned through deliberate practice over decades?
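For readers who want to see the arithmetic behind that Bayesian reasoning, here is a minimal sketch. The sensitivity, specificity, and pretest probabilities are illustrative numbers of my own choosing, not data from these two patients:

```python
# A minimal sketch of the Bayesian arithmetic behind "pretest probability."
# All numbers are illustrative, not drawn from the two patients above.

def posttest_probability(pretest: float, sensitivity: float, specificity: float) -> float:
    """Probability of disease given a positive result, via Bayes' theorem."""
    true_pos = pretest * sensitivity               # P(positive test and disease)
    false_pos = (1 - pretest) * (1 - specificity)  # P(positive test and no disease)
    return true_pos / (true_pos + false_pos)

# The same imperfect test (90% sensitive, 80% specific) applied to two
# patients whose histories suggest very different pretest probabilities:
for pretest in (0.05, 0.40):
    post = posttest_probability(pretest, sensitivity=0.90, specificity=0.80)
    print(f"pretest {pretest:.0%} -> posttest {post:.0%}")
# pretest 5% -> posttest 19%
# pretest 40% -> posttest 75%
```

The same positive result means something very different depending on the story the patient told you, which is exactly why the subtleties of the history carry so much weight.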
So, what changed my thinking?
1. I saw how good AI can be at documenting a history taken by a doctor.2 It seems almost like magic3 that your phone can listen in on a visit (with the patient’s consent) and then produce a nearly perfect note in the medical record, including some of the “assessment and plan.”
2. I realized that it is a very short step for AI to take this information, collected by a human but probably captured in a more complete and granular way than we record it ourselves, and use it to complete the assessment and plan. Computers should be able to generate and test differential diagnoses far better than we can because they will have a greater knowledge base, faster processing speed, and a greater capacity to learn from their mistakes. The deliberate practice that bore fruit for me over decades will do so over weeks for AI systems.
3. I listened to a recent episode of one of my favorite podcasts, Cautionary Tales.4 The episode recounts an Air France crash caused by overreliance on computer-assisted piloting. The human pilots had gotten worse at their jobs after hours and hours of just sitting while the computers did the flying. When a situation arose that the computers were not equipped for, the humans were no longer prepared to step in.
What do I now imagine, or fear, is the future? Instead of humans doing the medicine with computer support, I see computers doing the medicine, with humans performing the one necessary, truly human task: talking to the patient. This will certainly be worse for doctors. Will it be worse for patients? I think so. For most things, a system in which humans gather the data while a computer observes and then processes it will work just fine. The patient might not even notice the change. The problem will come on the occasions when the computer can no longer provide a reliable recommendation. This will happen when the medicine is most challenging and the decision making most fraught. When it does, there will no longer be a master clinician to step in.5
Let’s hope this turns out to be a speculative dystopia and not a prescient prediction.6
1. One could argue that there is less embarrassment in keying symptoms and habits into a computer than in admitting them to a human being. I am not sure.
2. We are piloting one of the AI scribe programs.
3. I never need much incentive to quote Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.”
4. For those with a fear of flying, I’d skip this episode.
5. We have been here before. Duty hour limitations made medical training more humane, but they also made it harder. Back in the day, you could not help but learn when you were overwhelmed with cases. Now, it takes real commitment to make the most of every case. Doctors of the future will be challenged to stay sharp when computers can usually do the work. I fear that only a minority will put in the effort.
6. This week’s Clinical Excellence Podcast is about “The Present of AI in Medicine.”
Speaking of flying, and of relying on technology: when I went through Army helicopter training almost 25 years ago, we were taught to navigate using a paper map. GPS navigation existed, of course, but the Army insisted we be able to navigate with map, compass, and pencil. So, while your instructor pilot (IP) flew the aircraft, you sat there with a map on your kneeboard, using your pencil to follow and mark your planned route for the day as you gave verbal directions to the IP (unless you had accidentally dropped your pencil out of the aircraft because you were flying with no doors on a hot Alabama day). I suppose there are almost no situations now where GPS navigation fails completely and a pilot would need those map skills to fall back on... are there?