The hardest part of reading this - knowing my complex medical history - is the belief that this will be my care over the next batch of years. I’ve survived some difficulties and now freaking AI, which is horrible with music, might be in charge of my medical care. Anything that’s terrible with art and music can’t be good for medicine once the businessmen get ahold of it.
If it can be monetized (and it can), it will be. That this is coming is a certainty, imo.
Sad but true.
It’s amazing that anyone survived when the medical office visit was documented by a short paragraph that took a couple minutes to write.
Short paragraph? I found my grandfather's patient records...one to two lines. And I heard he could remember every detail of cases going back 40 years. One anecdote was him meeting an old patient on the street whom he had treated 40 years earlier. He immediately asked after the man's right wrist injury while shaking his hand.
Great anecdote. One of my senior partners started his medical practice in the late 1940s. He wrote very brief notes in the hospital charts on his patients. When utilization review began sometime in the 1980s or 90s, he rebelled and wrote no notes at all just to spite the reviewers. Yet he could relate every detail and his thoughts and plans on every patient he had. He drove the UR clerks crazy and just smiled when they tried to harangue him about the blank pages of progress notes in his charts. The fledgling bureaucrats tried to get him suspended from the hospital staff but he was highly regarded in the community and the corporatization of hospitals hadn't taken place yet.
I think this “automated” software is on an inevitable trajectory, but this guy's prediction may be hyperbolic and over-indexed on impact, depending on the specialty you think about.
Some thoughts that come to mind on its limitations:
1. History-taking perseverance. I cannot tell you how many times my differential has changed just by not giving up on the history. By changing my position relative to the patient (getting closer, etc.), my tone, my general energy, I can get information out of a patient that otherwise could not be gotten. In many specialties, you're a part-time journalist.
2. Physical exam: this goes without saying. Essential for any specialty that examines range of motion, tone, bulk. There are a million maneuvers a neurologist knows to tease out functional vs. real.
3. Non-evidence-based reasoning. Sometimes you get scans to appease a patient, even though you have low suspicion.
4. Inpatient politics. Half the battle of inpatient care is dispo politics and admissions politics, largely based on relationships within the hospital.
5. AI docs will lower the patient threshold to sue doctors. Patients won't feel as bad suing an AI as opposed to a human, creating a massive litigation frenzy for hospitals.
Many good clinical arguments against automated care. But look at the incentives and read about Clayton Christensen's low-end disruption. This is “good enough” and cheaper. It's like when MP3s took over from CDs — lower fidelity, but good enough, and way more convenient.
Following MP3s came the return to vinyl.
Way less fidelity than CDs and especially digital CDs.
MP3s degraded quality to fit bandwidth available at the time.
Progress is in the eye of the beholder, but even more so in the income of the supplier.
There is definitely a lesson there for you in your own analogy.
All of those are good reasons this should not happen. Not one is a reason it won't happen.
“Tell me the incentives and I'll tell you the outcome.”
As long as the medical system is corporatized (for profit only), AI will only inject further cost savings or amplified costs for services. Actual health care will not change. The same tests, drugs, and procedures will be offered and no new roads will be traveled. Explain why the hospital food is so abysmal that the nurses will not touch it, and how crappy food can ever help a person become well...in a hospital, no less. That is pure BS...cost cutting for corporate profits, not patient health.
Allopathic treatment is NOT all there is, and constant worship of the computerized side of medicine is abysmal. I have my wife's ongoing 11-day hospital stay to verify that. And hers isn't a life-threatening condition. They have dragged her stay on and on...miscommunication (even with computers), lack of communication, too many doctors coming in and out, not telling her much.
You can take that AI guy and lock him in a closet...next to the useless doctors.
Thank you for a very thoughtful article with valuable insights. The central issue is accurately summarized in the sentence: "The raw information is there to do more, but whether or not conversational models will be trained to understand the medical literature, apply clinical reasoning, and even advance to direct patient engagement through simulated empathetic speech remains to be seen." It won't.
Really critical that patients and doctors control this stuff. Get rid of third-party payment!!!
Many years ago, while speaking with a friend's father, a practicing psychiatrist, I asked him how he chose his specialty. He said that he was a Family Practitioner for several years and, after discovering that “80% of my patients needed a psychiatrist, I thought I should get the training.”
Most of the discussion above has centered on accuracy of diagnosis (and liability for misdiagnosis). What he asserted was that, with personal interaction, he discerned underlying pathologies that no amount of EHR box-checking could elucidate.
The iPhone arrived in 2007. It took 10-15 years for us to see the pathological effects it had on a “phone-based childhood” (Haidt, The Anxious Generation). A wonderful technology with a very dark unintended consequence.
We will repeat history because we never learn from it.
I have an upcoming colonoscopy due to 2 sessile serrated polyps discovered 9 months ago, one of which was 12 mm.
This type of polyp has a high 'miss' rate.
I asked if any AI features might be usefully integrated into the procedure due to my profile (I've had 12 colonoscopies).
Nothing was being considered for implementation at this large, Ivy League-named health system. I was surprised.
Talk about timing. I just got a notice today that my own doctor is implementing ScribeBerry.
I will start to get nervous when AI can streamline the antiquated process by which patients are roomed. Or when it can automate pointing out which med changed in the last 6 months. Basic skills. DAX offloads basic parts of the interview, misrepresenting key details (largely tacit info) all the time. The only reason I rely on it is admin requirements. The real threat will be cognitive deskilling if physicians ever were to take a passive role. AI makes me a better physician because I use it when I have questions. I would be thrilled to watch the role-play link in the article attempt a complaint of fatigue followed by a list of 3 nebulous (or noisy) rabbit holes. Docs are about as safe as they come, imo.
Agree that AI outperforms humans in information-rich environments. Most diagnoses come to light in information deserts, where humans thrive. Excited to see how diagnosis is augmented once we combine these ‘superpowers’.
https://first10em.com/diagnostic-reasoning-as-artificial-intelligence-emerges-a-distributed-cognition-framework/amp/
The thing is that AI is always learning. Sure, at first the AI will fail in a lot of edge cases and the human doc supervising it will have to intervene to correct the diagnosis, but every time this happens, it's another datapoint for the AI to learn from. Eventually the AI will have thousands of fatigue complaints and correct diagnoses to draw from, and its ability to correctly diagnose fatigue will approach and perhaps even exceed that of a trained physician.
AI is coming for every white-collar job that primarily deals with information. The question will become: what will we humans do with ourselves once an AI can do our jobs better than we can?
An AI might do better on the fatigued, rabbit-hole patient if it were less prone to premature closure than doctors are. As a patient, I would give AI a shot in that scenario, since I've seen how poorly doctors can handle those situations.
Dr. Schloss,
Thank you for bringing forward these technological advances which, whether we like it or not—like cell phones and the internet—sooner rather than later will not only become ubiquitous but will also dominate and add value to clinical care.
But what exactly would that added value be?
A colleague once said that the only goal or purpose of technology was to do things faster so we could get home earlier to enjoy time with our families. But increasing efficiency obviously comes at the expense of the time needed to listen to the patient, to understand them in their essence and context, which goes far beyond simply capturing noises, sounds, or words in the consultation setting.
How would this technology capture the atmosphere of the interaction, what the patient is thinking and reasoning, their level of pain or anxiety, and above all, what the clinician is thinking and reasoning diagnostically and therapeutically if none of this is verbalized?
Medical error—most of it due to diagnostic mistakes arising from premature closure and the lack of a physician-patient relationship—is the third leading “cause” of death in the United States [Makary & Daniel, 2016]. This means that for some, medicine is becoming not a potential solution but part of the health problem, at least in the U.S.
If AI comes to offer some relief from this terrible indicator, then it is certainly welcome.
I would like to touch on a second aspect, which in my view is the most important.
Technology is an adolescent model: it always promises benefits and aesthetics but never anticipates the risks, the ethical implications, or the adverse effects that its use could cause—even when it is applied “judiciously.”
Let’s propose, as a hypothetical case, that in the course of patient care, with the assistance of any form of artificial intelligence, an error is induced that ends the patient’s life. In this scenario, who is responsible for that patient’s death?
We must remember that the medical act is a human act, mediated by all the science and technology we may wish, but co-responsible between a patient who assumes commitments and a clinician who makes decisions—or helps the patient make decisions—amid uncertainties, weighing their context, preferences, and disease prevalences.
Thus, in this implicit contract, legal co-responsibility lies with these two parties.
Or will it be that the use of artificial intelligence will become the entity to which faults and failures of clinical practice are attributed, since AI begins to take command of decision-making?
Dear colleagues, let us not close our ears and eyes in anticipation of this panacea. We urgently need to approach it scientifically and ethically, to evaluate its adverse effects of every kind, hoping that they will be outweighed by greater efficiency and coverage—but above all without losing our humanity and the natural intelligence that AI can never fully replace.
⸻
Citation:
Makary MA, Daniel M. Medical error—the third leading cause of death in the US. BMJ. 2016;353:i2139. doi:10.1136/bmj.i2139
Speaking as a patient: When I am in the room, shut that nonsense down. Turn it off. The privacy violations are staggering, as are the predictable errors. AI has an established track record of creating bad information. There is no reason to assume, in the rush to build applications based on AI, that these applications will do better.
Makes you wonder why medical mistakes are the third leading cause of death. Are we supposed to trust AI?
And…I don’t want my every word recorded. I don’t want “simulated empathy.” I don’t want any more “interests” in the room (it’s bad enough now). What a nightmare.
"The incentives for health systems to create an automated care model are obvious: "
FOLLOW THE MONEY
“So, CEO Smith, the physicians hate it.”
“What about nursing?”
“They hate it, too.”
“Patients?”
“Ditto. They say it makes them feel like cattle.”
“Well…I spoke to the CFO, and the board. They love it. So, it’s unanimous: let’s get this baby up and running!”
You describe Epic very well. This is a new beast. Read next week’s piece.
What about the FDA disclaimer: "This statement has not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure, or prevent any disease"?
Will that be said before the call?
I wonder if we would get informed consent and how all that data will be used and who will have access to it. Privacy?
You probably already gave that "right" away when you agreed to the EMR, you think?