I’d been certain that all would be well in medicine in the age of AI. Now I see a future where computers equipped with AI use doctors as tools, ultimately making us worse at our jobs and making medicine worse for patients.
😳
Speaking of flying, and of relying on technology: when I went through Army helicopter training almost 25 years ago, we were taught to navigate using a paper map. GPS navigation existed, of course, but the Army insisted we be able to navigate using map, compass, and pencil. So, while your instructor pilot (IP) flew the aircraft, you would sit there with a map on your kneeboard and use your pencil to follow and mark your planned route for the day while you gave verbal directions to the IP (unless you accidentally dropped your pencil out of the aircraft because you were flying with no doors on a hot Alabama day). I suppose there are almost no situations now where GPS navigation fails completely and a pilot would need to have those map skills to fall back on...are there?
I agree with the assessment that AI is desperately needed to remove the clerical mountain that medical providers are buried under. The main rate-limiting step for most physicians is the documentation required for every "encounter" (almost another full-time job! Many people hire a scribe, but those of us who don't are doing our own "scribing"). The time, energy, and effort that goes into "documentation" by physicians and other medical providers (physical therapists are also in this boat) is sucking the life-blood out of an entire generation (really 2+ generations) of people. Such a waste. AI must take over those clerical duties. When the office visit is done and the patient and I leave the exam room, my note should be written, all orders and referrals placed, and I move on to the next person. For most straightforward diagnoses, I will not need AI to assist. Maybe I'd see if it had any useful input for a more complex or perplexing one. But I will be the one to decide! Isn't this how most things work? We all start using it. If some docs simply turn diagnostics over to AI, we'll see whether their results are better than those of us who use it for clerical duties. My need to be rid of my second career as a scribe/stenographer overshadows any fear I might have of AI. It is a tool - we will decide how best to use it.
Another wonderful role for AI: patients frustrated with pharmacies, prescriptions that need to be resent, the dreaded prior auth, referrals to a different specialist, referrals to massage therapy or physical therapy, orders for labs and X-rays, forms to be filled out, letters to be written, etc. - AI. The "busy work" that fills every nook and cranny of my day. Tedious clerical issues. I will gladly hand those over. Interacting with patients and formulating diagnoses is not the area I'd hand off - that is where the joy of medicine is and where the human-to-human interaction plays such a key role.
And yet, when doctors go on strike, patients just get better anyway, whilst medical misadventure is reportedly the third-largest cause of death. I'm far from convinced. Less is usually better.
I’m obviously quite the minimalist/medical conservative, but the “death by medical errors” data is terribly flawed.
It might well be so, but we are all feeling very hurt at the moment in the UK, with the dreadful and terribly belated realisation of the toll from contaminated blood products: "Before 1996, an estimated 30,000 people in the UK were given contaminated blood transfusions and blood products infected with hepatitis C, hepatitis B and/or HIV. More than 3,000 people have died as a result, and thousands more live with ongoing health complications."
Meanwhile, "Nearly 14,000 people in Britain have applied for payments from the government for alleged harm caused by Covid vaccines, new figures show."
And I think that is just the tip of the iceberg.
Freedom of Information requests made by The Telegraph show that payments have already been awarded for conditions including stroke, heart attack, dangerous blood clots, inflammation of the spinal cord, excessive swelling of the vaccinated limb and facial paralysis.
Around 97 per cent of claims awarded relate to the AstraZeneca jab, with just a handful of payments made for damage from Pfizer or Moderna. https://www.telegraph.co.uk/news/2024/08/17/covid-vaccine-astrazeneca-sick-victims-compensation-scheme/
How many other horrors have yet to surface?
“Now I see a future where computers equipped with AI use doctors as tools and ultimately make us worse at our jobs and make medicine worse for patients.”
Truth. I’ve seen it already in the “clicker medicine” mentality of the EMR: the flags, the stopgaps, and the inordinate amount of time it takes just to chart. Once again, tech that was supposed to make life easier has made it harder, mostly by consuming time. What you describe as a clinician is looking at the whole person: mind, body, and spirit. Computers can’t see that. People trained only with computers will lose that edge.
Yeah, I suspect this is going to become a problem for many fields. Funnily enough, this was an argument made by many who lost work to automation. They went largely ignored, though, and it was claimed that the solution was the “creative economy,” in which the most creative people would be the most successful.
Now we’re seeing the same thing happen in medicine and other fields, and people are freaking out. Sadly, this was entirely predictable, and yet “no one saw it coming.”
Dear Dr. Cifu, thanks to you all for what you are doing with Sensible Medicine. I would like to emphasize what two other commenters have noted: my biggest fear with AI is that it is controlled by a programmer. It has no conscience. And even though doctors can also be influenced to prescribe a treatment that is not in the best interest of the patient, those doctors DO have a conscience. They can decide they are not going to be unethical. AI has no conscience and, as stated above, is subject to the programmer, and thus to the influence of big pharma, etc... and even of those who believe in population reduction. To all reading this: please, please do not discount this. I appreciate that opening one's mind to these possibilities can be mind-blowing. For those who believe the government and the agencies behind the government are all working in the public interest, questioning this is a terrible thing for the mind. It completely challenges everything we thought we could believe in. But keeping an open mind and working very hard to be intellectually honest is imperative in these times. Thank you for all that you do.
' My biggest fear with AI is that it is controlled by a programmer. '
My biggest fear is that it will be controlled by Pharma. The field is wide open for their influence which is probably being fully exploited as we speak.
Pharma has the most money and power within the Medical Industrial Complex, so why wouldn't they try to control "Medical AI"? But there are a lot of different sectors in the AI industry, including open-source AI models, and it's hard for Pharma to control those now. When I want to research medical or other questions, I can ask ChatGPT, Claude, or Gemini, and more LLM models are coming. It's hard for Pharma to control all that.
I think Pharma's control of medicine was/is based on their large sales forces that INDIRECTLY buy off doctors, and often (forgive me) make doctors Pharma's bitches.
The real danger, IMO, is that humans are not good at looking for the needle in the haystack. If our role becomes looking over the shoulder of AI programs as they work, we will fail often.
"I see computers doing the medicine with humans doing one necessary, truly human task – talking to the patient. " Surely you don't mean talking to the patients is not "doing the medicine"?
It sounds like you are saying doctors may become lazier. A friend of mine recently went to her dermatologist complaining about a rash. She was misdiagnosed by him and by another dermatologist, so she decided to put images of it into AI, and it said she had melanoma, which was correct. I was surprised neither doctor did that themselves. I think doctors should work WITH AI as much as possible. I know I'm not really addressing what you are saying, but I'm just a layperson and thought I'd give my two cents. Very informative article. I've learned a lot about medicine from your posts! sabrinalabow.substack.com
Use of AI can and will dull your (doctors') senses and abilities. It'll help too, I'm sure, but there's no doubt the "intuition" part will be dulled. This is why the FAA mandates that 50% of landings be done manually. And you can tell when it's a manual landing because, inevitably, you get used to the greaser landings the autopilot delivers.
There's another example. In the Infantry, map reading and land nav were ABSOLUTELY essential skills. We practiced a LOT. As I was getting out, GPS (previously not available to the infantryman) was starting to become a commodity, and I thought, "Uh oh... land-nav skills are going to perish." So did it happen? Yes! Consider that one of the biggest hurdles at SFAS (Special Forces Assessment and Selection) is land nav. Think about that: only the best soldiers attempt SFAS, and something almost any private could do 30 years ago is now a challenging event for the elite soldier. Why? Because it takes practice to read the terrain, walk an azimuth, keep a pace count, shoot a resection, etc. - things almost no one does anymore because you can just depend on your GPS.
I always did dead reckoning as a pilot, regardless of LORAN and eventually GPS, because I wanted to know everything I needed to know in case the electrics went out. Because you never know!
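The arithmetic behind dead reckoning is simple enough to sketch. This is an illustrative flat-earth approximation only (real navigation also corrects for wind, magnetic variation, and earth curvature), with made-up numbers for the example leg:

```python
import math

def dead_reckon(lat, lon, heading_deg, groundspeed_kts, minutes):
    """Estimate a new position from heading, groundspeed, and elapsed time.

    Flat-earth approximation, reasonable for short legs. One nautical
    mile equals one minute of latitude; a minute of longitude shrinks
    by cos(latitude).
    """
    distance_nm = groundspeed_kts * minutes / 60.0
    heading = math.radians(heading_deg)
    dlat = distance_nm * math.cos(heading) / 60.0
    dlon = distance_nm * math.sin(heading) / (60.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Fly heading 090 (due east) at 120 knots for 30 minutes from 32N 86W:
# 60 nm flown, ending a bit east of 85W.
print(dead_reckon(32.0, -86.0, 90.0, 120.0, 30))
```

The point of practicing this by hand is that every quantity in it (heading, speed, time) is available from basic instruments even when the GPS is dark.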
“ The patient might not even notice the change. The problem will be on the occasions that a computer can no longer provide a reliable recommendation. This will happen when the medicine is most challenging and the decision making most fraught. When this occurs, there will no longer be a master clinician to step in”
I imagine this will be greatly outweighed by the lower rate of mistakes on more routine and basic care because the computer sees all.
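Whether the computer's edge on routine care outweighs its failures on the hard cases is, at bottom, arithmetic. A toy calculation with invented error rates shows how the answer turns on the mix of cases:

```python
def total_error_rate(routine_share, routine_err, hard_err):
    """Overall error rate when routine and hard cases err at different rates."""
    return routine_share * routine_err + (1 - routine_share) * hard_err

# Hypothetical numbers: 90% of encounters are routine.
human = total_error_rate(0.90, routine_err=0.03, hard_err=0.10)
ai_only = total_error_rate(0.90, routine_err=0.01, hard_err=0.25)
print(human, ai_only)  # roughly 0.037 vs 0.034
```

With these invented rates the computer makes fewer errors overall, yet its errors are concentrated in exactly the fraught cases the post worries about; which profile is preferable is a value judgment the arithmetic alone can't settle.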
We are lost. Insurance isn't medicine. Are ethics dead?
"...particularly useful for doctors at small practices, who might not ordinarily have time to appeal an insurer’s decision — even if they think their patient’s treatment will suffer because of it."
"With the help of ChatGPT, Dr. Tward now types in a couple of sentences, describing the purpose of the letter and the types of scientific studies he wants referenced, and a draft is produced in seconds. THEN, he can tell the chatbot to make it FOUR TIMES LONGER." ~New York Times, July 10, 2024
NYT presents M.D.s as dependent upon bullshit voluminous AI outputs for UNFAIR DENIALS of care - while bypassing any concept of actual humane medical care. If humans are INSURANCE OWNED WIDGETS that live and die based upon AI - we will only SUFFER and DIE. Why did we ever trust anyone making money off our life & death? Why would we trust them after watching them lie, botch public health, and sell our lives to insurance companies. We are really and truly fucked in this system. M.D.s must break away and fight.
"Families of two now-deceased former beneficiaries of United Health have filed a lawsuit... alleging it knowingly used a faulty AI algorithm to deny elderly patients coverage for extended care deemed necessary by their doctors." ~Money Watch, By Elizabeth Napolitano, November 20, 2023
https://www.nytimes.com/2024/07/10/health/doctors-insurers-artificial-intelligence.html
ChatGPT is a bullshit machine. "In a paper released earlier this year, three academics from the University of Glasgow classified ChatGPT's outputs not as "lies," but as "BS" - as defined by philosopher Harry G. Frankfurt in "on BS" (and yes I'm censoring that) - and created one of the most enjoyable and prescient papers ever written. In this episode, Ed Zitron is joined by academics Michael Townsen Hicks, James Humphries and Joe Slater in a free-wheeling conversation about ChatGPT's mediocrity - and how it's not built to represent the world at all."
https://podcasts.apple.com/us/podcast/better-offline/id1730587238?i=1000662702287
Considering how many papers are being retracted, and how many are shown to be fraudulent, I would say it's maintaining the status quo. And I'm not even talking about "fringe" scientific papers; I mean papers from Harvard and Stanford "leading scientists." So much so that there's a YouTube channel that does nothing but bring these frauds to light.
AI is basically computer code driving hardware, so the new medical world, if governed by AI, will mirror, behind the curtain, the intentions of those who paid for that code... How about the Rockefellers??? Nothing has changed, or will, in the field of official med-I-CIN-e, which wants us HUMANS dead.
> For most things, a system in which humans gather data, observed by a computer, which then processes that data, will work just fine.
Atrophy of skills & experience will likely reduce how well the humans gather data, too.
Very thought-provoking!
I think your first two points for why AI will not replace the enlightened physician are very solid; therefore, your conclusion is most likely dystopian paranoia.
The example you cite to support the superiority of the clinician confuses me. Perhaps it's complicated by a missing word ("sent the man (home) with no...").
" I saw two patients who presented with chest pain. One was a man; one was a woman; both were 55 years old with similar comorbidities. Risk calculators put the man at somewhat higher risk of having unstable angina than the woman. I admitted the woman from clinic and sent the man with no further testing. (Both have done well, the woman after a percutaneous coronary intervention, the man after a few days without contact with the medical field). How could a computer possibly match these clinical skills, learned through deliberate practice over decades?"
I'm thinking you were basing your decision on the nature of the chest pain. Perhaps the man had pain that was clearly musculoskeletal (reproduced by movement or palpation), and the woman's was much more worrisome for angina: occurring with exertion at low workloads, lasting a few minutes until she stopped exerting. If so, these aspects of the HPI would be available to the AI and could be factored into its decision.
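The point that structured HPI features could feed a model is easy to illustrate. This is a deliberately simplified, hypothetical scorer; the feature weights are invented for illustration and are not a validated tool (real instruments like HEART or TIMI also use ECG and troponin, not just the history):

```python
def chest_pain_risk_points(age, exertional, relieved_by_rest,
                           reproducible_on_palpation, risk_factor_count):
    """Toy risk score built from history-of-present-illness features.

    Weights are invented for illustration; not for clinical use.
    """
    points = 0
    points += 2 if age >= 65 else (1 if age >= 45 else 0)
    points += 2 if exertional else 0
    points += 1 if relieved_by_rest else 0
    points -= 2 if reproducible_on_palpation else 0  # suggests musculoskeletal pain
    points += min(risk_factor_count, 3)
    return points

# The two 55-year-olds from the post, as this comment reads them:
man = chest_pain_risk_points(55, exertional=False, relieved_by_rest=False,
                             reproducible_on_palpation=True, risk_factor_count=2)
woman = chest_pain_risk_points(55, exertional=True, relieved_by_rest=True,
                               reproducible_on_palpation=False, risk_factor_count=2)
print(man, woman)  # once the HPI is encoded, the two cases separate cleanly
```

Which is the comment's point: the "clinical skill" in the anecdote may largely be information that a model could use too, if someone records it.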
I fixed the missing word. Thanks.
Adam
Am I correct about the nature of the chest pain in those two patients?
Yup