Having practiced urgent care medicine, both part-time and full-time, over the last 35 years, I can see how a machine could follow the rules very accurately. I wonder how the machine would do with atypical presentations, hysterical patients, and emergency situations. While I am sure AI does well in its own environment, when it's working in the emergency room or urgent care, it is working in mine. A good pair of eyes, working hands and feet, and a sympathetic ear are attributes that I don't associate with a computer program!
I completely understand where Arman is coming from on this. I was recently having a conversation with people about AI, and I think it can be an incredible tool if we look beyond the hype and headlines. Where I am concerned, though, is with the idea of Perverse Instantiation.
It's sort of the AI version of what Sensible Medicine published regarding the influence of doctors on human decision making. We should be concerned about whether it gives the right advice, but we also have to be concerned about how it might recommend things that could end badly for a patient.
We have to use tech and AI as tools that give us more time to care for patients. There are more patients and health conditions than ever before. Human care is what keeps us going.
One of the most concerning aspects of the unlimited implementation of AI in medicine is that, because of how human brains work and because of how incentives are structured in medicine, humans will default to following the AI recommendation nearly completely.
This is another impact that is not discussed frequently in the literature, and a very concerning one given that AI, like its predecessor the EHR, will be thrown at patients and doctors without adequate study before being implemented in patient care.
https://www.sensible-med.com/p/artificial-intelligence-in-clinical
If I could exclude 21% of cases in which I have ‘low confidence’ in the assessment, my clinical decision making would look stellar. These are the cases that differentiate good from average clinicians. In this study, AI gets a pass on these cases. So it’s hard to say based on this study how AI decision making does when it matters most.
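A quick arithmetic sketch of this point, with hypothetical numbers (only the 21% exclusion rate comes from the comment above; the per-group accuracies are assumptions chosen purely for illustration):

```python
# Hypothetical illustration: how excluding low-confidence cases can
# inflate apparent accuracy. All figures except the 21% exclusion rate
# are made up for the sake of the example.

total_cases = 1000
excluded_frac = 0.21                                 # "low confidence" cases set aside
included = int(total_cases * (1 - excluded_frac))    # 790 graded cases
excluded = total_cases - included                    # 210 excluded cases

acc_included = 0.90   # assumed accuracy on the easier, high-confidence cases
acc_excluded = 0.50   # assumed accuracy on the hard cases, had they been graded

reported = acc_included
overall = (acc_included * included + acc_excluded * excluded) / total_cases

print(f"Reported accuracy (hard cases excluded): {reported:.1%}")  # 90.0%
print(f"Overall accuracy (hard cases included):  {overall:.1%}")   # 81.6%
```

Under these assumed numbers, the reported figure looks stellar precisely because the graded sample omits the hard cases where accuracy would plausibly be lowest.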
😬
I would really worry about GIGO (garbage in, garbage out)… our minds are already colonized by Big Pharma.
The lack of blinding of the adjudicators seems like something that would have been easy to overcome, and it is a HUGE limitation… especially when a bunch of the authors work for the company making this gadget.
“Employees say their boss’s product works better” summarizes the study and its issues.
I look at AI in medicine the same way I look at AI in music. If you hear AI-produced music, you can tell. And I suspect over the years it will get more refined. But it will never have the soul and spirit that a human can imbue a song with. And AI music is as obvious to me as a rash on my face. I would imagine physicians would feel the same way. Now granted, I'm old. So I guess this is just a warning to those who think AI is going to be a panacea. I see it the same way we used to see UpToDate when it first came out. Used as a companion, it can be beneficial. Used as a primary source, not so much.
AI is dependent upon a thorough history and accurate, honest clinical studies. I am not a physician but an occupational therapist. In my experience, it often took multiple visits and establishing a connection with my patient to uncover the many factors contributing to the condition. People often do not remember (an auto accident 10 years ago with a cervical injury that could contribute to current carpal tunnel syndrome), lack the sophistication to report something that may be relevant (a patient with multiple admissions for low sodium who turned out to be drinking bottles and bottles of water a day to be healthy), or do not want to reveal the truth. In addition, we are now learning that research has been falsified. In clinical practice, I had the ability to look at studies and question whether they might have a conflict of interest. Or a study may simply not have made sense to me, and thus I would hold it as questionable. I don't know if AI will have the capacity to do that. In a perfect world, I think AI would be a fantastic tool. In our flawed world, AI will be another tool that has the potential to be used for both good and evil.
Although computer interpretation of ECGs may not be the best example of AI, it should be one of the easiest to develop (recognition of a limited number of patterns). In my opinion, other than for the completely normal ECG, computer interpretations of ECGs are often wrong. The computer often misses important diagnoses, and often assigns potentially serious diagnoses to either artifact or normal variants.
I started practice in the 1970s with a cardiology group that contracted with our large community hospital to provide interpretations for all of its ECGs, and I agree with every bit of the above statements. The fact is that there are lots of variations of normal, and judgement is an essential part of ECG interpretation. Judgement (which is also an essential part of medical practice) cannot be computerized.
AI has a couple of advantages over human physicians. AI doesn't care what its patient satisfaction scores are, and it will not suffer if it is sued for malpractice.
There needs to be a Press Ganey report card for AI! A must.
with 5% of its electrons at risk if the results are not good.
🤣
I’m guessing that AI is subject to the same “leash” as human doctors. That is, it is programmed to suggest the accepted “standard of care,” even if it detects a better treatment is available. For example, if it were around in late 2021, AI would not be “allowed” to recommend ivermectin to a patient with a COVID-positive PCR test, even though its safety and efficacy (both superior to Remdesivir) were well-established by then.
AI can’t pray and it can’t give compassion. My love for others will never be replaced. ❤️+
Some will find it odd that the accompanying editorial, which covers the same ground, is not cited. JPK
The adjudication process was not outcome-based. We don't know how these patients actually fared, just what the (biased?) adjudicators thought should happen.
The adjudication process did not involve costs.
I like what it says about unjustified empirical treatment. If AI can stop empirical treatment, it should win the Nobel Prize in Medicine. I would like to know what *justified* empirical treatment is.
It will be interesting to see the pace of improvement of these “agents,” especially regarding patients who present with more complex histories. How will AI resolve potentially contradictory data?