As long as medical records exist mostly to document for payment, I think it's a hopeless task to try to combine that with medical communication.
I never used an EMR until I joined a group that had one about 3 years before I retired in 2017. Never brought a computer into the room until then, just the paper chart. Always dictated a note after I left the exam room in plain medical prose, saying what the problem was, what the findings were and what we agreed to do. My trusty longstanding transcriber got the note back in a couple of days. In later years I did maintain a problem list and a drug list that I made up myself.
When I ran into the EMR I couldn't bring myself to use the point and click method which I found hopeless for real appropriate detail. I just used Dragon Dictate, which was included in the EMR, after I left the room and sadly had to give up my transcriber, although the dictation software wasn't nearly as good as she was. I had to proofread everything. The computer was useful to look up old labs and such and was admittedly a lot less trouble for this purpose than the paper charts which got pretty voluminous over the years.
Somehow, I always got paid.
Today's records, which I always look at after I go to my docs, are almost worthless from the standpoint of concise and accurate transmission of worthwhile information. They're frequently wrong. I don't mean to be critical; I'm just interested, since it's me. The payors just want the correct gobbledygook. My dictated records were what I relied on to review before I went in to see the patient. I'm wondering how today's docs function properly without them. I think it's a real detriment.
I would think, as Dr Schloss is indicating, the modern docs could let AI do its thing automatically for the payors, and then dictate a real note after the visit. AI might be as good as a human transcriber after it gets the hang of your style.
Spot on. I have over 50 years of clinical experience, plus teaching and administration, and have seen only one or two notes worth reading. Typemanship has destroyed clinical medicine such that the mature are deserting the ship and the best and the brightest no longer apply.
For myself it's back to the future. I'm becoming "the frugal concierge" and going back to charting only what will help me figure out what is wrong with my patient. Isn't that what charting was created for?
Well said. Let our software talk to their software and exchange codes, approvals, and finances; meanwhile, we’ll talk to the sick lady and attempt to make her feel better, with treatment, medication, or listening and providing reassurance.
It’ll never happen. IT's and the CFO’s eyes will glaze over, and they'll mutter something about “…we’ll try to get that into the second-quarter budget next year” as they hustle off to another Very Important Meeting.
I did it myself when I encountered an EMR in the last 3 years of my practice. I didn't use the stupid point and click function, but instead just dictated my note into the record after the visit as I was accustomed to do using Dragon. It wasn't hard but took longer because I had to proofread the note. I'm assuming the software could now do a better job at transcribing especially if AI gets into the act.
I remember when what I called mouse click medicine was increasing with Cerner; suddenly patient care came from mouse clicks. Awkward as a provider, scary as a patient. As I have said a time or two, I worry less about those who are thorough with notes and using AI and the EMR as a tool, rather than the brain.
Give it about a year before every large hospital network/university system’s pharmacy and accounting departments have input into their system, allowing the clinical/documenting system to respond with “I’m sorry, Dr Dave, but I’m afraid I cannot let you do that.”
—-But TOTALLY not for financial reasons, don’tcha know…for ‘quality care’.
Our local system literally does not ALLOW clinicians to simply call in an Rx to the local pharmacy for a patient. One is chastised if it is not entered through The System.
All the wrong people are pushing and supporting AI.
We used a different AI to generate notes, associated with the athena EMR. The notes contained about 90% of the info I would want, took five times the words to do it, and were organized so that the information was harder to find the next time I saw the patient, making future chart prep harder.
Also, about 10% of the time the system would fail catastrophically, leaving me to produce my clinic notes from memory alone, because I wouldn't have scribbled or typed anything during the visit; the AI was supposed to do that.
Not quite ready for prime time was my evaluation, but I'm old. Maybe if I were 30, I'd be able to tolerate its shortcomings better.
John Mandrola was right. Your crisp perspective on the Epic mess vs. Abridge clarity and simplicity is spot on. I am a semi-retired cardiologist. My joy these days is helping my long-time and elderly patients navigate their way through an increasingly complex and impersonal health care world. Abridge has made it immensely easier to have meaningful and personal conversations with my friends (who masquerade as patients). For better or worse, it is also a career extender.
Google Glass for Medicine has already solved the transcription problem. Augmedix has a great product!
In another life this was right in my wheelhouse - I used to work with a group that studied the usability of medical devices.
Epic didn't always suck as badly as it does now. It's collected cruft over the years partly because administrators, billing and lawyers drive the features more than clinicians. It'll be interesting to see if AI implementations can escape that dynamic.
Can you write an essay where corporate primary care doctors aren't involved in patient care at all?
To me, the hospital scene from Idiocracy is a documentary:
https://rayhorvaththesource.substack.com/p/time-masheen
It’s also intriguing what kind of “madical” (sic!) paradigm is being applied:
https://rayhorvaththesource.substack.com/p/my-balance-theory-of-health-and-illness
First of all, AI is not intelligent. It is yet another piece of pre-programmed software, with some customization options and fast networking. Software. Nothing more, nothing less.
> synthetic agents that host clinical conversations and then deliver care before a human physician is ever involved
The core programming is secret, and nobody really knows who can alter its parameters or commands, how often, or in what way. Exactly the same as with so-called social media.
If I wanted to use this software in a health care setting, I would need to check and double-check all of its output on a regular basis, for the simple reason that an accidental change in units of measure in a dose, or a mix-up between similar product names, could turn out catastrophic.
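The dosing-unit failure mode is concrete enough to sketch. Below is a minimal plausibility check of the kind such double-checking implies; the drug names, dose ranges, and function are all illustrative assumptions, not clinical guidance.

```python
# Minimal sketch of a sanity check on AI-generated dosing output.
# Drug names and ranges here are invented for illustration only.

# Hypothetical plausible single-dose ranges, in milligrams.
DOSE_RANGES_MG = {
    "amoxicillin": (125, 1000),
    "warfarin": (0.5, 10),
}

# Conversion factors into milligrams for a few common units.
TO_MG = {"mg": 1.0, "g": 1000.0, "mcg": 0.001}

def dose_is_plausible(drug: str, amount: float, unit: str) -> bool:
    """Return True if the dose falls inside the expected range."""
    if unit not in TO_MG or drug not in DOSE_RANGES_MG:
        return False  # unknown unit or drug: flag for human review
    mg = amount * TO_MG[unit]
    low, high = DOSE_RANGES_MG[drug]
    return low <= mg <= high

# A single "g"/"mg" slip turns a normal dose into a 1000x overdose:
print(dose_is_plausible("amoxicillin", 500, "mg"))  # True
print(dose_is_plausible("amoxicillin", 500, "g"))   # False
```

The point of the sketch is that the check is trivial to write; the hard part, as the comment above argues, is getting anyone to run it consistently.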
Will health care workers check and double-check AI outputs? No way.
Why? Because, first, they are already overburdened and hate their work as a result. They will simply affix a mental stamp: “AI generated, must be true.” No liability attached. Second, AI is advertised as a God-like relief tool and miracle. It comes preloaded with a Trojan horse of trust, and it is even streamlined to enhance this impression of trust through fake conversation with the user. Many of AI’s responses are meaningless echoes of the user’s statements, adding the feel of conversational flow while stealing time and hiding the software's real machine nature.
Third, when you consult a human being, you can ask “How certain are you?” or “Have you done this or that?” The paperwork may say one thing, but the other person's reaction triggers the “lying” alarm right away. Or you don't have the courage to ask such questions, because the other person is a pre-AI god in the hospital; bad idea.
All in all, a new type of software is being pushed that permanently and deeply alters the rules of human interaction and communication. Not in a good direction.
I suspect that much of what you write is correct. However, I don't think that we're going to put the horse back in the barn. Hopefully more people will think carefully about the potential downsides to this so that we can at least be cognizant of them and work in our own ways to ameliorate them. But I am not confident at all that this will happen.
My opinion is the same as yours. Too much benefit for those who already enjoy huge benefits. Or, laziness begets laziness.
As with many novelties, I guess we will have to wait until a series of deaths or serious injuries occurs in medical settings and some heads are publicly made to roll.
The premise of ultrafast access to resources is great, provided that the quality of those resources is impeccable, a standard that in medicine simply does not exist. Maybe widespread application of AI will force a pruning of the whole body of medical knowledge and a reset of this archaic system built on past beliefs and fantasies.
More probably, medicine will split into a high-end, super-quality field only for those who can afford it or are allowed to take advantage of it, and shining, blinking magic screens for the rest of us. Programming that is super easy, and tying service quality to the authorization rights of the user is even easier. It is already in place in every single piece of software used in the medical industry.
Plus, the omnipresent bots. Ah.
Agree with you; change needs to occur for patient welfare. Now to convince the nonsense bureaucracy of health care: the lawyers, administrators, and insurance companies.
Unfortunately, it will not be possible to convince the bureaucrats and parasites that have taken over the system. They must be eliminated or, at least, forced back into the relatively minor roles they played before. When I started in medicine in the 1970s, the thought of allowing an administrator or insurance company to dictate medical decisions or mode of practice was absurd. Although never a truly free market system, medical care needs to return to something resembling that earlier model. Is that likely or even possible? Probably not. It would require a cadre of young doctors with imagination and visions of a better way to set up and re-establish a relatively free market system.