18 Comments

I am always amazed at the lengths physicians must go to in order to justify the decisions experience has taught them. Machines will never replace the art of the experienced physician. The Herbert Brothers understood what an expert was: someone who remembers that one patient from 30 years ago who didn't fit the books. Would that those trying to codify the medical decision-making process had a clue.


The Anti-Defamation League Defames Substack.....

The ADL calls for Substack to be deplatformed from Twitter, and then four days later, Substack is magically deplatformed from Twitter

https://karlstack.substack.com/p/the-anti-defamation-league-defames-substack?utm_source=post-email-title&publication_id=353444&post_id=112553413&isFreemail=true&utm_medium=email


This ECG is not diagnostic of a STEMI. It looks more like an old mid-LAD occlusion with an anteroapical MI/aneurysm. The elevated troponin supports ACS, but I doubt the LAD was the culprit vessel.

Would be interesting to see the LVgram.


We have a saying in Russian: "Trust, but check." I know, not very eloquent, but simple. You always have to check to make sure nothing was missed. Thank you for doing it right!


My interpretation of that 12-lead ECG is that this is not an acute anterior STEMI, because the T waves are inverted across the precordium, something that indicates the event is subacute. The absence of an R wave in V3 would also go along with this MI having been completed. How long had the patient had chest pain? Troponins, of course, will be elevated for days after the acute event. Were they trending downward? You said the patient had CABG rather than stenting, which also indicates to me there was no acute thrombus in the LAD. Although I'm a huge critic of computer reads of ECGs, and often see "anterior MI, age indeterminate" called for nothing more than poor R wave progression, I think in this case "age indeterminate" may have been correct.
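For what it's worth, here is a minimal, purely illustrative sketch of the kind of feature-based logic behind a read like "anterior MI, age indeterminate." The feature names and thresholds are my own assumptions for illustration, not a validated clinical rule, and real interpretive algorithms are of course far more involved.

```python
# Illustrative sketch only (not a validated clinical rule): the kind of
# feature-based logic behind an "anterior MI, age indeterminate" read.
# Feature names and thresholds are assumptions chosen for illustration.

from dataclasses import dataclass

@dataclass
class AnteriorLeadFindings:
    st_elevation_mm: float    # max ST elevation in V1-V4, in mm
    t_waves_inverted: bool    # T-wave inversion across the precordium
    r_wave_present_v3: bool   # any R wave present in V3
    troponin_trend: str       # "rising", "falling", or "flat"

def classify_anterior_mi(f: AnteriorLeadFindings) -> str:
    """Rough triage of acute vs. subacute/completed anterior MI."""
    if f.st_elevation_mm >= 2.0 and not f.t_waves_inverted and f.troponin_trend == "rising":
        return "acute anterior STEMI pattern"
    if f.t_waves_inverted and not f.r_wave_present_v3 and f.troponin_trend in ("falling", "flat"):
        return "subacute / completed anterior MI (age indeterminate)"
    return "indeterminate -- needs human over-read"

# The scenario described above: inverted precordial T waves, no R in V3,
# troponin drifting down -> the machine's "age indeterminate" may be right.
print(classify_anterior_mi(AnteriorLeadFindings(1.0, True, False, "falling")))
```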


The point about the limitations of interpretive algorithms in the current generation of ECG machines is well taken. However, those algorithms are relatively rudimentary compared to what machine learning may be capable of in the foreseeable future. Already, there are examples of AI-enhanced ECG algorithms that automatically detect (and allow for early detection of) genetic causes of arrhythmia, or of cardiomyopathy. These are currently not yet ready for prime time. However, I think what ChatGPT has shown is that a "leap" in capability is indeed possible in the machine learning/deep learning arena (I would say more so than with human diagnostic capability). I think that concept should translate to other areas of data-based diagnostics (i.e., AI-driven algorithms to read ECGs, echo, CT, MRI). Of course, I doubt any AI mechanism will be capable of assessing the patient's JVP or auscultating for lung crackles, let alone navigating a left Judkins into the left main, anytime soon. But I'm not sleeping on the capacity for AI to be of useful assistance in diagnosis, likely within the timeline of my career.
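For readers curious what such an AI-ECG model looks like under the hood, here is a hedged sketch of the generic deep-learning approach: a small 1D convolutional network that maps a raw 12-lead ECG to screening labels. The architecture, label set, and sizes are illustrative assumptions only, not any published or validated model.

```python
# Hedged, generic sketch of an AI-ECG screening model: a 1D convolutional
# network over raw 12-lead waveforms. Layer sizes and the label set are
# illustrative assumptions, not a real or validated model.

import torch
import torch.nn as nn

class ECGScreeningNet(nn.Module):
    def __init__(self, n_leads: int = 12, n_labels: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=15, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_labels)   # one logit per condition

    def forward(self, ecg: torch.Tensor) -> torch.Tensor:
        # ecg: (batch, leads, samples), e.g. 10 s at 500 Hz -> 5000 samples
        return self.classifier(self.features(ecg).squeeze(-1))

model = ECGScreeningNet()
logits = model(torch.randn(1, 12, 5000))   # one synthetic 10-second ECG
print(torch.sigmoid(logits))               # per-condition probabilities
```

The point of the sketch is that such a network learns its own features from the raw waveform rather than relying on hand-coded criteria, which is why a "leap" over rule-based ECG interpretation seems plausible; it is also why a human over-read remains essential for anything the model was never trained to see.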


Very interesting. I wish I could have gotten a printout of my stress test of some 18 months ago. I wish the cardiologist had taken 10 minutes to explain what he saw...versus what is normal. I know, doctors have no "extra" time for many patients. I guess most patients just want a test, a diagnosis, and a prescription. What is even more interesting is why a mid-50s man would need a triple bypass so early in life. It is terribly sad that medicine has no desire to discover those specific reasons.

No way will I ever trust a machine to interpret test results or much of anything else. A/I may be good for answering the phone (maybe) but not much else. Looks to me like A/I can be directed any way the programmers want it to go, including offering political motivations, lies, and propaganda, and eliminating certain things. If it comes from man, it can be manipulated and will never be bias-free.


Good example to support the use of AI in urgent-care scenarios of a certain type, yet the huge challenge still to be overcome will be discerning the case types where AI should not loom too large, if at all, in the overall choreography of particular diagnosis and therapy selections (including timing). How such issues will get hammered out is totally up for grabs. I foresee the creation of new journals, new national meetings, maybe even new "board certifications" for doctors/nurses, and {God help us} a large blizzard of new bullshit and new buzzwords and new jingles. I even foresee the creation of yet another managerial niche in modern healthcare temples: AI Officer, perhaps one for every specialty. Not kidding. And do not forget that the Bean Counters with the green eyeshades will no doubt find a way to lobby for putting into every single AI "software template" various concealed gimmicks that will be carefully crafted LOSS PREVENTORS (i.e., money-saving sidetracks).


And the AI Officer will monitor every keystroke, every data entry, and make sure you account for every intervention you initiate, and then schedule some time-management training because you're not seeing enough patients. As I said many times in computer-related training: I still have a job without your program, without you, but you have no job without me. Not sure how, but docs and everyone else will at some point have to humble the IT people who refuse to ever humble themselves.


Thanks for this contribution, Ruth. I had not envisioned the "after-action snooping" that AI assuredly can achieve with great penetration, silently behind the scenes -- viz., monitoring ALL of our professional actions even MORE than what is now being done manually and slowly by the Chart Reviewers sitting up on the top floors of hospitals, right down the hall from the suites where overpaid CEOs with the bleached teeth, the fixed smiles, the Gucci loafers, and the constant streams of uttered bullshit get to sit around thinking of new but inane PowerPoint slides to spice up their next Morale Boosting effort. This whole AI mess has high potential to take 'Murikan healthcare the rest of the way down the commode. Pretty much a straight flush.


Exactly. In the court system, we had managerial types, none of whom were respected probation staff prior to their appointments, working with IT staff to monitor keystrokes, the content of notes about client meetings, and calendars of client meetings to determine what notes one should have added. I kept written notes to reference prior to future meetings, and a written calendar, and often failed to meet the computerized expectations. Had I not retired, I would likely have been disciplined, or worse, for the first time in 30 years. Hopefully you and your colleagues are able to infiltrate the top floors and effect some change in this foolishness. After all, they have no job without you.


That's a good one. A/I officer...will he be a promoted A/I robot who did well in school?


Please. . . . . .will "SHE/HE" be promoted. . . . . .the AI monitor function would have caught you red-handed being "inappropriate gender-wise" and sent an electronic signal to your supervisor. Then, in more follow-up of the follow-up, the AI monitor would watch to see if the supervisor took any action. This is one example of a plausible nightmare that can unfold going forward (buzzword alert). You can't make up this shit.


I’m optimistic about AI’s possibilities in medicine, but only if it’s tempered by human judgment of the sort you describe. Since experiencing AFib in 2016, I’ve been greatly assisted by my AliveCor Kardia devices. They helped me avoid unnecessary ER visits and, at least once, prompted a necessary visit. But, as you describe, there was always a human at the end of the process issuing judgment on the algorithms.

The optimistic side of me suspects that, for many processes, AI will push the need for that human judgment later in the workflow by reliably automating the early steps.

My pessimistic side worries that the humans at the end of the process will get in the habit of rubber-stamping the algorithm, rather than evaluating it.

AI ought to be thought of as an expert witness and not as a judge.

And on the subject of ChatGPT, my concern is that, for now at least, its output is correct often enough to make it persuasive, but incorrect often enough to make it dangerous. And its extreme popularity could make it a constant and unhelpful component of healthcare by greatly lowering the signal-to-noise ratio in medical conversations. (See my recent piece, "The Talented Doctor Ripley," at https://graboyes.substack.com)


Your entire comment is spot on! As long as the bean counters don’t further dehumanize medicine in pursuit of the almighty $, we will be OK for the most part. The problem with computers treating humans is that they lack that human touch, which we all know and need. AI can never have that.


Thanks, Jim! I once attended a lecture by a well-known figure who argued that ATMs were invented as a way for banks to save money by dehumanizing day-to-day banking tasks--thus harming consumers. I thought the argument was idiotic. I am an off-the-charts extrovert who thrives on small talk. However, I'm thrilled that cashing checks, getting a stack of twenties, and making deposits do not require me to go to a bank, stand in line, and then jabber with a cashier. Those tasks, it turns out, are not enhanced by the human touch. (Plus, when I need a human, I still have the option to walk in and talk to one.) Similarly, people feared robotic pharmacies out of a belief that human judgment would reduce errors; of course, it turns out that the machines made fewer errors than the fallible humans. The real challenge going forward is deciding what needs a human touch and what doesn't.

P.S.: I still have it on my to-do list to write you. Back with you soon.


I don’t think that Artificial Intelligence will ever be able to beat experienced physicians who are able to read subtle symptoms and look at the whole picture.


We can only hope that A/I doesn't take over the world.
