11 Comments
Dr. K

This article both makes and misses the important point. People have loved Eliza for years...and still do. No AI there -- just a sympathetic, psychoanalyst-like set of interactive prompts with which people will positively engage for hours. (This was recently in the news because Eliza did not tell a patient to seek professional help and he committed suicide. Eliza was developed as a "toy" demonstration project decades ago -- it was not meant to DO anything.)

For the kinds of questions that are asked on reddit, I am certain that ChatGPT (4 is even better than 3.5 by a fair amount) will do a better job. It has access to an almost infinite amount of "data" and an almost infinite amount of time (and electricity) to aggregate it. If health care were about answering questions, it would have a definite place -- and likely will get that.

But I always tell my patients two things: 1) You are your own science experiment -- I know lots about lots of patients but which part of what I know applies to you is something we will discover together and 2) Our interaction is based 20% on what I know after medical school, residency, fellowship and decades of practice and 80% on how well I can take that information and correctly make it be about YOU.

ChatGPT and its peers may know more than I do about my areas of medicine (although Larry Weed's research with Problem-Knowledge Couplers says otherwise), but it knows nothing about you, and because of the broken structure of health informatics it likely will not for the foreseeable future. What you tell it about you is subject to nuances that you are unlikely to understand and that an LLM will not understand (since LLMs actually "understand" nothing).

Irrespective of the reddit study, which is interesting but in many ways irrelevant to medical practice, it is as easy to make ChatGPT confabulate/hallucinate in health as it is in anything else. And as with the Eliza patient's suicide, the effects can and will be devastating. But even more so (and we have extensive experience with Watson underscoring this), the information one gets, while great for a general answer in a reddit thread, is only coincidentally of value to YOU -- and sometimes can be inimical.

This is about the 10th time during my life (going back to Ted Shortliffe) that AI was going to radically change medicine. Every one of these cycles has failed for the same reason -- knowledge about medical facts has almost nothing to do with appropriately and optimally caring for patients where the N is always ONE. In that regard, we have not yet seen artificial stupidity -- which, based on numbers, will have to precede any kind of actual care-managing artificial intelligence.

Vandan Panchal

I am assuming you are a psychiatrist from your post. I agree with you in the sense that nuance is what makes the difference between general medical information on Reddit/YouTube and personalized recommendations based on the complexity of a patient's condition and the 20-30 variables that play a role in the decision-making process. One size never fits all, and never will.

However, I have used GPT-4 to create sample scripts of patient-doctor interactions, and have used them to improve my own personal vocabulary. Also, it sometimes comes up with amazing analogies for explaining medical conditions that reach the patient at their level. Physicians have lost some of the art of effective communication in this time-constrained, assembly-line medicine, where profit and money-generating incentives are so misaligned with the care the patient deserves.

Dr. K

Lots of good points here. It is not that AI (or any tool) is never useful. It is just not useful to the "there will be AI doctors" conversation that many wish to have. I view ChatGPT and other inferential engines (whether deep learners like Watson or LLMs like Bard/ChatGPT) as additional tools that will make some things better. I think an excellent analog is Adobe Photoshop. Photoshop has made the entire field of photography better. But it will not replace photographers. (And neither will DALL-E, Midjourney, or Stable Diffusion, although they add a different perspective.)

Vandan Panchal

I think of them as tools and copilots that make our lives easier. That one-on-one conversation in the clinic room is still where the magic of reassurance and counseling will happen. I hope that someday they make them less depressing than they currently are. There is an interesting YouTube video by Esther Perel called Artificial Intimacy in which she makes some excellent points about the difference between her and a patient of hers who made an AI Esther chatbot.

Mary S. LaMoreaux

It makes you wonder whether ChatGPT is so good, or our health system has gotten so impersonal. My cataract surgeon has no idea who I am, but my regular eye doctor has known me for years. That's why I go to her. The cataract surgeon just lines everyone up like cattle and makes a fortune.

Rural Doc Alan

I'm wondering if the great grades ChatGPT got can primarily be attributed to physicians no longer having any real connection with their patients now that family practice doctors have become so rare. If you don't know your patient, any generic statement may fit. As the Herbert brothers posited years ago, expertise is remembering something that happened once 30 years ago that may not fit the pattern or the "rules." Expertise is not a collection of facts. The collection of facts may be detailed and accurate, but we don't know if the ChatGPT results really covered the patient's specific circumstances.

Vandan Panchal

As an internist, there are many times my quick heuristics can lead to wrong diagnoses, and I can't wait to someday see GPT integrated into Epic, creating a better summary of all the patient's historical data. Chart review has become more cumbersome due to the sheer amount of noise, in terms of redundant note bloat, which makes the signal-to-noise ratio terribly poor.

Mirine Richey

I find ChatGPT useful for so much, but not for lactation issues. It misses the mark every time I have presented the same case to a group and to ChatGPT. But at least it is open to taking correction.

Kim Lucas

1. Many questions can be the equivalent of "are we there yet?" They fail to be interesting or challenging to engage with, especially after a full day of cognitive challenges.

2. Next study: What do patients think? Not all patients want or need the same intensity of an answer. I'm curious how many patients actually read those canned discharge instructions they are given after an ER visit -- a waste of paper and ink most of the time. What I would actually measure is the ability of such a message to allay anxiety, or to recognize that anxiety is what is driving the need to communicate.

Alan Sherrill

Chat "General Practitioner " T?

Kat W

That is fascinating! However, who are you going to sue for malpractice if ChatGPT is wrong?
