Discussion about this post

Dr. K:

This article both makes and misses the important point. People have loved Eliza for years...and still do. No AI there -- just a sympathetic, psychoanalyst-like set of interactive prompts with which people will positively engage for hours. (This was recently in the news because Eliza did not tell a patient to seek professional help and he committed suicide. Eliza was developed as a "toy" demonstration project decades ago -- it was not meant to DO anything.)

For the kinds of questions that are asked on reddit, I am certain that ChatGPT (4 is even better than 3.5 by a fair amount) will do a better job. It has access to an almost infinite amount of "data" and an almost infinite amount of time (and electricity) to aggregate it. If health care were about answering questions, it would have a definite place -- and likely will get that.

But I always tell my patients two things: 1) You are your own science experiment -- I know lots about lots of patients but which part of what I know applies to you is something we will discover together and 2) Our interaction is based 20% on what I know after medical school, residency, fellowship and decades of practice and 80% on how well I can take that information and correctly make it be about YOU.

ChatGPT and its peers may know more than I do about my areas of medicine (although Larry Weed's research with Problem-Knowledge Couplers says otherwise), but it knows nothing about you, and because of the broken structure of health informatics it likely will not for the foreseeable future. What you tell it about you is subject to nuances that you are unlikely to understand and that an LLM will not understand (since LLMs actually "understand" nothing).

Irrespective of the reddit study, which is interesting but in many ways irrelevant to medical practice, it is as easy to make ChatGPT confabulate/hallucinate in health as it is in anything else. And as with the Eliza patient's suicide, the effects can and will be devastating. But even more so (and we have extensive experience with Watson underscoring this), the information one gets, while great for a general answer on a reddit thread, is only coincidentally of value to YOU -- and sometimes can be inimical.

This is about the 10th time during my life (going back to Ted Shortliffe) that AI was going to radically change medicine. Every one of these cycles has failed for the same reason -- knowledge about medical facts has almost nothing to do with appropriately and optimally caring for patients where the N is always ONE. In that regard, we have not yet seen artificial stupidity -- which, based on numbers, will have to precede any kind of actual care-managing artificial intelligence.

Mary S. LaMoreaux:

It makes you wonder whether ChatGPT is so good, or whether our health system has gotten so impersonal. My cataract surgeon has no idea who I am, but my regular eye doctor has known me for years. That's why I go to her. The cataract surgeon just lines everyone up like cattle and makes a fortune.

