20 Comments

This is a healthy debate, but from the jump it needs more context. How will AI Medicine be used by Insurance Payers "not to pay"? How will Government regulators use it to "regulate" (spy) on clinical practices? How will Hospital Administrators use AI to re-route resources to more or less efficient practitioners within their orbit? These questions need to be addressed head on.


The biggest concern is "...And they draw from the vastness of knowledge on the Internet."

May 3, 2023·edited May 3, 2023

I will never use ChatGPT. How do I know that ChatGPT is not controlled by another ChatGPT?...and then another? Is all the literature that these AI dunces might reference really the truth? Is all that is written about history really the truth? Is all that is written about anything really true? Am I gonna trust Dr. Google? Not in a million years.

All of these medical chatbots will be underpinned by standard medical info, which is mainly controlled by the AMA and big pharma. Where do other reasonable answers and solutions, such as natural remedies, come in? Is my Chatty Kathy robot programmed to give me options beyond standard medical care?


We might ask where these AIs get their data. Advanced literature search in a journal world corrupted by pHarma? While we often use Dr. Google, do we trust that for complex cases?


I agree with the criticism of the experiment being framed as a research paper (with P values and all). The specifics of this paper will have near-zero external validity. But as a 30,000-foot perspective on the lay of the land (and perhaps it would have been better suited as a perspective or opinion piece rather than "original research"), I think it serves a purpose.

I’ve used GPT-4 a bit, and not for anything medical, but its answers are far more complete and useful than a Google search. For starters, I have little doubt patients will start consulting Dr. Chat instead of Dr. Google. So my concern would be that it is only giving circa-2021 answers in 2023. I would take no umbrage (or be at all surprised) that it gave better answers than some Reddit docs.


While not totally relevant to the chatbot, and as a non-certified medical professional, I offer up my one comment re: "And deterioration of patient-doctor relationships are on a steep decline. Exhibit A: the rise in walk-in clinics."

Permission, granted or coerced or otherwise, for Pharma to advertise on television has, medically, destroyed the USA and perhaps New Zealand. (N.Z. was reported some time ago as the only country other than the USA permitting such an outrageous selling out of the doctor-patient relationship.)


This article both makes and misses the important point. People have loved Eliza (an imitation of a psychotherapist that long predates modern AI) for years...and still do. No AI there -- just a sympathetic, psychoanalyst-like set of interactive prompts with which people will positively engage for hours. (This was recently in the news because Eliza did not tell a patient to seek professional help and he committed suicide. Eliza was developed as a "toy" demonstration project decades ago -- it was not meant to DO anything.)

For the kinds of questions that are asked on Reddit (which is the focus of the article), I am certain that ChatGPT (4 is even better than 3.5 by a fair amount) will do a better job. It has access to an almost infinite amount of "data" and an almost infinite amount of time (and electricity) to aggregate it. If health care were about answering questions, it would have a definite place -- and likely will get that.

But I always tell my patients two things: 1) You are your own science experiment -- I know lots about lots of patients but which part of what I know applies to you is something we will discover together and 2) Our interaction is based 20% on what I know after medical school, residency, fellowship and decades of practice and 80% on how well I can take that information and correctly make it be about YOU.

ChatGPT and its peers may know more than I do about my areas of medicine (although Larry Weed's research with Physician Knowledge Couplers says otherwise), but it knows nothing about you, and because of the broken structure of health informatics it likely will not for the foreseeable future. What you tell it about you is subject to nuances that you are unlikely to understand and that an LLM will not understand (since LLMs actually "understand" nothing).

Irrespective of the Reddit study, which is interesting but in many ways irrelevant to medical practice, it is as easy to make ChatGPT confabulate/hallucinate in health as it is in anything else. And, of course, it is pre-wired to only give advice that complies with the diktats of whomever is in power now. As with the Eliza patient's suicide, the effects can and will be devastating. But even more so (and we have extensive experience with Watson underscoring this), the information one gets, while great for a general answer on a Reddit thread, is only coincidentally of value to YOU -- and sometimes can be inimical.

This is about the 10th time during my life (going back to Ted Shortliffe and MYCIN) that AI was going to radically change medicine. Every one of these cycles has failed for the same reason -- knowledge about medical facts has almost nothing to do with appropriately and optimally caring for patients where the N is always ONE. In that regard, we have not yet seen artificial stupidity -- which, based on numbers, will have to precede any kind of actual care-managing artificial intelligence.


The one problem I have with this review is the notion that patients no longer have personal relationships with their physicians and the implication that we might as well let this part of medicine fade away. If anything, this study indicates the real need to bring back to medicine the physician who knows the patient.


Well, the chatbot is better than Reddit. I guess that is what we can conclude. How long will it take before it is better than any doctor? Not so long. But still, doctors are needed when it actually comes to examining someone, and often also to treat. But for a first round of advice, I think the AI bot will soon take over.


ChatGPT is the ENIAC computer of the AI revolution. It is but a primitive preview of that which is about to happen. AI will bring greater change to our world than we can ever imagine today, and that change will happen more rapidly than we can accommodate. We quite simply cannot predict where this is going to go in the next decade, much less the future.


I agree. It’s not AI or people. It’s AI with people that is the game changer. I personally use ChatGPT to provide a framework for marketing copy for my companies. I don’t use it word for word, but it provides a very good starting point.


People can be as critical of the paper as they like. This is definitely a before/after moment.

I encourage anyone who is skeptical about the future of MedicineGPT to try asking Bing a medical question and then try the same query in UpToDate.

The Bing response (which uses GPT-4) is likely to be much, much better.

Some recent examples I have tried:

1. What is the evidence for albumin

2. What cancers cause elevated CA 19-9

3. What is the treatment for phantom limb pain

In each case, GPT-4 was far more helpful than UpToDate. LLMs won’t replace expertise, but they are a leveling up of the average. Medicine is now a strong-link problem rather than a weak-link problem.

May 1, 2023·edited May 1, 2023

Folks who care about evidence should rip this study apart, and have.

The effect of AI on the doctor-patient relationship is a great topic.

The study? As folks say in the business, it is at best hypothesis-generating. And for folks paying attention, it's not even a novel hypothesis.


Comparing low-quality online advice to a bot is meaningless. My limited trial of the bot demonstrated it to be as useful as a basic medical textbook. So to look up an answer to a simple question, it is very good. As I ponder the debate, I consider that the only way a bot could ever surpass an in-person doctor visit would require a complete surrender of all medical privacy to the bot. I would not agree to living in that world. Perhaps I am lucky to be over 60.

The data input into medical records on a patient visit is never complete and must be contextualized. These limits will limit what the bot can infer. The bot can search all the available literature fast and will be a useful tool for a medical practitioner. PubMed is already a great tool, provided the clinician is not too busy or distracted to use it. A lot of data that a doctor captures on a patient visit is never entered into a medical record. The local, occupational, and dietary variance of a given patient, and the likely compliance, are never fully captured by a bot question and the data entry. AI is a misnomer, as these bots are not intelligent but only extract data based on a human-entered algorithm.

Rapid access to data will always improve medicine but can never replace a doctor. I was told when I entered pathology that I would be replaced by PCR testing. NOT! But we do use PCR testing as a diagnostic tool every day. So yes, data extraction tools will be in the tool box of competent medical professionals. The "AI" will simply be a tool upgrade, but that doesn't really stir emotion like pretending your doctor will get replaced. Go online and try to get help via the web from a company that refuses you a connection with a human. It works for the simple, repeatable issues but does not allow for the input of complex situations and an appropriate situational response. The same will be true for any AI tool.


Chatbots can also be programmed to prescribe the latest drugs and further widen the prescription of best sellers (think statins with updated LDL thresholds, for example) more easily than trying to convince every MD of new guidelines.
