I'm more concerned about what we'll do to ourselves in trying to meet the standards set by machines than about what machines might do.



My only reservation is that throwing steroids at sciatic nerve pain treats the symptom without addressing the underlying cause. Even the human expert response does not effectively communicate the risks, benefits, and purpose of steroids in this context. His sciatic nerve is flaring because of somatic dysfunction, which may or may not be addressed by a good PT. This post also illustrates what’s disappointing to me in modern medicine: physicians often just respond and satisfy patients’ prescription requests because it’s lucrative and less laborious.


That's a good read, thanks. I totally agree that there is currently no acceptable method of getting assessable health metrics into an LLM for diagnosis without a professional who can draw them out of the patient. These kinds of exercises might be made more interesting if the LLM could be set with parameters to always ask follow-up questions to collect inputs, and perhaps even to pre-load the prompts with issues that should be teased out of the patient. This too will require expert input, and it might start to advance the availability of focused LLMs for specialist areas.
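The idea of parameterizing an LLM to always gather inputs before answering could be sketched as a pre-loaded system prompt. A minimal illustration follows; the prompt wording and the list of required fields are hypothetical examples, not a validated intake protocol or any particular vendor's API:

```python
# Sketch of pre-loading an LLM session with intake instructions, as the
# comment suggests. The field names below are hypothetical placeholders
# that a specialist would replace with clinically appropriate ones.

REQUIRED_FIELDS = [
    "symptom onset and duration",
    "current medications",
    "allergies",
    "relevant medical history",
]

def build_system_prompt(required_fields):
    """Build a system prompt that forces follow-up questions until all fields are collected."""
    checklist = "\n".join(f"- {field}" for field in required_fields)
    return (
        "Before offering any assessment, ask follow-up questions until you "
        "have collected all of the following from the patient:\n"
        f"{checklist}\n"
        "If any item is still missing, ask about it instead of answering."
    )

print(build_system_prompt(REQUIRED_FIELDS))
```

The expert input the comment calls for would live in that checklist: each specialty could ship its own list of issues to be teased out before the model is allowed to respond.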


I would add that a good physician takes a case history and reviews medications at every appointment. Most Drs I’ve visited have their eye on the clock, ready to move on to their next client. It’s a case of buyer beware when consulting a Dr. Due diligence on the client’s end is a necessary safeguard for checking a Dr's advice. Sadly, most have little to no health education and rely upon ‘expert’ status instead.

I’d say that if the public had access to a truly free search engine, all manner of cheap and readily available remedies would become household knowledge again, and big pHARMa would lose 70% of its trade in drugs. The placebo effect of the ‘expert’ status confers a 70% improvement in a believer.

Health education must be placed back into the hands of the people.

I have a new take on lung and blood physiology that dismisses the gaseous exchange of oxygen and carbon dioxide.

My Substack article is titled:

We breathe air not oxygen.

We measure air by its moisture content: its humidity.

We measure oxygen by its dryness. For example: medical oxygen has 67ppm of water contamination. Industrial oxygen has 0.5ppm of water contamination.

The lung alveoli require air to reach 100% humidity. This tells us the lungs are a wet system.

Can you see the mismatch? Can you see a path for damage and destruction?

Research oxygen toxicity.
Oxygen is prescribed primarily for the terminally ill, not for breathlessness.

Palliative Care is not kind.

The lungs hydrate the RBCs as they pass through the alveoli capillary nets with salt plus water.

The saline drip is the hospital’s No. 1 reviver because it rehydrates red blood cells through venous exposure.

The RBCs have two states, dark contracted = DEHYDRATED and light expanded = HYDRATED.

Red-light monitoring is checking hydration, not oxygen saturation.


Dec 20, 2023 · edited Dec 21, 2023 · Liked by Adam Cifu, MD


I love your work, but you are likely wrong vis-à-vis the impacts of AI. This is because of profound foundational issues in the probabilistic AI model. Your illustrations make the point, so why you would think that this will all get better over time is elusive.

Of course, there are areas that the current high-iteration AI approaches can do/will do better than people. However, we could have a more-than-interesting conversation about what the actual impacts of AI will be on medicine.

I have been working in this space since Shortliffe/Feigenbaum. The difference in medicine (and fields like it) is that the probabilistic underpinnings of virtually all AI models render the tool inappropriate for the job of caring for patients (excluding niche cases like computer vision for image scanning) which, fundamentally, is the only important job of medicine.

A solution that is 90% likely to be right still kills 10% of the patients; and because of fundamental technology issues, something between 90% and 95% correct seems to be the ceiling for probabilistic AI approaches (even with "perfect" training, accuracy/reliability fades somewhere between 90% and 95% as the tool is overtrained). And the lack of back-traceability, owing to the uncertain stochastic operations foundational to the technology, makes this failure to provide a "path to truth" FOR EACH PATIENT a permanent issue... as your examples show. And last I looked, that is the whole point of medicine.
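The arithmetic behind that claim is worth making explicit. A toy calculation, using the commenter's assumed 90-95% accuracy range rather than any measured figures:

```python
# Toy illustration of the point above: even high per-case accuracy
# leaves many errors at population scale. The accuracy values are the
# commenter's assumed 90-95% range, not real performance data.

def expected_errors(accuracy: float, n_patients: int) -> float:
    """Expected number of incorrect outputs over n_patients independent cases."""
    return (1.0 - accuracy) * n_patients

for acc in (0.90, 0.95):
    print(f"accuracy={acc:.2f}: ~{expected_errors(acc, 10_000):,.0f} errors per 10,000 patients")
```

At 90% accuracy that is roughly 1,000 wrong answers per 10,000 patients, and at the claimed 95% ceiling still roughly 500, which is the scale of harm the comment is pointing at.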

These foundational mathematical structural issues must also be considered in light of the fact that there is not (and will never be) a training set for Adam Cifu, or me, or anyone else (which essentially was the reason for the failure of Watson in health...to its credit, Watson does better in weather) and the medical AI problem only expands from there. There have been literally 10 generations of medical AI announced as "saving the world" since Ted Shortliffe did his work. They all have ended up in the same place. This current, largely LLM driven, generation once again has more iterations than the one before but is essentially still the same technology and has the same failings.

There is a potential alternative (on which some of us are working) that involves inserting a deterministic, mathematically sound "Cognitive AI" component into the mix. Description logics are an example of how one underpins this kind of approach. Perhaps this will be the proverbial brass ring...because there will need to be one. Intel has written a decent paper on this problem set that will interest those readers who find this space compelling: https://towardsdatascience.com/the-rise-of-cognitive-ai-a29d2b724ccc (It is on Medium so you will need an account there (free) to read.)
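The appeal of a deterministic "Cognitive AI" layer is that every conclusion can be traced back to explicit premises, in the spirit of description logics. A minimal sketch of that property (the rule, condition, and drug names are hypothetical examples, not a real ontology or clinical ruleset):

```python
# Minimal sketch of a deterministic rule check: unlike a probabilistic
# model, every finding carries the exact premises that produced it, so
# the result is auditable. Rule contents here are hypothetical examples.

RULES = {
    # (condition, drug) pairs that deterministically trigger a flag
    ("renal_impairment", "metformin"): "dose review required",
}

def check(patient_conditions: set, prescription: str) -> list:
    """Return every rule that fires, with its premises, so each conclusion is traceable."""
    findings = []
    for (condition, drug), action in RULES.items():
        if condition in patient_conditions and drug == prescription:
            findings.append({"premises": (condition, drug), "action": action})
    return findings

print(check({"renal_impairment"}, "metformin"))
```

The same inputs always give the same output, and an empty result means provably that no rule applied, which is exactly the per-patient "path to truth" the comment argues probabilistic models cannot offer.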

Your "amazing physician expert" will not be able to surmount these issues in traditional LLM/Deep Learning models because they are foundationally insurmountable, as the Intel paper notes. There is much more heat than light in this space if one is actually interested in improving the care of each patient.

Dec 20, 2023 · Liked by Adam Cifu, MD

One word: incrementalism.

One injunction: follow the money.

Medicine is subject to the same economic limitations as every other undertaking that requires funding. Having already been medically harmed by inappropriate over-reliance on software by medical corporations (on more than one occasion), I suggest that your optimism is misplaced.

Dec 20, 2023 · Liked by Adam Cifu, MD

How many times can you read that canned dialogue?! It’s not even smart enough to offer the option of clicking a request to be called by somebody 😵‍💫. It also cannot pick up on the fact that someone is just plain clueless, which humans are more likely to do. Although the average person answering the phone at most call centers is not much better...


Hi, I was diagnosed with terminal lung cancer that metastasized to the brain. They did 10 radiation treatments to shrink the lesions, and that has helped. In the meantime, I have lost some mobility and coordination in my left leg and am dependent on a walker to get around. They say this happens one in a million.

When I found out I was terminal, I decided to take Ivermectin and Fenbendazole to kill the cancer. The problem there is that I have no doctor who will help with what I need to do, so I’m on a wing and a prayer.

If anyone out there can help me kill this shit, lengthening my life, I would be eternally grateful for the rest of my existence.


I predict that we are going to see mid-levels practicing with AI oversight, and that will be touted as being as good as seeing an MD. And sadly, in many cases, it probably will be.


Interesting article that starts out with the statement "I am confident that AI will improve healthcare" and then lists a number of reasons why it will not. I would say that the unavoidable roadblock is that no one can computerize judgment. I learned this many years ago by playing bridge with a number of computerized programs. One would think that with only 52 cards distributed among four hands, this might be easily adapted to computer play. But I could never find a program that could play decent defense or bid better than a beginning-level player.


What I would like AI to accomplish is dealing with all the black-box processes in medical practice: prior authorization for medications, studies, and procedures, and reverse engineering of all the preventive algorithms developed by insurers to restrict access to care. I believe this is a perfect application for the “deep learning” that constitutes AI: patterns are recognized and responses developed. It would save physician and staff time and misery. It may come down to pitting provider AI vs. insurer AI in a battle for the ages.

The other use that occurs to me is the opportunity to organize and analyze the vast amount of data generated daily through medical care (essentially ongoing trials of both diagnostics and therapies) to recognize outcomes associated with all of the various activities. This was the promise of EHRs and ICD-10 with its 125K diagnoses. Hopefully AI technology will be able to derive value from this source.

I do not think the current technology has much bedside/clinic-room use at present, as well presented by Dr. Cifu and multiple others in the community. The use of AI in radiology is very important, however, as pattern recognition is a strength of this technology. Interpretation of a 3D mammogram would be daunting in the absence of CAD technology.

I am optimistic that multiple valuable applications for AI will be developed in the next decade.


Context is actually something LLMs do reasonably well, so it’s plausible that you could have an AI in the future that ingests all of a patient’s historical medical charts and is able to catch things a doctor might miss. However, you’re not going to train that kind of AI by ingesting WebMD, since the stuff online is focused on giving bland advice that is theoretically applicable to anyone.

Dec 20, 2023 · edited Dec 20, 2023 · Liked by Adam Cifu, MD

Thank you for inspiring my article for today. This is actually my field of expertise (although cooperating with doctors for several years helped, too):


Dec 20, 2023 · edited Dec 20, 2023 · Liked by Adam Cifu, MD

"I cannot foresee a time that people will be comfortable receiving counseling from a computer or receiving recommendations about diagnostics or treatments."

Adam, I'm sorry, but I'm already there. I've been blown off so many times by doctors, including being refused treatment for a condition that can result in stillbirth, which I had in two previous pregnancies before I moved across the country. At this point, I'd rather argue with a computer than a human. I suspect a lot of older women are the same way. I was listening to a podcast on menopause by a female doctor. She pointed out that earlier in her career, women who came in complaining of menopause symptoms were known as "WW" - Whiny Women. They rarely got their needs met and were simply blown off. It was only after going through menopause herself, and realizing that their symptoms and problems were hers, that she took them seriously. She says possibly only 10% of women with these symptoms are getting help or treatment like HRT. Bring on the AI.


Oh, I just hate to go here, but can you imagine AI trying to interpret biology on gender alone in today’s world? Recommending a Pap for a biological male who now identifies as a woman, or checking for testicular cancer on a biological female who now identifies as male, would cause the CPU to smoke a bit...

In all seriousness though, this reminds me of when EMRs came out, designed to make things easier and more efficient. We all know how THAT worked out... not to mention that it can lead to a lot of “mouse click” medicine.

As for what I think AI will be used for in medicine: follow the “usefulness” to big-business healthcare, which means potential for $. I doubt that a large percentage of PCPs will think that way, given the nature of the compassion they have, but big-system bean counters will love anything AI does to add $ to the coffers. We all get that.

I agree that no AI can do the thing humans, at least this human, need most when seeking care: compassion and understanding at a person’s often most vulnerable time.

I’ve heard of AI doing music (my chosen post-retirement passion), and it cannot touch what humans do. So, with that, I’ll stick with egalitarian human healthcare.


Dec 20, 2023 · Liked by Adam Cifu, MD

It’s been a bit, but I commented on a past S Med post that medical AI will be how everyone can have Access to healthcare. But at this time, and per Dr. Cifu’s account, it appears it will give access but may not truly bring a patient Healthcare. Furthermore, if AI opens the proverbial floodgates of patients, this piece implies that more physicians will be needed, which is a present-day problem that needs addressing today, even without AI.

Access and Healthcare capitalized for emphasis.
