When I began practicing medicine, use of the Internet or a smartphone could not have been imagined because neither existed. I believe, and Adam has written, that we may be at a similar inflection point with AI. My friend and colleague, Jay Schloss, writes about his early adoption of AI and how it will utterly transform the practice of medicine. Many doctors have bemoaned their loss of professional autonomy. What AI is likely to bring could be far worse for some doctors—extinction. JMM
We often confuse healthcare documentation with healthcare delivery. But any clinician can tell you: the two are not the same. In practice, the EHR is often a shadow play — dot phrases, notes copied forward, boxes mechanically checked to satisfy billing requirements, a flurry of structured data that says very little about the actual care delivered.
Meanwhile, real medicine happens in the room: a patient explains, a doctor thinks aloud, a plan emerges. That subtle, high-context exchange is what actually drives diagnosis, treatment, and patient trust. And it’s largely invisible in the record.
Today, for the first time, new digital tools can directly observe how healthcare is delivered. Ambient AI systems are listening in on thousands of real-world patient encounters — not just what was charted, but what was actually said. In the past, what happened in the room stayed in the room. Now, with machines capturing tone, reasoning, and trust at scale, we’re seeing clinical work in ways that were never visible before. That messy, human data has the potential to reshape how we think about care and who delivers it.
That’s why ambient voice technology, and what companies like Abridge, Inc. are doing, matters more than many realize. (Note: I use Abridge’s ambient scribe tool in my cardiac electrophysiology practice but have no financial interest or inside access.) Unlike many health tech entities trying to model care from flawed EHR outputs, Abridge is collecting thousands of real-world conversations between patients and clinicians. These de-identified conversations include nuance, tone, and decision-making — the backbone of real care. The level of intimacy captured at scale might be unprecedented in human voice communication.
Abridge is the biggest player, but not alone. Other companies, like Microsoft/Nuance (with its Dragon Ambient eXperience, or DAX CoPilot), are also pushing ambient voice tech forward. Augmedix offers a hybrid model combining AI with human scribes.
Ambient voice applications have already proven valuable as physician note-writing scribes. But they aren’t stopping there. Abridge recently introduced a Contextual Reasoning Engine that generates structured data, including predicted diagnoses, orders, and billing-ready documentation. All of this current technology largely addresses physicians’ administrative burden. The raw information is there to do more, but whether AI conversational models will be trained to understand the medical literature, apply clinical reasoning, and even advance to direct patient engagement through simulated empathic speech remains to be seen.
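To make the “structured data from conversation” idea concrete, here is a minimal Python sketch of the pattern: an ambient transcript goes in, a structured encounter (assessment, suggested orders, billing codes) comes out. Everything here is hypothetical; the class names and the toy keyword logic standing in for a trained clinical model are invented for illustration, and none of it describes Abridge’s actual engine.

```python
from dataclasses import dataclass, field


@dataclass
class StructuredEncounter:
    """Hypothetical structured output distilled from an ambient transcript."""
    chief_complaint: str
    assessment: str
    suggested_orders: list[str] = field(default_factory=list)
    suggested_icd10: list[str] = field(default_factory=list)  # billing-ready codes


def summarize_transcript(transcript: str) -> StructuredEncounter:
    """Stand-in for the reasoning step.

    In a real system, a model trained on clinical conversations would map free
    text to structured fields; here, toy keyword matching illustrates the shape
    of the output, nothing more.
    """
    if "palpitations" in transcript.lower():
        return StructuredEncounter(
            chief_complaint="Palpitations",
            assessment="Intermittent palpitations; rule out arrhythmia.",
            suggested_orders=["12-lead ECG", "14-day ambulatory monitor"],
            suggested_icd10=["R00.2"],  # ICD-10 code for palpitations
        )
    return StructuredEncounter(chief_complaint="Unspecified", assessment="See transcript.")


if __name__ == "__main__":
    note = summarize_transcript(
        "Patient reports palpitations after coffee, lasting a few minutes, no syncope."
    )
    print(note)
```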
Investors are pouring millions into these solutions, betting they’ll be the foundation of the next wave of healthcare AI. Companies are moving toward real-time clinical decision support — suggesting orders, offering plans, and guiding reasoning. Clinicians have already adopted the core ambient documentation functions enthusiastically. Once they grow confident using ambient AI support tools, and find them hard to work without, they may be ready to adopt new functionality. That earned trust becomes the moat investors value most.
Earlier this year, Eric Topol and Pranav Rajpurkar wrote in The New York Times that currently available AI-based clinical decision support can outperform human doctors at specific tasks. They envision a future where clinicians and AI work side by side, dividing labor in a way that preserves human judgment and empathy.
But ambient listening systems may enable a synthetic model beyond what Topol and Rajpurkar describe — one that delivers care before a human physician is even involved. The incentives driving these systems don’t necessarily favor partnership; they favor scale and efficiency.
My prediction
The next frontier isn’t just automated note writing. It’s automated care delivery.
Imagine an advanced voice agent conducting a full patient visit. It’s available 24/7, with no wait, at the click of a mouse. The agent takes the history via voice chat from a smartphone using empathic words and tone, asks clarifying questions, and offers plans based on clinical reasoning models backed by medical evidence. Orders are queued. Counseling is delivered. The encounter is summarized and placed into an inbox for review. The physician later reviews the summary, signs off, sends the suggested orders, and maybe follows up with a quick call. A CPT code for AI care is established, and billing could flow much as it does for a telehealth encounter. The doctor “sees” 10x the number of patients they used to, asynchronously. The system collects quality metrics. Outcomes are tracked.
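For readers who think in code, here is a rough sketch of that asynchronous loop, under stated assumptions: the agent files a summarized encounter with queued orders into an inbox, and the physician’s sign-off step releases them. The names and the one-line “sign everything” logic are invented for illustration; a real system would involve editing, rejection, escalation, and auditing.

```python
from dataclasses import dataclass, field


@dataclass
class AgentEncounter:
    """One hypothetical AI-led visit awaiting physician review."""
    patient: str
    summary: str
    queued_orders: list[str] = field(default_factory=list)
    signed_off: bool = False


class PhysicianInbox:
    """Illustrative inbox: the agent files encounters; the physician reviews and signs."""

    def __init__(self) -> None:
        self._pending: list[AgentEncounter] = []

    def file(self, encounter: AgentEncounter) -> None:
        """Agent step: place a completed encounter in the queue for later review."""
        self._pending.append(encounter)

    def review_all(self) -> list[AgentEncounter]:
        """Physician step: sign off, releasing queued orders (here, just marking them)."""
        for enc in self._pending:
            enc.signed_off = True  # in reality: edit, reject, or escalate to a live visit
        signed, self._pending = self._pending, []
        return signed


if __name__ == "__main__":
    inbox = PhysicianInbox()
    inbox.file(AgentEncounter(
        patient="A. Example",
        summary="Earache x2 days, afebrile; agent counseled on analgesia.",
        queued_orders=["Amoxicillin 500 mg TID x7d (pending sign-off)"],
    ))
    for enc in inbox.review_all():
        print(f"{enc.patient}: {len(enc.queued_orders)} order(s) released")
```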
Need a physical exam? The system offers an offramp: it can schedule an in-person visit, upload a photo, or route to a nurse. Some encounters start with the agent and follow up in person: “Let’s start the antibiotic today and have someone take a look Friday.” Sometimes the agent and patient can’t reach resolution, or the situation calls for a more human touch — a traditional call or visit is scheduled.
To many, this sounds like science fiction. But today’s advanced voice models — like OpenAI’s “Sol” — show how real this can feel. These agents simulate empathy with a responsiveness that would have seemed impossible a year ago. Listen to an example of simulated earache care here. Pair that realism with ambient conversational training data and deep knowledge of the medical literature, and you have the foundation for something far beyond documentation — a synthetic physician.
The incentives for health systems to create an automated care model are obvious: scalable, always-on care. More patients per clinician. Seamless scheduling, referrals, and downstream revenue. The ROI is clear. Patient acceptance is maximized by a great user experience, hospital-system branding, and the personal doctor’s oversight and stamp of approval.
This isn’t purely hypothetical. The pieces are moving. Utah passed early AI legislation. The FDA is issuing AI tool guidance. Regulation may be the biggest obstacle, but the groundwork is already being laid.
But physicians may not see how thoroughly they are being sidelined. The agent doesn’t replace them outright. Rather, it redefines the encounter so completely that the physician becomes optional.
At first, these tools may simply make everyone’s jobs easier — a productivity boost protected, for now, by the inertia of medical regulation.
But disruption is coming. The first wave might hit nurse practitioners, and physicians wouldn’t be far behind. This model is economically attractive. It’s also disruptive in the classic sense Clayton Christensen described in The Innovator’s Dilemma.
Eric Topol wrote The Patient Will See You Now in 2015, describing a hopeful vision of technology empowering patients and shifting power away from doctors. This new vision, The Agent Will See You Now, may be the mirror image of that promise: the conversation still happens, but on terms set by algorithms, where both patient and doctor cede control to something beyond them.
Make no mistake, I use ambient AI in my own clinic and love it: “best healthcare IT of my career.” Most importantly, it’s earning my trust. That’s exactly why I’m watching closely — not to resist it, but to understand where it’s going.
Does my vision sound dystopian? In some ways, I think it is.
I’m not endorsing this future — just describing where the incentives lead. That’s why it’s critical for clinicians to assert stewardship over this technology to preserve our clinical and ethical values – and our jobs.
Editor’s Note: Stay tuned because we have more coming from Dr. Schloss. Next will be a personal note on how AI is already improving everyday EHR workflow. JMM
SO pleased that I will retire before 2030. I enjoy the direct doctor-patient relationship and resent the intrusive nature of screen-based data entry and the like that trivializes that service and interaction.
To paraphrase famed investor Peter Lynch: “Everything that (hospitals) touch turns to ****.” Those of us who became clinicians, and not data entry clerks or tech geeks, mourn what has been lost.
Somebody always points out: signing it off will still require medical training, and somebody has to stick in chest drains and art lines. But the writing has been on the wall for consultant physician numbers for a while now in the NHS, the most expensive part of any healthcare system. The future for us seems to be as glorified liability sponges overseeing an army of minimally trained flowchart adepts. And AI systems will be one more unit in that army. But realistically, the alphabet soup brigade already started the decline of the medical profession long ago. The question is: what will be cheapest? Near minimum wage nursing associates, or a cloud-based super AI system? As far as I've ever seen, patient safety always gets ignored in the name of minimising costs.