Dr Jay Schloss is back with a follow-up essay on the use of AI in the clinic. His first essay looked into the distant future, and it was a bit scary. This one looks at AI today and maybe next month. It’s less scary but still provokes me to think back just a few years, when none of this could have been imagined. I love reading Jay because few writers can tell the story from the clinician’s side with such clarity and, in this case, hopefulness. JMM
In last week’s essay for Sensible Medicine, When AI Listens In, I explored a speculative but increasingly plausible future for healthcare enabled by modern AI tools: synthetic agents that host clinical conversations and then deliver care before a human physician is ever involved. That unsettling yet strangely intriguing future may or may not come. But a more grounded, tangible AI-driven process is already underway. It’s happening where we review the chart and write the note.
Computer interaction at the point of care has always been awkward. We review thousands of words of chart clutter to find the few that matter. We ask patients intimate questions, then break eye contact to type. We mentally switch between thinking and digging, back and forth, all day long. The mouse clicks are endless – I once measured 100 per patient. It’s numbing.
But we now have AI tools that can sit between clinicians and the record — absorbing the chaos and giving us back what we need. These tools are reshaping medical communication in two places: summarization and implementation.
Summarization: What happened before today
Chart prep before an office visit or consult isn’t complicated — it’s just thankless, grinding work. Pages of boilerplate nursing notes, templated patient instructions, and administrative clutter bury the clinical details that matter. Amidst the noise, critical facts—why the Eliquis was stopped, whether the echo was ever done, the nature of the contrast reaction—are in there somewhere. But finding them can feel like panning for gold.
Over a decade ago, I shared public frustrations about our EHR on Twitter and got some unexpected feedback. Carl Dvorak, then Epic’s President and effectively second-in-command, called and wanted to talk. He politely but firmly asked me to take down Epic UI screenshots I’d shared (as was their style at the time). To his credit, he also listened. That call prompted a site visit from executive Sumit Rana (who recently took over Dvorak’s role). He and another colleague spent a day with me seeing patients in my Cincinnati office. They listened to me rant and pitch ideas — including one I called “Review Folders,” a feature meant to streamline collaborative chart review.
I later wrote up my idea in a blog post, EHR Review Folders: Saving Trees, Improving Care, which included an email I’d sent to Epic. My goal was to replace the mass printing of records for pre-visit review with an EHR sorting tool that worked like an iTunes playlist. It sounded simple enough to me, the naive end user. But, as is often the case in IT, it was easier said than done.
Epic eventually built a feature called Bookmarks that mirrored parts of my proposal. But it never fully worked for me — the UI was tough to pick up and bookmarks weren’t shareable between our clinicians and MAs. The summarization problem remained largely unsolved.
Things finally appear to be changing. New tools – like Epic’s own AI note summarization pilot and research efforts like Stanford’s ChatEHR – are beginning to extract and present the most relevant data from the chart. The promised model is an interactive, AI-enabled chart review that produces formatted summaries and responds to queries like: “Who’s the referring PCP?” or “When was the last dose of Toprol given?” An Epic AI tool is already being used at my institution, The Christ Hospital, to track lung nodules.
AI summarization promises the kind of experience you’d get if your patient were being presented to you by a top-notch internal medicine resident — a clear, relevant summary of the clinical story, with the ability to ask follow-up questions and fill in gaps in real time.
Whether summarization tools will be reliable remains to be seen, but the early promise of AI in medical documentation — as we’ll see in the next section — gives me hope.
Implementation: What happens next
The second breakthrough is happening on the other side of the visit — where AI tools are beginning to turn conversations into structured output: notes, orders, billing codes, and more.
I’ve used the Abridge ambient AI scribe in my practice for about a year. It’s the only ambient tool I’ve tried, so I’ll focus on it here. A microphone captures the patient visit and my narrated chart prep beforehand. It then generates a surprisingly good clinical note — one that’s not too far off from what I would have written myself. When I’m in the exam room, it saves me from staring at the screen or frantically typing during the encounter. If I’m interrupted, I don’t lose my place. The system remembers what was said, understands the context, and formats it cleanly. It’s like having a smart medical student scribe who knows your style, tolerates distractions, and drafts your note.
Abridge’s new Contextual Reasoning Engine – live, but not yet up in my clinic – adds an administrative layer: generating orders, assigning codes, and confirming compliance. If it can further cut the mouse clicks and runs through the navigation maze, I’ll be thrilled. And if that means fewer reimbursement headaches later, the billing office will be too.
It’s refreshing to finally work with a healthcare IT company that speaks our language. For years, it felt like clinician-facing software was being built by people who’d never seen a patient. But Abridge was founded by a practicing cardiologist, Shiv Rao, MD — like me, it turns out, a product of UPMC fellowship training — and that clinical perspective shows. The product reflects an understanding of what doctors do in the room. Abridge seems less concerned with impressing hospital administrators and more focused on easing the challenges we face at the keyboard. It reminds me a little of the good parts of paper charting — less typing, more talking.
In my daily workflow, I use both Abridge and Epic — and the design contrast between the two is striking. Abridge feels like stepping into an IKEA display: clean surfaces, clear labels, and everything where you’d expect it. Epic, by comparison, feels like rummaging through a hoarder’s garage: unlabeled bins full of trash, dusty cabinets, and piles of old magazines. One invites you to breathe deeply and settle in; the other makes you want to walk out and get some fresh air.
Of course, Abridge notes aren’t perfect. The prose can be generic. Results aren’t always in the right order. Pacemaker/ICD vendors usually need to be added manually. But that’s okay. I’ll gladly take the trade-offs to get my sanity and time back. Some clinicians want the record to read like a polished work of literature. But I’ve always held that documentation isn’t care delivery. The note exists for communication and billing – it’s not the point of the visit. I’d rather spend my time engaging with the patient and making the right decisions than fussing over every sentence. The AI helps me do that.
Together, these two AI-driven workflow improvements — summarization and implementation — offer the promise of something we’ve been waiting for forever: a clinician-friendly user experience with the EHR. Not a healthcare revolution. Just a better way to manage the mess. If we can do all that and still preserve the human touch, sign me up.
Disclosure: I have no financial relationship with Abridge, Epic, or any other EHR company.
Comments

John Mandrola was right. Your crisp perspective on the Epic mess vs. Abridge clarity and simplicity is spot on. I am a semi-retired cardiologist. My joy these days is helping my long-time and elderly patients navigate their way through an increasingly complex and impersonal health care world. Abridge has made it immensely easier to have meaningful and personal conversations with my friends (who masquerade as patients). For better or worse, it is also a career extender.
We used a different AI to generate notes, associated with the athena EMR. The notes contained about 90% of the information I would want, took five times as many words to convey it, and were organized so that things were harder to find the next time I saw the patient, making future chart prep harder.
Also, about 10% of the time the system would fail catastrophically, leaving me to produce my clinic notes from memory alone. I wouldn't have scribbled or typed anything during the visit, because the AI was supposed to do that.
Not quite ready for prime time was my evaluation, but I'm old. Maybe if I were 30, I'd be able to tolerate its shortcomings better.