Charting a New Course: The Promise and Peril of AI in Medicine

I am so excited about the promise that technology holds in solving our charting problems! Honestly, right now with EMRs, it feels like technology works against us more than for us.

But I imagine a world where AI helps us document efficiently, thoroughly, and accurately—so that we can focus on what we were actually trained to do: care for patients.

Here’s what excites me most—and what gives me pause.

The Promise: Smarter Charting, Better Care

Charting is a heavy burden for most providers, but AI offers real solutions that could reduce, or even eliminate, charting fatigue. Here’s how:

Real-time charting
AI could listen in during patient visits and transcribe a SOAP note on the spot. We could verbalize our physical exam findings as we go, leading to more complete and accurate documentation. No more trying to remember at the end of the day what we saw or felt—or using a generic template for every patient and changing only the grossly positive findings.

Pre-visit HPI collection
In fact, what if the patient could speak directly to AI before we even entered the room? We know patients often ramble and fixate on unimportant details. With AI, they could ramble to their heart’s content and say everything they want: the things they usually don’t get to say because we cut them off. Then AI could sift through all of it and generate a clean, focused HPI in the patient’s own words. We’d still verify and build rapport, of course, but once we’re in the room we’d be free to listen fully rather than half-type while nodding politely.

Differential diagnosis support
AI could even help us think more broadly. It could use the visit information to generate a list of differential diagnoses, including some we may not have considered. It could also suggest lab tests or guide us on whether to order imaging with or without contrast.

Instant discharge summaries
And let’s be honest: many of us don’t get to the discharge summary until hours after the patient is long gone, so handing them a summary of the visit and our plan is often impossible. Finding condition-specific patient education takes time, too, so it doesn’t always happen. But if AI already had the HPI and PE finished, we could enter the final diagnosis and any patient-specific information or instructions, and AI could auto-generate clear discharge instructions, educational handouts, and follow-up plans as soon as orders are placed. That’s better for patients and more satisfying for us.

But What Are We Trading? What are the Hidden Dangers?

Honestly, until recently, I hadn’t thought much about the downsides. In fact, I thought they were fairly negligible when it comes to charting. But a therapist’s post on LinkedIn gave me pause and made me rethink some things.

They raised concerns about recording sessions into cloud-based systems. Honestly, I’m a pretty open person, so I don’t really care if people know my medical history. My first instinct is to roll my eyes a little when people are so concerned about HIPAA. I think, no one cares about your life enough to sort through hours of recordings just to learn about you. But the therapist raised a point: insurance companies might. They might decide they should have access so they can scrub the chart for information themselves. Then, in addition to the billing and diagnosis codes, they’d have access to the patient’s backstory. I can only imagine how they’d use that! “Oh, you had a drink or two before you fell down and broke your leg. Well, we consider that self-inflicted, so we won’t cover that.”

Sure, it might sound paranoid to think insurance companies would someday demand recordings for reimbursement decisions—but is it really that far-fetched?

It used to be unthinkable that insurance companies would tell us what we could prescribe or which procedures we could do. Now it's standard.

So I have to wonder:

  • What happens to all this data?

  • Who gets access to it?

  • Will patients trust us less and withhold the truth if they know everything they say is being recorded and processed?

HIPAA was written long before AI entered the exam room. If our tools are cloud-based, handled by third parties, and shared across platforms—can we really promise privacy?

Patient Choice—or Illusion?

You may argue that patients will always have a choice. They can opt out of AI-supported visits. But what happens when AI becomes the default?

Will opting out come with higher charges? Longer wait times? Insurance refusals?

What starts as an enhancement could quickly become the norm, and patient “choice” might become little more than a checkbox that doesn’t carry much weight.

And Let’s Not Forget AI Is Still... Well, AI

AI is only as good as its data and design. It can absolutely misinterpret what’s said—especially with accents, sarcasm, soft-spoken patients, or medical jargon.

If a patient says “no history of diabetes” and AI hears “history of diabetes,” that’s a problem. A provider would likely realize they hadn’t heard it fully, or would ask a follow-up question for clarification. That won’t happen with AI, at least not for a long time to come.

Even if we always review and confirm, the risk of error still exists—and the legal responsibility still lands on us.

Proceed with Excitement—and Caution

So, I’m still excited about AI! I truly believe it could restore some of the joy to practicing medicine by reducing the charting burden and allowing us to focus on our patients. But we can’t let our eagerness override our caution.

Let’s move forward with our eyes wide open—asking the right questions about ethics, privacy, and unintended consequences.

Because if we do this right, we could make medicine feel like medicine again, and that would be truly amazing!
