Artificial intelligence has arrived in healthcare, and it didn't even have the courtesy to knock. For medical practices already juggling thin margins, skeleton crews, and the overbearing weight of administrative work, AI has gone from novelty to expectation almost overnight.
But on a recent episode of the MGMA Insights Podcast, host Daniel Williams sat down with Michael Blackman, MD, Chief Medical Officer at Greenway Health, to push back on the breathless hype — and ask a harder question: Is the way we're deploying AI actually making things better, or just more complicated?
The answer, it turns out, depends entirely on whether you're willing to rethink everything.
Decades in the Making, Two Years in the Spotlight
Blackman isn't someone who gets swept up in buzzwords. He's been watching AI tools evolve in clinical settings for years — including clunky early versions of ambient documentation that held genuine promise but demanded heavy human correction at every turn.
What's changed recently, he argued, is the execution — not the concept.
"It's an overnight success that was years or decades in the making," Blackman noted, pointing to ambient documentation as a prime example of technology that has finally matured enough to be practical.
Today's tools can draft high-quality clinical notes in real time, dramatically shifting the burden off clinicians without replacing their judgment. He drew a straight line to e-prescribing — once novel, now table stakes — and predicted ambient documentation will follow the same trajectory.
"Ambient documentation will be the way people document. Full stop," he declared. Not tomorrow, he acknowledged, but well within the next three to four years.
The Rube Goldberg Problem
Here's where things get uncomfortable for the industry. Williams raised a concern that resonates with a lot of MGMA members: that AI often lands in a practice not as a transformation, but as yet another layer stacked on top of workflows that were already held together with duct tape and good intentions.
Blackman had a name for this problem: healthcare's version of a Rube Goldberg machine (a complex contraption designed to perform a simple task in an elaborate, roundabout way).
"Every time [we] add these tools … we make it more complicated," he cautioned, describing an approach that produces the opposite of its intent — more cognitive load, more frustrated staff, and systems that demand constant accommodation instead of quietly supporting the people using them.
The fix isn't efficiency for efficiency's sake, either. Blackman pushed back on the assumption that "saved time" automatically translates to more patients seen. Practice administrators might see it that way. The physicians in the room might just want to make it home for dinner.
"The answer is you get to use the time for what you want to use it for," he explained, "and what your practice wants to use it for."
Rethinking the Whole Platform
The phrase that stopped Williams in his tracks was "AI by design" — and it's worth unpacking, because it's not just branding.
Traditional EHR systems were built for a different era. Layers of automation have been bolted onto aging architectures that were never designed to support them.
Blackman was blunt about where that leads: "At some point, the entire enterprise starts collapsing under its own weight." AI by design asks a fundamentally different question — not "how do we add AI?" but "if we started from scratch knowing what AI can do, what would we build?"
That distinction is important: A lot of AI sales pitches are tool-first; Blackman's argument is outcomes-first.
Where the Pain Actually Lives
Blackman returned frequently to a theme familiar to anyone who's spent time thinking about healthcare operations: pull one lever, and something else moves. Staffing, documentation, access, revenue cycle — these problems are interconnected, and targeting symptoms rather than root causes is how you end up managing the same crises year after year.
He reframed the revenue cycle challenge: "The real question in my mind … is not how do we manage denials better, but how do we prevent needing to manage them in the first place?"
Better documentation through ambient AI leads to more accurate coding, which leads to fewer downstream denials.
Enter Novare
Williams noticed the platform on screen behind Blackman during the recording and asked about it. Blackman explained that Novare — from the Latin for "to begin again" — is Greenway Health's attempt to put its money where its philosophy is.
The platform is designed to connect the full care continuum, from patient intake through clinical documentation and revenue cycle management, with AI embedded from the foundation rather than layered on top. Early adopters are already engaging with it, and Blackman described the feedback he finds most encouraging: not uncritical praise, but users who want more.
"There's nothing better than somebody saying, 'Hey, I like what you're doing, but could you also do these other things?'"
A Tool, Not a Replacement — Yes, Really
The trust conversation came up more than once, and Blackman didn't retreat into vague reassurances. AI catches things humans miss — not because clinicians are careless, he was careful to clarify, but because they're human. Reviewing AI-generated content is no longer optional. It must be part of the job.
"It is an assistant tool. It's not a replacement," he said.
He reached for an analogy that hits harder the longer you think about it: Ordering the wrong thing on Amazon is an inconvenience, but sending the wrong medication to a pharmacy is a different category of problem entirely. The stakes in healthcare demand that AI support clinical judgment — not quietly route around it.
So What Should Practice Leaders Actually Do?
For anyone evaluating AI solutions right now, Blackman's advice was simple: start with the problem, not the product.
"Make sure you properly identify your problems and goals," he urged, likening the technology selection process to choosing a vehicle. A race car is impressive. It is also completely useless if you just need to get to the grocery store.
The questions worth asking aren't "Is this AI impressive?" They're more like: Does this integrate with what we already have? Does it solve something real? Will our people actually use it?
What Does Success Look Like?
For Blackman, the measure of AI's success won't show up on a dashboard.
"The system is working for them, not the other way around," he said plainly — documentation as a byproduct of care, quality reporting that happens automatically, clinicians freed from checkbox compliance and returned to the work they actually trained to do.
But maybe the clearest window into Blackman's philosophy is this: "I truthfully do not care about the tools. I care about what we can do with them and what people can do with them."