Thought Leadership In Healthcare Digital Transformation

Damo Consulting thought leaders regularly contribute articles to a range of media outlets. We also invite guest posts from industry leaders.

What Does Responsible AI Adoption in Healthcare Really Look Like?


Insights by Dr. Amit Phull, Chief Clinical Experience Officer, Doximity

In a recent episode of The Big Unlock podcast, Dr. Amit Phull, Chief Clinical Experience Officer at Doximity, sat down with hosts Rohit Mahajan and Ritu M Uberoy, both Managing Partners at BigRio and Damo, to answer a question that is becoming harder and more urgent every day: what does responsible AI adoption in healthcare really look like when you move beyond the hype and headlines?

Dr. Phull’s perspective is grounded in the realities of clinical work. He is an emergency medicine physician with academic roles at Northwestern and George Washington University, and he has spent the last decade-plus building technology at one of the most clinician-centered platforms in healthcare.

What makes the conversation valuable is that it doesn’t treat “responsible AI” as an abstract principle. It treats it as an implementation discipline.

Throughout the podcast, Dr. Phull repeatedly returns to a simple, pragmatic truth: clinicians adopt what helps them, trust what they can verify, and reject anything that feels like “one more thing.”

As he explained to Ritu: “From the clinician perspective, ease of use is paramount… Being able to trust the technology is paramount as well… If they can’t trust the output… or god forbid it adds time to their day… it’s going to be very, very difficult to compel those clinicians to actually pick up that piece of software and leverage it.”

That one statement is basically a responsible adoption blueprint.

Let’s break down what Dr. Phull says responsible AI adoption looks like through the lens of workflow, trust, education, and the coming shift toward agentic AI.

Listen to the full conversation

A Clinician-First Origin Story: “Build with Us” Is the Operating Model

Dr. Phull shared a helpful origin story, not just about himself, but about how Doximity’s approach evolved. He explained that Doximity was founded in 2010 with an initial mission to “rewire healthcare,” specifically by building tools that help physicians be more productive so they can provide better care. He noted that Doximity’s CEO and co-founder Jeff Tangney previously built Epocrates, which helped anchor the company in practical clinician utility from day one.

Dr. Phull’s own path mirrors that bridge between domains. He describes a “prior life” as a computer engineer, and how he’s spent his career living at the intersection of medicine and technology. That intersection is the key to how Doximity builds successful AI tools: with physician involvement.

He explained how he first joined the company through a Physician Advisory Panel, where clinicians volunteer time to beta test tools and provide direct feedback on what should be built next. That same model continues to this day, including their upcoming 2026 medical advisory board, where clinician input shapes product direction.

This matters because, according to Dr. Phull, responsible AI adoption isn’t just about what the model can do; it’s about whether clinicians see themselves in the design, and whether the tool feels like it understands the realities of care delivery.

 

In Healthcare, Adoption Starts With Ease of Use and Dies With Added Time

A core theme of the conversation is that clinicians are not resistant to innovation; they are resistant to burden. Dr. Phull explains that if a tool is difficult to use, or worse, adds time to the day, that added burden makes adoption nearly impossible.

This is where he makes a sharp comparison to EHRs.

“I would view EHRs as an interesting counter example. If EHRs were deployed not as they were, as part of a government rollout with mandates, I think there would’ve been an extreme increase in the amount of difficulty that it took all of us to adopt that sort of technology.”

Even today, after years of implementation, many clinicians still experience EHRs as a workflow tax. So, when Dr. Phull talks about AI adoption metrics, he points to signals that reflect real-world use:

  • recurrent use
  • increased use
  • time savings
  • burnout (as a proxy for clinician welfare)

And he pairs those “hard metrics” with the lived outcomes that actually motivate adoption.

“Just by being able to go home for that additional hour… doctors… can have dinner with their families or be a little bit more human outside of the practice of medicine.”

That is an operational definition of value.

Responsible AI adoption, in this framing, is not about novelty. It’s about time returned to clinicians, and friction removed from the day.

 

Medical Education Can Use AI—But It Must Protect Clinical Reasoning

Dr. Phull also speaks as a faculty member, and his comments here are especially relevant to responsible AI adoption because adoption isn’t only about today’s clinicians; it’s about the next generation.

He describes AI as a “double-edged sword” in training environments. AI can empower young clinicians, but it can also allow them to “skip a step,” bypassing the hard work of developing critical thinking.

The most memorable line in this section is his emphasis on maintaining a “spidey sense,” or the value of human intuition.

“It’s very important that clinicians still develop and maintain a ‘spidey sense.’ We do not want to reduce clinicians to being messenger pigeons in terms of looking up information and then kind of handing that off to their patients in regards to advancing their care.”

So, what’s the responsible approach in training?

He describes allowing use of tools (including Doximity GPT) but requiring trainees to justify their thinking in real time.

“Medicine cannot be reduced to… a book report… You actually have to demonstrate an understanding of the validity and the context…”

This is a key insight for leaders building responsible AI programs inside health systems:

For Dr. Phull, AI literacy isn’t just “how to use the tool.” It’s how to interrogate outputs, apply judgment, and sustain clinical reasoning.

 

AI Could Actually Return Humanity to the Practice of Medicine

He concludes in an interesting way: “In a very paradoxical way, the [AI robots] actually reintroduce humanity to the practice of medicine.”

He describes the emotional frustration clinicians express through a common phrase:

“I didn’t go to medical school to be a data entry clerk…”

Then he paints a picture of what responsible adoption could enable: “When I enter a patient’s room, I can shake their hand, look them straight in the eye, put my hands on my patient… while the documentation, the coding… is taken care of.”

 

The Takeaway

Dr. Amit Phull’s view of responsible AI adoption is practical and clinician-centered. His message is clear: healthcare doesn’t need more AI excitement or “one-off” pilots that never stick. It needs tools clinicians can actually trust and use — tools that reduce friction, return time, and protect clinical judgment rather than replace it. In his framing, responsible adoption happens when AI is built with clinician input, grounded in security and verifiability, and supported by human review where it matters most. The organizations that lead won’t be the ones chasing the newest model. They’ll be the ones that make AI dependable in the real world because their workflows, safeguards, and adoption strategy are designed for trust at scale.

Dr. Phull brings a clinician-operator lens to responsible AI adoption. His insights are especially useful because they translate “responsible AI” from principle into practice:

  • Ease of use is the gateway to adoption—clinicians won’t tolerate tools that add steps, context switching, or time.
  • Trust is built through verifiability: HIPAA compliance, rigorous security, and AI outputs anchored in citations.
  • Human clinical review can be a product feature, not just a governance afterthought—Peer Check is a concrete model.
  • Medical education must protect clinical reasoning—trainees can use AI, but they must justify thinking and build a real “spidey sense.”
  • The most realistic future is a middle path: AI that augments clinicians, reduces burnout, and expands access to care.
  • The best version of agentic AI may be paradoxical: by removing admin burden, it can “reintroduce humanity” to medicine.
