Thought Leadership In Healthcare Digital Transformation

Damo Consulting thought leaders regularly write for a range of media publications. We also invite guest posts from industry leaders.

Empowering Pediatric Care Through Responsible Generative AI


As generative AI becomes more prevalent in healthcare, its potential to transform documentation, operations, and clinical decision-making is clearer than ever. But alongside that promise comes a critical responsibility—to use AI tools in a way that is safe, equitable, and grounded in human oversight.

In a recent episode of The Big Unlock podcast, Keith Morse, MD, MBA, Clinical Associate Professor of Pediatrics and Medical Director of Clinical Informatics – Enterprise AI at Stanford Medicine Children’s Health, discussed how his organization is deploying large language models (LLMs) to drive real-world outcomes in pediatric care.

From reducing documentation burden to enabling large-scale data analysis, Dr. Morse shared both the current state and future vision of AI in healthcare.

Stanford is forging a path for safe, effective AI adoption in one of the most sensitive domains of medicine: pediatric care.

“We are ultimately responsible for how these tools impact our providers and our patients. Nobody is going to take that responsibility from us.”
– Keith Morse, MD, MBA
Clinical Associate Professor of Pediatrics & Medical Director of Clinical Informatics – Enterprise AI, Stanford Children’s Health
Listen to the full conversation

Bringing Generative AI to the Frontlines of Pediatric Care

Unlike retrospective academic research or theoretical pilots, Stanford’s approach to GenAI is grounded in daily workflows. The goal is to integrate AI in ways that truly lighten the load on clinicians and improve care delivery.

Stanford’s internally developed LLM chatbot – Ask Digi – offers a safe and HIPAA-compliant space for providers and staff to explore how generative AI can assist with administrative tasks, summarization, research support, and more.

Dr. Morse shared that what makes these efforts successful is not just the technology—but how it’s introduced to the workforce. LLMs, while powerful, are still largely unfamiliar to healthcare workers. That’s why the health system focuses just as much on education as it does on implementation.

“Nobody outside your organization can credibly provide a use case that is guaranteed to work for you because, while they may know about AI in general, they don’t understand your AI infrastructure or your specific workflows.”
– Keith Morse, MD, MBA
Clinical Associate Professor of Pediatrics & Medical Director of Clinical Informatics – Enterprise AI, Stanford Children’s Health

Upskilling the Healthcare Workforce for the AI Era

Generative AI tools like ChatGPT and domain-specific large language models have exploded into public consciousness, but the skills needed to use them effectively haven’t kept pace. Dr. Morse highlights that healthcare workers are expected to engage with technology that is barely two years old—with little to no training. Stanford has prioritized structured, hands-on education including:

  • Foundational Online Training – explains how LLMs function and how to use tools like Ask Digi effectively in daily work.
  • Prompt Engineering Workshops – led by informatics fellows, these sessions help staff, from physicians to support roles, learn how to ask the right questions and get the most from AI tools.
  • Pilot Projects with Frontline Engagement – clinicians and administrative personnel use GenAI tools in real workflows, then serve as local champions for broader adoption.

By embedding learning into actual work contexts, Stanford ensures that AI isn’t just introduced—it’s understood.

This recognition—that each health system must tailor AI to its own realities—guides Stanford’s entire enterprise AI strategy.

Ambient Listening, Agentic AI, and the Future of Clinical Support

Looking ahead, Dr. Morse is especially excited about ambient technologies and agentic AI—two areas that hold promise to significantly reduce clinician burden and improve operational efficiency.

Ambient listening tools like DAX, currently being evaluated at Stanford, use voice recognition and LLMs to summarize patient-provider conversations. By automating clinical documentation, these tools free up clinicians to focus more on patient care and less on keyboard time.

Agentic AI, meanwhile, moves beyond reactive tools into systems that can independently complete tasks based on predefined goals. For example, a future agentic system might detect an abnormal trend in EHR data and proactively notify a care team, reducing the need for manual oversight.

However, Dr. Morse is clear-eyed about the risks. These powerful tools demand rigorous oversight, particularly in pediatrics, where the stakes are high and the data is nuanced.

“As we sort of get into the world of large language models, it’s not too hard to envision a future where we are able to have a large language model help us process unstructured information from these different sites—to extract relevant things of interest from notes and do large-scale studies that look at unstructured data.”
– Keith Morse, MD, MBA
Clinical Associate Professor of Pediatrics & Medical Director of Clinical Informatics – Enterprise AI, Stanford Children’s Health

Stanford’s participation in PEDSnet, a national pediatric research network, positions it well for this future. The network’s adoption of the OMOP data model enables consistent, scalable AI research on structured and unstructured data across multiple institutions.

In a future where AI can reliably mine insights from free-text clinical notes across institutions, healthcare research will become faster, cheaper, and more precise. Pediatric populations—which often suffer from a lack of representation in studies—stand to benefit immensely.

“Historically, we’ve relied on federal or state agencies to tell us what’s allowed or not. That doesn’t exist yet for LLMs. But that doesn’t mean we wait—it means we lead with caution.”
– Keith Morse, MD, MBA
Clinical Associate Professor of Pediatrics & Medical Director of Clinical Informatics – Enterprise AI, Stanford Children’s Health

The Ethics and Responsibility of Innovation

While the technology is moving fast, regulation is not. There’s currently no robust external framework to govern how LLMs are used in healthcare. That reality makes internal governance and ethical oversight not optional—but essential.

Stanford’s approach is guided by a sense of proactive accountability rather than waiting for external directives.

This leadership involves:

  • Comprehensive internal testing before deployment
  • Multidisciplinary review of pilot programs
  • Transparency around how AI tools are used and what limitations exist
  • Ongoing education on the risks and benefits of AI


By building oversight into every step of the process, Stanford ensures that its AI deployment is not just effective—but ethical.

A Pediatric-Centric Approach to AI

Pediatric healthcare comes with unique challenges: fewer available data points, smaller population sizes, and heightened sensitivities around communication and consent. This makes the responsible use of AI even more critical.

Dr. Morse noted that solutions must be designed with children and families in mind, not simply adapted from adult care settings. Whether deploying ambient tools, summarizing clinical notes, or streamlining administrative workflows, every use case must prioritize trust, safety, and patient experience.

“We are ultimately responsible for how these tools impact our providers and our patients.”
– Keith Morse, MD, MBA
Clinical Associate Professor of Pediatrics & Medical Director of Clinical Informatics – Enterprise AI, Stanford Children’s Health

Stanford Medicine Children’s Health is demonstrating what responsible, real-world AI adoption looks like by focusing on outcomes: reducing clinician workload, improving documentation quality, and setting the stage for scalable research.

Their success is rooted not only in technical innovation—but in organizational readiness, workforce education, and ethical clarity.

The takeaway for other health systems?

  • Invest in upskilling across roles
  • Provide safe spaces for AI experimentation
  • Tailor tools to your specific workflows and infrastructure
  • And most importantly—own the responsibility of AI’s impact

In a field where every decision affects vulnerable lives, that kind of leadership isn’t optional. It’s essential.


Get in touch!

One of our experts will get in touch with you shortly.

(815) 900-9840

info@damoconsulting.net

THE HEALTHCARE DIGITAL TRANSFORMATION LEADER

Join the digital healthcare revolution. Stay on top of the latest news, trends, and insights with Damo Consulting.

Sign me up for the latest news, trends, and insights from Damo.

