Turning Healthcare AI Hype into Real-World Execution
Insights by Aditya Bansod, CTO and Co-Founder of Luma Health
“Healthcare doesn’t have an ‘AI imagination’ problem. It has an execution problem…” said Aditya Bansod, CTO and co-founder of Luma Health, on a recent episode of The Big Unlock podcast. It was a point that he made repeatedly as he discussed why healthcare AI often underdelivers, and what leaders must do to turn promise into performance with our host, Ritu M. Uberoy.
With a lifelong passion for building software, Aditya leads Luma Health’s technical vision and strategic direction for building a platform that empowers healthcare providers to better serve their patients and improve healthcare outcomes. His central claim is simple: many AI tools underperform in healthcare not because the models are weak, but because the workflows are messy, the handoffs are human, and the infrastructure is stitched together with a mix of modern platforms, legacy systems, and unstructured communication.
Or as the episode hints early on: healthcare still runs on “cutting-edge tech and clipboards.”
Aditya’s perspective is grounded in a very specific mission. Luma doesn’t “do medicine” or “do science,” as he puts it. Its job is to make it easier for patients to see their doctor. It sounds simple. It’s not. And that “unsexy last mile” is exactly where AI hype either becomes real—or collapses under the weight of reality.
As he told Ritu, “We don’t do medicine, we don’t do science. Our simple job is to make it easy for the patient to see their doctor… and that is just like such a hard part of the healthcare experience.”
Healthcare AI Often Fails at the “Handoff Problem”
One of Aditya’s most practical insights is that healthcare isn’t a single workflow. It’s a chain of workflows, where patient care passes from medical assistant to RN, RN to physician, physician to billing, and so on.
He describes it as the “connective tissue” of healthcare operations. And he argues that humans, for all their imperfections, are still better than software at passing context along.
“The amount of connectivity and the amount of tissue inside the health system that exists to do that. It’s honestly, kind of unbelievable. And it works because humans are exceptionally good at passing context along – in fact, despite what most people may think, they’re better than computers at it.”
That becomes a direct critique of many AI “point solutions,” especially those that operate with limited connectivity to the broader system. You can build an AI voice agent that schedules an appointment. But what happens to the nuance a human would capture in the call? Does it land in the chart? Does it trigger transportation assistance? Does it cue interpreter services? Does it flag a mobility issue that affects how the patient should arrive? Aditya’s argument is that these details are often the difference between “automation” and “execution.” “All the little nuances that a human would pick up… do those make it into the chart?”
He gives a simple example: a patient might mention pain, difficulty walking, or barriers to getting into the car. In a human handoff, someone says, “Get them a wheelchair.” In a disconnected automation, that context can evaporate.
In other words, AI isn’t failing because it can’t talk. It’s failing because it can’t reliably connect that talk to the operational reality of healthcare.
Why “More AI Tools” Can Make Execution Worse
A second theme in the episode is what Aditya calls a “Cambrian explosion” of AI solutions, which he defines as “massive funding and rapid product creation aimed at a limited set of problems.”
The result is predictable. CIOs and CTOs are now flooded with tools that overlap. In real health systems, that means multiple vendors trying to solve adjacent workflow steps, each with its own UI, logic, and integration story.
Aditya describes the situation bluntly: health systems often end up buying overlapping “Venn diagrams.”
“You’ve effectively purchased eight overlapping Venn diagrams.”
This isn’t just annoying. It’s operationally dangerous. It can create fragmented workflows where each tool works “in isolation,” but the overall journey breaks.
He uses colonoscopy scheduling and prep as a vivid example. Even when a patient shows up, if prep wasn’t done correctly, it’s functionally a no-show. That’s a workflow with multiple inputs: patient outreach, prep instructions, prior authorizations in some cases, outside medical records, follow-up confirmations, and staff readiness.
The lesson is that patient access is not a single “AI function.” It’s an orchestration problem.
Aditya argues that most health systems do not want to become software companies. They don’t want to build massive development competencies. They want to deliver care. But they will need something in between, which he describes as “an integration and workflow-orchestration competency.”
“Most CIOs… don’t want to build a software competency… but they ultimately have to build an integration competency.”
He draws a useful comparison to the post-2020 rush: many systems rapidly bought telehealth, texting, and remote-monitoring platforms during COVID, then spent the following years rationalizing app sprawl. He predicts healthcare AI will follow the same path, except this time the emphasis won’t be only rationalization. It will be workflow-level orchestration.
The implication for leaders is uncomfortable but clear: the “AI era” will create more integration work before it creates less.
Autonomy Is Coming Faster Than Expected, and It’s Already “Exception-Driven”
In the episode, Aditya also tackles one of the biggest tensions in healthcare AI right now: “human in the loop” versus agentic autonomy. The CTO offers a nuanced and very practical way to reconcile it. He jokingly compares health systems to Maslow’s hierarchy of needs. Each system has its own “AI hierarchy.” Some are just trying to help physicians with basic burden reduction. Others are experimenting with agents handling more autonomous interactions.
“Every health system has their AI hierarchy… everyone’s kind of converging.”
What surprised him was how quickly organizations are moving up that hierarchy.
He shares a recent conversation where a health system was already letting AI agents perform medication reconciliation with patients. His reaction is basically: “Already?” That moment captures the acceleration happening in 2026: many organizations are moving toward autonomy faster than the cautious voices predicted.
One reason is consumer normalization. Patients are already using AI tools in their own lives, sometimes even connecting personal health records to ask questions. The gap between consumer behavior and health system adoption is shrinking.
“Consumers are demanding it… patients are consumers.”
The second reason is more operational: exception-based work is already how healthcare runs in many places, especially in the revenue cycle. That pattern is now being applied to AI.
Aditya describes Luma’s AI fax and order-processing workflows. He expected customers would want humans verifying everything. Instead, some asked for exception-only review: let the system handle what it’s confident about, and route low-confidence cases to people.
“Just give us the exceptions… the stuff where we have like 95% confidence, let it ride.”
That’s an execution mindset, not a hype mindset. It treats AI as a workflow engine with adjustable controls, not some kind of “magic brain.”
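The exception-only review pattern Aditya describes can be sketched in a few lines. This is a hypothetical illustration, not Luma Health’s implementation: the names, the 95% threshold, and the queue labels are assumptions drawn from the “95% confidence, let it ride” quote above.

```python
from dataclasses import dataclass

# Illustrative threshold from the quote: "95% confidence, let it ride."
AUTO_THRESHOLD = 0.95

@dataclass
class ExtractedOrder:
    """A hypothetical item produced by an AI fax/order-processing step."""
    patient_id: str
    order_type: str
    confidence: float  # the system's confidence in its own extraction, 0..1

def route(order: ExtractedOrder) -> str:
    """Route confident items to automation; everything else is an exception."""
    if order.confidence >= AUTO_THRESHOLD:
        return "auto_process"   # system handles it end to end
    return "human_review"       # exception queue for staff

# Example: one confident extraction flows through, one lands with a human.
orders = [
    ExtractedOrder("p1", "colonoscopy_referral", confidence=0.98),
    ExtractedOrder("p2", "med_reconciliation", confidence=0.61),
]
queues = {o.patient_id: route(o) for o in orders}
```

The point of the sketch is the shape of the workflow, not the numbers: humans see only the low-confidence tail, which is exactly the exception-based pattern the revenue cycle already uses.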
Aditya explains how Luma approaches guardrails in two layers:
First, compliance and standards. He points to emerging frameworks and programs that are beginning to create “best practice” scaffolding for AI governance and auditability.
Second, product design. Different health systems want different thresholds. Some want higher automation earlier. Others want more human review. The software needs to let clients “turn the knob” and increase autonomy over time as confidence grows.
He even offers a clear Luma opinion: full automation above 90% confidence, based on a “judge” pattern where one model produces output and another evaluates it.
“If the AI thinks it’s 90% right… fully automate anything above 90%.”
It’s a practical approach to what he says is a practical reality: healthcare doesn’t require perfection to move forward, but it does require controllable risk.
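The “judge” pattern he mentions can be sketched as two stages: one model produces an output, a second evaluates it, and automation proceeds only above the bar. This is a minimal, hedged sketch, assuming stubbed model calls; the function names and scoring are illustrative, and in practice both stubs would be LLM requests.

```python
# Illustrative "judge" pattern: producer generates, judge scores,
# and anything above the automation bar proceeds without a human.
AUTOMATION_BAR = 0.90  # "fully automate anything above 90%"

def producer(task: str) -> str:
    """Stub for the generating model (would be an LLM call in practice)."""
    return f"draft-result-for:{task}"

def judge(task: str, draft: str) -> float:
    """Stub for the evaluating model; returns a 0..1 confidence score."""
    return 0.93 if draft.startswith("draft-result") else 0.2

def run(task: str) -> tuple[str, str]:
    """Produce a draft, judge it, and route based on the automation bar."""
    draft = producer(task)
    score = judge(task, draft)
    if score >= AUTOMATION_BAR:
        return ("automated", draft)
    return ("escalated_to_human", draft)
```

Raising or lowering `AUTOMATION_BAR` is the “knob” clients turn: start conservative, then expand autonomy as confidence in the judge’s scores grows.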
The Messy Middle of Healthcare AI: Platform Promises vs Real Connectivity
If there’s one phrase that captures Aditya’s overall stance, it’s “messy middle.”
He argues that the true “platform moment” in healthcare AI doesn’t exist yet. Not fully. Not in a way that makes workflows seamlessly orchestrated across systems.
He points to signs of progress, such as vendors talking about workflow frameworks and others exposing capabilities that agents can invoke, but he emphasizes that connectivity is still the bottleneck. A standalone capability doesn’t solve orchestration.
“I don’t think that platform exists today.”
His timeline is realistic: likely two to three years before the market rationalizes from “a vendor for every job” to “a few vendors covering most jobs,” creating a workable ecosystem.
He describes it as moving from 70 vendors solving 70 tasks to a small number of vendors solving most of the jobs-to-be-done between “I need care” and “care delivered.”
That’s a clear execution thesis: healthcare AI won’t win by stacking more tools. It will win by consolidating, integrating, and orchestrating.
The Takeaway
Aditya Bansod’s message is one we do not hear often enough: healthcare doesn’t need more AI hype or more “shiny” point solutions. It needs workflow-level execution that actually gets patients from intent to appointment to completed care. In his view, AI underperforms when it can’t carry context across human handoffs, when it adds another disconnected tool into an already fragmented ecosystem, and when health systems are forced to stitch together overlapping solutions without a true orchestration layer. The path forward is practical: build integration competency, design AI to work as an exception-driven engine rather than a brittle automation, and give health systems adjustable guardrails so autonomy expands as confidence grows. The winners won’t be the organizations with the most pilots. They’ll be the ones that can connect the dots because their technology choices are aligned to real workflows, real handoffs, and real execution.
Sitting at the intersection of Silicon Valley product discipline and the unglamorous “last mile” of healthcare access, Aditya Bansod’s unique insights are especially valuable:
- Healthcare AI fails most often at the handoff—because context is passed through humans, not systems, and disconnected tools lose nuance.
- AI voice and scheduling agents won’t scale as point solutions unless they can push meaningful context into downstream workflows (charting, services, escalation).
- The market is experiencing a “Cambrian explosion” of overlapping tools, forcing CIOs into integration and rationalization—whether they want it or not.
- The near-term goal isn’t one perfect platform; it’s workflow orchestration across multiple tools until consolidation catches up.
- Autonomy is arriving faster than expected, and exception-based work queues are the practical bridge between “human in the loop” and agentic workflows.
- Responsible scaling requires adjustable confidence thresholds and clear guardrails—so automation increases gradually as systems build trust in performance.