
AI-Driven Healthcare: Turning Insights into Action with Nasim Afsar
The growing use of AI chatbots in healthcare is making medical information easier to access and understand than ever before. Platforms such as ChatGPT Health show how AI can help patients and clinicians interpret health data in simple, human terms. However, an important question remains: will better information actually lead to better health outcomes?
As healthcare organizations explore AI adoption, the focus is shifting from simply explaining health information to predicting risks, preventing illness, and supporting better care decisions.
Diving deep into the topic, Nasim Afsar, Managing Director at Healthcare Innovation Partners, explains why today’s AI tools are only the first step toward what she calls “Intelligent Health.” She offers a pragmatic lens on what it actually takes to move from AI pilots to real-world impact.
Framing the moment: ChatGPT Health as a signal, not the destination
You’ve described ChatGPT Health as ‘the first layer’ of Intelligent Health. From your perspective, what capabilities must come next for AI to truly improve health outcomes?
Afsar: ChatGPT Health helps us interpret and understand our health data, but outcomes require more than comprehension.
Intelligent Health must act as a proactive partner: something that continuously identifies risks for harm and opportunities for health, and guides action before health deteriorates. That means care that is personalized to the individual, predictive of future risk, preventive in its interventions, and participatory by design, engaging people as active partners in their health.
The next layer is realigning the entire health and care ecosystem – providers, payers, pharma, med-tech, employers, retail, and regulators – around the one constant they share: the individual.
The goal is to work together to improve the health outcomes that matter most to that person.
What does ChatGPT Health get fundamentally right about how patients and clinicians want to interact with health data? And where does it risk over-indexing on convenience versus care transformation?
Afsar: ChatGPT gets the interaction model exactly right. People want health explained in human terms, in real time, and in context. That’s a breakthrough. On the other hand, clinicians are looking for cognitive support to synthesize the overwhelming volumes of information, evidence, and context into better decisions for the patient in front of them. And now they have an accessible tool for that.
The risk is mistaking convenience for transformation: assuming that better explanations alone lead to better care. For people, understanding health does not automatically translate into changed behavior or improved outcomes. For clinicians, AI still operates largely on data generated within healthcare delivery, which accounts for only about 20% of what determines health.
The remaining 80% – behavioral, social, environmental, and economic factors – remains largely unintegrated, which limits AI’s ability to drive the outcomes people actually need.
AI-driven healthcare: from intelligence to outcomes
AI in healthcare has been information-rich but outcome-poor for a while. What is the hardest technical or organizational leap required to move AI from explanation to prediction? And from prediction to prevention?
Afsar: The hardest leap is moving from decisions based on partial information to decisions grounded in the full reality of a person’s life. Today, AI predictions in healthcare are largely built on clinical data generated during care delivery. This represents only about 20% of what actually drives outcomes in conditions like diabetes or hypertension.
You wouldn’t get on a plane if the pilot told you they had only 20% of the data needed to get you to your destination safely. Yet that’s exactly how we manage disease today.
Prediction becomes prevention only when we integrate the remaining 80% – behavioral, social, environmental, and economic context – into how health and care are designed and delivered.
In your framework, intelligence must drive coordinated action. Where have you seen AI insights fail not because the model was wrong, but because the system couldn’t act on them?
Afsar: I’ve seen this repeatedly in risk prediction, hospital readmissions, clinical deterioration, and social risk: cases where the AI signal was accurate, timely, and clinically meaningful, but no one owned the next step. Alerts surfaced in dashboards or EHR inboxes without clear accountability, capacity, or workflow to act on them. In those moments, intelligence didn’t fail; the system did. Without aligned incentives, resources, defined ownership, and operational pathways, even the best AI simply documents risk rather than preventing harm.
At the same time, models can and do get things wrong when training data is incomplete or biased, something we’ve seen repeatedly with common yet historically overlooked women’s health conditions. Both failures point to the same conclusion: without inclusive data, clear ownership, and operational pathways, AI documents risk instead of preventing harm.
The big picture
You have said no single company can build Intelligent Health alone. So what kind of partnerships are essential to make this work?
Afsar: The technical foundations of Intelligent Health are not the hardest part. Many players from startups and big tech to health systems and governments can build pieces of the technology.
What’s difficult is achieving outcomes, which requires going beyond tools to true ecosystem alignment.
Intelligent Health works only when technology empowers the individual and the entire health and care ecosystem – delivery systems, payers, pharma, med-tech, employers, retail, and regulators – organizes around that person’s goals.
For example, a person with diabetes could receive personalized grocery credits from their insurance company that activate seamlessly at checkout, be notified by a pharmaceutical partner about a clinical trial they qualify for, and have their care team automatically adjust medications and follow-up based on real-time data – all coordinated around improving their health, not navigating the system.
What’s the biggest misconception leaders have about AI’s role in fixing healthcare?
Afsar: The biggest misconception is that AI is a panacea for poor quality, inefficiency, clinician burnout, and rising costs. AI is a powerful tool that will fundamentally change how we think and work, but it does not eliminate complexity. In fact, it exposes the fault lines already embedded in healthcare. When we layer AI on top of broken incentives, fragmented workflows, and misaligned accountability, we don’t fix the system; we scale its dysfunction.
We saw this with the electronic health record. Instead of redesigning care around patients and clinicians, we digitized billing and documentation requirements, increasing cognitive burden without improving outcomes. AI will only deliver value if we use it to rethink and redesign healthcare for the modern era. And not simply to automate yesterday’s problems.
AI pilots and responsibility
Many AI pilots stall at the point of clinical adoption. From your experience, what operational constraints do technologists consistently underestimate?
Afsar: Many great solutions underestimate operational friction. Clinical capacity, workflow disruption, liability concerns, and change fatigue are real constraints. If AI adds cognitive load or uncertainty without reducing work elsewhere, adoption will stall, no matter how accurate the model is. We have to integrate AI solutions seamlessly into the busy workflows of clinicians, nurses, and staff.
How should responsibility be assigned when AI identifies risk but human systems fail to intervene? Where does accountability sit in an AI-augmented care model?
Afsar: In an AI-augmented care model, accountability belongs to the organizations and the individuals who design workflows, assign ownership, and resource interventions when risk is identified.
AI can surface insight, but humans and the systems they operate within decide whether that insight has an operational home, a clear owner, and the authority to act. If an algorithm correctly identifies risk and a human fails to intervene, responsibility sits with both the individual and the system that enabled or constrained that decision. Healthcare has spent decades building rigorous approaches to quality and safety that examine not just individual error, but the structures, incentives, and culture that shape behavior; AI must be governed within that same framework so humans are supported to do the right thing.
Public trust is fragile in healthcare AI. What governance models are necessary when AI becomes a longitudinal health partner rather than a point solution?
Afsar: Trust is a foundational prerequisite for the success of Intelligent Health and AI in healthcare.
In fact, I devote an entire chapter of Intelligent Health to trust because public confidence in healthcare institutions and clinicians has eroded, and high-profile data breaches and opaque data practices by large technology companies have only deepened skepticism toward AI tools and the motives behind them.
To address this, we need trust in two distinct but inseparable forms.
First, trust in the algorithms themselves, demonstrated through transparency, clinical validity, safety, continuous monitoring, and evidence that they deliver meaningful value.
Second, trust in the institutions behind the algorithms, earned through clear governance, consent-driven data use, accountability for misuse, and alignment of incentives with the long-term health of the individuals they serve.
The future of AI-driven healthcare
If we fast-forward five years, how will we look back on this moment in healthcare AI? And what early signals should leaders watch for?
Afsar: Five years from now, we’ll look back on this moment as the point when healthcare stopped digitizing transactions and started interpreting and acting on health itself.
AI will be embedded as a continuous partner that anticipates risk, guides prevention, and coordinates care around people rather than institutions. The early signals leaders should watch for are fewer pilots and point solutions and more unified platforms; fewer retrospective metrics and more real-time outcome accountability; and incentives shifting from volume to sustained health. The organizations that recognize this early will have redesigned care, not just automated it.
What key decision must CTOs and health system leaders make in the next 12 months to ensure AI becomes an advantage rather than a liability?
Afsar: The defining decision is whether AI will be layered onto existing systems or used to fundamentally redesign health and care around the person.
That choice requires leaders to move beyond tools and commit to deep data integration, workflow redesign, and shared accountability across the ecosystem – partnering not only within health systems, but with payers, pharma, med-tech, retail, employers, and regulators. AI becomes leverage only when incentives, information, and action are aligned across these players in the service of individual health.
Underlying message
The conversation ultimately makes one point clear: AI alone will not fix healthcare. As Nasim Afsar explains, real progress will come only when technology is combined with better data integration, redesigned workflows, and stronger collaboration across the healthcare ecosystem.
For leaders, the challenge of AI adoption in healthcare is not simply adopting new AI tools, but ensuring that insights drive clear action and measurable outcomes. Those who treat AI as an opportunity to rethink how care is delivered – rather than just automate existing processes – will be the ones who unlock its true value in the years ahead.