
Trust, Transparency and AI in Healthcare: A Strategic Dialogue with Wolters Kluwer’s CTO

Balancing innovation and ethics: This exclusive interview reveals how healthcare leaders can adopt AI responsibly, ensuring ethics, patient safety, and strong governance in clinical workflows.

Generative AI is becoming an integral part of everyday clinical work, but its governance hasn’t kept pace. In the rush to adopt artificial intelligence, many healthcare teams are using large language models without clear guidelines, proper training, or defined ownership. This gap between innovation and oversight has led to the emergence of “shadow AI,” where unapproved tools operate within healthcare workflows with limited visibility or control.

In this interview, Alex Tyrrell, SVP & CTO, Health at Wolters Kluwer, explains why AI governance in healthcare remains fragmented, the real-world risks associated with ungoverned tools, and what a truly patient-first approach to AI looks like in practice. He also explains how leaders can mitigate risks without slowing progress, and why transparency and accountability are essential for earning trust in AI-driven healthcare.

This interview is a must-read for technology leaders navigating the next phase of AI.

The state of AI governance in healthcare

Studies highlight that only a fraction of healthcare organizations have formal GenAI policies or training. From your perspective, what’s driving this slow pace of governance adoption?

Tyrrell: In the rush to deploy AI, gain efficiency and avoid falling behind, governance is often pushed to the sidelines. The lack of formal regulation can leave organizations effectively flying blind when it comes to AI oversight. Many organizations are operating with significant blind spots.

Even when there is a strong intent to implement governance policies, doing so is challenging because employees are often using AI within their workflows without organizational visibility or formal approval. This raises concerns about how close AI apps get to confidential patient data.

Equally, are hospital systems or user data within the health system’s information systems at risk of being collected and sold? Implementing robust safeguards is essential for protecting patient safety and securing sensitive data.

Do you believe tech leaders in healthcare underestimate the risk of “shadow AI”? This refers to situations where clinicians or staff use unapproved AI tools in their workflow.

Tyrrell: Healthcare leaders often underestimate the risks and challenges posed by shadow AI. When staff members use AI tools without leadership’s knowledge, organizations can face a variety of potential issues, from noncompliance with internal policies to regulatory violations around improper data use or storage. There are also concerns about whether shadow AI apps collect and potentially sell data related to patients, even if deidentified. How would patients feel if AI app queries during their office visit were used to sell advertising to doctors?

Shadow AI can also result in poor patient outcomes, as not all healthcare AI tools are created equal.

For example, an Ontario hospital suffered a privacy breach when a doctor accidentally recorded a meeting with a personal AI tool and the transcription tool then sent sensitive information to patients.

Clinicians should be equipped with fit-for-purpose technology that is explicitly designed to support more accurate and efficient clinical decision-making, not just the “shiny new tool” whose underlying technology functions as a black box.


Healthcare leaders should prioritize AI tools built specifically for clinical applications and designed to put optimal patient outcomes first, with accuracy, patient privacy, and efficient clinical decision-making at the top of the list when adopting new solutions.

Responsible innovation and patient-first AI

You’ve emphasized a patient-first approach to LLM implementation. How can healthcare CTOs ensure that AI innovation doesn’t compromise patient trust or safety?

Tyrrell: In the absence of regulation at the state and federal level, organizations should still follow existing best practices that have guided clinical and staff operations for decades.

When it comes to leveraging AI in patient care, organizations should always ask, “Is this in the best interest of the patient?” They should then consider how their use of AI might be governed by existing regulations around patient privacy and safety, such as HIPAA or PSQIA.

Healthcare CTOs can also help build trust by being transparent with patients about how their organizations are using AI. Clearly communicating AI guidelines and approaches to the public will go a long way toward making patients feel comfortable with their care providers and, ideally, more likely to adhere to treatment plans.

For example, organizations can place an FAQ on their website explaining how they’re using AI, or they can create a proactive outreach campaign targeted to patients and customers to convey where they stand on their AI journey.

Responsible AI use is attainable; the key is making it a priority rather than an afterthought.

As a leader, how do you balance innovation velocity with ethical responsibility, especially when business teams are eager to integrate GenAI into their products?

Tyrrell: In healthcare, ethical responsibility must be a core priority, and it doesn’t have to slow innovation. When embedded at every stage of the AI lifecycle, it can enable safer, more effective solutions. CTOs should clearly communicate that the risks of weak or haphazard governance can be far more costly (clinically, reputationally, and financially) than any short-term efficiency gains.

The role of the CTO in a new AI frontier

As a CTO, how do you foster a culture of AI literacy across clinical and technical teams?

Tyrrell: With AI, you are talking about a technology that is non-deterministic; it makes mistakes. Clinicians generally understand this concept. They are used to experimental design and important concepts like measurement error and bias, which is essential to ensuring safe and effective GenAI solutions in a clinical setting. So we harness this expertise and have clinicians lead testing and evaluation: they measure risk and ensure we mitigate bias.

The technical teams focus on what they know best: the underlying technology, how to design and build agents, and how to pick the right model for the task. By working together, the engineers drive up quality and performance while the clinicians maintain the standards and guardrails. Together, they solve the problem as a team, each focused on what they do best. We call this process Expert AI: expert-led and expert-in-the-loop.

What’s the most critical skill a healthcare CTO must develop to lead in the era of LLMs and Gen AI?

Tyrrell: You need to foster a culture of curiosity and experimentation, while you develop a strong intuition for the art of the possible and balance that against risk.

You can’t incrementally plan your way to AI success; it’s not a linear process, and you must be able to adapt quickly and pivot. You are becoming a chief experiment officer, not just a CTO.

Looking ahead: The future of AI in healthcare

What role do you see Wolters Kluwer playing in establishing global standards for AI governance and safety in healthcare?

Tyrrell: Our history in healthcare spans 189 years. We have established our brand and reputation on providing the gold standard: evidence-based decision support that our customers need to do their jobs safely and effectively, leading to better patient outcomes. The organization aims for transparency and trust and has a tagline: “When you have to be right.” Accordingly, we play an important role in the healthcare ecosystem, and we operate globally.

With our heritage, our global reach and our commitment to the responsible and ethical use of AI, we can be an important voice in the industry.

Shadow AI is emerging as a significant risk in healthcare settings, where unapproved, ungoverned tools near the point of care could expose critical PHI or lead to patient harm. We are actively working to bring transparency to the use of AI in the enterprise and across the industry.

We foster collaboration and work directly with stakeholders to ensure they understand and are comfortable that our solutions meet the same high standard we have always been known for.

If we look five years ahead, what does a “well-governed” AI-driven healthcare organization look like?

Tyrrell: Ultimately, AI’s potential in healthcare is strong, but only if organizations take the necessary steps to ensure its responsible and ethical use. A well-governed organization will bear hallmarks such as a documented, C-suite-approved governance policy that is communicated and enforced across all levels of the organization.

Employee training will regularly reinforce these policies and secure buy-in. An AI compliance team will routinely review and update existing policies as AI capabilities and use cases continue to expand.

Finally, approved and unapproved AI applications will have been identified and either whitelisted or blacklisted accordingly, minimizing the possibility of shadow AI.

With that in place, responsible AI can drive efficiency at every stage of the patient journey. Clinicians can use AI with expert-in-the-loop oversight to diagnose and treat more accurately, while staff can leverage it to streamline administrative tasks like documentation and record-keeping.

Finally, what advice would you offer to CTOs who wish to adopt GenAI but are hesitant about the risks?

Tyrrell: I feel that GenAI holds great promise for healthcare, and organizations should absolutely be determining how and where to integrate it into their processes.

CTOs looking to achieve responsible AI shouldn’t go it alone. Working with other leaders in the organization to ensure buy-in and responsible adoption across the team is an essential strategy. Additionally, CTOs should tap into their networks and connect with peers who have faced similar challenges: determine what worked and what didn’t, and apply those lessons to your own AI implementations.

Overall, as a CTO, you need to ensure that you tackle AI deployment with intention and a keen eye towards governance. The journey towards responsible AI is possible.

Key takeaways

Here are a few key takeaways from this interview:

CTOs must balance innovation with responsibility:

Rapid AI deployment should not come at the expense of ethical and regulatory safeguards. Leaders need to embed governance into every stage of the AI lifecycle.

AI culture and training matter:

Building AI-literate teams and fostering curiosity and experimentation are critical for managing complex AI systems and adapting to evolving technologies.

Frameworks for accountable AI are a must:

Well-governed AI encompasses documented policies, clear ownership, routine audits, and a system for approving tools, all of which help mitigate the risks associated with AI.

Collaboration across the ecosystem is key:

CTOs, vendors, and regulators must collaborate to develop transparent, trustworthy, and responsible AI solutions that foster safe innovation in healthcare.

About the speaker: Alex Tyrrell, PhD, serves as Head of Advanced Technology at Wolters Kluwer and Chief Technology Officer for Wolters Kluwer Health. He oversees the Wolters Kluwer AI Centre of Excellence, which focuses on accelerating innovation across all Wolters Kluwer divisions in the areas of GenAI, agentic AI, machine learning, and data analytics. Alex has extensive experience in designing and delivering commercial-scale machine learning and analytics platforms, and in setting technology strategy for enterprise content management, digital transformation, and new product development.


Gizel Gomes

Gizel Gomes is a professional technical writer with a bachelor's degree in computer science. With a unique blend of technical acumen, industry insights, and writing prowess, she produces informative and engaging content for the B2B leadership tech domain.