From Shadow AI to Trust: How Healthcare Can Secure the Future of Artificial Intelligence
Artificial Intelligence (AI) tools like ChatGPT are disrupting industries at an unprecedented pace, and healthcare is no exception. This swift adoption has ushered in challenges, particularly “shadow AI”—the unsanctioned and unmanaged use of AI tools by employees. In healthcare, where trust, privacy, and security are critical, the stakes could not be higher.
On a recent episode of We Have TRUST Issues, we had the pleasure of speaking with Nate Couture, CISO of the University of Vermont Health System (UVM Health), who shared his journey addressing shadow AI and enabling secure, meaningful experimentation within his organization. His insights reveal how healthcare can leverage AI responsibly without undermining the trust of employees, patients, or stakeholders.
When Shadow AI Takes Organizations by Surprise
AI adoption often starts inconspicuously. Employees hear about tools like ChatGPT and begin experimenting on their own. What starts as simple curiosity quickly grows, becoming shadow AI—unvetted usage of AI tools without oversight or safeguards.
For Nate and his team at UVM Health, this discovery was daunting. Using Zscaler logs to track activity, they found that ChatGPT and similar tools were being used thousands of times across various departments. Usage went well beyond marketing teams: clinicians, residents, and IT employees were also prompting AI tools from within sensitive healthcare environments.
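To illustrate the kind of analysis involved, here is a minimal sketch of tallying AI-tool traffic from exported web logs. Everything here is an assumption for illustration: the domain list, the CSV layout, and field names like url and department are placeholders, not Zscaler's actual log schema.

```python
# Illustrative only: field names ("url", "user", "department") are assumptions,
# not Zscaler's actual log format.
import csv
from collections import Counter

# Hypothetical list of generative AI domains to look for in web proxy logs.
AI_DOMAINS = ("chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai")

def count_ai_usage(log_path: str) -> Counter:
    """Tally requests to known AI tools, grouped by department."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if any(domain in row["url"] for domain in AI_DOMAINS):
                usage[row["department"]] += 1
    return usage

if __name__ == "__main__":
    for dept, hits in count_ai_usage("proxy_logs.csv").most_common():
        print(f"{dept}: {hits} AI-tool requests")
```

Even a simple tally like this is often enough to reveal that shadow AI use spans far more teams than anyone expected.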
This spike in shadow AI activity presented a challenge that would resonate across industries: how do you introduce measured governance when innovation is outpacing your ability to secure it?
“Discovering shadow AI in our health system was jarring,” Nate recalled. “Blocking it wasn’t a sustainable solution. We had to find a way to enable safe exploration while protecting sensitive data and earning the trust of our employees.”
The Right Approach: Enable, Don’t Block
Faced with shadow AI, many organizations react by blocking access to these tools entirely, leaving employees frustrated and often pushing them to find workarounds. UVM Health took a different route entirely—choosing to foster controlled experimentation.
This decision was deeply strategic. AI in healthcare is poised to revolutionize patient care, but without foundational literacy, employees wouldn’t be equipped to assess and leverage purpose-built AI tools down the road. UVM Health’s approach was to balance innovation with security. Here’s how they succeeded:
- Data Loss Prevention (DLP): The health system implemented controls to catch and block patient data from being entered into public AI platforms like ChatGPT, preventing confidentiality breaches (a simplified sketch of this kind of check appears after this list).
- Education Campaigns: They developed guidance materials to help employees understand what was appropriate to use AI for (and what wasn’t). Ethical considerations, compliance reminders, and risk warnings were part of this training.
- Proactive Prompts: Users engaging with AI tools encountered a splash page reminding them to use AI responsibly, with clear links to more detailed usage guidance.
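To make the DLP idea concrete, the sketch below shows the shape of a pattern-based check on an outbound AI prompt. The pattern names, formats, and sample prompt are simplified illustrations; a production healthcare DLP policy would rely on far more robust detection than a handful of regular expressions.

```python
# Illustrative DLP-style check: the patterns below are simplified stand-ins
# for the identifiers a real healthcare DLP policy would match.
import re

PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US Social Security number
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.I),  # hypothetical medical record number format
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),        # date of birth
}

def contains_phi(prompt: str) -> list[str]:
    """Return the names of any PHI patterns found in an outbound AI prompt."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize the chart for MRN: 00482913, DOB 04/12/1987"
hits = contains_phi(prompt)
if hits:
    print(f"Blocked: prompt contains possible PHI ({', '.join(hits)})")
else:
    print("Allowed: no PHI patterns detected")
```

In practice, a match like this would feed a block-and-coach workflow, pairing the block with the kind of guidance and splash pages described above rather than silently denying the request.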
By enabling safe experimentation, UVM Health avoided stifling curiosity while ensuring sensitive information remained secure. Or as Nate put it, “We created an environment where employees didn’t have to hide their usage; instead, we equipped them with the guardrails they needed to explore AI responsibly.”
Governance: The Backbone of Trust
Governance was the next logical step in transforming shadow AI into a secure, trusted tool. UVM Health launched an AI Governance Council to evaluate AI tools before adoption, ensuring risks were mitigated while still enabling innovation. Notably, the council is built around collaboration rather than IT-driven mandates. It includes representatives from:
- Clinical leadership (via the Chief Nursing Officer and Chief Health Information Officer)
- Cybersecurity and IT stakeholders
- Ethics, privacy, and legal experts
- Marketing and communications
“Governance isn’t about saying ‘no.’ It’s about providing the structure organizations need to innovate safely,” Nate explained. “By gathering insights from across departments, we were able to build buy-in for AI tools and ensure every potential risk—security, ethical, or legal—was addressed.”
The governance team also helped foster a culture of trust by being transparent about its processes. Breaking down silos between clinical and technical staff ensured that the council wasn't just another "IT security team," but a resource supporting system-wide innovation.
AI in Practice: Elevating Patient Care
With shadow AI under control, UVM Health has since implemented AI tools that are transforming healthcare delivery. For example:
- Ambient AI for clinical documentation: AI listens in on patient consultations, generating draft notes for doctors to review. This allows clinicians to focus on their patients rather than their keyboards.
- Nursing Support through AI: Tools monitor patient conditions, such as fall risks or bedsores, and send alerts to nursing staff, ensuring preventative measures are taken promptly (see the illustrative sketch below).
These tools show how AI can reduce clinicians’ cognitive workloads, get them back to patient-first care, and allow healthcare providers to work more efficiently without sacrificing trust or oversight.
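As a simplified illustration of the alerting pattern behind such nursing-support tools, the sketch below raises alerts from threshold checks. The risk scores, cutoffs, and field names are hypothetical, loosely modeled on common practices such as Morse Fall Scale high-risk cutoffs and two-hour repositioning guidance, not UVM Health's actual systems.

```python
# Illustrative sketch of a threshold-based nursing alert; the risk-score model
# and thresholds are hypothetical, not UVM Health's actual tooling.
from dataclasses import dataclass

@dataclass
class PatientReading:
    patient_id: str
    fall_risk_score: float        # e.g., from a Morse Fall Scale-style assessment
    hours_since_reposition: float

def nursing_alerts(reading: PatientReading) -> list[str]:
    """Return alert messages for conditions that warrant prompt intervention."""
    alerts = []
    if reading.fall_risk_score >= 45:        # hypothetical high-risk cutoff
        alerts.append(f"{reading.patient_id}: high fall risk, check bed alarm and mobility aids")
    if reading.hours_since_reposition >= 2:  # repositioning guidance to prevent pressure injuries
        alerts.append(f"{reading.patient_id}: reposition to reduce bedsore risk")
    return alerts

for msg in nursing_alerts(PatientReading("pt-1042", 55.0, 2.5)):
    print(msg)
```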
Preparing for What’s Next
While today’s focus in healthcare AI centers on tools that assist humans, Nate underscored the need to prepare for the next phase: agentic AI. This is where artificial intelligence evolves to make autonomous decisions—something that could bring extraordinary value, but also significant risk.
Threats like “prompt injection” (where maliciously crafted input manipulates an AI system’s behavior and outputs) require proactive defenses, and Nate likened this phase to the early days of the internet. “We don’t want AI to repeat the mistakes of unsecured web applications,” he explained. “AI needs guardrails at this stage to avoid vulnerabilities being exploited down the line.”
Organizations need to treat agentic AI as they would human users: defining access levels, monitoring for misuse, and continuously validating security protocols. UVM Health is focused on ensuring these protections are ready before AI becomes fully autonomous in their workflows.
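As a sketch of what treating an AI agent like a human user could look like in code, the example below grants each agent an explicit scope allowlist, denies anything not granted, and logs every decision for monitoring. The agent names and scopes are hypothetical, chosen only to show the least-privilege pattern.

```python
# Illustrative least-privilege check for AI agents: explicit scope grants per
# agent, default-deny, and an audit log of every decision. Names are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrails")

# Hypothetical scope grants, mirroring role-based access for human users.
AGENT_SCOPES = {
    "scheduling-agent": {"calendar:read", "calendar:write"},
    "triage-agent": {"chart:read"},
}

def authorize(agent_id: str, scope: str) -> bool:
    """Allow an agent action only if the scope was explicitly granted; log the decision."""
    allowed = scope in AGENT_SCOPES.get(agent_id, set())
    log.info("agent=%s scope=%s decision=%s", agent_id, scope, "allow" if allowed else "deny")
    return allowed

# A triage agent may read charts but is denied write access it was never granted.
assert authorize("triage-agent", "chart:read")
assert not authorize("triage-agent", "chart:write")
```

The design choice mirrors the article's point: the agent is not trusted by default, and every action it takes is both gated and observable, just as it would be for a human account.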
Listen to the full radio show on demand at Healthcare Now Radio.