
The AI Governance Trap: Policy Won't Save You - Fluency Will

ROBERT KIM
April 16, 2026 - 16 min read


By: Robert Kim, CTO, Presidio 

Why restriction fails and what a “fluency-first” model looks like in practice

I've been very fortunate to spend significant time with smart, experienced technology leaders, learning from them and gathering feedback on the practical application of new technology trends to their organizations. However, when the topic of “enterprise AI use” comes up, the mood tends to shift - the positive air and confidence give way to pessimism and frustration.

It's been emblematic of the past two years: I've had versions of the same client conversation (across industries and company sizes), with remarkably consistent challenges:

“My data* isn't in good enough shape. I gotta fix that first.”

“I don't know what tools to standardize on as there are so many choices.”

“I don't have a prioritized list of good, feasible use cases.”

“I don't have a governance strategy, and I'm worried about ongoing cost.”

“I don't trust the model's output or trust external APIs with our data.”

“I don't know what I'm doing, nor does my team.”

*Quick data note: Waiting for perfect data governance before starting AI initiatives is a path to irrelevance. Many high-value use cases need a surprisingly small, targeted corpus of data. More importantly, AI can be used as a tool for getting “your data right” - classifying unstructured content, surfacing inconsistencies, enriching incomplete records. The organizations winning aren't waiting for clean data to use AI. They're using AI to clean their data.

But the one dreaded comment that sits underneath all the others is:

"I don't want to be the one who says no."

This is the challenge that makes every other challenge harder. Technology leaders are caught between two legitimate obligations that feel, in the moment, like they're in direct conflict: move fast on AI because of competitive pressure, and do it securely because “getting it wrong” has profound consequences. And nobody has handed them a framework that lets them do both at the same time.

That's why I wrote this. Not to add another governance policy to the pile, but to argue that the way most organizations are approaching AI governance is structurally the wrong model. A better option exists, and we've already field-tested it.

Thinking back to when I started my IT career, I want to draw a parallel to early enterprise security. Every application lived inside a tightly controlled perimeter. The network had a hard edge. Security strategy was simple: build a wall, guard the gate, assume anything inside was safe – 100% breach prevention.

Then the internet happened, which gave way to cloud. With the pandemic, employees started working from anywhere, connected to services IT had never approved, from devices IT had never provisioned. The perimeter dissolved and the security doctrine that had made perfect sense inside a corporate DMZ became, slowly and then all at once, completely inadequate.

Our industry's response was one of the most important mindset shifts in enterprise technology history: the acceptance that you cannot prevent every breach.

Not as a failure but as a foundational design principle.

The question stopped being "How do we keep attackers out?" and became "How do we detect intrusions fast, contain damage, and recover quickly?" That shift gave rise to Managed Detection and Response (MDR), a model built on visibility, behavioral monitoring, and rapid remediation. You still had locks on the doors, but now you also had cameras.

AI governance is standing exactly where cybersecurity stood two decades ago. And like history repeating itself, we are potentially making the same mistake.

Walk into almost any enterprise and ask about their AI governance strategy. What you'll find is a document - a policy. A list of approved tools and prohibited use cases. Some access controls.

The problem is that no policy can fully prevent shadow AI use. The tools are free, accessible from any personal device, and woven into apps employees already use daily. The moment the approved AI tool feels slower or more cumbersome than the one on their phone (or worse, if no AI tools are allowed at all), the policy has lost - not out of defiance, but out of usability.

The result is the worst of all possible worlds: the organization bears the risk of AI use it can't see, govern, or remediate. The compliance dashboard might even show “all green” status, yet somewhere on a personal laptop, sensitive data is being pasted into a prompt.

Locks on the door. No cameras. No sensors.

The cybersecurity parallel acts as a practical blueprint. When the industry accepted that breaches were inevitable, the response wasn't to abandon prevention. It was to build a second layer: real-time visibility, anomaly detection, rapid response. The cameras alongside the locks.

AI governance needs the same reboot. Instead of asking "How do we prevent employees from using AI in ways we haven't approved?", the right question is "How do we build a workforce so fluent in AI that they make good decisions with it, regardless of the tool or the context?"

Compliance through literacy vs. compliance through restriction: an AI governance policy that opens access to all, where you earn new privileges by demonstrating competency.

A workforce that genuinely understands AI - how models work, where they fail, what they should and shouldn't be trusted with - is your most durable governance layer. They don't need a policy to tell them not to paste PII into a public model. They know why it's a bad idea.

This is the shift in mindset: moving from a compliance-first posture to a capability-earned model. Instead of governance that tells you what you can't do, governance that unlocks what you can do - because you've demonstrated you're ready for it.

Think about how video games work. You don't start with the best weapons, the highest skills, or access to the most complex levels. You earn them – one boss fight at a time. More importantly, the progression feels fair because the challenges get harder as your capabilities grow. And nobody resents unlocking a new level because they worked for it. The reward could be a new weapon or a new skill – a tangible “level up” recognizing your effort.

Apply that same psychology to AI tooling and you get governance organized around three tiers of earned access.

Gate 1 (AI-Literate): The on-ramp, open to everyone. Pass it by demonstrating foundational understanding: how models work, where hallucination comes from, which categories of data should never enter a prompt. Unlocks access to AI tools with appropriate compute limits, pre-built agent templates, and managed integrations. Think of it as the starter kit; enough to play, enough to learn, enough to discover where you want to go next.

Gate 2 (AI-Proficient): Proficiency means applied judgment: architecting multi-step workflows, evaluating outputs critically, and understanding basic token economics well enough to shape design decisions (see the quick cost sketch after these tiers). It also means knowing when not to use AI. Unlocks the ability to build agents rather than just deploy them, customize tools (MCP) and integrations (skills), and access higher compute tiers with more model diversity. This is where practitioners start creating leverage - not just for themselves, but for everyone around them.

Gate 3 (AI-Fluent): A fluent practitioner builds token-efficient, secure, compliant workloads from the ground up. They understand model security, data handling obligations, and the difference between a system that's performing well and one that's quietly going off the rails. Unlocks unrestricted access to the full AI stack, infrastructure-level changes, and the ability to define the patterns others follow.
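
To make “basic token economics” concrete, here's a quick back-of-the-envelope sketch. Every number in it - per-token prices, call volumes, prompt sizes - is an illustrative assumption, not vendor pricing:

```python
# Illustrative token-cost estimate for a single AI workflow.
# All prices and volumes are assumptions for the sake of example.

PRICE_PER_1K_INPUT_TOKENS = 0.003   # assumed USD
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # assumed USD

calls_per_day = 2_000           # assumed workflow volume
input_tokens_per_call = 3_000   # prompt + retrieved context
output_tokens_per_call = 500

daily_cost = calls_per_day * (
    input_tokens_per_call / 1_000 * PRICE_PER_1K_INPUT_TOKENS
    + output_tokens_per_call / 1_000 * PRICE_PER_1K_OUTPUT_TOKENS
)
print(f"~${daily_cost:,.2f}/day, ~${daily_cost * 30:,.2f}/month")

# Trimming retrieved context from 3,000 to 1,000 input tokens cuts the
# input cost by two-thirds - exactly the kind of design decision a
# Gate 2 practitioner is equipped to make.
```

Under these assumptions the workflow runs about $33 a day (roughly $990 a month); a proficient builder who trims the context changes that math before anyone opens a budget review.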

The progression matters as much as the tiers. Each gate isn't a barrier; it's a skill check that prepares you for what comes next. Just as a game would be boring without challenge, an AI governance model without progression produces a workforce that never develops real capability. The levels exist to make the reward worth having.
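
As a thought experiment, the gate model reduces to a small policy structure. The sketch below is hypothetical - the gate names, capability labels, and check logic are illustrative placeholders, not any product's configuration or API:

```python
# Hypothetical sketch of capability-earned AI access. The gates mirror
# the tiers above; the capability labels are illustrative placeholders.

GATE_CAPABILITIES = {
    "ai_literate":   {"use_managed_tools", "use_agent_templates", "basic_compute"},
    "ai_proficient": {"build_agents", "customize_mcp_tools", "higher_compute"},
    "ai_fluent":     {"full_stack_access", "infrastructure_changes", "define_patterns"},
}

# Higher gates inherit everything the lower gates unlock.
GATE_ORDER = ["ai_literate", "ai_proficient", "ai_fluent"]


def capabilities_for(earned_gate: str) -> set[str]:
    """All capabilities unlocked at or below the earned gate."""
    unlocked: set[str] = set()
    for gate in GATE_ORDER:
        unlocked |= GATE_CAPABILITIES[gate]
        if gate == earned_gate:
            return unlocked
    raise ValueError(f"unknown gate: {earned_gate}")


def may(earned_gate: str, capability: str) -> bool:
    return capability in capabilities_for(earned_gate)


# A Gate 2 practitioner can build agents; a Gate 1 user can't touch infrastructure.
assert may("ai_proficient", "build_agents")
assert not may("ai_literate", "infrastructure_changes")
```

The point of the sketch isn't the code; it's that the policy becomes data. When access is a function of demonstrated competency, changing the policy means moving a capability between tiers, not rewriting a document.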

This also solves the "I don't want to say no" problem directly. The answer to every shadow AI conversation becomes: "You don't have to stop. You have to level up." Governance stops being a blocker and becomes an accelerant.

Building a fluent workforce is a multi-year investment. In the interim, employees are already using AI tools - some approved, many not. Data is moving through prompts that no DLP policy is even aware of.

This is where dual architecture matters. While your literacy program builds, you need visibility into how AI is being used today: which tools employees are reaching for, what data is entering them, where your exposure is concentrated. And you need data loss prevention controls that extend to AI interactions, not just email and file transfers.
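
What does DLP that extends to AI interactions look like at its simplest? Here's a deliberately minimal sketch of a prompt-level check. The regex patterns are placeholder assumptions; production AI-aware DLP classifies content and context rather than just matching patterns:

```python
import re

# Minimal illustration of a prompt-level DLP check: inspect what is about
# to enter an AI prompt before it leaves the enterprise boundary.
# The patterns below are simplistic placeholders, not a production ruleset.

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}


def scan_prompt(prompt: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


def guard(prompt: str) -> str:
    """Block (or route for review) prompts that trip a DLP category."""
    findings = scan_prompt(prompt)
    if findings:
        raise PermissionError(f"prompt blocked: contains {findings}")
    return prompt  # safe to forward to the model


# guard("Summarize the dispute for customer SSN 123-45-6789")  # -> blocked
```

A fluent employee never writes that prompt in the first place; the guardrail exists for everyone who hasn't gotten there yet.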

This is the gap that platforms like Zscaler are closing. Zscaler AI Protect gives organizations the ability to discover and govern AI usage across the enterprise, both sanctioned and unsanctioned. AI Asset Management delivers visibility and posture management across AI apps, models, agents, and MCPs - surfacing the shadow AI footprint no policy document has ever seen. AI Access Security then applies real-time guardrails to AI use with prompt/response visibility, classification, adversarial attack protection, and AI-aware DLP. And because the DLP understands the content and context of what's entering a prompt - not just static pattern matching - the guardrails your policy promises are enforced at the point of interaction.

Locks and cameras. Literacy programs and enforcement infrastructure. Not in tension - they get to the same answer at different time horizons.

No policy prevents shadow AI use, just as no access control stops a motivated employee with a personal device. The perimeter is already porous, and it will only become more so.

But the answer isn't to give up on governance. It's to reimagine governance so it works with human behavior instead of against it - literacy as a long-term control, visibility infrastructure as a near-term one, coupled with an incentive model that makes the path of least resistance run through the framework instead of around it.

Build the cameras. Build the sensors. Build the fluency.

The organizations that get all three right won't just manage AI risk - they'll be the ones setting the pace.


