Shadow AI & Shadow AI Agents: Regaining Visibility and Control Over Public GenAI + Embedded SaaS Copilots
Introduction
Artificial intelligence (AI) is already part of how work gets done.
Employees are using public GenAI tools to move faster, while SaaS platforms are rolling out copilots by default. AI is no longer a separate tool. It is being embedded directly into applications that were already trusted, which changes their risk profile overnight. At the same time, developers are integrating AI directly into their workflows.
What most organizations have not kept up with is visibility.
Enterprise AI and machine learning activity increased 83.3% year over year, and during that same period, organizations transferred over 18,000 terabytes of data to AI tools, a 92.6% increase.
Most of that activity is happening outside the scope of existing security controls, not because teams are ignoring risk, but because existing security architectures were never designed to govern AI interactions.
This is what defines shadow AI today. It is not just unsanctioned tools. It is the growing gap between how AI is actually being used across the business and what security teams can confidently monitor or control.
| Shadow artificial intelligence (AI) is the practice of employing advanced AI tools or AI applications without formal approval from an organization’s technology leadership. This often occurs when department heads or individuals reach for quick fixes, such as ChatGPT, outside standard policies, ultimately raising data privacy and compliance concerns. |
What shadow AI looks like in modern workflows
In most organizations, shadow AI is not isolated to a single category. It shows up across multiple layers of the business, often overlapping in ways that make it difficult to track.
In practice, that footprint includes:
- Public GenAI tools accessed through browsers, apps, and extensions
- Embedded AI copilots inside software-as-a-service (SaaS) platforms already in use
- AI agents executing tasks across systems and maintaining context
- Developer tools sending source code and system data to external models
- Internally developed AI systems, including models and datasets
- Emerging infrastructure such as cloud AI platforms and Model Context Protocol (MCP) servers
Many of these interactions rely on persistent protocols such as WebSockets and MCP, which traditional security tools were never designed to inspect or control. Each introduces a different type of data exposure, and together they create a much larger and less visible attack surface.
What makes this challenging is how these tools interact with each other and with your data.
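To see why persistent protocols evade transaction-based inspection, consider a minimal sketch of a GenAI chat session over WebSockets. It assumes the third-party `websockets` Python library; the endpoint URL and message format are hypothetical, for illustration only.

```python
# Minimal sketch: a persistent GenAI chat session over WebSockets.
# Assumes the third-party `websockets` library; the endpoint and
# message format are hypothetical stand-ins.
import asyncio
import json

import websockets


async def chat_session() -> None:
    # One long-lived connection carries many prompts and responses.
    # A control that inspects only the initial HTTP upgrade request
    # never sees the data exchanged afterward, mid-session.
    async with websockets.connect("wss://genai.example.com/chat") as ws:
        for prompt in ["Summarize our Q3 roadmap", "Now draft the customer email"]:
            await ws.send(json.dumps({"role": "user", "content": prompt}))
            reply = await ws.recv()  # sensitive data flows here, long after setup
            print(json.loads(reply)["content"])


asyncio.run(chat_session())
```

A point-in-time check at connection setup passes, and everything that follows is invisible to it. That is the gap these protocols open.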
Why AI agents change the security model
AI agents introduce a different kind of risk. Their behavior doesn’t align with how traditional security models were designed to operate.
Most enterprise systems are built around discrete interactions. A user submits a request, receives a response, and the transaction ends. Security controls were designed to inspect that exchange and enforce policy at a single point in time.
Agents change that model.
They carry context across interactions, build on previous inputs, and continue operating over longer sessions. Instead of responding to a single prompt, they can execute a series of actions across multiple systems, often using delegated credentials and preconfigured access.
That shift creates a different set of challenges:
- Sensitive data can accumulate across conversations, not just single prompts
- Sessions remain active, which limits the effectiveness of transaction-based inspection
- Agents can act autonomously, increasing the impact of compromise
- Access often spans multiple systems, expanding the blast radius
The real concern is not just access, but unintended actions at scale when agents operate without clear guardrails. When something goes wrong, it does not stay contained. It moves across systems in ways that most governance models were not built to handle.
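The shift is easier to see in code. Below is a minimal sketch of an agent loop, using hypothetical tool names and a fixed plan rather than any specific agent framework: context accumulates across steps, and each step acts on a different system with delegated access.

```python
# Minimal sketch of an agent loop: context persists across steps and
# actions span multiple systems. Tool names and the plan are
# hypothetical, not any specific framework.
from typing import Callable

# Delegated credentials: the agent acts with preconfigured access, so a
# single compromised session can touch every connected system.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_crm": lambda q: f"CRM results for {q!r}",
    "read_wiki": lambda q: f"Wiki page matching {q!r}",
    "send_email": lambda body: f"Email sent: {body[:40]}...",
}


def run_agent(goal: str, plan: list[tuple[str, str]]) -> list[str]:
    context: list[str] = [f"goal: {goal}"]  # accumulates across every step
    for tool_name, arg in plan:
        result = TOOLS[tool_name](arg)
        context.append(f"{tool_name} -> {result}")  # sensitive data piles up here
    return context


# Each step builds on the last; no single transaction shows the whole flow.
trace = run_agent(
    "Prepare renewal outreach",
    [("search_crm", "accounts renewing in Q4"),
     ("read_wiki", "renewal playbook"),
     ("send_email", "Draft based on CRM and playbook context")],
)
print("\n".join(trace))
```

No single request in that trace looks dangerous on its own. The risk lives in the accumulated context and the chain of actions, which is exactly what per-transaction inspection cannot see.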
The business impact of uncontrolled AI usage
The risks associated with shadow AI are no longer theoretical. They are showing up in measurable ways across both security outcomes and business impact.
Organizations with higher levels of unmanaged AI usage are seeing an average of $670,000 in additional breach costs, according to IBM. In the same research, 20% of organizations reported experiencing a breach tied to shadow AI, reinforcing how quickly unmonitored usage can translate into real exposure.
The impact comes from how AI is being used without sufficient control or oversight.
IBM found that 97% of organizations that experienced an AI-related breach lacked proper access controls on those systems. At the same time, nearly two-thirds of organizations either have no AI governance policies in place or are still developing them.
That combination creates a pattern: AI adoption is accelerating faster than the controls needed to manage it.
The downstream impact tends to fall into a few consistent areas:
- Intellectual property exposure through developer workflows and internal documentation
- Sensitive data compromise, particularly customer personally identifiable information (PII) and regulated information
- New attack vectors such as prompt injection and agent manipulation
- Compliance gaps as AI usage outpaces governance frameworks
- Reputational risk from inaccurate or unsafe AI-generated outputs
IBM’s findings reinforce how these risks play out in practice. In shadow AI-related incidents, customer PII was the most commonly compromised data type, affecting 65% of cases, while intellectual property was exposed in 40% of incidents. Many of these breaches also led to broader business impact, including operational disruption and increased security costs.
| The issue comes down to visibility and control, not how employees are using AI. |
Most employees are not trying to bypass policy. They are trying to work faster. The issue is that AI usage is happening in environments where visibility is limited and guardrails are either incomplete or missing entirely.
You cannot govern what you cannot see.
Building a complete AI asset inventory
Before organizations can enforce policy or reduce risk, they need a clear understanding of where AI exists across the environment.
This is where many programs fall short.
An effective AI asset inventory goes beyond listing tools. It requires understanding how AI is used, how data flows through those systems, and where risk is introduced.
Two foundational components help structure this:
- AI Bill of Materials (AI-BOM): A unified inventory of AI models, workflows, agents, MCP servers, and guardrails that provides a consolidated view of AI assets and how they are connected across the environment
- AI Security Posture Management (AI-SPM): Continuous analysis that identifies misconfigurations, excessive permissions, and vulnerabilities across those assets (a brief inventory sketch follows at the end of this section)
Together, they provide a working view of the AI landscape rather than a static inventory.
In practice, this means building visibility across four key areas:
- Workforce usage: Understanding how employees interact with AI tools, including both approved and unapproved usage, and how data is shared across those interactions.
- SaaS copilots: Tracking embedded AI features inside trusted applications, including what data they can access and how they are configured.
- Developer environments: Monitoring AI-powered integrated development environments (IDEs), command-line tools, and repository integrations that connect directly to external models and process sensitive code.
- Internal AI systems: Mapping models, agents, datasets, and infrastructure, along with identity and access controls that govern how those systems operate.
Each layer introduces a different type of risk. Without visibility across all of them, governance remains incomplete.
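As a rough illustration, here is a minimal sketch of AI-BOM-style inventory records tagged by the four layers above, with a simple posture check in the AI-SPM spirit. The schema and field names are assumptions for illustration, not a standard format.

```python
# Minimal sketch of AI-BOM-style inventory records, tagged by the four
# visibility layers above. The schema is illustrative, not a standard.
from dataclasses import dataclass, field


@dataclass
class AIAsset:
    name: str
    layer: str          # workforce | saas_copilot | developer | internal
    data_touched: list[str] = field(default_factory=list)
    approved: bool = False


inventory = [
    AIAsset("ChatGPT (browser)", "workforce", ["prompts", "uploads"]),
    AIAsset("CRM copilot", "saas_copilot", ["customer PII"], approved=True),
    AIAsset("AI-powered IDE plugin", "developer", ["source code"]),
    AIAsset("internal RAG agent", "internal", ["HR documents"], approved=True),
]

# A posture check in the AI-SPM spirit: flag unapproved assets that
# touch sensitive data.
for asset in inventory:
    if not asset.approved and asset.data_touched:
        print(f"REVIEW: {asset.name} ({asset.layer}) touches {asset.data_touched}")
```

Even a sketch this small makes the point: the value is not the list itself, but being able to ask which assets touch sensitive data and which are outside policy.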
Governing AI without slowing it down
Blocking AI access often creates more risk than it removes. When approved tools are restricted, employees turn to alternatives that are harder to monitor.
A more effective approach is to define clear boundaries and enforce them consistently.
That starts with clarity around what is allowed. Organizations need to define approved tools, acceptable use cases, and what types of data can be shared. When expectations are clear, employees are more likely to operate within them.
At the same time, it is important to define what is not allowed. Certain applications and use cases introduce higher risk and need to be restricted or closely monitored, particularly in developer workflows and agent-based systems.
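One way to picture how such boundaries become enforceable is a policy check evaluated on three inputs: tool, use case, and data classification. The sketch below uses illustrative placeholder rules, not a recommended policy.

```python
# Minimal sketch of an allow/deny AI usage policy. The tool names,
# use cases, and thresholds are illustrative placeholders only.
APPROVED_TOOLS = {"corp-genai", "crm-copilot"}
BLOCKED_USE_CASES = {"autonomous-agent", "code-upload"}
CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2}


def evaluate(tool: str, use_case: str, data_class: str) -> str:
    if tool not in APPROVED_TOOLS:
        return "block"  # unapproved tool: steer users to the sanctioned one
    if use_case in BLOCKED_USE_CASES:
        return "block"  # higher-risk patterns are restricted outright
    if CLASSIFICATION_RANK[data_class] >= 2:
        return "isolate"  # confidential data: allow only with extra controls
    return "allow"


print(evaluate("corp-genai", "drafting", "internal"))      # allow
print(evaluate("chatgpt-free", "drafting", "public"))      # block
print(evaluate("corp-genai", "drafting", "confidential"))  # isolate
```

Note that the outcome is not binary: an "isolate" verdict keeps work moving under tighter controls instead of pushing users toward unmonitored alternatives.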
Governance should also align with established frameworks. Common starting points include:
- National Institute of Standards and Technology (NIST) AI Risk Management Framework
- EU AI Act
- Open Web Application Security Project (OWASP) LLM Top 10
- MITRE ATLAS
- ISO/IEC 42001 (AI management systems)
| The goal is not to slow AI adoption. It is to make it scalable and defensible. |
Control patterns that scale across the enterprise
Many organizations try to address AI risk by layering point solutions across visibility, access, and testing. In practice, that approach increases complexity without closing the gaps between those controls. Effective AI security requires a coordinated set of controls that operate across multiple layers.
At a high level, that system includes five core layers:
- AI asset visibility and inventory: A complete view of AI usage, assets, and risk across the environment—the foundation for every control that follows.
- Access and policy enforcement: Controls that determine who can use which AI tools and under what conditions, using identity and context to make real-time decisions.
- Prompt and interaction visibility: Sensitive data is often typed directly into AI systems. Visibility needs to extend into prompts, responses, and full conversations.
- Data protection: In 2025 alone, enterprise environments recorded more than 410 million data loss prevention (DLP) violations tied to AI usage. Protection must cover prompts, uploads, and generated outputs as a single surface (a brief inspection sketch follows this list).
- Runtime and infrastructure security: Internally developed AI systems require continuous testing, monitoring, and posture management to address vulnerabilities and misconfigurations.
These layers are most effective when they work together, creating consistent visibility and enforcement across the AI lifecycle.
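Picking up the data protection item above, here is a minimal sketch of inline prompt inspection: scanning outbound text for PII-like patterns before it reaches an AI tool. The regex patterns are simplistic illustrations; production DLP engines use far richer detection and policy actions.

```python
# Minimal sketch of inline prompt inspection before text leaves for an
# AI tool. The patterns are simplistic illustrations of DLP detection.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def inspect_prompt(prompt: str) -> tuple[str, list[str]]:
    hits = [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]
    # Enforcement point: block, redact, or log before the prompt leaves.
    verdict = "block" if hits else "allow"
    return verdict, hits


verdict, hits = inspect_prompt("Draft a reply to jane.doe@example.com re: SSN 123-45-6789")
print(verdict, hits)  # block ['ssn', 'email']
```

The same check applies to uploads and generated outputs, which is why the layer is described as a single surface rather than three separate controls.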
How Zscaler secures the AI lifecycle
Most organizations approach AI security in parts, focusing on visibility, access, or testing in isolation. The challenge is that risk spans the full lifecycle, and gaps between those areas are where exposure emerges.
Zscaler connects these capabilities within a single platform built on a zero trust architecture.
It starts with visibility across AI usage, including public GenAI tools, embedded SaaS features, developer environments, and internally developed systems. Proven inline inspection at scale enforces policy on prompts, responses, and data in real time, while identity and context-based access controls govern who can use which tools and under what conditions.
For internally developed AI, continuous testing and runtime protection extend coverage across development and production, helping organizations identify vulnerabilities early and adapt controls as systems evolve.
The result is a more unified approach that reduces fragmentation and allows AI adoption to scale without losing control. This includes extending zero trust to AI agents, ensuring that agentic workflows operate within defined boundaries even as they interact across systems at machine speed.
Enable AI safely, not slowly
AI is already embedded in how modern organizations operate. The question is not whether it will be adopted, but how it will be governed.
The organizations that move ahead will be the ones that build visibility early, define clear boundaries, and implement controls that reflect how AI actually works across users, applications, and systems.
That foundation allows teams to move faster without increasing risk.
When visibility, governance, and protection are aligned, AI becomes something the business can scale with confidence.
Explore how Zscaler enables secure AI adoption with visibility, governance, and runtime protection.
FAQ
What is shadow AI, and how does it differ from shadow IT?

Shadow AI is the use of AI capabilities—public GenAI apps, embedded SaaS copilots, developer tools, or internal agents—outside consistent security monitoring and policy control. Unlike shadow IT, these tools may already be approved SaaS platforms, but AI features change data flow, permissions, and risk overnight.

How can organizations detect and monitor shadow AI?

Start by monitoring AI access across browsers, apps, and extensions, then map which SaaS copilots are enabled and what data they can reach. Add prompt-and-response visibility, DLP for uploads and generated outputs, and policy enforcement based on identity and context. Include traffic over WebSockets and Model Context Protocol (MCP) where agents operate.

Why do AI agents introduce different security risks?

AI agents keep context over long sessions and can take actions across multiple systems using delegated credentials. That persistence reduces the effectiveness of one-time, transaction-based inspection and increases the blast radius if compromised. Risks include sensitive data accumulation, unintended automation at scale, prompt injection, and agent manipulation.

What is an AI asset inventory, and how do AI-BOM and AI-SPM fit in?

An AI asset inventory tracks where models, agents, copilots, datasets, and AI-enabled apps exist—and how data moves between them. An AI-BOM documents sources, prompts, connectors, and runtime usage, while AI-SPM highlights misconfigurations and excessive permissions. Together, they give a practical starting point for AI governance.

How does Zscaler help secure the AI lifecycle?

Zscaler helps secure the AI lifecycle by combining visibility, access control, and inline inspection in a zero trust platform. It can discover public GenAI usage and SaaS copilots, enforce identity-based policies, inspect prompts and responses, and apply DLP to uploads and outputs. For internal AI, it supports continuous testing and runtime protection.