What AI Risks are Hiding in Your Apps?
To learn more about how these cutting-edge AI innovations can secure and accelerate AI adoption, register now for our exclusive launch event, Accelerate Your AI Initiatives with Zero Trust.
AI is transforming business operations, offering unprecedented productivity, faster decision-making, and new competitive edges. According to Gartner, by 2028 more than 95% of enterprises will be using generative AI APIs or models, and/or will have deployed GenAI-enabled applications in production environments. At Zscaler, we have witnessed an exponential increase in AI transactions—36x growth year-over-year—highlighting the explosive pace of enterprise AI adoption. The surge is fueled by ChatGPT, Microsoft Copilot, Grammarly, and other generative AI tools, which account for the majority of AI-related traffic from known applications.
However, AI adoption and its integration into daily workflows introduce novel security, data privacy, and compliance risks. For the past two years, security leaders have been grappling with shadow AI—the unsanctioned use of public AI tools like ChatGPT by employees. The initial response was often reactive—block the domains and hope for the best—but the landscape has shifted dramatically. AI is no longer just a destination tool or website; it's an integrated feature, embedded directly into the sanctioned, everyday business applications we rely on.
According to the 2025 Gartner Cybersecurity Innovations in AI Risk Management and Use Survey, 71% of cybersecurity leaders suspect or have evidence of employees using embedded AI features without going through necessary cybersecurity risk management processes.
This evolution from standalone shadow AI to embedded, pervasive AI creates far more complex and layered security challenges. Blocking is no longer a viable strategy when AI is part of your core collaboration suite. To safely harness the productivity benefits of AI, enterprises need a new security playbook—one that goes beyond simply blocking shadow AI and embraces a zero trust + AI security approach focused on visibility, context, and intent. This post explores this new frontier of AI security challenges and risks, and outlines a modern framework for addressing them.

Fig: Zscaler ThreatLabz Report: Top AI application usage
Emerging AI security challenges
As organizations integrate AI deeper into their operations, our findings indicate they face a growing twofold challenge:
- Securing the inevitable and rapid adoption of AI within their environments; and
- Recognizing and mitigating the growing vulnerabilities that come with it.
Below, we outline the five biggest AI security challenges that will shape how you protect the AI ecosystem—and how to address each one.
1. Shadow AI: A silent insider threat
Shadow AI can enable innovation, but it also exposes organizations to significant risks, particularly data loss and breach potential. BCG’s latest “AI at Work” study reveals that 54% of employees openly admit they would use AI tools even without company authorization. The consequences of staying blind? According to a recent IBM report, 20% of organizations experienced breaches linked to unauthorized AI use, adding an average of $670,000 to breach costs. Beyond security concerns, shadow AI incidents had serious downstream effects:
- 44% suffered data compromise
- 41% reported increased security costs
- 39% experienced operational disruption
- 23% faced reputational damage
These impacts demonstrate that shadow AI isn't just a security concern—it's a business risk that affects operations, finances, and reputation.
2. Embedded AI simplifies workflows but complicates security
The new front line for AI security isn't a standalone website. It's the "AI" button inside the tools employees use every day. Countless SaaS applications—from CRMs to design tools—are embedding generative AI features.
Enterprises significantly underestimate the security risks posed by embedded AI, which accounts for over 40% of their AI usage and often operates opaquely. Current AI TRiSM (artificial intelligence trust, risk, and security management) solutions and vendor-provided security assurances are largely ineffective for embedded AI, which frequently goes unnoticed as a form of shadow AI. This leaves organizations vulnerable, relying on outdated audits and inadequate clickwrap agreements that fail to address the complex orchestration and interfaces of these embedded systems.

Fig: Gartner IT Symposium Keynote Survey, 2024
Here are a few classic examples of embedded AI security challenges:
- Teams use Jira and Confluence to manage sensitive projects, track critical software bugs, and document internal processes. With embedded intelligence models, a user can now prompt AI against those sensitive projects and data. Security teams often lose visibility into the model being used—they’re left to wonder where it is hosted, whether sensitive data is being used for training, whether it’s exposed, and more.
- Microsoft Copilot represents the ultimate integration of AI into the enterprise workflow. It has access to a user’s entire M365 Graph—their emails, chats, calendars, and documents. A single prompt could exfiltrate highly confidential data. Because the interaction happens within the trusted Microsoft ecosystem, traditional DLP and CASB solutions are often blind to the content and context of the AI query itself.
Each AI integration represents a new, unvetted channel for data to leave the environment. An organization might have 20 different sanctioned SaaS apps, each with its own embedded AI that communicates with a different large language model (LLM) under different data privacy terms. Manually tracking and governing this hidden mesh of AI interactions is a challenging task. Security teams often have no visibility into the data being exchanged in these interactions, creating a massive blind spot.
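One practical starting point is turning that hidden mesh into a machine-readable inventory that can be audited and reviewed. The sketch below is a minimal illustration, not a Zscaler feature: the app names, fields, and vetting rules are hypothetical assumptions about what an organization might track for each embedded AI integration.

```python
from dataclasses import dataclass

@dataclass
class EmbeddedAIIntegration:
    """One embedded AI feature inside a sanctioned SaaS app (fields are illustrative)."""
    saas_app: str                                # e.g., a CRM or collaboration suite
    llm_provider: str | None                     # which LLM the feature calls; None = unknown
    trains_on_customer_data: bool | None = None  # per vendor terms; None = unverified
    data_residency: str | None = None            # where prompts/outputs are processed

# Hypothetical inventory assembled from vendor questionnaires and traffic analysis
inventory = [
    EmbeddedAIIntegration("collab-suite", "vendor-hosted-llm", False, "US"),
    EmbeddedAIIntegration("design-tool", None),              # AI feature found, terms unknown
    EmbeddedAIIntegration("crm", "third-party-llm", None, None),
]

def unvetted(integrations: list[EmbeddedAIIntegration]) -> list[EmbeddedAIIntegration]:
    """Flag integrations whose provider, training terms, or residency are unverified."""
    return [
        i for i in integrations
        if i.llm_provider is None
        or i.trains_on_customer_data is not False
        or i.data_residency is None
    ]

for i in unvetted(inventory):
    print(f"Review needed: {i.saas_app} (provider={i.llm_provider})")
```

Even a simple registry like this makes the governance problem concrete: every row with an unknown provider or unverified training terms is an unvetted data channel.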
3. AI prompts and outputs: A data loss hotspot
AI prompts may contain sensitive data such as source code, unreleased financial data, customer personally identifiable information (PII), healthcare records, and strategic plans. According to the Zscaler ThreatLabz AI Security Report, 59.9% of AI transactions were blocked, signaling concerns over data security and the uncontrolled use of AI applications.
The risk isn't limited to the input. The output from AI models carries its own set of dangers, such as:
- Hallucinations: AI models can confidently invent facts, statistics, or code snippets. An employee who unknowingly incorporates this fabricated information into a report, financial model, or software build introduces errors and risk into the business.
- IP and copyright issues: Models trained on public data may generate outputs that include copyrighted material or even proprietary code from other organizations, creating serious legal and IP risks.
- Sensitive data exposure: An AI model may regurgitate sensitive data it was trained on or expose data from another user's session, leading to an unpredictable data leak.
Security teams need to regularly sanitize and validate AI inputs and outputs and implement comprehensive prompt monitoring strategies.
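To make prompt sanitization concrete, here is a minimal sketch of a redaction gate that could run before a prompt leaves the environment. The patterns are deliberately simplistic assumptions for illustration; production DLP engines rely on validated detectors, exact data matching, and ML classifiers, and the same gate can be run against model responses on the way back in.

```python
import re

# Simplistic illustrative patterns; real DLP uses validated, much richer detectors
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def sanitize_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact PII matches and return (clean_prompt, names of triggered detectors)."""
    hits = []
    for name, pattern in PII_PATTERNS.items():
        prompt, count = pattern.subn(f"[REDACTED-{name.upper()}]", prompt)
        if count:
            hits.append(name)
    return prompt, hits

clean, hits = sanitize_prompt("Summarize: customer jane@example.com, SSN 123-45-6789")
if hits:
    print(f"DLP detectors triggered: {hits}")  # feed into prompt-monitoring logs
print(clean)  # Summarize: customer [REDACTED-EMAIL], SSN [REDACTED-SSN]
```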
4. Evolving data privacy and compliance risks
AI’s reliance on large datasets introduces compliance risks for organizations bound by regulations such as GDPR, CCPA, and HIPAA. Improper handling of sensitive data within AI models can lead to regulatory violations, fines, and reputational damage. One of the biggest challenges is AI’s opacity—in many cases, organizations lack full visibility into how AI systems process, store, and generate insights from data. This makes it difficult to prove compliance, implement effective governance, or ensure that AI applications don’t inadvertently expose PII.
As regulatory scrutiny on AI increases, businesses must prioritize AI-specific security policies and governance frameworks to mitigate legal and compliance risks.
5. The strategic AI governance gap
Effective AI governance remains out of reach for most organizations because they fail to:
- Gain comprehensive visibility: They cannot see which AI tools are being used, by whom, or what sensitive data is being exposed in user prompts and sessions.
- Enforce least-privileged access: They cannot identify and revoke excessive permissions across the organization that aren’t necessary for AI functionality.
- Understand user intent: They cannot analyze the purpose behind an AI prompt, forcing them to rely on outdated keyword blocking instead of intelligent, intent-based policies that prevent high-risk activities (the two approaches are contrasted in the sketch after this list).
- Prevent drift: They cannot consistently detect and remediate risky misconfigurations or compliance violations before they lead to breaches or fines.
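To illustrate the gap between keyword blocking and intent-based policy, consider the sketch below. The intent classifier is a stand-in assumption (real systems use ML models over prompt content, user context, and data sensitivity); the point is the control flow, where the verdict depends on inferred intent and user risk rather than string matching.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    ISOLATE = "isolate"  # e.g., route the session through browser isolation
    BLOCK = "block"

BLOCKED_KEYWORDS = {"password", "api key"}  # brittle: over-blocks and misses paraphrases

def keyword_policy(prompt: str) -> Verdict:
    """Legacy approach: block on literal keywords, regardless of purpose."""
    return Verdict.BLOCK if any(k in prompt.lower() for k in BLOCKED_KEYWORDS) else Verdict.ALLOW

def classify_intent(prompt: str) -> str:
    """Stand-in for an ML intent classifier (hypothetical labels)."""
    text = prompt.lower()
    if "summarize" in text or "rewrite" in text:
        return "productivity"
    if "export all customer" in text:
        return "bulk-exfiltration"
    return "unknown"

def intent_policy(prompt: str, user_risk: str) -> Verdict:
    """Intent-based approach: decide from inferred purpose plus user risk profile."""
    intent = classify_intent(prompt)
    if intent == "bulk-exfiltration":
        return Verdict.BLOCK
    if intent == "unknown" and user_risk == "high":
        return Verdict.ISOLATE
    return Verdict.ALLOW

print(keyword_policy("Where is our password rotation policy documented?"))  # BLOCK (false positive)
print(intent_policy("Summarize this meeting transcript", user_risk="low"))  # ALLOW
```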
Accelerate Your AI Initiatives with Zero Trust
AI is moving faster than traditional security and governance policies—and that’s exactly where risk grows. Organizations can follow the basic steps below to ensure the safe use of AI:
- Identify shadow AI: Gain end-to-end visibility into all of the AI used within your organization, including AI embedded in enterprise applications. Understand how AI is being used, what data is being leveraged, and the related risks (a discovery sketch follows this list).
- Enforce least-privileged access: Identify, restrict, or revoke access to AI systems based on appropriate policies and user risk profiles. Isolate user sessions to risky apps.
- Control data: Use labeling and controls to ensure inappropriate data is not used to train AI applications, such as Microsoft Copilot, and enforce advanced DLP policies to safeguard the use of sensitive data in AI prompts and responses.
- Ensure responsible use of AI: Enforce AI governance best practices and guardrails to secure the design, development, deployment, and use of AI. Prevent toxic or high-risk prompts with intent-based controls.
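As a first pass at the discovery step, the sketch below mines web proxy logs for traffic to known AI endpoints. The log format and domain watchlist are assumptions for illustration; in practice, a secure web gateway with a continuously maintained AI application catalog performs this classification inline and at scale.

```python
import csv
from collections import Counter

# Illustrative watchlist; real gateways maintain large, curated AI app catalogs
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "copilot.microsoft.com", "grammarly.com"}

def discover_shadow_ai(log_path: str) -> Counter:
    """Count per-user requests to AI endpoints in a proxy log with 'user' and 'host' columns."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

# Surface the heaviest AI users and destinations for policy follow-up
for (user, host), count in discover_shadow_ai("proxy.csv").most_common(10):
    print(f"{user} -> {host}: {count} requests")
```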
These steps apply the principles of a zero trust architecture to your use of AI applications, enabling organizations to stay resilient even as AI evolves at lightspeed. Learn more about Zscaler’s latest innovations to secure and accelerate AI adoption in our exclusive launch event: Accelerate Your AI Initiatives with Zero Trust.
This blog post has been created by Zscaler for informational purposes only and is provided "as is" without any guarantees of accuracy, completeness or reliability. Zscaler assumes no responsibility for any errors or omissions or for any actions taken based on the information provided. Any third-party websites or resources linked in this blog post are provided for convenience only, and Zscaler is not responsible for their content or practices. All content is subject to change without notice. By accessing this blog, you agree to these terms and acknowledge your sole responsibility to verify and use the information as appropriate for your needs.