Blog Zscaler

Ricevi gli ultimi aggiornamenti dal blog di Zscaler nella tua casella di posta

Security Research

AI is Now Default Enterprise Accelerator: Takeaways from ThreatLabz 2026 AI Security Report

DEEPEN DESAI, HEATHER BATES, DEEPAK SHANKER
gennaio 27, 2026 - 7 Minuti di lettura

Artificial intelligence and machine learning (AI/ML) are no longer emerging capabilities inside enterprise environments. In 2025, they became a persistent operating layer for how work gets done. Developers ship faster, marketers generate more content, analysts automate research, and IT teams rely on AI to streamline troubleshooting and operations. The productivity gains are real, but so are the tradeoffs.

As AI adoption accelerates, sensitive data increasingly flows through a growing number of AI-enabled applications. These systems often operate with less visibility and fewer guardrails than traditional enterprise software. At the same time, threat actors are following the data. The same forces making AI more accessible, with faster automation and more realistic outputs, are also compressing the timeline for attacks and making them harder to detect.

The newly released Zscaler ThreatLabz 2026 AI Security Report examines how enterprises are navigating this shift. The report draws on analysis of nearly one trillion AI and ML transactions observed across the Zscaler Zero Trust Exchange™ throughout 2025. That activity translates to hundreds of thousands of AI transactions per organization per day, offering a grounded view into how AI is actually being used across global enterprises.

The findings reinforce what many security teams already feel. AI is now embedded across daily workflows, governance remains uneven, and the enterprise attack surface is expanding in real time.

This blog highlights a subset of the most significant findings and implications for security teams. The full report provides deeper analysis of risk patterns, and practical guidance for enterprise leaders tasked with safely operationalizing AI at scale.

5 key takeaways for security teams in 2025

  1. Enterprise AI adoption is accelerating fast and expanding the attack surface

    Enterprise AI/ML activity increased more than 90% year-over-year in 2025. ThreatLabz analysis now includes over 3,400 applications generating AI/ML traffic, nearly four times more than the previous year. This growth reflects how quickly AI capabilities are being embedded into day-to-day workflows.

    Even when individual applications generate modest volumes of traffic, the overall ecosystem effect matters. Risk scales with sprawl. As AI features appear across vendors and platforms, security teams inherit governance responsibility across thousands of applications rather than a small set of standalone tools. What was once a limited category has become a distributed system.

  2. The most used AI tools sit directly in the flow of work and the flow of data

    While the enterprise AI adoption landscape continues to evolve, with models such as Google Gemini and Anthropic gaining traction more recently, enterprise usage in 2025 remained concentrated around a small set of productivity-layer tools. When analyzing AI/ML activity across the full year, the most widely used applications were Grammarly, ChatGPT, and Microsoft Copilot, reflecting how deeply AI is now embedded in everyday work. Codeium also ranked among the top applications by transaction volume, underscoring the growing role of AI in development workflows where proprietary code is constantly in motion.

    ThreatLabz also examined data transfer volumes between enterprises and AI applications. In 2025, data transfer to AI tools rose 93% year-over-year, reaching tens of thousands of terabytes in total. The same applications driving productivity gains from writing/editing to translating/coding - are often the ones handling the highest volumes of sensitive enterprise data - reinforcing how closely AI adoption and data risk are now linked.

  3. Many enterprise organizations are still blocking AI outright

    Not every organization is ready to enable broad AI access across the business. While overall blocking declined year-over-year, suggesting progress toward more policy-driven AI governance, enterprises still blocked 39% of all AI/ML access attempts in 2025.

    This pattern reflects unresolved risk rather than resistance to AI itself. Blocking is often used when organizations lack confidence in visibility, internal guardrails, or how AI systems behave once deployed at scale. ThreatLabz red team testing supports this caution. Every enterprise AI system tested failed at least once under realistic adversarial pressure, with failures surfacing quickly.

    Blocking may reduce exposure, but it does not stop AI-driven work. Users often shift to unsanctioned alternatives, personal accounts, or embedded AI features inside approved SaaS platforms, frequently with less visibility and fewer controls. The long-term goal is safe enablement, allowing organizations to support AI use while managing risk consistently.

  4. AI adoption varies widely by industry, concentrating risk unevenly

    AI/ML usage increased across every industry in 2025, but adoption was not uniform. Each sector is moving at a different pace and with different levels of oversight. Finance & Insurance once again generated the largest share (23.3%) of enterprise AI/ML activity. Manufacturing remained highly active at 19.5%, driven by automation, analytics, and operational workflows.

    Industry context matters. In sectors where AI intersects with regulated data, operational technology or supply chain systems, the stakes for data protection and access control are higher. Blocking patterns also varied widely, highlighting that AI governance cannot be one-size-fits all. Controls must align with industry risk profiles, compliance requirements, and operational dependencies.

  5. Threat actors are already using AI across the attack chain   

    ThreatLabz case studies show that generative AI is actively being used by adversaries to accelerate existing tactics rather than replace them. Attackers are using AI to support initial access, social engineering, evasion, and malware development, making malicious activity harder to distinguish from legitimate use.

    Campaigns analyzed in the report include AI-assisted social engineering, fake personas, and signs of AI-assisted code generation. For defenders, this means AI security must account not only for how employees use AI, but also for how adversaries are using it to move faster and blend in once they gain access.

The "hidden" growth story: embedded AI is expanding risk where least expected

Not all enterprise AI shows up as standalone generative AI usage. Increasingly, AI operates through embedded features built into everyday SaaS applications. These capabilities often activate by default, run continuously in the background, and interact with enterprise data without being labeled or governed as AI.

Embedded AI may feel like a simple feature enhancement, but it often introduces new data pathways. As a result, AI can interact with sensitive enterprise content in places security teams are not actively monitoring or classifying as AI usage at all. This is a growing blind spot that requires ongoing monitoring and significant attention across security teams and the industry. 

How Zscaler secures AI adoption and accelerates AI initiatives

As AI becomes more embedded across the enterprise, from public GenAI tools to private models, pipelines, agents, and supporting infrastructure, security teams need controls that extend beyond traditional app security. They need visibility into how AI behaves across the system.

Zscaler helps organizations secure AI usage with protections that span the AI security lifecycle:

AI asset management
Gain full visibility into AI usage, exposure, and dependencies across applications, models, pipelines, and supporting infrastructure (ex: MCP pipelines), including AI bills of material (AI-BOM) to discover your full footprint and identify risks.

Secure access to AI
Enforce granular access controls for AI applications and users. Inspect prompts and responses inline to ensure safe and responsible use of AI apps by preventing sensitive data from being sent to external models or returned in unsafe outputs.

Secure AI applications and infrastructure
Protect the AI systems enterprises are building and deploying, not just the tools employees use. This includes hardening systems and enforcing runtime protections with vulnerability detection across models and pipelines, adversarial red team testing, and securing against common and evolving threats like prompt injection, data poisoning, and unsafe use of sensitive information.

Get the report—stay ahead of enterprise AI risk

The ThreatLabz 2026 AI Security Report provides a data-backed view into how AI is being used across enterprise environments, where security teams are drawing the line, and where risk is emerging. Beyond the findings highlighted here, the full report examines top AI applications and vendors, regional usage patterns, and reveals ThreatLabz expert predictions for AI security in 2026—along with additional insights and guidance throughout. 
   
Download the full report to explore the data, insights, and recommendations shaping the next phase of enterprise AI security.

form submtited
Grazie per aver letto

Questo post è stato utile?

Esclusione di responsabilità: questo articolo del blog è stato creato da Zscaler esclusivamente a scopo informativo ed è fornito "così com'è", senza alcuna garanzia circa l'accuratezza, la completezza o l'affidabilità dei contenuti. Zscaler declina ogni responsabilità per eventuali errori o omissioni, così come per le eventuali azioni intraprese sulla base delle informazioni fornite. Eventuali link a siti web o risorse di terze parti sono offerti unicamente per praticità, e Zscaler non è responsabile del relativo contenuto, né delle pratiche adottate. Tutti i contenuti sono soggetti a modifiche senza preavviso. Accedendo a questo blog, l'utente accetta le presenti condizioni e riconosce di essere l'unico responsabile della verifica e dell'uso delle informazioni secondo quanto appropriato per rispondere alle proprie esigenze.

Ricevi gli ultimi aggiornamenti dal blog di Zscaler nella tua casella di posta

Inviando il modulo, si accetta la nostra Informativa sulla privacy.