Blog de Zscaler
Reciba en su bandeja de entrada las últimas actualizaciones del blog de Zscaler
AI is Now Default Enterprise Accelerator: Takeaways from ThreatLabz 2026 AI Security Report
Artificial intelligence and machine learning (AI/ML) are no longer emerging capabilities inside enterprise environments. In 2025, they became a persistent operating layer for how work gets done. Developers ship faster, marketers generate more content, analysts automate research, and IT teams rely on AI to streamline troubleshooting and operations. The productivity gains are real, but so are the tradeoffs.
As AI adoption accelerates, sensitive data increasingly flows through a growing number of AI-enabled applications. These systems often operate with less visibility and fewer guardrails than traditional enterprise software. At the same time, threat actors are following the data. The same forces making AI more accessible, with faster automation and more realistic outputs, are also compressing the timeline for attacks and making them harder to detect.
The newly released Zscaler ThreatLabz 2026 AI Security Report examines how enterprises are navigating this shift. The report draws on analysis of nearly one trillion AI and ML transactions observed across the Zscaler Zero Trust Exchange™ throughout 2025. That activity translates to hundreds of thousands of AI transactions per organization per day, offering a grounded view into how AI is actually being used across global enterprises.
The findings reinforce what many security teams already feel. AI is now embedded across daily workflows, governance remains uneven, and the enterprise attack surface is expanding in real time.
This blog highlights a subset of the most significant findings and implications for security teams. The full report provides deeper analysis of risk patterns, and practical guidance for enterprise leaders tasked with safely operationalizing AI at scale.
5 key takeaways for security teams in 2025
Enterprise AI adoption is accelerating fast and expanding the attack surface
Enterprise AI/ML activity increased more than 90% year-over-year in 2025. ThreatLabz analysis now includes over 3,400 applications generating AI/ML traffic, nearly four times more than the previous year. This growth reflects how quickly AI capabilities are being embedded into day-to-day workflows.
Even when individual applications generate modest volumes of traffic, the overall ecosystem effect matters. Risk scales with sprawl. As AI features appear across vendors and platforms, security teams inherit governance responsibility across thousands of applications rather than a small set of standalone tools. What was once a limited category has become a distributed system.
The most used AI tools sit directly in the flow of work and the flow of data
While the enterprise AI adoption landscape continues to evolve, with models such as Google Gemini and Anthropic gaining traction more recently, enterprise usage in 2025 remained concentrated around a small set of productivity-layer tools. When analyzing AI/ML activity across the full year, the most widely used applications were Grammarly, ChatGPT, and Microsoft Copilot, reflecting how deeply AI is now embedded in everyday work. Codeium also ranked among the top applications by transaction volume, underscoring the growing role of AI in development workflows where proprietary code is constantly in motion.
ThreatLabz also examined data transfer volumes between enterprises and AI applications. In 2025, data transfer to AI tools rose 93% year-over-year, reaching tens of thousands of terabytes in total. The same applications driving productivity gains from writing/editing to translating/coding - are often the ones handling the highest volumes of sensitive enterprise data - reinforcing how closely AI adoption and data risk are now linked.
Many enterprise organizations are still blocking AI outright
Not every organization is ready to enable broad AI access across the business. While overall blocking declined year-over-year, suggesting progress toward more policy-driven AI governance, enterprises still blocked 39% of all AI/ML access attempts in 2025.
This pattern reflects unresolved risk rather than resistance to AI itself. Blocking is often used when organizations lack confidence in visibility, internal guardrails, or how AI systems behave once deployed at scale. ThreatLabz red team testing supports this caution. Every enterprise AI system tested failed at least once under realistic adversarial pressure, with failures surfacing quickly.
Blocking may reduce exposure, but it does not stop AI-driven work. Users often shift to unsanctioned alternatives, personal accounts, or embedded AI features inside approved SaaS platforms, frequently with less visibility and fewer controls. The long-term goal is safe enablement, allowing organizations to support AI use while managing risk consistently.
AI adoption varies widely by industry, concentrating risk unevenly
AI/ML usage increased across every industry in 2025, but adoption was not uniform. Each sector is moving at a different pace and with different levels of oversight. Finance & Insurance once again generated the largest share (23.3%) of enterprise AI/ML activity. Manufacturing remained highly active at 19.5%, driven by automation, analytics, and operational workflows.
Industry context matters. In sectors where AI intersects with regulated data, operational technology or supply chain systems, the stakes for data protection and access control are higher. Blocking patterns also varied widely, highlighting that AI governance cannot be one-size-fits all. Controls must align with industry risk profiles, compliance requirements, and operational dependencies.
Threat actors are already using AI across the attack chain
ThreatLabz case studies show that generative AI is actively being used by adversaries to accelerate existing tactics rather than replace them. Attackers are using AI to support initial access, social engineering, evasion, and malware development, making malicious activity harder to distinguish from legitimate use.
Campaigns analyzed in the report include AI-assisted social engineering, fake personas, and signs of AI-assisted code generation. For defenders, this means AI security must account not only for how employees use AI, but also for how adversaries are using it to move faster and blend in once they gain access.
How Zscaler secures AI adoption and accelerates AI initiatives
As AI becomes more embedded across the enterprise, from public GenAI tools to private models, pipelines, agents, and supporting infrastructure, security teams need controls that extend beyond traditional app security. They need visibility into how AI behaves across the system.
Zscaler helps organizations secure AI usage with protections that span the AI security lifecycle:
AI asset management
Gain full visibility into AI usage, exposure, and dependencies across applications, models, pipelines, and supporting infrastructure (ex: MCP pipelines), including AI bills of material (AI-BOM) to discover your full footprint and identify risks.
Secure access to AI
Enforce granular access controls for AI applications and users. Inspect prompts and responses inline to ensure safe and responsible use of AI apps by preventing sensitive data from being sent to external models or returned in unsafe outputs.
Secure AI applications and infrastructure
Protect the AI systems enterprises are building and deploying, not just the tools employees use. This includes hardening systems and enforcing runtime protections with vulnerability detection across models and pipelines, adversarial red team testing, and securing against common and evolving threats like prompt injection, data poisoning, and unsafe use of sensitive information.
Get the report—stay ahead of enterprise AI risk
The ThreatLabz 2026 AI Security Report provides a data-backed view into how AI is being used across enterprise environments, where security teams are drawing the line, and where risk is emerging. Beyond the findings highlighted here, the full report examines top AI applications and vendors, regional usage patterns, and reveals ThreatLabz expert predictions for AI security in 2026—along with additional insights and guidance throughout.
Download the full report to explore the data, insights, and recommendations shaping the next phase of enterprise AI security.
¿Este post ha sido útil?
Descargo de responsabilidad: Esta entrada de blog ha sido creada por Zscaler con fines únicamente informativos y se proporciona "tal cual" sin ninguna garantía de exactitud, integridad o fiabilidad. Zscaler no asume ninguna responsabilidad por cualquier error u omisión o por cualquier acción tomada en base a la información proporcionada. Cualquier sitio web de terceros o recursos vinculados en esta entrada del blog se proporcionan solo por conveniencia, y Zscaler no es responsable de su contenido o prácticas. Todo el contenido está sujeto a cambios sin previo aviso. Al acceder a este blog, usted acepta estos términos y reconoce su exclusiva responsabilidad de verificar y utilizar la información según convenga a sus necesidades.
Reciba en su bandeja de entrada las últimas actualizaciones del blog de Zscaler
Al enviar el formulario, acepta nuestra política de privacidad.



