Security Research

Latest Public Sector AI Adoption Trends: What Government, Healthcare, and Education Security Teams Need to Know

Chad Tetreault
February 05, 2026 - 6 min read

The public sector isn’t taking a “trial-and-error” approach to AI adoption. Government, healthcare, and education systems have to work—often under tight budgets, legacy constraints, and high uptime expectations—and data must be protected, especially when it includes citizen records, patient information, and student data. 

The ThreatLabz 2026 AI Security Report examined 989.3 billion total AI/ML transactions across the Zscaler Zero Trust Exchange throughout 2025, revealing a public sector AI adoption story defined by accelerating, albeit uneven, growth. Some sectors are scaling quickly; others, more gradually and quietly.

ThreatLabz also examined data transfer volumes between enterprises and AI applications. In 2025, data transfer to AI tools rose 93% year-over-year, reaching tens of thousands of terabytes in total. The same applications driving productivity gains, from writing and editing to translation and coding, are often the ones handling the highest volumes of sensitive enterprise data, reinforcing how closely AI adoption and data risk are now linked. Codeium, for example, ranked among the top applications by transaction volume, underscoring the growing role of AI in software development workflows where proprietary code is constantly in motion.

This blog post outlines key findings specific to government, healthcare, and education, along with guidance on where public sector security teams should prioritize efforts for securing AI usage in 2026.

AI adoption is picking up across every industry, but public sector patterns stand out

Across the board, AI adoption increased in 2025. Every industry tracked in the Zscaler cloud saw year-over-year growth in AI/ML activity, reinforcing that AI is no longer an emerging capability, but a persistent operating layer across daily workflows. What stands out in the public sector is the combination of rising AI usage volume and wide variation in how much AI/ML traffic is being blocked across sectors that all handle highly sensitive data.

Healthcare generated 71 billion AI/ML transactions in 2025, making it the largest public sector contributor by volume. Government followed with 38 billion transactions, reflecting steady year-over-year growth as agencies apply AI to operational and administrative workflows. Education, despite being the smallest by share, reached 16 billion transactions and grew 184% year-over-year, one of the fastest growth rates observed.

Blocked AI/ML traffic also varied sharply. Healthcare blocked 8.5% of AI/ML activity, government blocked 4%, and education blocked just 0.6%. AI adoption is rising across the public sector, but the level of blocked activity and the resulting visibility into how AI is being used look very different across government, healthcare, and education.
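
As a quick arithmetic check, the per-sector shares cited in the sections below (7.2%, 3.8%, and 1.6%) follow directly from these volumes and the 989.3 billion total. A minimal Python sketch using only the report's headline figures:

    # Back-of-the-envelope check of the sector shares cited in this post,
    # using the report's headline figures (transactions in billions).
    TOTAL_B = 989.3  # total AI/ML transactions observed in 2025

    sectors = {  # sector: (transactions in billions, share of activity blocked)
        "healthcare": (71, 0.085),
        "government": (38, 0.040),
        "education":  (16, 0.006),
    }

    for name, (volume_b, blocked) in sectors.items():
        print(f"{name:<10}  {volume_b:>3.0f}B transactions  "
              f"{volume_b / TOTAL_B:4.1%} of total  {blocked:4.1%} blocked")
    # healthcare   71B transactions  7.2% of total  8.5% blocked
    # government   38B transactions  3.8% of total  4.0% blocked
    # education    16B transactions  1.6% of total  0.6% blocked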

Healthcare: high AI usage, higher blocking rates

Healthcare drove 7.2% of total AI/ML activity observed in the Zscaler cloud in 2025, as AI is increasingly integrated into both patient-facing and back-office workflows, including patient access and administrative processes.

Healthcare also recorded the highest percentage of blocked AI/ML transactions among public sector industries, with 8.5% blocked. In a sector where AI routinely intersects with regulated data, this level of blocked traffic reflects how quickly AI use cases can become data protection challenges.

Government: steady expansion with measured constraints

Government agencies and entities accounted for 3.8% of total AI/ML activity in the Zscaler cloud last year. Government use cases for AI are varied, from drafting, summarization, and research to internal operations, especially where departments are under pressure to improve efficiency.

The government sector also blocked 4% of AI/ML activity, pointing to a more cautious posture than education but fewer outright restrictions than healthcare.

A key challenge in government AI adoption is consistency: AI usage spans agencies, departments, and environments with different governance maturity. A 2025 Government Accountability Office review found that generative AI use cases increased significantly across agencies over the past several years, but officials cited ongoing challenges in complying with federal policies and keeping up with evolving guidance.

Education: fastest growth, minimal blocked activity

The education sector's 16 billion AI/ML transactions in 2025 represented just 1.6% of total activity, but 184% year-over-year growth made it one of the fastest-growing sectors in ThreatLabz's analysis.

At the same time, education blocked only 0.6% of AI/ML activity. That low level of blocked traffic suggests AI is being used broadly with limited friction, even as schools and universities work through privacy, integrity, and governance concerns. With AI adoption rising this quickly, visibility and guardrails will need to mature fast to reduce exposure.

GenAI is accelerating adversary operations

Generative AI is not only reshaping how public sector organizations operate; it is also changing how threat actors execute their operations. Several real-world examples of threat actors using generative AI in active campaigns surfaced this past year. One of the most notable involved cyber-espionage operations in which a state-sponsored group used agentic AI to automate an estimated 80-90% of the intrusion chain. Human operators intervened primarily for higher-risk decisions, demonstrating how autonomous agents can execute traditional attack playbooks at machine speed.

Beyond this case, ThreatLabz observed multiple campaigns where adversaries weaponized GenAI in familiar tactics across the attack chain—from social engineering and AI-generated fake personas to malware development and AI-assisted code generation. The full report provides detailed case studies and analysis of these campaigns. For public sector defenders (and enterprises more broadly), this reinforces a critical reality: AI security must account not only for how employees use AI, but also for how adversaries leverage it to accelerate operations and blend into legitimate workflows.

What public sector security teams should do next

To support AI use without increasing exposure, public sector teams should focus on a few foundational moves:

  • Build visibility into AI usage. Identify the most-used AI tools, where usage is concentrated, and how data flows through them—establishing a baseline that supports AI governance, compliance, and audit readiness across agencies.
  • Apply risk-based access controls. Limit higher-risk tools and features by role, mission, and data sensitivity. In parallel, use deception capabilities to create targets that help expose adversary reconnaissance and misuse of AI-enabled workflows.
  • Protect sensitive data. Prevent regulated data from being shared through prompts, uploads, and AI integrations, with consistent policy enforcement across cloud, SaaS, and user-driven AI interactions (a minimal prompt-scanning sketch follows this list).
  • Monitor for shadow AI. Track unsanctioned tools and personal account usage that reduces visibility and increases the risk of data exposure outside approved environments (a minimal log-analysis sketch also follows this list).
  • Account for embedded AI. Extend governance beyond standalone GenAI apps into SaaS platforms that now include AI by default. AI lifecycle controls and AI red teaming help validate security assumptions and provide protection not just at the application layer, but down to the models and underlying code.   
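
The data protection item lends itself to a concrete illustration. Below is a minimal sketch of a pre-submission prompt check; the patterns, function name, and sample prompt are assumptions invented for this post, not a complete DLP policy, and real enforcement belongs in an inline platform rather than client-side code.

    # Illustrative pre-submission check for regulated data in AI prompts.
    # The patterns are simplified examples (US SSN, email address), not a
    # complete DLP policy; production enforcement should happen inline.
    import re

    PATTERNS = {
        "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def redact(prompt: str) -> tuple[str, list[str]]:
        """Replace matches with placeholders; return cleaned prompt and hit labels."""
        hits = []
        for label, pattern in PATTERNS.items():
            if pattern.search(prompt):
                hits.append(label)
                prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
        return prompt, hits

    cleaned, findings = redact("Summarize case 123-45-6789 for jane@agency.example")
    print(findings)  # ['ssn', 'email']
    print(cleaned)   # Summarize case [SSN REDACTED] for [EMAIL REDACTED]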

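For the visibility and shadow AI items, a similar sketch can tally traffic to known AI hosts from a web proxy log export. The CSV schema ("host" and "bytes_out" columns), file name, and both host lists are illustrative assumptions, not Zscaler product output or an official AI app catalog.

    # Minimal sketch: surface unsanctioned AI usage from a proxy log export.
    # Assumes a CSV with "host" and "bytes_out" columns; both host lists
    # below are illustrative placeholders, not an official AI app catalog.
    import csv
    from collections import defaultdict

    SANCTIONED_AI_HOSTS = {"approved-ai.example.gov"}  # hypothetical allowlist
    KNOWN_AI_HOSTS = SANCTIONED_AI_HOSTS | {
        "chat.openai.com", "gemini.google.com", "claude.ai",  # examples only
    }

    def shadow_ai_bytes(log_path: str) -> dict[str, int]:
        """Sum bytes uploaded to AI hosts that are known but not sanctioned."""
        totals: dict[str, int] = defaultdict(int)
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                host = row["host"].lower()
                if host in KNOWN_AI_HOSTS and host not in SANCTIONED_AI_HOSTS:
                    totals[host] += int(row["bytes_out"])
        return dict(totals)

    for host, sent in sorted(shadow_ai_bytes("proxy_export.csv").items(),
                             key=lambda kv: -kv[1]):
        print(f"{host}: {sent / 1e6:.1f} MB uploaded outside sanctioned tools")
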
Download the report to stay ahead of public sector AI risk

The ThreatLabz 2026 AI Security Report provides a data-backed view into AI adoption across industries. The full report explores broader usage patterns, blocked activity trends, regional insights, risks posed by embedded AI, and case studies demonstrating how GenAI is bolstering attacker tactics, along with guidance for enabling AI securely and at scale.

Download your copy of the full report to get the latest AI security research and recommendations.

