States, Municipalities, and AI: How to Secure GenAI in Government
Generative AI (GenAI) promises new capability and efficiency while raising concerns about uncontrolled use, and state and local governments across the U.S. are weighing adoption through a lens of both opportunity and risk. A security-first approach, paired with enforceable technical controls, helps agencies adopt GenAI with confidence while reducing operational, legal, and data-loss risk in a fast-moving environment. In practice, three fundamentals consistently separate secure deployments from risky experimentation: visibility, guardrails, and continuous validation (including red teaming).
For security leaders, the challenge isn’t whether GenAI will be used—it’s whether it will be used with visibility, enforceable controls, and audit-ready accountability. Before selecting tools or drafting policy, it helps to anchor on the failure modes agencies are already seeing as GenAI use expands.
Key Issues Governments Are Facing
State security teams are flagging several common issues, many of which align with themes reported by Zscaler's ThreatLabz 2026 AI Security Report. Taken together, they highlight where unmanaged GenAI adoption most often collides with existing privacy, security, and oversight requirements.
- Data privacy & protection: Collection, usage, retention, and exposure of personal/sensitive data
- Government use of AI: Limitations, human oversight, review, and accountability
- Transparency: Notifying when AI is used, identifying who is responsible, and providing oversight
- Unauthorized “digital replicas”: Creation or use of voice, image, or likeness without authorization
These issues tend to surface first as “shadow AI” usage—teams adopting public GenAI tools faster than security can standardize access, logging, and data protections. Without guardrails, GenAI becomes a new pathway for sensitive-data exposure, policy violations, and operational risk at scale.
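One practical way to get an initial read on shadow AI is to mine existing web proxy or DNS logs for traffic to known GenAI endpoints. The sketch below is a minimal, hypothetical example: the CSV log format, the `user`/`host` column names, and the domain list are all assumptions for illustration, and a real deployment would use your proxy's actual export schema and a maintained catalog of AI applications.

```python
import csv
from collections import Counter

# Hypothetical, non-exhaustive list of public GenAI endpoints.
# A real program would use a maintained catalog of AI apps/domains.
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count GenAI requests per (user, host) from a CSV proxy log.

    Assumes columns named 'user' and 'host'; adjust to match
    your proxy's actual export schema.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("host", "").lower() in GENAI_DOMAINS:
                hits[(row["user"], row["host"])] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in find_shadow_ai("proxy_log.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

Even a rough count like this helps prioritize which teams and tools to bring under policy first, before layering on classification and enforcement.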
Why States Need Strong GenAI Controls
For state and local governments, addressing GenAI security helps reduce risk across cost, mission, and trust. It also creates the foundation to enable approved GenAI use cases without forcing teams into unsafe workarounds.
- Financial risk
- Citizen data leakage, misuse, or inadvertent exposure
- Loss of public trust
- Legal liability
- Reputational damage
The practical question is how to translate these risks into controls that can be deployed and measured. Most state security teams prioritize capabilities that (1) establish AI usage and data visibility, (2) reduce the likelihood of data loss or unsafe outputs, and (3) support forensics, oversight, and reporting.
How Zscaler’s Capabilities Map to State Needs
Below are the capabilities Zscaler offers through its GenAI protection and data protection suite. The goal is to operationalize GenAI security using familiar control categories (discovery, data protection, access control, and audit) so agencies can implement quickly and measure impact.
The mapping below is organized the way many security programs implement GenAI controls: start with discovery and classification, then add guardrails and least privilege, and finally operationalize with monitoring, remediation, and compliance reporting.
| Capability | What it does / key features | How it helps |
|---|---|---|
| AI/Data Visibility, Discovery & Classification (Zscaler AI-SPM, DSPM, etc.) | Automatically discover and classify datasets, models, vectors, and AI services (managed and unmanaged) to understand what data is in use and where exposure might exist. | Shows where high-risk data is used; supports risk assessments; improves transparency and reporting. |
| Prompt/Input/Output Monitoring & Guardrails | Inspect, classify, and block inputs/prompts that violate policy; control outputs; help prevent PII exposure or data exfiltration through GenAI workflows. | Helps prevent misuse (e.g., disallowed content); supports guardrails when GenAI is used for communications or decisions that require controls. |
| Browser/Session Isolation & Data Leakage Prevention (DLP) | Isolate GenAI applications so risky actions (cut/paste, upload/download) can be controlled; enforce DLP across AI interactions. | Helps protect sensitive or regulated data (e.g., identity, health, financial) from leaking through GenAI channels, safeguarding citizen privacy. |
| Least Privilege / Entitlement Control | Minimize which users/roles can access which AI services or data; revoke overprivileged rights; restrict high-risk app usage. | Reduces attack surface and limits misuse; supports protection of regulated data and critical systems. |
| Audit Trails, Logging & Reporting | Maintain logs of AI usage: who submitted which prompt, when, and what response was returned; capture system/model interaction metadata. | Supports transparency, accountability, oversight, and audit-readiness reporting. |
| Policy Enforcement / Guided Remediation | Identify misconfigurations and data exposure; provide remediation guidance and real-time alerts. | Enables continuous monitoring and correction; supports risk assessments, internal controls, and prevention of configuration drift. |
| Framework Alignment | Map controls to frameworks (e.g., NIST AI RMF, HIPAA where applicable) via compliance modules and reporting. | Helps demonstrate alignment to best practices and applicable frameworks. |
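To make the prompt-monitoring row above concrete, the sketch below shows the general shape of a pre-submission prompt filter: scan outbound text for a sensitive pattern (here, a deliberately simplistic SSN regex) and block or redact before the prompt ever leaves the agency. This is an illustrative pattern only, not Zscaler's implementation; production DLP engines combine many detectors (dictionaries, checksums, ML classifiers), not a single regex.

```python
import re
from dataclasses import dataclass

# Simplistic illustration: one regex for U.S. SSNs. Real DLP
# engines use many detectors, validation, and context scoring.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

@dataclass
class PolicyDecision:
    allowed: bool
    reason: str
    redacted_prompt: str

def inspect_prompt(prompt: str, block_on_match: bool = True) -> PolicyDecision:
    """Inspect an outbound GenAI prompt for sensitive data."""
    if not SSN_PATTERN.search(prompt):
        return PolicyDecision(True, "no sensitive data detected", prompt)
    if block_on_match:
        return PolicyDecision(False, "SSN detected; prompt blocked", "")
    # Alternative posture: allow the prompt but redact the match.
    return PolicyDecision(True, "SSN detected; redacted",
                          SSN_PATTERN.sub("[REDACTED-SSN]", prompt))

decision = inspect_prompt("Check benefits status for SSN 123-45-6789")
print(decision.allowed, "-", decision.reason)
```

Whether to block outright or redact and allow is a policy choice; many agencies start with redaction for lower-sensitivity tiers and hard blocks for regulated data.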
Practical Steps State Entities Should Consider
Here are suggestions for how state agencies/entities can build (or upgrade) their GenAI security program to prepare for rapid advancement. These steps are intended to fit into existing security operations—policy, identity, data protection, and monitoring—rather than creating a separate “AI-only” track.
- Inventory AI Use
- Identify all GenAI tools in use (chatbots, assistants, third-party tools, open tools)
- Identify what data is being used or referenced, where it’s stored, and how it’s accessed
- Data Classification & Sensitivity Mapping
- Define categories of data sensitivity (PII, health, financial, etc.)
- Map which AI services have access to sensitive data
- Define Clear Policies & Guardrails
- Policies around who can use GenAI and for what purposes
- Prohibitions on uses outside the agreed-upon scope (including data handling and disclosure requirements)
- Implement Technical Controls
- Prompt/input filters, DLP blocking, browser/session isolation
- Entitlement/restriction controls
- Logging/auditing (see the audit-record sketch after this list)
- Continuous Monitoring & Risk Assessment
- Monitor for misuse and privacy violations
- Periodically assess risk and compliance
- Training & Awareness
- Ensure staff understand which GenAI tools are allowed and what data they can/can’t use
- Reinforce awareness of legal and regulatory obligations
- Governance & Oversight
- Assign a responsible party/team (e.g., a state CIO/CISO or AI Oversight Board)
- Embed human review/oversight for higher-risk use cases (e.g., decisions affecting citizens)
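For the logging/auditing control above, the sketch below shows one way to structure an audit record so that "who submitted which prompt, when, and what response was returned" is reconstructable later. The field names and the hashing choice are assumptions for illustration; hashing prompt text rather than storing it verbatim is one option when the prompts themselves may contain sensitive data.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user: str, tool: str, prompt: str, response: str) -> str:
    """Build a JSON audit record for one GenAI interaction.

    Stores SHA-256 digests instead of raw text so the log itself
    does not become a new sensitive-data store; keep raw text
    elsewhere under stricter access controls if policy requires it.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }
    return json.dumps(record)

print(audit_record("analyst01", "approved-chatbot",
                   "Summarize this permit application...",
                   "The application requests..."))
```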
Capabilities only reduce risk when they’re implemented as part of a repeatable program. The steps above provide a security-team-friendly sequence that can plug into existing IRM/GRC, data protection, and zero trust initiatives.
How Zscaler Supports States
Zscaler’s GenAI protection and data security portfolio offers a toolkit that aligns well with the current environment. In practice, many agencies start by using these capabilities to define “approved GenAI usage” (tools, users, data types), then expand into continuous monitoring and audit support as adoption scales.
- Pre-Deployment Risk Assessment: Before deploying a GenAI model or enabling a GenAI tool for public-facing use, use Zscaler's AI-SPM (AI Security Posture Management) to discover what data and models are involved, classify their risk, test for policy violations, and understand exposure.
- Implementing Transparency/Disclosure Controls: Use logging and audit trail features to capture prompts, response metadata, and user activity—supporting oversight, disclosure obligations, and responses to legal requests.
- Restricting/Blocking Sensitive Data Exposure: Use DLP integration, prompt filtering, and browser/session isolation to block high-risk actions (e.g., uploading sensitive documents, copying/pasting PII) when interacting with GenAI tools.
- Enforcing Use Policies (Entitlements, Privileges): Allow only approved roles to access external GenAI apps; enforce least privilege; quarantine or block risky apps/services until controls are validated (a minimal entitlement check is sketched after this list).
- Monitoring & Remediation: Use guided remediation to address misconfigurations (e.g., over-entitled roles, open access to datasets, insecure storage). Trigger alerts when policy thresholds are crossed.
- Compliance Reporting & Audit Support: Generate reports on AI usage, data access, and incidents to support oversight and respond to inquiries, litigation, or citizen complaints.
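As a least-privilege illustration for the entitlements bullet above, the sketch below evaluates whether a user's role permits a given GenAI app at a given data-sensitivity tier. The role-to-app mapping and the sensitivity tiers are hypothetical; in practice these decisions come from your identity provider and policy engine rather than a hard-coded table.

```python
# Hypothetical policy table: which roles may use which GenAI apps,
# and the maximum data-sensitivity tier allowed per app.
ROLE_APPS = {
    "caseworker": {"approved-chatbot"},
    "analyst": {"approved-chatbot", "code-assistant"},
}
APP_MAX_SENSITIVITY = {"approved-chatbot": 1, "code-assistant": 0}
# Tiers: 0 = public, 1 = internal, 2 = regulated (PII/health/financial)

def is_allowed(role: str, app: str, data_tier: int) -> bool:
    """Deny by default: the app must be in the role's allowlist
    and the data tier must not exceed the app's ceiling."""
    return (app in ROLE_APPS.get(role, set())
            and data_tier <= APP_MAX_SENSITIVITY.get(app, -1))

print(is_allowed("caseworker", "approved-chatbot", 1))  # True
print(is_allowed("caseworker", "approved-chatbot", 2))  # False: regulated data
print(is_allowed("caseworker", "code-assistant", 0))    # False: app not allowed
```

The deny-by-default posture is the important design choice here: unknown roles, unknown apps, and unrated data all fail closed.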
With a baseline program in place, agencies can phase implementation—often starting with discovery and DLP coverage for GenAI, then expanding into entitlement controls, isolation for higher-risk use cases, and centralized logging/reporting for oversight.
Conclusion
Generative AI is reshaping how government works. Alongside opportunity, it brings real legal, ethical, and operational risks, especially as adoption accelerates. States and municipalities bear responsibility in largely uncharted territory, and now is the time to put strong controls in place that increase resilience while maximizing the benefits of GenAI.
Tools like those from Zscaler (AI-SPM, DLP for GenAI, prompt monitoring and filtering, isolation, audit trails, etc.) provide the technical building blocks needed for secure adoption. Combined with strong policy, oversight, and continuous risk assessment, these capabilities help state and local governments harness the power of GenAI while protecting citizens, supporting compliance, and reducing legal exposure.