<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel>
        <title>Products &amp; Solutions | Blog</title>
        <link>https://www.zscaler.com/de/blogs/feeds/product-insights</link>
        <description>View for blog content type.</description>
        <lastBuildDate>Mon, 09 Mar 2026 18:38:12 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>RSS 2.0, JSON Feed 1.0, and Atom 1.0 generator for Node.js</generator>
        <language>de</language>
        <item>
            <title><![CDATA[Automating Data Governance: Strengthening Security with Zscaler DSPM and MPIP Integration]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/automating-data-governance-strengthening-security-zscaler-dspm-and-mpip</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/automating-data-governance-strengthening-security-zscaler-dspm-and-mpip</guid>
            <pubDate>Thu, 05 Mar 2026 18:00:23 GMT</pubDate>
            <description><![CDATA[In the modern enterprise, tracking business-critical data has moved beyond a simple administrative task—it has become a "superhuman" challenge. As data is generated, modified, and moved across sprawling multi-cloud environments and SaaS applications, maintaining visibility and control is increasingly difficult for even the most well-resourced security teams.

To manage this complexity, many organizations rely on data labeling. By classifying data at the point of creation, organizations can help end users understand the sensitivity of the information they handle. Furthermore, labeling is no longer just a "best practice"; it is a core requirement for many global compliance frameworks that mandate the identification of critical business assets.

The Role of Microsoft Purview Information Protection

Most organizations center their labeling strategy on user-generated data residing in cloud or on-premises file shares. To do this, they leverage Microsoft Purview Information Protection (MPIP), formerly known as Azure Information Protection (AIP), to map sensitive data, control access, and trigger security settings like encryption.

Because MPIP labels are stored as persistent metadata within the files themselves, the protection "travels" with the data. This allows security teams to use these labels as anchors for Data Loss Prevention (DLP) and Cloud Access Security Broker (CASB) policies, ensuring consistent enforcement regardless of where the file resides.

Bridging the Gap: Zscaler DSPM and MPIP Integration

While MPIP provides the framework for labeling, Zscaler Data Security Posture Management (DSPM) provides the global engine for discovery, classification, and validation. Zscaler DSPM continuously scans your data universe, spanning cloud platforms, SaaS applications, and on-premises data centers, to identify and catalog files.
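As background on how a label "travels" with a file: in Office Open XML documents, MPIP labels are typically persisted as custom document properties whose names begin with MSIP_Label_. A minimal sketch of reading them (the helper name is ours, and it assumes the file is a docx/xlsx/pptx zip archive):

```python
import zipfile
import xml.etree.ElementTree as ET

CUSTOM_PROPS = "docProps/custom.xml"
# Office custom document properties live in this namespace.
NS = "{http://schemas.openxmlformats.org/officeDocument/2006/custom-properties}"

def read_mpip_labels(path):
    """Return {property_name: value} for MPIP/MSIP label properties
    embedded in an Office Open XML file. Accepts a path or file-like object."""
    labels = {}
    with zipfile.ZipFile(path) as zf:
        if CUSTOM_PROPS not in zf.namelist():
            return labels  # file has no custom properties part
        root = ET.fromstring(zf.read(CUSTOM_PROPS))
        for prop in root.findall(f"{NS}property"):
            name = prop.get("name", "")
            if name.startswith("MSIP_Label_"):
                # The value is the text of the property's single typed child.
                child = list(prop)[0] if len(prop) else None
                labels[name] = child.text if child is not None else None
    return labels
```

Because the label rides inside the file itself, any engine that can open the archive can validate it against the file's actual content, which is the comparison the integration described above performs at scale.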
With this integration, Zscaler DSPM now detects the MPIP labels associated with every file. Zscaler DSPM doesn't just read the label; it scans the content of the file using prebuilt and custom classifiers. By comparing the actual data against the existing label, Zscaler DSPM helps organizations:

- Identify and correct mislabeled sensitive files.
- Automatically apply MPIP labels to unlabeled sensitive data.
- Validate labeling accuracy across the entire data estate.

This automated validation reduces the manual toil on IT and security operations teams while significantly hardening the organization's overall security posture.

Key Benefits of the Zscaler DSPM MPIP Integration

1. Comprehensive Visibility and Historical Remediation

Traditional labeling often misses legacy data or "shadow data" created before strict policies were in place. Zscaler DSPM identifies sensitive data missing MPIP labels and allows you to apply classifications to both historical archives and newly created or modified data.

2. Cross-Cloud Labeling Enforcement

One of the primary challenges of MPIP is extending its logic beyond the Microsoft ecosystem. Zscaler DSPM bridges this gap by detecting and applying MPIP labels to files stored in non-Microsoft environments, such as Amazon S3 buckets. This helps ensure a unified classification standard across your entire multi-cloud strategy.

3. Optimized Business Context

Security labels are often siloed within IT departments and underutilized by security teams. Zscaler DSPM breaks down these silos by correlating MPIP labels with other risk signals and data profiles. By seeing the actual content inside a labeled file, security teams can demystify labeling schemes and ensure they align with specific business objectives.

4. Unified Policy Management and "Label-Driven" Security

To prevent policy drift, Zscaler allows you to use sensitivity labels as automated policy triggers.
This ensures that a label of "Highly Confidential" automatically invokes encryption or restricts exfiltration in high-risk scenarios. Making MPIP labels the "source of truth" for Zscaler security policies helps create a seamless enforcement experience for both admins and end users.

5. Simplified Regulatory Compliance

For organizations navigating the complexities of GDPR, HIPAA, or PCI DSS, this integration provides a robust technical control. It streamlines the labeling of business-critical data, providing a clear, automated audit trail ready for internal auditors and external regulators alike.

Conclusion

The integration of Zscaler DSPM and MPIP represents a shift from passive monitoring to active, automated enforcement. By ensuring your data is correctly classified and protected everywhere it travels, you can finally close the "enforcement gap" and reduce the risk of high-impact data breaches.

Ready to see Zscaler DSPM in action? While the MPIP integration is a powerful component of our platform, Zscaler's DSPM solution offers even deeper capabilities for risk reduction and data discovery. A picture is worth a thousand words—schedule a session with one of our experts to see how we can secure your data estate.]]></description>
            <dc:creator>Mahesh Nawale (Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[States, Municipalities, and AI: How to Secure GenAI in Government]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/states-municipalities-and-ai-how-secure-genai-government</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/states-municipalities-and-ai-how-secure-genai-government</guid>
            <pubDate>Mon, 23 Feb 2026 15:58:59 GMT</pubDate>
            <description><![CDATA[As generative AI (GenAI) promises new capability and efficiency while also raising concerns about uncontrolled use, state and local governments across the U.S. are considering adoption through a lens of both opportunity and risk. A security-first approach, paired with enforceable technical controls, helps agencies adopt GenAI with confidence while reducing operational, legal, and data-loss risk in a dynamic, fast-moving environment. In practice, three fundamentals consistently separate secure deployments from risky experimentation: visibility, guardrails, and continuous validation (including red teaming).

For security leaders, the challenge isn't whether GenAI will be used—it's whether it will be used with visibility, enforceable controls, and audit-ready accountability. Before selecting tools or drafting policy, it helps to anchor on the failure modes agencies are already seeing as GenAI use expands.

Key Issues Governments Are Facing

State security teams are flagging several common issues, many of which align with themes reported by Zscaler's ThreatLabz 2026 AI Security Report. Taken together, they highlight where unmanaged GenAI adoption most often collides with existing privacy, security, and oversight requirements:

- Data privacy and protection: collection, usage, retention, and exposure of personal or sensitive data
- Government use of AI: limitations, human oversight, review, and accountability
- Transparency: notifying when AI is used, identifying who is responsible, and providing oversight
- Unauthorized "digital replicas": creation or use of a person's voice, image, or likeness without authorization

These issues tend to surface first as "shadow AI" usage—teams adopting public GenAI tools faster than security can standardize access, logging, and data protections.
Without guardrails, GenAI becomes a new pathway for sensitive-data exposure, policy violations, and operational risk at scale.

Why States Need Strong GenAI Controls

For state and local governments, addressing GenAI security helps reduce risk across cost, mission, and trust. It also creates the foundation to enable approved GenAI use cases without forcing teams into unsafe workarounds. The risks at stake include:

- Financial risk
- Citizen data leakage, misuse, or inadvertent exposure
- Loss of public trust
- Legal liability
- Reputational damage

The practical question is how to translate these risks into controls that can be deployed and measured. Most state security teams prioritize capabilities that (1) establish AI usage and data visibility, (2) reduce the likelihood of data loss or unsafe outputs, and (3) support forensics, oversight, and reporting.

How Zscaler's Capabilities Map to State Needs

Below are the capabilities that Zscaler offers through its GenAI protection and data protection suite. The goal is to operationalize GenAI security using familiar control categories (discovery, data protection, access control, and audit) so agencies can implement quickly and measure impact. The mapping below is organized the way many security programs implement GenAI controls: start with discovery and classification, then add guardrails and least privilege, and finally operationalize with monitoring, remediation, and compliance reporting.

- AI/data visibility, discovery, and classification (Zscaler AI-SPM, DSPM, etc.)
  What it does: automatically discovers and classifies datasets, models, vectors, and AI services (managed and unmanaged) to understand what data is in use and where exposure might exist.
  How it helps: shows where high-risk data is used; supports risk assessments; improves transparency and reporting.
- Prompt/input/output monitoring and guardrails
  What it does: inspects, classifies, and blocks inputs and prompts that violate policy; controls outputs; helps prevent PII exposure or data exfiltration through GenAI workflows.
  How it helps: helps prevent misuse (e.g., disallowed content); supports guardrails when GenAI is used for communications or decisions that require controls.
- Browser/session isolation and data loss prevention (DLP)
  What it does: isolates GenAI applications so risky actions (cut/paste, upload/download) can be controlled; enforces DLP across AI interactions.
  How it helps: helps protect sensitive or regulated data (e.g., identity, health, financial) from leaking through GenAI channels, safeguarding citizen privacy.
- Least privilege and entitlement control
  What it does: minimizes which users and roles can access which AI services or data; revokes overprivileged rights; restricts high-risk app usage.
  How it helps: reduces attack surface and limits misuse; supports protection of regulated data and critical systems.
- Audit trails, logging, and reporting
  What it does: maintains logs of AI usage (who submitted which prompt, when, and what response was returned) and captures system/model interaction metadata.
  How it helps: supports transparency, accountability, oversight, and audit-readiness reporting.
- Policy enforcement and guided remediation
  What it does: identifies misconfigurations and data exposure; provides remediation guidance and real-time alerts.
  How it helps: enables continuous monitoring and correction; supports risk assessments, internal controls, and prevention of configuration drift.
- Framework alignment
  What it does: maps controls to frameworks (e.g., NIST AI RMF, HIPAA where applicable) via compliance modules and reporting.
  How it helps: helps demonstrate alignment with best practices and applicable frameworks.

Practical Steps State Entities Should Consider

Here are suggestions for how state agencies and entities can build (or upgrade) their GenAI security program to prepare for rapid advancement.
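The prompt and input guardrail capability described above can be sketched as a minimal pre-send filter. This is a hypothetical illustration (the pattern names and regular expressions are assumptions for demonstration, not Zscaler's DLP engine):

```python
import re

# Illustrative detectors only; production DLP uses far richer classification.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str):
    """Screen a prompt before it reaches a GenAI tool.
    Returns (allowed, violations): allowed is False if any pattern matched."""
    violations = [name for name, rx in PII_PATTERNS.items() if rx.search(prompt)]
    return (len(violations) == 0, violations)
```

A real deployment would sit inline between the user and the GenAI service, log every decision for the audit trail, and block or redact rather than silently drop flagged prompts.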
These steps are intended to fit into existing security operations—policy, identity, data protection, and monitoring—rather than creating a separate "AI-only" track.

1. Inventory AI use
- Identify all GenAI tools in use (chatbots, assistants, third-party tools, open tools)
- Identify what data is being used or referenced, where it's stored, and how it's accessed
2. Data classification and sensitivity mapping
- Define categories of data sensitivity (PII, health, financial, etc.)
- Map which AI services have access to sensitive data
3. Define clear policies and guardrails
- Set policies around who can use GenAI and for what purposes
- Define prohibitions consistent with agreed-upon use (including data handling and disclosure)
4. Implement technical controls
- Prompt/input filters, DLP blocking, browser/session isolation
- Entitlement/restriction controls
- Logging/auditing
5. Continuous monitoring and risk assessment
- Monitor for misuse and privacy violations
- Periodically assess risk and compliance
6. Training and awareness
- Ensure staff understand which GenAI tools are allowed and what data they can and can't use
- Reinforce awareness of legal and regulatory obligations
7. Governance and oversight
- Assign a responsible party or team (e.g., a state CIO/CISO or AI oversight board)
- Embed human review and oversight for higher-risk use cases (e.g., decisions affecting citizens)

Capabilities only reduce risk when they're implemented as part of a repeatable program. The steps above provide a security-team-friendly sequence that can plug into existing IRM/GRC, data protection, and zero trust initiatives.

How Zscaler Supports States

Zscaler's GenAI protection and data security portfolio offers a toolkit that aligns well with the current environment.
In practice, many agencies start by using these capabilities to define "approved GenAI usage" (tools, users, data types), then expand into continuous monitoring and audit support as adoption scales.

- Pre-deployment risk assessment: before deploying a GenAI model or enabling a GenAI tool for public-facing use, use Zscaler's AI-SPM (AI Security Posture Management) to discover what data and models are involved, classify their risk, test for policy violations, and understand exposure.
- Implementing transparency and disclosure controls: use logging and audit trail features to capture prompts, response metadata, and user activity, supporting oversight, disclosure obligations, and responses to legal requests.
- Restricting and blocking sensitive data exposure: use DLP integration, prompt filtering, and browser/session isolation to block high-risk actions (e.g., uploading sensitive documents, copying/pasting PII) when interacting with GenAI tools.
- Enforcing use policies (entitlements, privileges): allow only approved roles to access external GenAI apps; enforce least privilege; quarantine or block risky apps and services until controls are validated.
- Monitoring and remediation: use guided remediation to address misconfigurations (e.g., over-entitled roles, open access to datasets, insecure storage); trigger alerts when policy thresholds are crossed.
- Compliance reporting and audit support: generate reports on AI usage, data access, and incidents to support oversight and respond to inquiries, litigation, or citizen complaints.

With a baseline program in place, agencies can phase implementation, often starting with discovery and DLP coverage for GenAI, then expanding into entitlement controls, isolation for higher-risk use cases, and centralized logging and reporting for oversight.

Conclusion

Generative AI is reshaping how government works. Alongside opportunity, it also brings real legal, ethical, and operational risks—especially as adoption accelerates.
States and municipalities bear responsibility in uncharted territory, and the time is now to put in place strong controls that increase resilience while maximizing the benefits of GenAI.Tools like those from Zscaler (AI-SPM, DLP for GenAI, prompt monitoring and filtering, isolation, audit trails, etc.) provide technical building blocks needed for secure adoption. Combined with strong policy, oversight, and continuous risk assessment, state and local governments can harness the power of GenAI while protecting citizens, supporting compliance, and reducing legal exposure.]]></description>
            <dc:creator>Fred Green (Zscaler)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Leveraging Zero Trust for More Accurate Exposure Prioritization]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/leveraging-zero-trust-more-accurate-exposure-prioritization</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/leveraging-zero-trust-more-accurate-exposure-prioritization</guid>
            <pubDate>Mon, 23 Feb 2026 15:11:59 GMT</pubDate>
            <description><![CDATA[Vulnerability management is often compared to "searching for needles in a haystack" because a small group of findings creates the greatest risk as potential gateways for attackers. It's no secret that the haystack keeps getting larger; it's now more like a hundred-acre field. There were nearly 50,000 CVEs published last year, and Recorded Future reports that 42% of CVEs disclosed in the first half of 2025 had a public proof-of-concept exploit. Enterprise security teams invest in upwards of 45 different tools to monitor risk across an increasingly complex attack surface, often producing hundreds of thousands of findings.

The good news? Attackers can do no significant harm with the vast majority of those findings. The bad news? Finding the handful that matter gets harder every day.

Organizations use many tactics to identify what's "risky," including threat intelligence feeds, asset criticality, adversary behavior tracking, and applying unique business context to influence prioritization. Your teams can (and should) apply as many risk signals as are available. An equally effective prioritization factor (or deprioritization factor, if you will) is to account for compensating controls that are already in place. That's exactly what Zscaler does by integrating context from our Zero Trust Exchange: our research identifies which vulnerabilities are mitigated by your zero trust policies, and we apply that context so you know where to focus instead. Let's take a look at how Zscaler can help focus your efforts.

Deprioritize CVEs Mitigated by ZIA and ZPA

One of the most effective policy engines for mitigating vulnerabilities is your zero trust program. Yet very few security teams automatically apply these mitigations to prioritization scoring.
In other words, despite the absence of a pathway for an individual vulnerability to be exploited, security teams spend valuable cross-functional resources deploying patches or system upgrades that are actually unnecessary, simply in response to a “critical” finding from a vulnerability scanner. It’s a textbook example of a “false critical” – teams simply have too many real issues to fix and too little time to waste resources on remediations that don’t impact risk.Zscaler Exposure Management customers often see up to 80% reduction in “false critical” findings by applying context from any data source in their environment. One such source is ThreatLabz–a research organization within Zscaler that focuses on identifying and analyzing emerging threats, vulnerabilities, and attack techniques. The ThreatLabz team maintains a database of CVEs with information on how they're mitigated by different Zscaler products, including Zscaler Internet Access (ZIA) and Zscaler Private Access (ZPA).Many Zscaler customers see a significant reduction in findings truly deemed critical because of the vulnerabilities proactively mitigated by zero trust policies. Let’s look at an example.<div> <script async src="https://js.storylane.io/js/v2/storylane.js"></script> <div class="sl-embed" style="position:relative;padding-bottom:calc(50.26% + 25px);width:100%;height:0;transform:scale(1)">   <iframe loading="lazy" class="sl-demo" src="https://app.storylane.io/demo/cpf18xux96sd?embed=inline" name="sl-embed" allow="fullscreen" allowfullscreen style="position:absolute;top:0;left:0;width:100%!important;height:100%!important;border:1px solid rgba(63,95,172,0.35);box-shadow: 0px 0px 18px rgba(26, 19, 72, 0.15);border-radius:10px;box-sizing:border-box;"></iframe> </div></div>
Focus on what's risky in YOUR environment

Just because a vulnerability is known to be exploited in the wild doesn't always mean it poses a critical risk in your environment. Consider the example of CVE-2021-44228, a CISA KEV entry most commonly known as Log4Shell. ZIA's Intrusion Prevention System (IPS) mitigates this particular vulnerability, as detailed in the ThreatLabz Threat Library.

Most vulnerability assessment tools would score this finding as critical, and with good reason: exploitation can result in remote code execution. But Zscaler Unified Vulnerability Management (UVM) has automatically reduced the severity to a "medium" 4.7, recognizing the presence of a mitigating control in the form of ZIA.

UVM has logged the original CVSS score of 10 and the "original severity score" from the scanning tool, also a 10. But UVM goes on to create a contextual, risk-adjusted score. Let's drill deeper into the explanation of that score: all the tools in the environment report the finding as critical, but the vulnerability is fully mitigated by ZIA, taking it off the critical list entirely. In fact, the integrated ThreatLabz data has determined that all five findings associated with this ticket are mitigated by ZIA or ZPA policies, so the severity score has been automatically adjusted from 10 down to 4.7.

Most exposure management programs would fail to recognize the presence of mitigating controls. The ticket would be prioritized as critical, and organizations would spend security and IT resources fixing a problem that poses no significant risk.
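The adjustment described above can be approximated conceptually. The formula below is purely illustrative (UVM's actual scoring model is not published in this post); it simply shows how the presence of inline mitigating controls can discount a base CVSS score:

```python
def risk_adjusted_severity(base_score: float, mitigations: list,
                           reduction_per_control: float = 0.3) -> float:
    """Discount a CVSS-style base score for each mitigating control in place.
    The multiplicative reduction factor is an illustrative assumption,
    not UVM's real formula. Result is clamped at 0 and rounded to one decimal."""
    score = base_score
    for _ in mitigations:
        score *= (1.0 - reduction_per_control)
    return round(max(score, 0.0), 1)
```

For example, a base score of 10 with one fully mitigating inline control and a (hypothetical) 53% discount yields 4.7, mirroring the kind of downgrade shown in the ticket above; with no mitigations the base score passes through unchanged.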
By adjusting the severity score automatically, UVM keeps teams focused on the work that matters: the fixes that actually reduce risk.

Maximize the value of the tools you already have

Integrating ThreatLabz research and Zscaler Client Connector (ZCC) data into your exposure management program adds valuable context to help your security team focus on truly critical vulnerabilities in your specific environment. Zscaler customers have a wealth of data and telemetry in their existing deployments that can turbocharge exposure prioritization and risk mitigation, but benefiting from all that context requires an exposure management solution capable of assimilating that data.

Tool sprawl is often associated with complexity in exposure management: dozens of siloed tools produce risk signals, none of which work together, and all contribute to the flood of data that prevents security teams from quickly identifying truly critical risk. Zscaler helps you channel the power of all those currently siloed tools and use the breadth of their insights to your advantage. By combining context from vulnerability scanners, cloud security tools, data security tools, identity and access management, IoT/OT security tools, threat intelligence feeds, and anything else with relevant data, organizations can use the rich context of risk signals and mitigating controls in place to discern which findings truly represent risk. The haystack shrinks, even as the quantity of assets and findings grows.

Evolve to a holistic exposure management program with Zscaler

You may be closer than you think to building a holistic exposure management engine that helps your security team pull the needles from the haystack.
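The unification step described above can be sketched as a simple merge keyed by asset and CVE, so that one record survives per exposure no matter how many tools reported it. Field names and structure here are illustrative assumptions, not a real product schema:

```python
def unify_findings(sources: dict) -> dict:
    """Merge findings from multiple tools into one record per (asset, cve).
    Keeps the highest reported severity and records every contributing source.
    `sources` maps tool name -> list of finding dicts (illustrative schema)."""
    unified = {}
    for source_name, findings in sources.items():
        for f in findings:
            key = (f["asset"], f["cve"])
            rec = unified.setdefault(key, {"severity": 0.0, "sources": []})
            rec["severity"] = max(rec["severity"], f["severity"])
            rec["sources"].append(source_name)
    return unified
```

In a real pipeline this merged record is where deprioritization context (mitigating controls, asset criticality, exploit intelligence) would be attached before scoring.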
Your investments in vulnerability scanning and cyber risk assessment tools can work together with Zscaler Exposure Management, and your zero trust policy engine serves as a strong foundation for inline controls and mitigation. With Zscaler Exposure Management, organizations can harness the power of contextual data and risk signals across the environment to deliver:

- Complete visibility of assets in a risk-based inventory
- Prioritized exposure findings, unified from every source
- Accelerated remediation leveraging your existing tools and workflows

Request a demo to see how your Zscaler products and existing security investments can come together to deliver better exposure management.]]></description>
            <dc:creator>Chris McManus (Senior Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Future-Proof Your Security with the First Quantum-Ready Security Service Edge (SSE)]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/future-proof-security-first-quantum-ready-security-service-edge-sse</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/future-proof-security-first-quantum-ready-security-service-edge-sse</guid>
            <pubDate>Tue, 17 Feb 2026 09:00:00 GMT</pubDate>
            <description><![CDATA[Zscaler has already made significant investments in providing customers with post-quantum cryptography (PQC) visibility and logging capabilities, and now we're building upon that foundation to ensure our customers can realize true crypto-agility. That's why today we are thrilled to announce that the leading Security Service Edge (SSE) is now quantum-ready: Zscaler Internet Access (ZIA) inline inspection now supports hybrid PQC key exchange.

This first-to-market capability allows your organization to decrypt and inspect quantum-encrypted traffic at scale, enforce your security policies, and defend against the emerging quantum threat landscape. With Zscaler's proxy architecture, our new PQC key exchange capability also protects customers from "harvest now, decrypt later" (HNDL) attacks, even at the last mile when an application server does not yet support PQC.

Additionally, with this launch we can now secure customers' IPsec VPN tunnels with post-quantum pre-shared keys (PPKs), securely connecting our customers' PPK-ready endpoints to Zscaler. A PPK is an additional secret that both peers already share; mixing it into the IKE key derivation results in IPsec keys that remain secure even if the Diffie-Hellman exchange with ephemeral keys (DHE/ECDHE) is later broken by a quantum computer. In other words, it's a post-quantum risk-mitigation mode for IPsec without requiring full PQC algorithms in the key exchange.

Why Hybrid PQC Key Exchange Matters

During the transition from classical to quantum-resilient encryption, hybrid PQC key exchange will act as a vital safety net. By combining a proven classical algorithm with a new quantum-resistant one, hybrid key exchange ensures that encrypted traffic remains secure even if one of the algorithms is compromised.
This dual-layered approach provides robust protection against both current threats and the future risk of a quantum computer breaking today's standard encryption. Hybrid PQC key exchange is also foundational to addressing several core customer challenges in a quantum world:

- Defending against quantum threats: with HNDL attacks already a viable threat, protecting data in transit is paramount. Our new capabilities that utilize hybrid key exchange mitigate the HNDL threat by making it extremely difficult for attackers to later decrypt harvested data.
- Meeting compliance mandates: governments are mandating PQC adoption to protect critical infrastructure and data. Zscaler enables you to get ahead of these requirements and prove compliance with detailed reporting on quantum cipher usage across your environment.
- Bolstering business continuity: the crypto-transition is a predictable, high-impact event. A proactive strategy built on Zscaler's hybrid key exchange approach prevents the disruption, loss of trust, and compliance failures that a reactive approach would cause.

Zscaler now provides real-time, deep inspection of PQC traffic, leveraging the NIST-standardized ML-KEM (FIPS 203) algorithm for post-quantum key exchange. Just as we do for classical encryption, Zscaler unlocks complete visibility and protection for PQC sessions, all without impacting performance. Our implementation of hybrid PQC key exchange complies with the draft-ietf-tls-ecdhe-mlkem proposed standard and is fully compatible with Chrome, Firefox, Safari, and other widely deployed clients, as well as servers.

The Zscaler Zero Trust Exchange sits inline, and our cloud-native inspection engine seamlessly decrypts traffic, scans it and enforces security policy, and re-encrypts it before sending it on to its destination.
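Conceptually, a hybrid key exchange feeds both shared secrets into a single key derivation, so the session key stays secret unless both algorithms are broken. A minimal sketch using an HKDF-style extract-and-expand built from the Python standard library (the combiner shown is an illustration of the principle, not the exact TLS 1.3 key schedule):

```python
import hashlib
import hmac

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes,
                       transcript: bytes, length: int = 32) -> bytes:
    """Combine a classical (e.g. X25519) and a post-quantum (e.g. ML-KEM-768)
    shared secret into one session key. Concatenating both into the input
    keying material means recovering the key requires breaking BOTH exchanges.
    Single-block HKDF extract-then-expand in the style of RFC 5869."""
    ikm = classical_secret + pq_secret                        # hybrid IKM
    prk = hmac.new(transcript, ikm, hashlib.sha256).digest()  # extract
    okm = hmac.new(prk, b"\x01", hashlib.sha256).digest()     # expand, block 1
    return okm[:length]
```

Because the derivation is deterministic over both inputs, either side of the connection computes the same key, and changing either the classical or the post-quantum secret changes the result entirely.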
Here's how our quantum-ready inspection process works:

1. Zscaler checks the TLS ClientHello message from the client. If the client indicates TLS 1.3 support and includes a hybrid PQC key exchange group in its proposal, Zscaler Internet Access uses TLS 1.3 with a supported hybrid PQC key exchange group. This step is independent of server capabilities and allows PQC usage between the client and ZIA even if the server does not support it. The negotiated TLS version and selected key exchange group are always logged, so administrators get valuable information about PQC support on the client side. Those same insights can help security and IT teams prioritize upgrading software that is not PQC-ready.
2. Zscaler sends a TLS ClientHello to the server on behalf of the client. In this ClientHello message, it indicates support for TLS 1.3 and includes all standard hybrid PQC key exchange methods in the offer. In the TLS protocol, it is up to the server to choose from the supported list of key exchange algorithms. Zscaler Internet Access logs the selected TLS version and cryptographic parameters for each session, which allows administrators to understand the security posture and work with service providers to adopt PQC capabilities.
3. Zscaler performs traffic inspection and applies security policies. All threat prevention, DLP, and access control policies are applied transparently to the client and server without any configuration changes to current policies. This means Zscaler provides the same industry-leading threat detection and prevention for PQC sessions that Zscaler has applied to non-PQC traffic for years.

New Capabilities to Secure Your Quantum Journey

This launch delivers two major innovations for the Zscaler platform:

- SSL/TLS inspection with ML-KEM: perform full decryption and deep content inspection on traffic flows that were established using hybrid PQC key exchange.
We automatically detect and negotiate TLS key exchange groups, applying all your existing security policies without any configuration changes or impact on user experience.

- IPsec with post-quantum pre-shared keys (PPK): secure your branch office and data center connections with future-proof VPN forwarding to Zscaler. By mixing a pre-shared key into the IKE key derivation, the resulting IPsec keys remain secure even if the Diffie-Hellman exchange is later broken by a quantum computer. This provides a practical, quantum-resistant upgrade for IPsec that can be deployed today.

Begin the PQC Transition Journey Now

The shift to post-quantum cryptography is one of the defining security challenges of our time. With Zscaler, you can move from a reactive posture to a proactive one. Gain the visibility you need to stop threats hiding in PQC traffic, fortify your defenses against future decryption attacks, and meet emerging compliance mandates head-on.

The members of our partner ecosystem will also play an important role in helping customers along their journey to quantum readiness. Zscaler will work with partners including Ernst & Young (EY) and HCLTech to do just that:

"We are thrilled to announce a strategic expansion of our partnership with EY, focused on delivering advanced Post-Quantum Cryptography (PQC) visibility through real-time crypto inventory capabilities. By leveraging Zscaler as the primary data source for cryptographic discovery, EY clients can now gain the comprehensive insights necessary to drive informed PQC migration and future-proof decision-making. This critical data allows EY's expert consultants to help organizations develop robust, long-term security strategies tailored to their unique risk profiles.
Together, we are simplifying the complex path to quantum safety and ensuring EY's clients remain resilient against emerging threats."— Adam Berman, Global Alliances Director, Zscaler“Post-Quantum Cryptography is becoming a strategic priority for enterprises committed to digital trust and total resilience. Through our collaboration with Zscaler, HCLTech is helping organizations accelerate crypto discovery, strengthen crypto-agility and secure communications against emerging quantum threats. Together, we are enabling ZIA customers to transition confidently to a quantum-safe future while meeting evolving compliance and regulatory expectations.”— Prikshit Goel, VP and Global Practice Head, Cybersecurity, HCLTechReady to future-proof your security? Learn more about preparing for the quantum future:&nbsp;watch our launch event webinar where our product experts will walk you through our PQC inline inspection capabilities and how we can help your organization prepare for the quantum era.]]></description>
            <dc:creator>Brendon Macaraeg (Sr. Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Demystifying Key Exchange: From Classical Elliptic Curve Cryptography to a Post-Quantum Future]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/demystifying-key-exchange-post-quantum-pqc</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/demystifying-key-exchange-post-quantum-pqc</guid>
            <pubDate>Thu, 12 Feb 2026 22:54:58 GMT</pubDate>
            <description><![CDATA[In the digital world, the secure exchange of cryptographic keys is the foundation upon which all private communication is built. It’s the initial, critical handshake that allows two parties, like a user’s browser and a web server, to establish a shared secret and communicate securely over the untrusted expanse of the internet. As the quantum computing era approaches, the very mathematics underpinning our traditional key exchange mechanisms is facing an existential threat. This has spurred the development of new, quantum-resistant algorithms. This blog post provides a deep dive into how modern key exchange works, from the trusted classical methods to the emerging post-quantum standards, and explores how Zscaler leverages hybrid key exchange to bridge the gap.

The Key Components of Modern Key Exchange

At a high level, a secure key exchange protocol must achieve the following:

Confidentiality: The established key must be a secret shared only between the two communicating parties. An eavesdropper should not be able to determine the key.

Authentication: In many cases (like with TLS), the parties must be able to verify each other's identity to prevent man-in-the-middle attacks. This is typically handled by digital certificates and is complementary to the key exchange itself.

Forward Secrecy: The compromise of a long-term secret (like a server's private key) should not compromise the security of past session keys. This ensures that previously recorded encrypted traffic cannot be decrypted.

Classical Key Exchange: The Reign of ECDHE

For the better part of a decade, the gold standard for key exchange on the web has been Elliptic Curve Diffie-Hellman Ephemeral (ECDHE). 
It is a cornerstone of Transport Layer Security (TLS) and is responsible for securing trillions of connections daily.

How Key Exchange Works

The Foundation: Elliptic Curve Cryptography (ECC): Instead of using very large prime numbers like traditional Diffie-Hellman, ECDHE uses the mathematical properties of elliptic curves. ECC offers the same level of security as older methods but with significantly smaller key sizes, making it faster and more efficient—a crucial advantage for mobile and IoT devices.

The Handshake: Both the client and the server agree on a common elliptic curve and a starting point on that curve (the "generator").

The "Ephemeral" Nature: This is where forward secrecy comes from. For each new session, both the client and server generate a new, temporary (ephemeral) key pair consisting of a private key (a random number) and a public key (a point on the curve).

The Exchange: The client and server exchange their public keys.

The Shared Secret: Each party then uses its *own* private key and the *other* party's public key to perform a calculation. Due to the magic of elliptic curve mathematics, both the client and the server independently arrive at the exact same point on the curve—this becomes their shared secret.

Session Encryption: This shared secret is then used to derive the symmetric encryption keys that will encrypt all data for the remainder of the session.

Even if an attacker were to steal the server's long-term private key years later, they could not use it to derive the ephemeral session keys from past traffic.

The Quantum Threat and Post-Quantum Key Exchange: ML-KEM

The security of ECDHE relies on the difficulty of the "elliptic curve discrete logarithm problem." For a classical computer, this is an incredibly hard problem to solve. 
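The ephemeral exchange described above can be made concrete with a toy Python sketch. It substitutes classic finite-field Diffie-Hellman with a deliberately tiny prime for the actual elliptic curve math (real ECDHE uses curves such as X25519 with vastly larger key spaces), so it is illustrative only and offers no security; the flow of fresh ephemeral keys, exchanged public values, and an identical independently derived secret is the same.

```python
import secrets

# Toy finite-field Diffie-Hellman. Real deployments use elliptic curves or
# 2048+ bit primes; this tiny modulus is for illustration only, never security.
P = 0xFFFFFFFB  # a small prime modulus (illustrative)
G = 5           # the public "generator"

def ephemeral_keypair():
    priv = secrets.randbelow(P - 2) + 1  # fresh random private key per session
    pub = pow(G, priv, P)                # public key = G^priv mod P
    return priv, pub

# Each side generates a *new* ephemeral pair for this session (forward secrecy).
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# Each side combines its own private key with the peer's public key...
a_secret = pow(b_pub, a_priv, P)
b_secret = pow(a_pub, b_priv, P)

# ...and both independently arrive at the same shared secret.
assert a_secret == b_secret
```

Because the private keys are discarded after the session, stealing a long-term server key later reveals nothing about these ephemeral secrets.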
But for a sufficiently powerful quantum computer, Shor's algorithm makes it trivial: the algorithm efficiently solves both integer factorization and discrete logarithm problems, including the elliptic curve variant that ECDHE relies on.

This has led to a new field of cryptography: Post-Quantum Cryptography (PQC). The goal is to create algorithms that are secure against attacks from both classical and quantum computers. After a multi-year competition, the U.S. National Institute of Standards and Technology (NIST) selected a suite of algorithms for standardization. For key exchange, the primary choice is the Module-Lattice-based Key-Encapsulation Mechanism (ML-KEM), formerly known as CRYSTALS-Kyber.

How It Works as a Key Encapsulation Mechanism (KEM)

Unlike the interactive exchange in Diffie-Hellman, a KEM works slightly differently:

The server generates a public and private key pair based on the mathematical difficulty of problems in crystal-like structures called lattices.

The server sends its public key to the client.

The client uses the server's public key to generate two things: a shared secret and a "ciphertext" that encapsulates (or wraps) that secret.

The client sends this encapsulating ciphertext back to the server.

The server uses its private key to "decapsulate" the ciphertext, revealing the exact same shared secret that the client generated.

Now both parties have the secret, and an eavesdropper, even one with a quantum computer, cannot solve the underlying lattice math to discover it.

The Real World: Hybrid Key Exchange (ECDHE + ML-KEM)

We are in a transitional period. While powerful quantum computers are not yet widely available, the threat of "harvest now, decrypt later" is very real: adversaries can record sensitive encrypted data today and store it, waiting for the day they have access to a quantum computer to break it. To counter this, the industry is moving towards a hybrid approach. 
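The keygen/encapsulate/decapsulate flow described above can be illustrated with a toy KEM. This sketch wraps finite-field Diffie-Hellman in a KEM-shaped API purely to show the interface; ML-KEM's actual lattice mathematics is entirely different, and this code offers no security.

```python
import secrets

# Toy KEM built on finite-field Diffie-Hellman, to show the API shape only.
# ML-KEM uses module lattices; this stand-in is illustrative, never secure.
P = 0xFFFFFFFB  # small prime (real parameters are far larger)
G = 5

def kem_keygen():
    """Server side: produce a (public, private) key pair."""
    sk = secrets.randbelow(P - 2) + 1
    pk = pow(G, sk, P)
    return pk, sk

def kem_encapsulate(pk):
    """Client side: returns (ciphertext, shared_secret).
    The ciphertext 'wraps' the secret for transport to the server."""
    eph = secrets.randbelow(P - 2) + 1
    ciphertext = pow(G, eph, P)
    shared_secret = pow(pk, eph, P)
    return ciphertext, shared_secret

def kem_decapsulate(ciphertext, sk):
    """Server side: recover the same shared secret from the ciphertext."""
    return pow(ciphertext, sk, P)

pk, sk = kem_keygen()
ct, client_secret = kem_encapsulate(pk)
assert kem_decapsulate(ct, sk) == client_secret  # both sides share the secret
```

Note how the client never learns the server's private key and the server never sees the client's ephemeral value; only the encapsulating ciphertext crosses the wire.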
Zscaler has implemented this by combining the battle-tested classical algorithm with a next-generation post-quantum one.

How Zscaler's Hybrid Implementation Works

Zscaler’s Zero Trust Exchange acts as an intelligent switchboard for connections. When a client initiates a TLS connection, it sends a "ClientHello" message advertising its capabilities.

Dual Key Generation: In a hybrid key exchange, the client and server perform both an ECDHE key exchange and an ML-KEM key encapsulation simultaneously.

Two Secrets Are Better Than One: This process results in two independent shared secrets: one from ECDHE and one from ML-KEM.

Concatenation for a Single Master Key: These two secrets are then concatenated (combined end-to-end) to create the final master secret for the session.

Deriving Session Keys: This robust, hybrid master secret is then used to derive the encryption keys for the session traffic.

This process secures the session end-to-end. To break the encryption and read the data, an attacker would have to break both the classical ECDHE algorithm and the post-quantum ML-KEM algorithm. This "belt and suspenders" model provides a powerful guarantee: the connection is at least as secure as the classical cryptography we trust today, and it is also protected against the quantum threats of tomorrow. This allows organizations to safely transition to a post-quantum world without compromising on current security.

Conclusion: Two Worlds, One Goal

Classical key exchange is the workhorse of today, securing trillions of connections with proven, efficient software. But the road ahead will be a hybrid one. We can expect to see Post-Quantum Cryptography (PQC)—new algorithms resistant to quantum attacks—securing our communications and critical software-dependent transactions. 
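The concatenate-then-derive step above can be sketched in a few lines. This is a minimal illustration using an HKDF-SHA256 derivation (RFC 5869); TLS 1.3's real key schedule is more involved, and the function names here are illustrative, not any product's API.

```python
import hashlib
import hmac

def hkdf(secret, salt=b"", info=b"", length=32):
    """Minimal HKDF-SHA256 (RFC 5869 extract-then-expand), for illustration."""
    prk = hmac.new(salt or b"\x00" * 32, secret, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_master_secret(ecdhe_secret, mlkem_secret):
    """Concatenate the two independent secrets end-to-end, then derive the
    master secret. Recovering it requires breaking *both* inputs."""
    return hkdf(ecdhe_secret + mlkem_secret, info=b"illustrative hybrid tls")

# Both endpoints hold the same two shared secrets, so both derive the same key.
key = hybrid_master_secret(b"secret-from-ecdhe", b"secret-from-mlkem")
assert len(key) == 32
```

The design point is that the derivation consumes the concatenation as a single input, so an attacker who recovers only one of the two secrets still faces a full-entropy unknown.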
For security and networking practitioners, understanding the new paradigm is no longer optional—it's essential for securing today’s data against future quantum-based attacks.Learn more about preparing for the quantum future:&nbsp;save your spot for our webinar launch event&nbsp;where our product experts will walk you through how Zscaler used hybrid key exchange in service of decrypting and inspecting quantum-encrypted traffic with ML-KEM.&nbsp;]]></description>
            <dc:creator>Brendon Macaraeg (Sr. Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[2026 Zscaler Public Sector Summit: Cyber Strong in the AI Era]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/2026-zscaler-public-sector-summit-cyber-strong-ai-era</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/2026-zscaler-public-sector-summit-cyber-strong-ai-era</guid>
            <pubDate>Thu, 12 Feb 2026 14:42:02 GMT</pubDate>
            <description><![CDATA[The 2026 Zscaler Public Sector Summit marks a homecoming for me and several others here at Zscaler who have recently hung up their federal spurs, and I feel a renewed sense of passion for the mission.I find myself reflecting on the common thread that binds Zscaler and the varied operational communities we support: the mission. Having recently retired from the front lines of government IT, I understand that our “customers” aren’t just users; they are the American people, all focused on protecting our country.&nbsp;Today, we stand at a critical juncture in the AI journey for our great nation. With a robust “America’s AI Action Plan,” our government is moving past the “pilot” phase of generative AI and entering a period of deep integration. However, as we weave AI into the fabric of government operations, we must ensure that the fabric itself is “Cyber Strong.”We are no longer “preparing” for AI or adversarial use of this new technology. We are in the midst of an active race. We are also realizing that while these systems are revolutionary defensive force multipliers, they are simultaneously becoming high-value targets. Our adversaries, nation-states with deep pockets and sophisticated AI capabilities, are leveraging technology at a rate that traditional defenses cannot match. The new “AI-powered script kiddies,” using large language models (LLMs) to generate, refine, and deploy malicious code without understanding the underlying mechanics, are accelerating that challenge.We are also seeing this in our recent ThreatLabz 2026 AI Security Report. From April 2024 to April 2025 alone, the Zscaler cloud blocked more ransomware attempts than in any previous year. That was more than 10.8 million hits, marking a 145.9% year-over-year increase and the highest volume recorded since tracking began. 
In the same year, the scale of AI/ML activity increased dramatically to 536.5 billion total AI/ML transactions, marking a 3,464.6% year-over-year surge across the Zscaler Zero Trust Exchange compared to our last analysis period.

To stay ahead of increasingly sophisticated adversarial AI, deploying AI isn’t enough. We must ensure that every model in a safety-critical or high-value role is built on a foundation of secure-by-design, resilient architecture. True cyber strength in the AI era requires systems that are not only robust but actively instrumented to “sense” shifts in data integrity and performance, ensuring we can identify and neutralize malicious activity before it compromises the mission.

This March, we gather at the Ronald Reagan Building and International Trade Center, a location that holds significant personal meaning for me. Did you know it is the second-largest building in the federal inventory? It is literally a city within a city. With over 3 million square feet of offices near the White House, it is the only federal building congressionally mandated to be a mixed-use building open to the public, effectively uniting the nation’s best public and private resources in a national forum for the advancement of trade, serving a uniquely dual mission that presents inherent security challenges. It serves as a perfect metaphor for our current technology challenge: securing a vast, interconnected digital landscape where the boundaries between “inside” and “outside” have effectively vanished—especially in the food court!

The human element also comes front and center at this event. In the new digital age, securing the tech is only half the battle; we must also secure the “human” landscape. This is why I am particularly excited to welcome Eric O’Neill to our stage. Eric helped expose Robert Hanssen, a man who operated from within the very heart of our national security apparatus. 
It’s a stark reminder that the greatest threats often come from within, using a PalmPilot, no less.

Eric’s insights into counterintelligence are more relevant now than ever. Adversarial AI is being used to craft social engineering attacks so convincing they bypass traditional human intuition. We must fight fire with fire. In 2026, the “insider” might not be a person at all, but a compromised AI agent or a deepfake identity. Eric will bridge the gap between “old school” counterintelligence and “new school” AI threats. His experience reminds us that while the tools change, the adversary’s intent remains the same: to undermine public trust and compromise our national security.

Walking through the Reagan Building, above or below ground, always reminds me of the scale of our government’s responsibility. It is a place of history, but also a place of the future. As we open the 2026 Public Sector Summit, my message to my peers in the public sector is simple: the journey to Zero Trust, and now AI, is a journey of security. We cannot have one without the other.

Join us on March 3, 2026. We will not just be talking about surviving the AI revolution; together with our partners, we will show how to lead it. Let’s forge a nation that is not just cyber-aware, but Cyber Strong.]]></description>
            <dc:creator>Chad Tetreault (Zscaler)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Microsoft Copilot Oversharing Data? Not Anymore. Meet Zscaler’s New Wizard]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/microsoft-copilot-oversharing-data-not-anymore-meet-zscaler-s-new-wizard</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/microsoft-copilot-oversharing-data-not-anymore-meet-zscaler-s-new-wizard</guid>
            <pubDate>Thu, 12 Feb 2026 12:10:15 GMT</pubDate>
            <description><![CDATA[Microsoft Copilot is accelerating how people work in Microsoft 365—and it can accelerate exposure when access controls aren’t clean. Copilot runs on your existing permissions model, so if SharePoint, OneDrive, and Teams are over-permissioned, it can end up saying the quiet part out loud: surfacing sensitive data to underprivileged users through seemingly harmless prompts.

The good news: you don’t need to hit pause on Copilot to be safe. You need to be Copilot-ready—with a clear understanding of what data is exposed, why it’s exposed, and how to remediate it fast at scale. That’s exactly where Zscaler’s new Copilot Readiness Wizard adds value. But more on that later.

Ready for Copilot Readiness?

When it comes to Microsoft Copilot “readiness”, most discussions focus on licensing, user eligibility, and adoption. These are important—but not where the true success of a deployment lies. True Copilot readiness means answering questions like the following, which probe your data risk:

Which sensitive files in M365 are dangerously overshared?

Which items are missing sensitivity labels (or have the wrong ones)?

How much exposure is driven by anonymous links, org-wide links, or broad collaborator access?

Can we fix the issues across our tenant without weeks of manual effort?

Can we reduce risk without slowing users down or creating an admin bottleneck?

As you can see, these questions force you to evaluate how overshared your data is (in the spirit of collaboration). A good readiness plan needs to ensure your data security approach can ace the test on the questions above.

Data Risk: Brought to You by Collaboration

The main challenge with collaboration is that data security often takes a back seat to the practices that drive productivity. 
So what collaboration approaches cause the most risk?

“Everyone in the company” permissions to “keep things simple”

Org-wide links used as a shortcut

External sharing that persists long after a project ends

SharePoint sites that evolve into de facto data lakes

But let’s be clear: Copilot doesn’t break security; it just makes the consequences of oversharing immediate. Put simply, Copilot helps everyone discover data quickly using semantic search. The challenge becomes what Copilot can surface in response to user prompts. Without the ability to clean up the issues above, Copilot can share sensitive data in responses when it isn’t appropriate, like company-wide salary information, acquisition plans, or customer-level PII. This type of data should be kept within a small, trusted circle, not repeated in responses to underprivileged users.

Where Microsoft Purview Fits In

Microsoft Purview provides important building blocks for governing information access and classification in Microsoft 365. It’s also true that Copilot respects sensitivity labels and permissions. 
In other words, if a document is properly labeled and protected, Copilot will follow those rules. The challenge is getting to “properly labeled and protected” across the dynamic reality of a real-world M365 deployment:

Users often overshare in the spirit of productivity and collaboration.

Labels are often applied inconsistently when done manually.

Auto-labeling capabilities are only available with E5 licensing.

Rinse and repeat all of the above thousands of times a day as new data arrives.

Many teams therefore need a faster, more actionable path to reduce overexposure beyond what Purview can offer, especially as Copilot adoption accelerates.

Enter the Zscaler Copilot Readiness Wizard

The Zscaler Copilot Readiness Wizard is built to help security and IT teams quickly understand whether Copilot could surface sensitive information—and to reduce that risk with targeted, scalable remediation. It focuses on the practical realities of Copilot exposure:

Sensitive data living in widely accessible locations

Sharing links that got created and forgotten

Large collaborator sets that ballooned over time

Inconsistent labeling (or no labeling) across high-risk content

Most importantly, it’s designed to help you move from “insight” to “action” quickly—because the window between Copilot enablement and exposure discovery is often uncomfortably short.

Putting Copilot Readiness on Steroids

Here’s how the Zscaler Copilot Readiness Wizard takes traditional Purview approaches to the next level, helping you control oversharing faster and smarter.

Get Actionable Exposure Visibility

Instead of simply “you have exposure,” you want to know how exposure happens. You can see:

Public/anonymous links

Internal/org-wide links

Overly broad collaborator access (and how broad)

This granularity matters, because it changes the remediation strategy. 
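As an illustration of how such link-type buckets can be computed, here is a hypothetical helper that classifies permission entries shaped like Microsoft Graph driveItem permissions (the `link.scope` values "anonymous" and "organization" come from the Graph API; the function itself is illustrative and not part of any Zscaler or Microsoft product).

```python
def classify_exposure(permissions):
    """Bucket a file's permission entries (shaped like Microsoft Graph
    driveItem permissions) into exposure tiers. Illustrative only."""
    buckets = {"anonymous_link": 0, "org_wide_link": 0, "direct_grants": 0}
    for perm in permissions:
        link = perm.get("link")
        if link and link.get("scope") == "anonymous":
            buckets["anonymous_link"] += 1       # public: anyone with the URL
        elif link and link.get("scope") == "organization":
            buckets["org_wide_link"] += 1        # internal: everyone in the tenant
        else:
            # direct grants: count the identities the permission names
            buckets["direct_grants"] += len(perm.get("grantedToIdentities", [perm]))
    return buckets

sample = [
    {"link": {"scope": "anonymous", "type": "view"}},
    {"link": {"scope": "organization", "type": "edit"}},
    {"grantedToIdentities": [{"user": {"id": "u1"}}, {"user": {"id": "u2"}}]},
]
print(classify_exposure(sample))  # prints {'anonymous_link': 1, 'org_wide_link': 1, 'direct_grants': 2}
```

A file with even one anonymous link calls for link removal first; a file with thousands of direct grants calls for access review instead.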
A public link problem is different from a “1000+ collaborators” problem.

Understand Richer Context

Richer context for what’s overexposed provides valuable insights so security teams can prioritize what matters:

Where sensitive info is overexposed

Which content contains privacy identifiers

Where risk is concentrated, so you can reduce it quickly

Deliver File-Level Remediation

With file-level remediation, you get precise control over a small subset of high-value files. If remediation is only practical at the SharePoint site level, you can end up overcorrecting and disrupting business collaboration. File-level action lets you be precise: fix the risky files without breaking the entire site’s workflows.

Comparing Zscaler to Native Copilot Controls

So how does Zscaler's Copilot Readiness Wizard stack up against M365 native capabilities? The table below spells it out. It’s important to note that Microsoft's auto-labeling functionality requires E5 licensing, whereas Zscaler’s approach can deliver this key value-add functionality with only an E3 license.

Capability area | Microsoft Purview Copilot readiness | Zscaler Copilot Readiness Wizard
Auto-labeling | Requires an E5 license; with E3, manual, error-prone labeling is required | Enabled with an E3 license; bulk actions across assets; apply MIP labels as part of remediation
Remediation actions (examples) | Apply labels; restrict access to SharePoint sites | Apply MIP labels; remove sharing links/collaborators; quarantine; report incident
Exposure visibility | Limited scope of visibility | In-depth insights across collaboration exposure: public links, internal links, and collaboration sharing tiers (0-100, 100-1000, 1000+)
Detection context | Focus on exposure and label-related views | Adds prioritization views (e.g., overexposed sensitive info; overexposed items matching DLP dictionaries)
Reporting horizon | Often limited to short windows (e.g., 1 week in some views) | Longer lookback to spot patterns and regressions
Dashboarding | Activity and assessment views within Purview experiences | Clear separation of readiness posture vs. activity views

Bringing It All Together

Copilot can be transformational—but only if your data permissions and protections are ready for a world where anyone can ask, “Show me everything about X.” The Zscaler Copilot Readiness Wizard helps you quickly assess where Copilot could unintentionally surface sensitive information and gives you practical, file-level remediation paths to reduce risk without slowing the business down.

If you're ready to learn more about Zscaler, jump on over to our solution website, or schedule a demo to chat with us!]]></description>
            <dc:creator>Steve Grossenbacher (Senior Director, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Communicating Security Notifications to Users with Zscaler Client Connector EUN Notifications]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/communicating-security-notifications-users-zscaler-client-connector-eun</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/communicating-security-notifications-users-zscaler-client-connector-eun</guid>
            <pubDate>Tue, 10 Feb 2026 17:43:44 GMT</pubDate>
            <description><![CDATA[In the networking world, there is a widely known adage:&nbsp;"It's always the network". This phrase refers to the tendency of users to blame network connectivity whenever access to a resource fails, even if the true reason lies elsewhere—such as being blocked by a corporate security policy.The Need for Better User CommunicationWhen end-users receive no clear notification of why access to an application or network has been denied or other action taken, it is natural for them to assume the failure stems from a "networking issue." Left in the dark, users often retry accessing the resource, wasting valuable time and, eventually, filing help desk tickets.This pattern creates multiple challenges:Increased workload for IT support teams, draining resources that could be allocated elsewhere.Frustration across the business, as employees feel hindered by network inefficiencies.Potential security risks, as users may attempt to bypass corporate security restrictions by leveraging unsanctioned third-party solutions.In most instances, employees adopting workarounds are driven by necessity, not malice—they simply want to complete tasks without engaging with technical barriers they don’t fully understand.The solution? Providing clear, timely&nbsp;end-user notifications (EUNs) that inform users when access to a specific resource is blocked, along with the reason for the restriction.&nbsp; &nbsp;Such transparency not only reduces the volume of unnecessary tickets but also cultivates better-informed, security-aware employees. Over time, this strengthens the organization’s overall security posture.A Unique Challenge: Non-Web Traffic EUNsFor web traffic, user notifications are relatively straightforward: organizations can display a web-based&nbsp;End-User Notification (EUN) page explaining the block. 
This page might include customized corporate branding, a message specific to the policy violation, and instructions for contacting IT support if needed.But not all traffic is web-based. What happens, for example, when a user tries to access a resource via&nbsp;SSH in a public cloud, only to have the attempt blocked by a security policy? Since there’s no browser-based interaction, traditional EUN pages can’t be displayed in such cases. This can leave users confused, wasting time trying to troubleshoot what they perceive as “networking” or application-related issues.Enter Zscaler Client Connector EUN NotificationsThis is where&nbsp;Zscaler Client Connector EUN Notifications step in to fill the gap. Starting with&nbsp;Zscaler Client Connector version 4.8 (used in conjunction with&nbsp;Z-Tunnel 2.0), notifications can now be surfaced directly to the user for&nbsp;ZIA policies, clearly explaining that access to a site or resource has been blocked by a corporate security policy.Expanded Policy SupportPreviously, ZCC-based notifications were available for policies such as&nbsp;Inline Web Data Loss Prevention (DLP),&nbsp;Endpoint DLP, and&nbsp;Cloud App Control. 
Recently, Zscaler has enhanced these capabilities to include:

Firewall Filtering

DNS Control

Intrusion Prevention System (IPS) Control

This expanded support is particularly valuable for non-web traffic, where no web-based EUN page can be presented.

Key Use Cases for EUN Notifications

Here are some common scenarios in which Zscaler Client Connector EUN Notifications offer clarity:

DNS Control actions:

When a DNS request is blocked due to a classification (e.g., a domain falls under a restricted category).

When DNS Control redirects a request (e.g., an A-record response redirected to a specified IP) but no subsequent web flow occurs, leaving the user without context for the block.

Firewall or IPS Control actions:

When attempts to use protocols such as SSH are blocked.

When an IPS signature match triggers a block, leaving users wondering why their application or connection isn't functioning as expected.

EUN notifications eliminate this ambiguity by clearly communicating the reason behind the restriction, for example by communicating:

Block actions on non-web traffic to the user.

Warnings to the user when they visit a suspicious domain or use a protocol or application that is not banned but potentially dangerous.

Remediation steps to the user (e.g., opening a ticket or not running an application).]]></description>
            <dc:creator>Siddhartha Aggarwal (Staff Technical Product Specialist - Firewall)</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Guide to OpenClaw and Securing It with Zscaler]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/guide-openclaw-and-securing-it-zscaler</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/guide-openclaw-and-securing-it-zscaler</guid>
            <pubDate>Mon, 09 Feb 2026 22:23:42 GMT</pubDate>
            <description><![CDATA[What Is OpenClaw?

OpenClaw is an application designed as a persistent, long-running Node.js service that functions as a sophisticated AI agent. It bridges the gap between the LLM and the operating system, granting the agent the capability to manipulate files, execute shell commands, and interact with third-party services via the Model Context Protocol (MCP) or APIs. It was previously known as ClawdBot and later as MoltBot; all three names refer to the same application.

Why It Matters

In the past, agents have been specialized to one task or a group of similar tasks. OpenClaw lays the foundation for a generalized application that can address multiple use cases while improving on the basic principles of AI agents with memory management and skills deployment. This capability, while transformative, introduces a profound security paradox: the utility of the agent is directly proportional to its level of access. That very access creates an unprecedented attack surface within the host and the environment in which it is deployed.

Why Organizations Should Care

It is incredibly easy for users to download a malicious skill/library for OpenClaw. In fact, within days there were hundreds of malicious skills that users could download with a click of a button. A great example is one-click RCE, where:

“A victim would simply need to visit an attacker-controlled website that leaks the authentication token from the Gateway Control UI, which is enabled by default, via a WebSocket channel. Then an arbitrary command will run, even if the victim is hosting locally.”

The fact that no administrative rights are needed to install OpenClaw locally significantly increases the risk of users downloading and running malicious content/skills, of attackers using a compromised OpenClaw device to move laterally, and of sensitive data (captured via integrations) being uploaded, since it can bypass typical security controls. 
This is made even worse by the fact that the application or service is not easy to identify, nor does it carry an identity tied to OpenClaw.

This guide is for IT/security admins who need to protect their environments from users installing or running OpenClaw, or bringing rogue devices with OpenClaw into the network. This poses a significant risk to the enterprise network and should not be allowed. There are mitigating controls that users of OpenClaw can deploy, but these are often left to the user, who might not fully understand them and might not care to implement them. Those controls are not covered here.

How Does OpenClaw Work?

OpenClaw is a gateway-centric system designed to facilitate an agentic loop (such as ReAct)—a continuous cycle of perception, reasoning, and action. This puts the LLM between the users and the data (for integrations/tools), allowing the LLM to provide reasoning. The architecture is divided into three primary functional domains: the Gateway, the front end (node), and the integration layer. OpenClaw uses standard HTTPS for all outbound connections/integrations.

The Gateway

The Gateway serves as the centralized control plane, managing sessions, maintaining persistent memory, and routing communications between the user and the agent across various messaging platforms such as WhatsApp, Telegram, Slack, and Discord. Here are the default ports used by OpenClaw internally on the system:

Gateway Daemon | 18789 | WebSocket | Central control plane; requires token-based authentication (but can be bypassed with a simple config change)
Browser Control | 18791 | CDP | Used for headless Chrome automation; risk of web-based exfiltration
External APIs | 443 | HTTPS | Outbound traffic to LLM providers and messaging servers

Node Layer

The node layer is used to access resources on the system and beyond—such as local file system access, camera access, screen recording, and location services—and provide them to the Gateway. 
These are a collection of node libraries running on the endpoint as part of the Node.js process.

The Integration Layer

This layer manages “skills”—modular packages of code, metadata, and natural-language instructions that define what the agent can do. It leverages the Model Context Protocol (MCP) to interface with external services (such as GitHub, Google Workspace, or Notion) using a standardized schema, ensuring the agent always uses the correct API parameters without requiring hardcoded custom integrations for every task.

- LLM APIs: port 443, HTTPS. Outbound API calls to LLM providers and messaging servers. Note that these endpoints are typically different from the web AI endpoints used by browsers.
- External APIs: port 443, HTTPS. Outbound traffic to essentially anything hosted on the internet, whether via API or via a browser.
- External MCP server: port 443, HTTPS. Outbound traffic to MCP tools; these tools can also be hosted locally and converted to external API calls.

Security Takeaways on Architecture

The key takeaway is that OpenClaw inherits the user-agent string from the Chrome browser. There is no hardcoded, unique “OpenClaw” user-agent string used globally for all outgoing traffic, which makes it difficult to differentiate OpenClaw applications from standard user browser traffic. Since all its integrations rely on outbound HTTPS connections, which are typically allowed on user devices and network firewalls, uniquely identifying it at the transport layer is challenging. 
Furthermore, the fact that the service runs locally on the device makes it difficult to detect at the network layer outside of the device itself. In addition, OpenClaw has extensive integrations, giving it access to a wealth of data out of the box, which can then be extended by adding “skills.” Couple this with local system access and the ability to install it without admin rights, and OpenClaw becomes a significant risk vector.

How Can Zscaler Help?

Note: This is not a step-by-step configuration guide. It provides guidance on which controls should be strongly considered to detect and restrict OpenClaw within an environment. Please use the standard change management process within your environment to roll out any changes.

There are two main ways of deploying OpenClaw:

- Cloud-based/centrally hosted LLM (the most likely scenario)
- LLM deployed locally (typically requires a computer with an NPU/GPU and more than 32 GB of memory)

OpenClaw can be installed locally on the device, in a container, or on an IaaS/PaaS platform. For this document, we will treat container-based and locally installed methods the same.

Note that not all of these controls need to be implemented; this list provides a defense-in-depth strategy that allows an organization to prevent unauthorized use from both managed and BYOD devices. A simple URL block would prevent the download, but pairing it with TLS inspection provides significantly more visibility and control. Controls such as file-type filtering, sandboxing, and DLP will enhance this protection. In addition, implementing tenancy control would allow access to enterprise GitHub while blocking other GitHub instances that could be hosting OpenClaw. 
Thus, it is generally recommended to implement layered controls.

A note on TLS inspection: Keep in mind that Node.js by default does not use the OS credential/certificate store; if TLS inspection is enabled, the user will therefore get a certificate error when talking to external tools, LLMs, and communication channels. The node libraries will have to trust the Zscaler root certificates to talk externally, which effectively forces TLS inspection.

1. Prevent the download of OpenClaw: Using URL filtering and/or file type control, Zscaler can prevent unauthorized downloads of OpenClaw on endpoints. OpenClaw install files are typically .ps1, .sh, or Docker files, and these file types should be blocked.

- Block URLs: https://openclaw.ai/ and https://github.com/openclaw/openclaw (URL Filtering)
- File types: block file types ps1, sh, and Docker (yaml/yml) (File Type Control)
- Detecting existing installs: existing installs of OpenClaw can be detected using Zscaler Endpoint DLP, EDR, or MDM. See the respective sections below for details.

2. Prevent the download of additional playbooks and zero-day malware. OpenClaw uses markdown for its skills files. Custom file type control can be used to detect markdown files and block downloads. 
Furthermore, Zscaler CASB can be used to isolate, restrict, or block access to GitHub repositories to prevent users from duplicating repos and bypassing security by using custom repositories.

- Block URLs: https://openclaw.ai/ and https://github.com/openclaw/openclaw
- TLS inspection: enable a TLS inspection policy as broadly as possible, and at a minimum across allowed LLMs and sanctioned apps with which OpenClaw integrates (OpenClaw Integrations)
- Sandbox policy: any executable or archive should get the Quarantine First-time Action (Zscaler Sandbox)
- File type control: block file types JSON, ps1, sh, Docker (yaml/yml), and Markdown, plus unscannable and password-protected files (Zscaler File Type Controls, Zscaler Custom File Type Controls)
- Cloud app control: restrict access to GitHub to align with user role (Zscaler Cloud App Control)
- Tenancy restrictions for GitHub: certain users, such as developers, might still need access to the enterprise GitHub repo; Zscaler Tenant Profiles, in combination with cloud app controls, can be used to provide granular access (Zscaler Tenant Profile)

3. Prevent callbacks and connections to known malicious and zero-day malware. Malicious OpenClaw skill files will often call back to C&C servers; they can also use evasive techniques such as SSH tunnels or DoH tunnels. 
Zscaler can prevent these callbacks, along with the executables/scripts that would trigger them.

- Advanced Threat Protection policy: enable Botnet Protection, Malicious Active Content Protection, Fraud Protection, and Unauthorized Communication Protection; block BitTorrent and P2P file sharing (ATP policy)
- Sandbox policy: any executable or archive should get the Quarantine First-time Action (Zscaler Sandbox)
- DNS DGA: enable DGA detection under the ATP policy (ATP policy)
- DNS tunnels: block DoH tunnels and unknown DNS tunnels (ATP policy, DNS Control)
- SSH tunnels: Unauthorized Communication Protection (ATP policy)

4. Protect against sensitive data leakage. Depending on the deployment, OpenClaw will have to use the network for tool/skill access and/or for LLM access. Zscaler can perform data protection on these sessions, provided they are inspected. Keep in mind that Node.js by default does not use the OS certificate store; if TLS inspection is enabled, the user will get a certificate error when talking to external tools, LLMs, and communication channels. 
The node libraries will therefore have to trust the Zscaler root certificates to talk externally, which effectively forces TLS inspection.

- TLS inspection policy: enable SSL inspection across allowed LLMs and sanctioned apps with which OpenClaw integrates (OpenClaw Integrations)
- DLP: enable DLP inspection on HTTP POSTs; existing policies should be extended to GenAI, LLM, and other unsanctioned apps (Zscaler Data Protection)

Use DLP for detection: Zscaler provides a way to detect the presence of Node and OpenClaw files using Endpoint DLP, identifying OpenClaw artifacts and restricting data movement (Endpoint DLP). For example, by default a directory structure is created under ~/.openclaw with the following files. Zscaler EDLP can detect these files and raise an alert if they exist on an endpoint; scanning for file names under openclaw/workspace would point to existing installs.

.
├── agents
│   └── main
│       ├── agent
│       │   └── auth-profiles.json
│       └── sessions
│           └── sessions.json
├── canvas
│   └── index.html
├── credentials
│   ├── discord-allowFrom.json
│   ├── discord-pairing.json
│   └── whatsapp
│       └── default
│           └── creds.json
├── cron
│   ├── jobs.json
│   └── jobs.json.bak
├── devices
│   ├── paired.json
│   └── pending.json
├── exec-approvals.json
├── identity
│   ├── device-auth.json
│   └── device.json
├── memory
│   └── main.sqlite
├── openclaw.json
├── update-check.json
└── workspace
    ├── AGENTS.md
    ├── BOOTSTRAP.md
    ├── first
    ├── HEARTBEAT.md
    ├── IDENTITY.md
    ├── SOUL.md
    ├── TOOLS.md
    └── USER.md

5. Prevent unauthorized LLM calls. 
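Returning briefly to detection: the ~/.openclaw layout listed in section 4 lends itself to a quick filesystem check. This is a sketch only (the artifact names come from the default layout above); production detection should rely on Endpoint DLP or EDR rather than an ad hoc script:

```python
from pathlib import Path

# High-signal artifacts from the default ~/.openclaw layout (see section 4).
ARTIFACTS = [
    "openclaw.json",
    "exec-approvals.json",
    "memory/main.sqlite",
    "workspace/SOUL.md",
    "workspace/AGENTS.md",
]

def find_openclaw_artifacts(home=None):
    """Return any known OpenClaw artifact paths that exist under ~/.openclaw."""
    base = (home or Path.home()) / ".openclaw"
    return [base / rel for rel in ARTIFACTS if (base / rel).exists()]

if __name__ == "__main__":
    hits = find_openclaw_artifacts()
    if hits:
        print("Possible OpenClaw install detected:")
        for path in hits:
            print(f"  {path}")
```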
The most common deployment I anticipate is the use of public LLMs, in which case OpenClaw will be making outbound API calls to the LLM. Controls should be placed around this so that only sanctioned AIs are allowed from an organization's network, and the sanctioned AI will provide visibility and guardrails.

- Block all LLM usage directly: block all LLMs via URL/cloud app control and only allow Zscaler AI Guard from the enterprise network (Zscaler Cloud App Control): https://api.zseclipse.net, https://proxy.zseclipse.net
- Use AI Guard as the authorized AI platform: deploy AI Guardrails to monitor and restrict prompt usage (Zscaler AI Guard Rails)

6. Prevent rogue devices from running OpenClaw and/or moving laterally. In open networks such as college campuses or research institutions, users can plug in rogue devices that have OpenClaw running. If these devices are compromised or used maliciously, they can serve as an entry point into the enterprise network. A common example is plugging a Mac mini into an open port. This is where Zscaler can help control and direct communications from these devices by effectively isolating them.

- Isolate devices: ensure new devices on the network are onboarded as an “island of one.” This can be achieved easily with Zero Trust Branch.
- Control BYOD policy to prevent north/south communication: tunnel traffic to ZIA from BYOD/rogue devices, and apply ATP, DNS, and URL inspection policy (in the absence of TLS inspection). This can be achieved with Zero Trust Branch.

7. Restrict BYOD from accessing enterprise data directly: Another use case to cover is contractors and/or BYOD devices accessing SaaS applications such as Workday or Salesforce. Contractors or BYOD devices with OpenClaw can download skills that use the Chrome DevTools Protocol to scrape data from your SaaS services. 
This is where Zscaler can help prevent data loss at mass scale with Zscaler Zero Trust Browser.

- Conditional access policy: implement a conditional access policy that blocks direct access to SaaS applications and only allows access via your Zscaler tenant (Zscaler Zero Trust Browser)
- Use Zscaler Zero Trust Browser to provide a sandboxed, isolated app access environment, preventing data from landing on the endpoints (Zscaler Zero Trust Browser + Zscaler SquareX)

Endpoint Controls to Consider

As OpenClaw runs locally on an endpoint, the Gateway and node layers have components/services running locally on the endpoint. EDRs have visibility into and control over these, so EDR should be paired with Zero Trust principles to gain full visibility and control over managed devices.

- Package/config file inspection with EDR: inventory NPM global installations and identify OpenClaw binaries and config files in common paths.
- Installer logic: rules can be set to block common one-line "curl-to-bash" installation patterns.
- Process monitoring and escalation detection: detect Node processes running on the endpoint, especially those with high-privilege access.
- Detecting locally hosted services: OpenClaw’s front end can be deployed as local-only or as a remote service. In either scenario, all inbound access to endpoints should be blocked, especially on the ports called out in the Gateway section.
- MDMs can also be used to detect the presence of OpenClaw on managed devices.

Summary

OpenClaw feels like a new frontier in agentic AI. It is poised to change how we view and use AI agents today, and potentially to lay the groundwork for what agentic AI applications could look like going forward. However, at this point, OpenClaw introduces significant security and privacy risks for an organization. 
Zscaler can help accelerate enterprise, government, and education institutions' secure adoption of GenAI while ensuring malicious tools or risky applications are not introduced, preventing data loss, and preventing device compromise within the organization's environment.]]></description>
            <dc:creator>Hersh Patel (Zscaler)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Transforming Threat Detection: How Partnerships in Deception Technology Are Shaping the Future]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/transforming-threat-detection-how-partnerships-deception-technology-are</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/transforming-threat-detection-how-partnerships-deception-technology-are</guid>
            <pubDate>Mon, 09 Feb 2026 16:04:51 GMT</pubDate>
            <description><![CDATA[Security Operations Centers (SOCs) are drowning in alerts. The constant flood of data from disparate tools creates a significant challenge: distinguishing real threats from false positives. In this environment, a reactive security posture is not just inefficient; it’s dangerous.

A truly proactive strategy requires two things: unambiguous, high-fidelity threat signals and the automated ability to act on them instantly. This is where the combination of deception technology and a connected security ecosystem shines. Zscaler Deception provides the undeniable proof of an active threat, and through our deep third-party integrations, we empower organizations to turn that critical intelligence into immediate, decisive action. This blog explores how that powerful synergy transforms your security stack from a collection of siloed tools into a cohesive, self-defending ecosystem.

High-Fidelity Intelligence

Zscaler Deception fundamentally changes the defensive game. By creating a digital minefield of convincing decoys and lures across endpoints, cloud workloads, Active Directory, and GenAI infrastructure, it turns the tables on attackers. Instead of searching for weaknesses, defenders create an environment where any unauthorized interaction is, by definition, malicious.

When an attacker engages with a decoy, Zscaler Deception generates a high-fidelity alert. Because legitimate users have no reason to interact with these assets, the alerts produced are virtually free of false positives. 
This provides security teams with three critical advantages:

- Early Detection: catching attackers at the earliest stages of the kill chain, often before they can access critical data.
- Rich Intelligence: gathering detailed TTPs (Tactics, Techniques, and Procedures) and IOCs directly from the attacker’s actions.
- Unquestionable Confidence: providing an unambiguous signal that an active threat is present in the environment.

From Intelligence to Automated Action

But what happens next? A high-fidelity alert is only the starting point. Its true power is realized only when it triggers an immediate, decisive response. The time between detection and containment is where breaches escalate, and manual intervention is often too slow.

The key to closing this loop and drastically reducing Mean Time to Respond (MTTR) lies in automation. This is where Zscaler Deception’s built-in orchestration and third-party integrations become transformative. By connecting its high-confidence signals directly to the other security tools in your stack, deception becomes the trigger for an automated, continuous response. The value is no longer just about finding the threat; it's about neutralizing it instantly.

Endpoint Detection and Response (EDR)

By integrating with an EDR partner such as CrowdStrike Falcon or Microsoft Defender, Zscaler Deception can automatically share threat intelligence, such as indicators of compromise (IOCs) and attack context, with the partner platform. This enables immediate automated actions, including quarantining compromised endpoints. Containing the threat actors this way prevents lateral movement and potential escalation, allowing security teams to swiftly investigate and remediate the incident. 
Additionally, both platforms exchange threat intelligence and enrich detection and response workflows, ensuring the broader security stack remains up to date with the most relevant IOCs and attack patterns. This integration delivers a proactive defense layer, allowing joint customers to contain threats earlier in the kill chain and automate robust incident response actions across their environments.

Use Case: A prominent financial institution using Zscaler Deception identified an attacker on a compromised endpoint. Through its direct integration with CrowdStrike, the system automatically quarantined the device, instantly isolating the threat and stopping the attack in its tracks.

SIEM and SOAR Platforms

Zscaler Deception enriches Security Information and Event Management (SIEM) platforms like Splunk, Sumo Logic, and IBM QRadar with context-rich, high-priority alerts. This allows security teams to correlate threat intelligence and visualize the attack lifecycle. But the real power is unlocked when these signals trigger a Security Orchestration, Automation, and Response (SOAR) playbook. The deception alert can initiate an automated workflow that orchestrates actions across multiple security tools—from threat hunting to triggering broader network policy changes—dramatically accelerating the entire incident response process.

Use Case: A global travel management firm detected active attackers probing its Active Directory endpoints when they hit a Zscaler Deception decoy. The detection was sent to the firm's SIEM, which raised a high-risk event that brought human attention for analysis. This pre-emptive alert allowed the firm not only to determine the containment strategy for the attack but also to create runbooks for similar future incidents.

Perimeter Firewalls

Containing a threat often means blocking the attacker's command and control (C2) infrastructure. 
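That handoff can be sketched as a small translation step. This is illustrative only: the alert fields and the rule shape below are hypothetical, and a real integration would use the specific firewall vendor's API:

```python
def decoy_alert_to_block_rule(alert):
    """Translate a decoy-engagement alert into a deny rule for a firewall API.

    `alert` uses a hypothetical shape: {"src_ip": ..., "decoy": ..., "timestamp": ...}.
    """
    return {
        "action": "deny",
        "direction": "both",  # cut off C2 traffic both inbound and outbound
        "source": alert["src_ip"],
        "comment": f"Auto-block: engaged decoy {alert.get('decoy', 'unknown')}",
    }

# Example: one high-fidelity alert becomes one firewall rule, no analyst in the loop.
alert = {"src_ip": "203.0.113.45", "decoy": "ad-decoy-07", "timestamp": "2026-02-09T16:00:00Z"}
rule = decoy_alert_to_block_rule(alert)
```

Because decoy engagement is malicious by definition, this rule can be pushed automatically without the false-positive review a normal alert would require.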
By integrating with next-generation firewalls, Zscaler Deception can automatically share the source IP of an attacker engaging with a decoy. The firewall can then immediately update its rules to block that malicious IP, effectively cutting off the attacker's access to the network before they can exfiltrate data or receive further instructions.

Use Case: A global travel management firm detected active attackers probing its network with Zscaler Deception. By leveraging our integration with the organization’s firewall, over 250 distinct attacker IPs were automatically blocked, instantly neutralizing the threats before they could impact critical systems.

Building a Self-Defending Ecosystem

The old paradigm of security—where defenders reactively chase alerts—is no longer sustainable. A proactive strategy with deception provides the early warning system, but its true potential is unlocked through automation.

By integrating Zscaler Deception with your existing EDR, SIEM, SOAR, and firewall solutions, you create a continuous response cycle. High-fidelity detections reliably trigger automated investigation, containment, and eradication actions. This approach not only shrinks attacker dwell time and drastically reduces MTTR, but also frees up your security team to focus on strategic initiatives rather than chasing ghosts. It’s time to move beyond simple detection and build a truly actionable, automated defense leveraging Zscaler’s rich technology partner ecosystem.

Request a demo to learn more about how Zscaler Deception can help close the detection and response loop with third-party integrations.]]></description>
            <dc:creator>Jaideep Chanda (Technology Partner Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Organizations Can Make a Successful Transition to Post-Quantum Cryptography (PQC)]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/organizations-make-successful-transition-post-quantum-cryptography-pqc</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/organizations-make-successful-transition-post-quantum-cryptography-pqc</guid>
            <pubDate>Thu, 05 Feb 2026 18:14:07 GMT</pubDate>
            <description><![CDATA[The Quantum Era is fast approaching—and the threat is no longer a distant concern: quantum computers will change our digital world because algorithms like Shor's break the public-key cryptography that currently underpins digital security.

The most immediate danger isn't that a quantum computer will appear overnight. It's the "Harvest Now, Decrypt Later" (HNDL) attacks that are likely already happening. Malicious actors are siphoning off encrypted data today: they can store it and wait for the day a quantum computer can unlock its secrets. For data with a long shelf life—trade secrets, government intelligence, healthcare records, financial data—the vulnerability is present now.

The good news is that the path forward has become clearer. Now that standards bodies like the National Institute of Standards and Technology (NIST) have finalized their initial standards for Post-Quantum Cryptography (PQC), the time to plan, inventory, and act is now.

So what steps should your organization take for a successful transition? Here is a practical, four-step guide to building your quantum-resistant future.

1. Plan and Adopt a Quantum-Safe Strategy

A successful migration doesn't happen by accident: it requires a deliberate, top-down strategy. Without a plan, efforts will be fragmented, incomplete, and ultimately ineffective.

Use a hybrid cryptography approach

A "rip and replace" strategy is too risky. A hybrid approach combines a classic, proven algorithm (like ECDH) with a new PQC algorithm like ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism, finalized by NIST in FIPS 203). ML-KEM is a leading PQC algorithm designed to secure digital communications against future attacks by quantum computers. A session key is generated using both the classical and PQC algorithms, meaning an attacker would need to break both to compromise the connection. 
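The combination step can be made concrete: both shared secrets are fed into a key derivation function, so the session key holds as long as either input secret does. A sketch of the derivation step only, using a minimal HKDF; the two input byte strings are placeholders standing in for the outputs of the actual ECDH and ML-KEM exchanges:

```python
import hashlib
import hmac

def hkdf_extract_expand(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with SHA-256: extract, then one expand round.

    A single expand round is valid only for length <= 32 (one SHA-256 output).
    """
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()          # extract
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # expand, T(1)
    return okm[:length]

def derive_hybrid_session_key(classical_secret: bytes, pqc_secret: bytes) -> bytes:
    """Concatenate both shared secrets; an attacker must recover both to get the key."""
    ikm = classical_secret + pqc_secret
    return hkdf_extract_expand(salt=b"hybrid-kex", ikm=ikm, info=b"session key")
```

Real protocols (e.g., hybrid TLS 1.3 key shares) define the exact concatenation and transcript binding; the point here is only that compromising one input secret does not reveal the derived key.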
This provides a safety net, ensuring security against both classical attackers today and quantum attackers tomorrow, while also hedging against any unforeseen weaknesses in the first generation of PQC algorithms.

Organizations should adopt NIST-recommended PQC algorithms

Relying on standardized, peer-reviewed algorithms is non-negotiable. Organizations like NIST, ISO, and ETSI have subjected these algorithms to years of intense global scrutiny. Adopting them ensures you are implementing the most secure, vetted options available and guarantees interoperability with the broader ecosystem of vendors, partners, and customers who are also making the transition.

Update your internal security and acquisition standards

Strategy must be codified into policy. By explicitly requiring PQC in your organization’s cybersecurity, data security, and vendor procurement standards, you create a powerful forcing function. This ensures that all new software, hardware, and cloud services are evaluated for quantum readiness from day one, preventing the continued growth of your cryptographic debt.

Assign clear ownership

Without accountability, even the best plans fail. The PQC transition is a complex, cross-functional initiative that will touch nearly every part of the business—from IT and security to application development, legal, and supply chain management. Designating a specific leader or a dedicated team creates a center of gravity for the project, ensuring coordination, driving progress, and providing a single point of contact for executive leadership.

2. Inventory Your Cryptography-Dependent Assets

You cannot protect what you don't know you have. This discovery phase is the foundation of your entire migration effort.

Inventory all cryptographic algorithms, keys, certificates, and protocols

This is the most critical first step. 
Your organization uses cryptography in thousands of places you might not expect: web servers (TLS), VPNs, SSH connections, code signing, secure boot processes, IoT devices, and internal applications. A comprehensive inventory—often called a Cryptography Bill of Materials (CBOM)—is the only way to understand the true scale of your quantum vulnerability.

Prioritize IT assets vital to business operations

You can't fix everything at once; a risk-based approach is essential. Start by identifying your "crown jewels"—the systems that, if compromised, would cause the most damage to your business. This includes systems managing financial transactions, sensitive intellectual property, customer PII, and critical operational controls. Focusing on these high-value assets first ensures you are mitigating the most significant risks immediately.

Catalog critical data at risk from HNDL attacks

This action is directly tied to mitigating the "Harvest Now, Decrypt Later" threat. You must identify data based on its required confidentiality lifespan. Does this data need to remain secret for more than 5-10 years? If so, it is a prime target for HNDL. Any data encrypted today with classical algorithms—like M&A documents, long-term strategic plans, or patient health records—must be prioritized for re-encryption or protection using PQC.

Identify where public-key cryptography is being used and mark these systems as quantum-vulnerable

This translates your inventory into an actionable roadmap. By pinpointing every instance of vulnerable algorithms like RSA, Diffie-Hellman, and ECDSA, you create a concrete list of systems, applications, and processes that need remediation. This moves the problem from an abstract concept ("we need to be quantum-safe") to a tangible project plan ("we need to update these 50 VPN gateways and these 200 web servers").

3. Implement PQC Key Exchange

The secure handshake that begins every encrypted session is a primary target for quantum attacks.

Replace or complement current key exchange mechanisms with PQC algorithms

The key exchange (e.g., RSA, ECDH) is how two parties establish a shared secret over an untrusted network. Shor's algorithm is specifically designed to break these mechanisms. By transitioning to a PQC key exchange algorithm like the NIST-standardized ML-KEM, you protect the very foundation of your secure connections. As mentioned earlier, implementing this in hybrid mode is the recommended starting point, ensuring the confidentiality of your session data against all current and future threats.

4. Implement PQC Algorithms for Authentication

Once a session is established, you need to trust the identity of the party you're talking to. That's where digital signatures come in.

Transition certificates to use PQC digital signature algorithms

Digital signatures (e.g., RSA, ECDSA) are used in certificates to prove identity and ensure integrity. A quantum computer could forge these signatures, allowing an attacker to impersonate a legitimate website, server, or software publisher. This would shatter digital trust. As PQC signature algorithms like ML-DSA (Module-Lattice-Based Digital Signature Algorithm, formally specified in the FIPS 204 standard) become widely available from certificate authorities, you must begin the process of replacing your existing certificates to protect against identity spoofing and man-in-the-middle attacks.

Engage in proxy optimization efforts

Pragmatism is key to a smooth transition. PQC algorithms often have larger key and signature sizes, which can impact performance and latency, especially for legacy clients or constrained networks. A modern, intelligent security proxy, such as the public service edge nodes of Zscaler’s Zero Trust Exchange, can act as a "crypto-translator." 
It can establish a PQC-secured connection to a modern server while presenting a classical connection to a legacy client, and vice versa. This offloads the heavy lifting, optimizes performance, and allows you to roll out quantum-safe protections without needing to update every single endpoint simultaneously.

The PQC Transition Journey Starts Today

The transition to a quantum-resistant world is a marathon, not a sprint. But it is a race that has already begun. By viewing this not as a single event but as a continuous process of strategic modernization, you can turn a monumental challenge into a competitive advantage. The organizations that start planning, inventorying, and implementing these steps today will not only defend against the threats of tomorrow but also build a more resilient and secure foundation for the future.

Learn more about preparing for the quantum future: save your spot for our webinar launch event, where our product experts will walk you through how Zscaler decrypts and inspects quantum-encrypted traffic with hybrid key exchange using ML-KEM.]]></description>
            <dc:creator>Brendon Macaraeg (Sr. Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[If You're Reachable, You're Breachable, Part 3: The Adversary's Final Move – Exploiting You]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/if-you-re-reachable-you-re-breachable-part-3-adversary-s-final-move</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/if-you-re-reachable-you-re-breachable-part-3-adversary-s-final-move</guid>
            <pubDate>Sat, 31 Jan 2026 23:34:51 GMT</pubDate>
            <description><![CDATA[In part 1 and part 2 of this series, we followed the adversary's journey. In Part 1, we saw how they use internet-wide scanners to find your exposed VPNs, firewalls, and other digital assets. In Part 2, we detailed how they classify those assets, building a detailed blueprint of your security stack, i.e., your VPNs, firewalls, and application infrastructure.

Now, we arrive at the final, inevitable conclusion of this process. The reconnaissance is over. The blueprint is complete. This phase is the "breach" in "breachable." This is the exploitation phase.

From Knowledge to Action: Weaponizing Intelligence

The adversary now has a list of your exposed services, such as VPNs and firewalls, and their exact versions. This is the ammunition. The next step is to find the weapon to fire it.

1. Finding the Exploit (The CVE Playbook)

The first stop is a public vulnerability database, like the National Vulnerability Database (NVD). The attacker takes the version number they discovered (e.g., Apache/2.4.49, VPN/Brand Name) and searches for any associated Common Vulnerabilities and Exposures (CVEs). Instantly, they have a list of known weaknesses for that specific software. Each CVE comes with a description of the vulnerability, its severity score (CVSS), and often, links to proof-of-concept (PoC) code. The attacker isn't guessing; they are following a well-documented recipe for a breach.

2. Loading the Weapon (Exploit Frameworks like Metasploit)

For common vulnerabilities, an attacker doesn't even need to write code. They turn to powerful, open-source exploit frameworks. Think of these frameworks as a digital Swiss Army knife for penetration testers and, unfortunately, for criminals. 
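Step 1 above is trivially scriptable, which is part of the problem. The sketch below only filters CVE records by CVSS score; the record shape, IDs, and scores are all fabricated for illustration and do not follow the exact NVD schema:

```python
def high_severity_cves(records, threshold=7.0):
    """Return (cve_id, score) pairs at or above the CVSS threshold, highest first.

    `records` uses a simplified, assumed shape: {"id": "CVE-...", "cvss": 9.8}.
    """
    hits = [(r["id"], r["cvss"]) for r in records if r["cvss"] >= threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

# Fabricated example records, purely illustrative.
records = [
    {"id": "CVE-2099-0001", "cvss": 9.8},
    {"id": "CVE-2099-0002", "cvss": 7.5},
    {"id": "CVE-2099-0003", "cvss": 5.3},
]
```

An attacker running this against a real feed goes from "version number" to "prioritized target list" in seconds; defenders should assume exactly this level of automation.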
These frameworks contain vast libraries of pre-built "exploit modules"—scripts that are ready to fire at a vulnerable service. The process is chillingly simple:

Search the repository or framework for the CVE number (e.g., CVE-2024-55591).
Load the corresponding exploit module.
Set the target IP address (which they already have).
Type exploit.

If successful, the framework establishes a "shell" or a "session" on your VPN or firewall server, giving the attacker direct command-line control. They are now inside your network. It can be that easy.

AI: The Autonomous Attacker Is Here

If the commoditization of exploits weren't bad enough, AI is now supercharging the entire exploitation process, enabling attacks at a scale and speed that are impossible for human defenders to counter.

AI-Driven Exploit Customization: Standard exploits are often caught by security tools such as Intrusion Detection Systems (IDS) or Web Application Firewalls (WAF). Adversaries are now using AI to generate polymorphic versions of their exploits. The AI can subtly alter the attack code for each attempt, creating an effectively unlimited number of variations that fly under the radar of signature-based defenses.

Predictive Exploitation: An AI model can analyze the complete target profile—OS, services, patch level, detected security tools—and predict the single most effective exploit chain. It might determine that a frontal assault on the web server will be blocked, but that a less common vulnerability in an adjacent VPN has a higher chance of success and will lead directly to the internal database.

Autonomous Kill Chains: The most advanced adversaries are using AI to automate the entire attack sequence. The AI finds a target, classifies its services, selects and launches the initial exploit, and then—once inside—begins moving laterally, escalating privileges, and exfiltrating data, all without direct human intervention. 
This compresses an attack that once took weeks or months into a matter of minutes.

Breaking the Chain: How to Make Yourself Un-breachable

Let's recap the adversary's playbook: Find → Classify → Exploit. Notice a pattern? Every single step depends on one fundamental prerequisite: your internal application must be visible and reachable on the public internet. If an attacker can't find you, they can't classify you. If they can't classify you, they can't exploit you.

Traditional security tried to solve this with better firewalls, WAFs, and VPNs—essentially, by building stronger doors and locks. But as we've seen, adversaries will always find a way to pick the lock or discover a window left open. The only way to win is to change the game entirely. The solution is not a stronger door; it's to remove the door from public view entirely, that is, to replace your VPNs and firewalls.

The Zscaler Difference

This is the core principle behind the Zscaler Zero Trust Exchange. Instead of exposing your applications to the internet and hoping your defenses hold, Zscaler makes your applications and internal resources completely invisible. The Zero Trust Exchange operates as an intelligent, inline switchboard that checks identity, device posture, and business policies before connecting the right user to the right application. Here's how:

No Inbound Connections: Your applications, code repositories, and other internal resources, whether in the data center or a public cloud, never accept inbound connections. They are not listening on the internet. They have no IP addresses that can be discovered or scanned by any tool. Your attack surface is not just minimized—it's eliminated.

Inside-Out Connectivity: To make services available, a lightweight Zscaler connector, sitting alongside your applications, establishes an inside-out connection to the Zscaler cloud. 
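To see why this architecture removes the listening port, here is a toy sketch of the inside-out pattern in Python. A localhost "broker" stands in for the cloud; the names and wire format are invented for illustration and this is not Zscaler's actual protocol. The only listening socket belongs to the broker; the connector only dials out:

```python
import socket
import threading
import queue

# Toy sketch of the inside-out pattern (NOT Zscaler's real protocol):
# only the broker listens; the "connector" next to the app dials OUT,
# so the app exposes no inbound port an adversary could scan.

def run_broker(port_q: queue.Queue, reply_q: queue.Queue):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port_q.put(srv.getsockname()[1])   # tell the connector where to dial
    conn, _ = srv.accept()             # accept the connector's outbound call
    conn.sendall(b"GET /status")       # relay a "user" request inward
    reply_q.put(conn.recv(1024))       # relay the app's answer back out
    conn.close()
    srv.close()

def run_connector(port_q: queue.Queue):
    port = port_q.get()
    out = socket.create_connection(("127.0.0.1", port))  # outbound only
    request = out.recv(1024)           # requests arrive over our own dial-out
    out.sendall(b"app says: ok (" + request + b")")
    out.close()

port_q, reply_q = queue.Queue(), queue.Queue()
t1 = threading.Thread(target=run_broker, args=(port_q, reply_q))
t2 = threading.Thread(target=run_connector, args=(port_q,))
t1.start(); t2.start(); t1.join(); t2.join()
reply = reply_q.get().decode()
print(reply)  # app says: ok (GET /status)
```

An adversary scanning the app's host finds nothing to connect to, because the only connection involved is the connector's outbound dial to the broker.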
The connection is outbound only, so no inbound firewall rules are ever needed.

Brokered Access: When an authorized user—authenticated and policy-checked by Zscaler—needs to access an application, the Zero Trust Exchange securely stitches the two outbound connections together. The user connects to the application through Zscaler; they never connect to the application directly. Secure, brokered connections are built on a session-by-session basis, follow the principle of least-privilege access, and are continuously assessed for changes in risk.

An adversary scanning the internet sees nothing. There is no VPN to find, no firewall port to scan, no banner to grab, and no vulnerability to exploit. Your organization is off the public map. Your existing VPNs and firewalls are not the answer: they are built on an architecture that exposes them to the internet, and hence to attackers. Your security stack needs to protect you, not expose you. You should therefore look at replacing your existing VPNs and firewalls with a solution that keeps you invisible and reduces your attack surface.

You can't be reachable, because you're not there. And if you're not reachable, you can't be breached. It's that simple.

For a summary and a visual representation, please see this video.]]></description>
            <dc:creator>Akhilesh Dhawan (Sr. Director, Product Marketing - Platform)</dc:creator>
        </item>
        <item>
            <title><![CDATA[If You're Reachable, You're Breachable, Part 2: The Adversary's Second Move – Classifying You]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/if-you-re-reachable-you-re-breachable-part-2-adversary-s-second-move</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/if-you-re-reachable-you-re-breachable-part-2-adversary-s-second-move</guid>
            <pubDate>Sat, 31 Jan 2026 21:49:53 GMT</pubDate>
            <description><![CDATA[In the first part of this three-part series, we explored how adversaries no longer need to hunt for you; they simply consult massive internet-wide scanning databases to find your exposed VPNs, firewalls, and other digital doorways. This provides them with a list of "reachable" IP addresses—the digital equivalent of a list of buildings with unlocked front doors.

But finding the door is just the beginning. Before an adversary can attempt to enter, they need to understand what they're looking at. Is it a flimsy wooden door or a reinforced steel vault? Does it lead to an empty janitor's closet or the CEO's office? This is the second, crucial phase of the attack playbook: classification. Now that they've found you, they need to figure out exactly what they've found.

From IP Address to Attack Plan: Active Reconnaissance

While the "Find" phase was largely passive, classification requires active probing. The adversary begins to interact with your exposed systems to build a detailed blueprint. They use a suite of standard, readily available tools to answer critical questions.

1. Which Doors Are Open? (Port Scanning)

The first step is to see which services are listening on the IP addresses they found. Think of it as an attacker walking up to your digital building and checking every single one of the 65,535 possible doors and windows (ports) to see which ones are unlocked (open). A simple scan reveals which ports are listening. Is port 3389 open, suggesting Remote Desktop? Is port 22 open, indicating an SSH server for administrative access? Is port 443 open for web traffic? Each open port is a potential attack vector.

2. What's Written on the Doorbell? (Banner Grabbing)

Once an open port is identified, the attacker wants to know what service is running behind it. 
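Both the connect test above and the banner read that follows it reduce to a few lines of socket code. As a self-contained sketch, pointed at a local fake service (probing systems you do not own is noisy and, in most jurisdictions, illegal):

```python
import socket
import threading

# A local fake service stands in for a real target, so this runs offline.
def fake_service(srv: socket.socket):
    conn, _ = srv.accept()
    conn.sendall(b"SSH-2.0-OpenSSH_8.2p1\r\n")   # a typical service banner
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))     # OS picks a free port
srv.listen(1)
host, port = srv.getsockname()
threading.Thread(target=fake_service, args=(srv,)).start()

def probe(host: str, port: int, timeout: float = 1.0) -> str | None:
    """TCP connect test + banner grab: banner text if open, None if closed."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(256).decode(errors="replace").strip()
    except OSError:
        return None

banner = probe(host, port)
srv.close()
print(banner)  # SSH-2.0-OpenSSH_8.2p1
```

One open port and one line of banner text is all it takes to move from "an IP address" to "a named, versioned service."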
Often, services willingly announce themselves through a "banner"—a small bit of text sent to any new connection. A banner might look like this: Apache/2.4.29 (Ubuntu) or Microsoft-IIS/10.0. A banner like "Unauthorized Access Prohibited" may confirm a VPN.

This is a goldmine. The banner doesn't just reveal the service; it provides the exact version. This sort of information, along with the frequency at which vulnerabilities in these products are reported, has made VPNs and firewalls a favorite target for attackers. An attacker can instantly cross-reference a VPN or firewall version against a database of Common Vulnerabilities and Exposures (CVEs) to find a known, exploitable flaw. They've gone from "an open web server" to "a web server vulnerable to CVE-2021-41773," or from "a VPN" to "a VPN vulnerable to CVE-2024-55591."

3. What Kind of Lock Is on the Door? (Fingerprinting)

What if the banner is generic or has been removed? This is where attackers get more sophisticated, using fingerprinting techniques to identify the underlying technology.

TLS/SSL Fingerprinting: The way a server negotiates a secure connection is highly distinctive. The combination of supported TLS versions, cipher suites, and extensions creates a fingerprint. An attacker can capture this fingerprint and compare it against a database to identify the technology. That generic web server might have a TLS fingerprint that reveals the brand and version of a VPN or firewall—and with it, the nature of your security stack.

Web Fingerprinting: For web servers (ports 80/443), some tools go even deeper. They inspect HTTP headers, cookie names, and HTML source code to identify not just the server but the entire application stack: the content management system, the JavaScript libraries, and even embedded analytics tools. 
Each identified component is another potential source of vulnerabilities.

Protocol Analysis: For unusual or custom services, an attacker might use a protocol analyzer to capture and dissect the traffic. This helps them reverse-engineer how the application communicates, looking for weaknesses in the protocol itself, such as unencrypted authentication or predictable session tokens.

The AI Analyst: Supercharging Classification

A skilled human can perform this analysis, but it's slow and requires deep expertise. Once again, AI is a game-changer for the adversary, acting as an automated, super-intelligent analyst. An attacker can now feed the raw data from these tools into an AI model. This model, trained on millions of known device and service profiles, accomplishes two things with terrifying speed and accuracy:

High-Confidence Identification: The AI correlates all the data points—open ports, banners, headers, TLS fingerprints—to make a high-confidence classification. It moves beyond simple signatures to probabilistic analysis. For example: "The combination of this TLS fingerprint, these HTTP server headers, and this login page HTML structure gives a high probability of a specific VPN running a vulnerable OS version." This allows attackers to instantly identify your perimeter security devices, which are prime targets for exploitation.

Automated Vulnerability Mapping: The AI doesn't stop at identification. It immediately cross-references the identified service and version with real-time threat intelligence feeds, exploit databases, and even chatter on dark web forums. The output is no longer just a list of services; it's a prioritized list of actionable attack vectors. It tells the attacker not just what you are, but how you are vulnerable, right now.

You Can't Hide What You Expose

The classification phase is where your attack surface goes from being a list of IP addresses to a detailed blueprint for an attack. 
Every service you expose to the internet is broadcasting information about itself, and adversaries, armed with modern tools and AI, are listening. They are profiling your web servers, your VPN gateways, your firewalls, and your applications, patiently building a case for how to break in. A majority of enterprises have experienced an attack that started by exploiting a vulnerability in a VPN or firewall device. And moving these devices to the cloud doesn't solve the fundamental issue of exposed public IPs. The concept of public IP addresses for your security stack is incompatible with Zero Trust principles.

This leads to the final, inevitable step. Now that they have found you and classified you, they are ready to exploit you.

For a summary of this information, check out our video. And join me in the final part of this series, where we will dive into the methods attackers use to turn this intelligence into a breach.]]></description>
            <dc:creator>Akhilesh Dhawan (Sr. Director, Product Marketing - Platform)</dc:creator>
        </item>
        <item>
            <title><![CDATA[If You're Reachable, You're Breachable, Part 1: The Adversary's First Move – Finding You]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/if-you-re-reachable-you-re-breachable-part-1-adversary-s-first-move-finding</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/if-you-re-reachable-you-re-breachable-part-1-adversary-s-first-move-finding</guid>
            <pubDate>Sat, 31 Jan 2026 21:38:26 GMT</pubDate>
            <description><![CDATA[In the physical world, we understand security through simple, tangible concepts. We lock our doors, close our windows, and draw the blinds. We know that an open door is an invitation for trouble. In the digital world, however, the doors and windows aren't always so obvious. The most troubling fact is that they are your firewalls and VPNs. The very devices that you thought were protecting you are now a front door into your organization. They are your attack surface. The continued use of the castle-and-moat security model and network security products such as firewalls and VPNs is putting organizations at risk.

This brings us to a fundamental truth of modern cybersecurity: if you are reachable, you are breachable. It's a simple but powerful premise. Every server, application, or device directly exposed to the internet is a potential foothold for an adversary. This isn't a scare tactic; it's the foundational principle of every modern cyberattack. Over this three-part series, we'll deconstruct the adversary's playbook: finding you, classifying you, and then exploiting you. Let's start with the critical first step that makes all others possible: finding you.

The Old Playbook vs. the New: Reconnaissance at Scale

In the past, reconnaissance was a noisy and laborious process. Attackers would run active scans against a target's IP range, "knocking" on digital doors to see which ones were open. It was time-consuming, and it created a lot of noise that could be detected by security teams. Today, the game has completely changed. Adversaries no longer need to knock on your specific door. Instead, they consult global, publicly available directories that have already cataloged every open door, window, and unlocked shed on the entire internet.

The Tools: The Search Engines of Exposure

Meet the adversary's best friends: internet-wide device search engines. Think of these tools not as Google, which indexes web content, but as search engines for devices. 
They continuously scan the internet (every routable IPv4 address and vast swaths of IPv6 space) and index the services running on them. What can they find? Everything.

Vulnerable VPNs and Firewalls: An attacker can search for a specific, vulnerable version of a firewall or VPN and get a list of every unpatched instance on the internet—a ready-made list of targets.

Exposed Databases: A quick search can reveal databases that are publicly accessible, often without authentication.

Vulnerable Remote Access: They can instantly find servers with exposed Remote Desktop Protocol (RDP) or SSH ports, a favorite entry point for ransomware gangs.

Industrial Control Systems (ICS): Frighteningly, systems controlling water treatment plants, power grids, and manufacturing lines can be found with simple queries.

These tools transform reconnaissance from an active hunt into a passive query. The attacker isn't targeting you; they are targeting a vulnerability. They simply ask, "Show me everyone who is vulnerable to X," and the tools provide a list. If your organization is on that list, you've just been "found."

Enter AI: Reconnaissance on Autopilot

As powerful as these search engines are, the sheer volume of data they provide can be overwhelming. This is where artificial intelligence is becoming the adversary's most powerful force multiplier in the "Find" phase. Attackers are using AI to supercharge their reconnaissance in three key ways:

Hyper-Efficient Pattern Recognition: An AI model can sift through petabytes of data from these tools, public records, and other sources to identify subtle patterns of exposure. It doesn't just find one open port; it can identify an organization's entire external footprint, recognizing naming conventions in subdomains or identifying all assets hosted on a specific cloud provider.

Intelligent Correlation: AI excels at connecting disparate dots. 
It can take a list of exposed devices from these tools, correlate it with employee profiles on social media ("show me all network admins at Company X"), and cross-reference that with code snippets leaked in public repositories. This builds a rich, multi-dimensional profile of a target organization, moving beyond simple IP addresses to understand the people and processes behind them.

Predictive Targeting: Most importantly, AI helps adversaries prioritize. By analyzing the data, AI models can predict which of the thousands of exposed services are most likely to be successfully exploitable or to lead to high-value assets. It answers the question, "Of these 10,000 potential targets, which 10 offer the path of least resistance to the crown jewels?" This allows attackers to focus their efforts with surgical precision.

You Must Be Unreachable

The "Find" phase of an attack is no longer a manual effort. It is a continuous, automated, AI-driven process. Your organization's attack surface is being scanned and indexed 24/7, not necessarily by someone targeting you specifically, but by automated systems looking for any opportunity. This is why the traditional castle-and-moat approach of firewalls and VPNs trying to protect the perimeter is failing. The perimeter has dissolved, and the doors are everywhere. In fact, the very VPNs and firewalls that were supposed to protect you have themselves become the front door for attackers. They are plagued by a myriad of actively exploited vulnerabilities. If they are part of your attack surface, they certainly cannot be part of your cybersecurity defense.

The only winning move is to make your doors invisible. 
The solution is to replace your existing VPNs and firewalls and take your internal applications and infrastructure off the internet entirely, rendering them unreachable and therefore unfindable.

For a summary of this blog and a visual representation, take a look at this video.

Join me in Part 2, where we explore what happens next: now that adversaries have found you, how do they classify your assets and employees to plot their attack?]]></description>
            <dc:creator>Akhilesh Dhawan (Sr. Director, Product Marketing - Platform)</dc:creator>
        </item>
        <item>
            <title><![CDATA[From Blunt Force to Surgical Precision: Elevating Control in Zscaler Internet Access]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/blunt-force-surgical-precision-elevating-control-zscaler-internet-access</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/blunt-force-surgical-precision-elevating-control-zscaler-internet-access</guid>
            <pubDate>Sat, 31 Jan 2026 18:27:13 GMT</pubDate>
            <description><![CDATA[Search is where work starts. Engineers look for fixes. Analysts look for context. Creative teams look for assets. And in that "normal work" moment, risk can slip in quietly—inappropriate results in a shared environment, accidental IP misuse from a reused image, or controls that don't scale cleanly across a real organization.

That's why, in our recent ZIA releases, we've rolled out key enhancements that make search governance more precise in three practical ways, so you can shape search outcomes without turning everyday work into a policy negotiation. The goal isn't "web filtering." It's search governance: guiding what search produces and what users can safely do with it—consistently, and at scale. That's exactly what these ZIA capabilities are built to deliver: moving from broad strokes to surgical control, shaping outcomes without breaking workflows.

Update 1: Moving SafeSearch from a "Blunt Switch" to Precision Governance

SafeSearch is one of those controls that looks small on paper but plays big in real life—especially in shared spaces or regulated contexts. Until now, however, enforcing it was often a tenant-wide decision: either on for everything or off for everything. This created a dilemma: to enforce safety on Google Images, you often had to force the same restrictions on YouTube or Bing, potentially blocking training videos or research material. Admins were stuck effectively "blocking the internet" for specific tools just to maintain compliance elsewhere.

What's new (and why it matters): We have introduced granular service controls for SafeSearch. 
Instead of a global toggle, administrators can now configure SafeSearch by specifying which search engines and services are restricted.

Earlier: Turn SafeSearch on for all traffic.
New: Enforce SafeSearch for Google and Bing, but leave YouTube unrestricted for your marketing team.

Why this is search governance:
You're tailoring outcomes for each application rather than applying broader network restrictions.
You avoid the security risk of bypassing SSL inspection just to unblock a specific search tool.

Update 2: Rights-Safe Reuse with Creative Commons Search Support

A lot of enterprise "risk" doesn't show up as an attack. It shows up as accidental misuse. Creative teams, field marketers, enablement folks—anyone who builds decks, campaigns, training, or customer-facing content—pulls assets from search constantly. And nobody wakes up thinking, "Today I'll create a licensing problem."

What's new (and why it matters): ZIA now supports enabling Creative Commons-focused search results as a governance control. This simple toggle helps steer users toward content designed for reuse in supported search experiences.

Automated Compliance: The search engine ensures results are licensed under Creative Commons, reducing the risk of accidental IP infringement.

Workflow Efficiency: Users stop fighting security to get their job done. They save time manually filtering results, and the business quietly reduces risk.

Update 3: Policies That Scale—Because Pilots Are Easy, Enterprises Are Not

Here's where most good intentions die. You build a clean policy, and then "org reality" shows up:

"We need an exception policy for more than 32 users or 32 groups."
"The companies we acquired were managing per-user exceptions."
"We acquired three companies, and none of their groups map cleanly."

Suddenly, the challenge isn't what the control does. 
It's whether you can express it at scale without hitting ceilings or creating rule sprawl.

What's new (and why it matters): ZIA has expanded policy criteria limits to support cleaner, more scalable rule design—so you can represent real organizational structures with fewer fragmented policies. And if you need additional scale beyond the defaults, limits can be expanded further via Support, based on tenant needs. The benefit: less duplication, fewer policy contortions, simpler audits, and governance that stays consistent as the org grows.

The Practical Implementation Playbook

If you want this to read like something an admin could actually run next week, here's the playbook.

1) Pick your governance "north star."
Workplace-appropriate discovery → lead with SafeSearch.
Rights-safe reuse → lead with Creative Commons.
Consistent enforcement at enterprise scale → lead with policy criteria and segmentation.
You'll probably land on all three, but naming the primary goal upfront keeps you from building a policy museum full of exceptions.

2) Confirm prerequisites.
If you're trying to govern search-result outcomes, make sure the traffic is actually governable—SSL inspection is usually the dependency that makes or breaks the whole effort.

3) Start with rollout.

4) Measure outcomes that humans actually feel. Track:
reduction in policy exceptions over time
fewer "why did that show up?" incidents
fewer internal escalations about content reuse
admin time saved (because criteria scaling avoids policy gymnastics)

Precision Is the Future of Policy

These enhancements represent our commitment to building a platform that doesn't just secure your traffic, but understands the nuance of your business. By moving away from one-size-fits-all restrictions to granular, precise controls, Zscaler ensures that security remains a business enabler, not a bottleneck. These features are rolling out now. 
Log in to your ZIA portal and check your Advanced Policy Settings to start refining your rules today.]]></description>
            <dc:creator>Nishant Kumar (Senior Manager, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Zscaler Adaptive Access Engine: Turning Logs into Logic]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/zscaler-adaptive-access-engine-turning-logs-logic</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/zscaler-adaptive-access-engine-turning-logs-logic</guid>
            <pubDate>Sat, 31 Jan 2026 14:23:46 GMT</pubDate>
            <description><![CDATA[There's a quiet misconception in enterprise security that access is static—a one-time cryptographic handshake that holds until a token expires. But entropy doesn't stop at the login screen. Risk shifts mid-session. Devices drift. Credentials change in the background. Context moves and mutates like a living system.

In a hyper-connected environment, a user's risk profile isn't static. It oscillates. A user who looks "safe" at 9:00 AM may become a liability by 9:05 AM if their endpoint surfaces a new CVE or their identity provider flags a credential update. Yet static access policies are blind to all of this. They only see a valid token.

We Built an Engine for Entropy

When it comes to modern access, identity, device posture, and user behavior all generate rich signals—the kind that can sharpen decisions dramatically when they're interpreted together. Picture a user logging in at 9:00 AM. Their SAML/OIDC assertion is clean. Everything looks normal. By 9:04 AM, though:

CrowdStrike may drop their ZTA score from 50 to 5.
Microsoft Defender may detect a new CVE.
Okta may register a password reset or an MFA-exhaustion pattern.
ZIA may see anomalous download behavior.
ZPA may observe access to a sensitive private app the user has never touched.
UEBA may detect a deviation from behavioral baselines.

These signals need to be automatically propagated to your enforcement points. The opportunity is simple: orchestrate the signals, kill the noise, and wire every tool into one nervous system. Without a central nervous system to aggregate them, you are forced to manage one-off signal sharing, building fragile bridges between your IdP and your SSE, or your EDR and your gateway. This is why we built the Adaptive Access Engine: to take this unbounded entropy and turn it into deterministic, enforceable logic.

What Is the Adaptive Access Engine?

We designed the Adaptive Access Engine as the real-time logic layer between your telemetry and your enforcement. 
The engine doesn't replace your policies; it makes them kinetic. It ingests raw telemetry—what we call "Context Nuggets"—from Zscaler's own data lakes and from partners like CrowdStrike, Microsoft, and Okta. Then it normalizes that input into a unified risk signal and pushes that context, instantly, to enforcement points like ZIA and ZPA.

The Mechanics of the "Nugget"

Let's look at the architecture. The system relies on a few core concepts that change how you write policy.

Turning Signals into Context Nuggets

A Context Nugget is the atomic unit of risk—clean, usable data that your policy engine understands immediately. It associates a subject (a user or device) with a specific data point. A nugget includes:

Subject: userId, deviceId, and originating source IDs (Zscaler, Okta, CrowdStrike, etc.)
Type: integer, boolean, enumeration, timestamp-based, or composite
Value: e.g., zta_score=8, credential_change=true, user_risk=High
Timestamps: LogTime and StartTime, captured in the schema

Key design constraints:

Nuggets must be non-fuzzy: no machine-learning probability fields.
Nuggets must be deterministic.
Nuggets must be traceable to a source system.
Nuggets must be evaluatable at high frequency without ambiguity.
Nuggets preserve state until TTL expiry or revocation, enabling mid-session enforcement.

A nugget answers specific questions: Has a user downloaded more sensitive documents than their normal baseline? Has an endpoint's Defender risk level crossed a threshold? Has a user performed five password resets in a week? Did an Okta credential-change event occur in the last 5 minutes? Is the ZIA user risk score "High"?

Context Nuggets are explicit, logical, and built for evaluation—integers, enumerations, booleans. Nothing fuzzy. Nothing ephemeral. Nothing that breaks policy logic.
Combining Nuggets into Adaptive Access Profiles

Here's where Zscaler made an architectural leap. The Adaptive Access Engine lets admins express the conditions that matter, combining multiple signals into one reusable definition. Instead of embedding risk logic inside hundreds of ZIA/ZPA rules, it introduces Adaptive Access Profiles—reusable logical objects constructed from nuggets. A profile is essentially a Boolean expression tree.

Why this matters:

Profiles decouple context evaluation from policy evaluation.
ZIA/ZPA don't need to know how to interpret Okta or CrowdStrike models.
Profiles act as a semantic layer—one definition, many policy surfaces.

This is the same model used by modern policy engines (OPA, Cedar), but implemented at Zscaler scale and optimized for inline, per-request evaluation.

Distribution Pipeline: How Enforcement Points Receive Context

When a profile evaluates to true for a user or device, the Context Engine publishes an applicability message to the enforcement planes. This means ZIA/ZPA enforcement engines always hold a current, in-memory view of applicable profiles, nugget state, TTLs, and versioned changes. There are no API calls at enforcement time. No round trips. No synchronous dependencies. This is what makes it scalable.
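To make the "Boolean expression tree" idea concrete, here is a hypothetical sketch in Python; the nugget names and combinators are invented for illustration and are not the product's real syntax:

```python
# Hypothetical sketch of a profile as a Boolean expression tree over
# nugget values. Illustrates the idea only; not Zscaler's actual syntax.

from typing import Callable

Nuggets = dict[str, object]          # nugget name -> current value
Expr = Callable[[Nuggets], bool]

def leaf(name: str, pred: Callable[[object], bool]) -> Expr:
    """A leaf tests one nugget; a missing nugget never matches."""
    return lambda n: name in n and pred(n[name])

def any_of(*exprs: Expr) -> Expr:    # OR node of the tree
    return lambda n: any(e(n) for e in exprs)

def all_of(*exprs: Expr) -> Expr:    # AND node of the tree
    return lambda n: all(e(n) for e in exprs)

# "High risk": low ZTA score AND (recent credential change OR high ZIA risk)
high_risk = all_of(
    leaf("zta_score", lambda v: v < 20),
    any_of(
        leaf("credential_change", lambda v: v is True),
        leaf("zia_user_risk", lambda v: v == "High"),
    ),
)

print(high_risk({"zta_score": 5, "credential_change": True}))   # True
print(high_risk({"zta_score": 50, "zia_user_risk": "High"}))    # False
```

One profile definition like this can then be referenced from many policies, which is the decoupling the text describes.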
Enforcement: Inline, Per-Request, Real-Time

On ZIA: Profiles appear as a first-class criterion in URL Filtering and Cloud App Control. When traffic hits a ZEN, the engine evaluates the URL/app category, user identity, device identity, policy match, and profile applicability (from the Adaptive Access Engine), and then takes the enforcement action (allow, block, isolate, or step-up if tied to another system).

On ZPA: The evaluation model is similar, covering the connector path, private app segment, identity provider mapping, device trust, and profile applicability. Private app access adapts based on signals just like internet/SaaS traffic.

Mid-Session Adaptation

This is the major technical unlock: if a user's context changes at T+17 seconds, ZIA/ZPA adapts at the very next request. There is no need to wait for session expiry. This is the part most SSE vendors cannot replicate, because their enforcement model is not inline.

Keeping the Human in the Loop

We know that automation without observability is dangerous. A "High Risk" flag shouldn't always mean a hard block, especially for a CEO traveling for a keynote. We built the Adaptive Access Engine with the ability to override context. This puts the controls back in your hands. If the system flags a user as risky but you know the context (e.g., a known travel scenario), you can manually override that specific signal for a set duration (e.g., 24 hours). It keeps the system fast, but it keeps the operator in command.

What This Unlocks for the Enterprise

Consistent cross-surface context semantics: ZIA and ZPA now consume identical context objects. No more rewriting posture logic in two places.

Immediate availability of new context types: no more multi-system upgrade cycles. 
New context types become usable immediately.Third-party integrations without custom plumbing-&nbsp;CrowdStrike, Defender, Okta, UEMs — integrated through consistent ingestion, not bespoke pipelines.False positives don’t break access anymore-&nbsp;Admins can override incorrect signals centrally.Policy sprawl collapses into reusable profiles-&nbsp;Instead of editing 2000 rules, admins modify a single profile.Policies that adapt mid-session-&nbsp;Access isn’t static — it reflects the real world’s fluctuations.And all of this sits on the Zero Trust Exchange, without adding new appliances, latency, or operational drag.Want to learn more?&nbsp;Speak to our experts.]]></description>
            <dc:creator>Nishant Kumar (Senior Manager, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Beyond The Crown Jewel Fallacy: Making Segmentation Work for Your Business]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/beyond-crown-jewel-fallacy-making-segmentation-work-your-business</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/beyond-crown-jewel-fallacy-making-segmentation-work-your-business</guid>
            <pubDate>Fri, 30 Jan 2026 22:04:21 GMT</pubDate>
            <description><![CDATA[In Zero Trust conversations, there’s a familiar story many organizations tell themselves. It starts with identifying the most critical applications, the “crown jewels,” and surrounding them with some ZTNA solution. Access is locked down, dashboards turn green, and on paper, least-privilege access looks like mission accomplished.

But this story is incomplete. Focusing only on crown jewels is one of the most dangerous and pervasive myths in cybersecurity today. It gives a false sense of security while leaving the majority of your environment exposed to lateral movement. Securing your most valuable assets is a critical first step, but it’s a dangerous fallacy to believe that this alone delivers a complete segmentation strategy.

The Fallacy: Partial Protection is a Full-Time Risk

Think of your enterprise network like a house. The crown jewel approach is like installing a state-of-the-art vault door on the master bedroom while leaving the front door, windows, garage, and back door wide open. An attacker won’t waste time trying to breach the vault. They will simply walk in through an open window instead, targeting “non-critical” applications that are unprotected. Once inside, they have free rein to move laterally across your network, turning a small breach into a catastrophic data leak. They can locate and steal your intellectual property and business records, while also establishing a foothold for a future ransomware attack.

Modern attacks rarely start where you’ve invested the most security. They start where you’ve invested the least.
By concentrating your efforts solely on a small set of crown-jewel applications, you often leave the vast majority of your potential attack surface open:

- Unsegmented: users and workloads can reach far more than they should
- Under-monitored: “low-value” apps get less visibility and fewer controls
- Ideal launchpads: perfect footholds for ransomware and data exfiltration

The Operational Nightmare: Why Manual Segmentation Fails at Scale

If pervasive segmentation is the goal, why does everyone get stuck at the crown jewels? Because for most organizations, the operational reality of scaling segmentation is a nightmare. When AJ Sofia, our CTO in Residence, meets with security leaders and customers, he often starts with a simple question: “How many applications are in your environment?” The answers are revealing. A CISO might say 400. Someone on their network team might say the real number is closer to 4,000.

This ten-fold gap highlights the three core reasons why manual segmentation is a failing strategy:

- The Discovery Problem: You can’t secure what you can’t see. Manually identifying every application and mapping every user-to-app affinity across a dynamic enterprise is an impossible task.
- The Policy Problem: Even if you develop tooling and manage to discover everything, manually writing and vetting thousands of granular, identity-based policies leads to “segmentation by spreadsheet,” a process so slow, painful, and error-prone that it is often abandoned early.
- The Maintenance Problem: In a modern business, users change roles, new apps are deployed, applications scale horizontally (new instances spin up and down automatically), and old ones are retired daily.
Manually created policies are outdated the moment they’re written, creating security gaps or breaking user access.

The Paradigm Shift: From Manual Effort to Automated Intelligence

This is not a problem you can solve with more people, more processes, more spreadsheets, or bigger change-control meetings. What’s needed is a shift in how we think about segmentation itself: from a manual project to a strategic, automated, continuous process. Instead of asking, “How can my team write and manage thousands of policies?”, we should be asking, “How can my platform automatically discover every application, use AI to help segment access and generate policy at scale, and continuously strengthen my security posture?”

That’s where an autonomous approach to segmentation comes in. In this model, segmentation stops being a one-time initiative and becomes a native capability of your secure private access platform, constantly learning from your real user traffic and adapting as your environment changes. Segmentation is no longer a one-time, manual project but an automated, continuous process in which an AI engine helps you:

- Automatically discover all the unmanaged and unknown applications across your environment
- Intelligently segment applications and generate policy recommendations based on business context and risk
- Continuously optimize through live insights dashboards that highlight gaps, trends, and opportunities to strengthen your posture

A key determinant of segmentation success is your ability to continuously monitor access and enforce true least privilege at all times. This flips the model from one of overwhelming human effort to one of intelligent, autonomous control, finally making enterprise-wide segmentation a practical reality.

Go Deeper: Join the Webinar

The move from partial protection to total segmentation is the most critical step in maturing your Zero Trust architecture.
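The discover, segment, and optimize loop described above can be sketched in miniature. This is an illustrative toy under stated assumptions (a log of (user group, app) observations and a naive one-rule-per-app heuristic), not the actual AI engine:

```python
from collections import defaultdict

def recommend_policies(access_logs):
    """Toy sketch of automated user-to-app segmentation.

    access_logs: iterable of (user_group, app) pairs observed in real
    traffic. For each discovered app, propose one least-privilege rule
    scoped to exactly the groups seen using it. A real engine would add
    business context, risk scoring, and continuous re-evaluation.
    """
    affinity = defaultdict(set)  # app -> groups observed using it
    for group, app in access_logs:
        affinity[app].add(group)
    # One reviewable allow-rule per discovered application.
    return {app: {"action": "allow", "groups": sorted(groups)}
            for app, groups in affinity.items()}
```

Run over a day of traffic, a loop like this turns thousands of observed flows into a handful of reviewable rules; apps nobody touches get no rule at all, which is exactly the default-deny posture.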
In our upcoming webinar, Beyond the Datasheet: The Autonomous Journey to User-to-App Segmentation, we take a deep dive into the architectural principles that make this possible. We’ll explore the AI engine in action, discuss the future roadmap for autonomous policy, and share a CTO’s perspective on building a security posture that is both more comprehensive and far simpler to operate.

The era of partial, manual segmentation is over. The future is autonomous.]]></description>
            <dc:creator>Olivia Vort (Senior Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Why Zero Trust Is Essential for Financial Institutions]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/why-financial-institutions-should-adopt-zero-trust</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/why-financial-institutions-should-adopt-zero-trust</guid>
            <pubDate>Thu, 29 Jan 2026 18:16:20 GMT</pubDate>
            <description><![CDATA[For financial services providers, the stakes have never been higher. AI innovation and a permanently hybrid workforce are overwhelming our traditional security architectures. What once protected us has become a liability: it makes IT more complex, degrades the user experience, and ultimately even opens the door to new risks. As IT and security professionals, it is in our hands to actively shape this transformation. The old approaches are simply no longer sufficient to meet the new challenges.

The Core Challenge: An Outdated Hub-and-Spoke Architecture

For decades, our networks have been built on the classic hub-and-spoke model. All traffic, whether from branch offices, mobile employees, or home offices, was laboriously backhauled to a central data center. Only there did the data pass through a series of security appliances such as firewalls, IPS, and sandboxes before finally reaching its actual destination. Today, this model creates three massive problems:

- Poor user experience: Backhauling traffic, often called “hairpinning,” introduces significant latency. For users who need to access cloud and AI applications, these frustrating delays lead to lost productivity and declining satisfaction.
- Increased risk: This model grants too much trust after the initial authentication. Once an attacker gets past a firewall or VPN, or a user with an infected device gains access, they can move unhindered across the entire network. This exposes all sensitive corporate data and intellectual property to massive risk.
- Difficult audits and compliance hurdles: Limited visibility and complex firewall rules make audits and regulatory compliance extremely difficult. It is also nearly impossible to verify, across numerous point products, whether security policies are being enforced consistently.

The Solution: A Zero Trust Architecture

Overcoming these hurdles requires an entirely new security mindset: Zero Trust. Its guiding principle: do not trust the network; verify every access, always and everywhere. Zero Trust makes the internet your new corporate network and strictly decouples applications from the network. Instead of granting users access to the entire network, Zero Trust connects them directly to the specific application. This connection is brokered by a cloud-native exchange service that sits between user and application and enforces policies based on identity and context. A Zero Trust architecture thus makes internal applications completely invisible to the internet, so they can be neither discovered nor attacked. A decisive advantage: because users never gain direct access to the corporate network, lateral movement of threats is consistently prevented.

Key Use Cases for Financial Institutions

Implementing a Zero Trust architecture delivers immediate, measurable benefits for the security strategy of financial services providers. The key points we explore in depth in our guide are:

- Protection against zero-day attacks: Real-time, inline inspection of all traffic lets financial services providers proactively block zero-day threats as well as attacks on known vulnerabilities.
- Reduced ransomware risk: The Zscaler Zero Trust Exchange™ platform rigorously enforces least-privileged access and makes corporate resources invisible, preventing threats from moving laterally across the network. Should an initial compromise occur nonetheless, the potential impact on financial organizations is kept to a minimum.
- Prevention of account takeover: By continuously verifying the security posture of users and devices in real time, Zscaler detects suspicious behavior immediately. Financial institutions can thus prevent attackers from taking over accounts and abusing them for fraudulent transactions.
- Prevention of data exfiltration: Granular access controls that define exactly who can access which data under which conditions, combined with inline data loss prevention (DLP), let organizations significantly reduce the risk of unauthorized data exfiltration.
- Simplified compliance and audits: By fundamentally improving security and visibility, Zero Trust makes it easier to meet regulatory requirements and to demonstrate compliance to auditors and insurers.

Learn More in the Whitepaper

Moving away from a network-centric security model is an essential step for any modern financial institution. Our whitepaper offers a concise overview of today’s challenges, the right solution, and best practices for implementing a forward-looking Zero Trust architecture. Learn about best practices and concrete real-world examples: download our whitepaper “Gestärkte Cybersicherheit im Finanzwesen mit Zero Trust” now. Hear firsthand how Zscaler customers have transformed their security, and how you can make your IT infrastructure more modern, agile, and efficient.]]></description>
            <dc:creator>Akhilesh Dhawan (Sr. Director, Product Marketing - Platform)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing the Zscaler Automation Hub and Other OneAPI News]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/announcing-zscaler-automation-hub-and-other-oneapi-news</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/announcing-zscaler-automation-hub-and-other-oneapi-news</guid>
            <pubDate>Thu, 29 Jan 2026 13:07:16 GMT</pubDate>
            <description><![CDATA[Before we dive into the latest OneAPI news, we need to answer a simple question for the uninitiated: what even is OneAPI?

OneAPI is the single application programming interface (API) for the entire Zscaler Zero Trust Exchange platform. It provides programmatic access that enables integrations with any Zscaler solution and lets admins deploy and manage Zscaler from the tools where they prefer to oversee their IT products. This programmatic access (meaning access via code) also allows organizations to embrace Zero Trust Automation and make their use of Zscaler more autonomous. For example, customers can use OneAPI to automate change implementation such as policy configuration, or to automate the retrieval of Zscaler analytics data and the creation of custom reports and dashboards. Overall, this reduces the need to spend time on manual tasks, minimizes the possibility of human administrative error, and enhances scalability, precision, and security.

So, how are we making things even better?

The Zscaler Automation Hub

Customers want implementing automation with Zscaler to be a simple and painless process; they want it to be as easy as possible to find API specifications, code samples, and more. To fulfill that desire, we’ve launched the Zscaler Automation Hub. This all-in-one resource provides everything that organizations need to streamline the setup of automation via OneAPI.
It does this by providing:

- An AI-powered copilot that answers questions and surfaces relevant content
- Collections of code snippets for basic tasks like pulling data about policy violations
- Playbook templates to automate multi-step workflows like deploying App Connectors
- Comprehensive help documentation that includes API specifications, rate limits, getting-started guides, sample use cases, and more

By centralizing these resources, the hub helps organizations reduce the effort needed to automate their use of Zscaler. They can save time by eliminating manual setups, improve efficiency by leveraging workflows that can be repeated across use cases, and enhance ROI by minimizing management overhead and freeing up admins to focus on value-creating work. To see the hub for yourself, visit automate.zscaler.com or watch a demo here.

Additional API coverage for configuration and management

OneAPI is constantly evolving to provide broader coverage of public APIs that can be used to configure and manage Zscaler products. Recent updates let admins leverage OneAPI to administer:

- Zscaler Internet Access (ZIA) PAC files
- Client Connector forwarding and app profiles
- Zscaler Private Access (ZPA) app protection
- Shadow IT reporting
- SSL inspection policies
- And much more that you can explore here

Analytics for key ZIA data domains

Through GraphQL, OneAPI provides programmatic access to key Zscaler analytics. With this capability, customers can now pull Zscaler Internet Access data to build custom dashboards, analyze trends, and extract insights related to SaaS security, shadow IT, Internet of Things (IoT) security, the Zscaler Zero Trust Firewall, cybersecurity posture, and web traffic behavior across their organization.

Automation for ZIdentity configurations

Like other Zscaler solutions, the Zscaler authentication service, ZIdentity, can be accessed programmatically via OneAPI.
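As a sketch of what such programmatic access looks like, the snippet below assembles an authenticated request. The base URL, path, and header shapes here are illustrative placeholders, not documented OneAPI routes; the API specifications in the Automation Hub are the authoritative source.

```python
# Placeholder base URL for illustration only, not a real OneAPI endpoint.
BASE_URL = "https://api.example-tenant.com"

def build_request(token: str, path: str, method: str = "GET"):
    """Assemble an authenticated API call as (method, url, headers).

    `token` stands in for an OAuth 2.0 access token obtained from the
    identity service via a client-credentials flow; a real client would
    hand the result to an HTTP library to actually send it.
    """
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    }
    return method, f"{BASE_URL}{path}", headers
```

The same pattern, a bearer token plus a resource path, underpins configuration calls and analytics queries alike.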
As a result, admins can now let automation manage users, groups, and API clients in Zscaler. More details on how this works can be found here.

Where to go from here

Want to start your Zero Trust Automation journey with Zscaler? Visit the Automation Hub and, in particular, look at our official SDKs and Postman collections, which can help you get up and running quickly.]]></description>
            <dc:creator>Jacob Serpa (Sr. Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Zero Trust Branch: Redefining Connectivity]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/zero-trust-branch-redefining-connectivity</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/zero-trust-branch-redefining-connectivity</guid>
            <pubDate>Wed, 28 Jan 2026 08:51:45 GMT</pubDate>
            <description><![CDATA[In Part 1, we explored why traditional network-centric architectures struggle to scale in modern enterprise environments. Layering security controls onto broadly connected networks increases complexity, expands the attack surface, and creates operational friction, particularly as organizations adopt cloud services, integrate IoT/OT, and respond to faster-moving threats. These limitations are structural, not tactical, and cannot be resolved by adding more segmentation, firewalls, or overlays. This part introduces Zero Trust Branch as an architectural reset, one that separates connectivity from trust to reduce risk, simplify operations, lower cost, and improve performance at the enterprise edge.

Introducing Zero Trust Branch (ZTB)

Zero Trust Branch (ZTB) reimagines the branch network by decoupling connectivity from trust. Instead of extending the corporate network to the branch, it connects users, devices, and apps through the Zero Trust Exchange. At its core:

- Every device is placed in a microsegment, or “network-of-one”
- Devices cannot directly see or communicate with each other: nothing is trusted by default
- Sessions between sites are authenticated and brokered by the Zero Trust Exchange

This eliminates uncontrolled peer-to-peer communication, dramatically reducing lateral movement and the internal attack surface. With no traditional inbound connections from the internet, the external attack surface is also minimized. ZTB automatically discovers, fingerprints, and classifies devices, whether end-user devices, servers, or IoT/OT, enforcing policies based on identity and behavior rather than relying only on spoofable MAC addresses, static IPs, or cumbersome inventories. East-west and north-south traffic is policed with granular security, applied without agents, ACLs, or LAN redesign.
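The “network-of-one” idea can be modeled in a few lines. This is a toy under assumed data shapes, not product behavior: every device is its own microsegment, and communication exists only where an explicit identity-based policy says so.

```python
def can_communicate(src_identity: str, dst_identity: str, policies) -> bool:
    """Default-deny reachability check between two branch devices.

    Identities are behavioral classifications from fingerprinting
    (e.g. "ip-camera", "video-server"), not spoofable MAC or IP
    addresses. With no matching policy, nothing reaches anything.
    """
    return any(p["src"] == src_identity and p["dst"] == dst_identity
               for p in policies)
```

Note the asymmetry this buys you: a policy allowing camera traffic to a video server does not allow the reverse path, so a compromised server cannot pivot back to the cameras.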
With Zero Trust Branch, business partners and external suppliers connect only to the resources they need, through the Zero Trust Exchange, based on their identity and the principle of least privilege:

- If they are compromised, they are not on your network, and the Zero Trust Exchange sits between you and them
- The complexity of VPNs and jump hosts can be removed

Similarly, because application access is decoupled from network access, mergers and acquisitions move faster and more smoothly: there is no need to worry about overlapping IP addresses, and you integrate companies without integrating networks, which means shorter time to revenue for the business. Effectively, each branch, factory, or cloud location functions as a “virtual island,” where business policies dictate exactly which users, workloads, and devices can communicate, ensuring consistent least-privilege enforcement. Deployment can be completed in hours with zero-touch provisioning, with no need to reconfigure the whole LAN or plan for downtime, enabling rapid business agility.

The results are:

- Reduced complexity and operational overhead
- Lower costs
- Minimized blast radius for attacks
- Significantly reduced lateral movement

How ZTB Differs from Traditional SASE and SD-WAN

Traditional SASE solutions often combine SD-WAN with cloud-delivered security, but the underlying network assumptions remain similar: routing overlays, full meshes, firewall-centric segmentation, and inbound VPN constructs. ZTB differs in several key ways:

- Minimized attack surface: internal devices cannot see each other, and no inbound services are exposed on the public internet
- Automatic device discovery and classification: simplify policy management by automatically grouping devices based on behavioral identity, and avoid complex inventory management
- Identity-driven communication: policies are enforced based on device and user identity, not IP addresses or VLANs; no transitive trust or shared broadcast domains
- No routable overlay: sessions between sites are brokered by the Zero Trust Exchange, and every session is authenticated and authorized
- Native east-west segmentation without VLAN/ACL/agent complexity: Zero Trust is applied within the branch, not just at the perimeter, and segmentation is policy-driven rather than network-engineered
- Unified security and connectivity: ZTB integrates seamlessly with the Zero Trust Exchange, providing consistent visibility and policy enforcement for SaaS, private apps, cloud workloads, and branch devices

Business and Security Impact

Zero Trust Branch addresses the inherent weaknesses of legacy connectivity and segmentation architectures by design:

- Reduces the attack surface and the risk of lateral movement
- Simplifies segmentation, allowing deployments in days, without VLAN changes or downtime
- Consolidates legacy infrastructure: no additional branch firewalls or point products
- Aligns operations around identity and policy, and delivers consistent security policies for users, devices, and apps

The outcomes:

- Lower cyber risk: stop ransomware spread
- Lower cost and complexity: fewer appliances and tools to manage
- Higher business agility: deploy in days, and integrate sites and companies without worrying about IP address conflicts
- Better user experience: eliminate backhaul to central security stacks at data center or colo sites, and provide the shortest path to resources

For CISOs, architects, and IT leaders, ZTB represents more than just a product; it is a new architectural paradigm. This branch model is purpose-built for the cloud era, for today’s dynamic threat landscape, and fundamentally for Zero Trust. If you want to learn more about how to architect a cafe-like branch, join our webinar on February 4th.]]></description>
            <dc:creator>Andrea Polesel (Principal Transformation Architect)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building Visibility to Enable Secure Healthcare AI Adoption]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/building-visibility-enable-secure-healthcare-ai-adoption</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/building-visibility-enable-secure-healthcare-ai-adoption</guid>
            <pubDate>Tue, 27 Jan 2026 22:53:44 GMT</pubDate>
            <description><![CDATA[Generative AI isn’t just a buzzword in healthcare anymore; it’s table stakes. Physicians, nurses, and analysts are tapping into generative AI to transform patient care. Whether it’s summarizing notes into a patient record, coding faster with AI assistants, or automating time-consuming documentation, the technology promises massive improvements in operational efficiency and clinical accuracy.

But as healthcare embraces AI, most organizations are flying blind. Your staff isn’t waiting for enterprise rollouts; they’re solving problems right now. There’s the cardiologist using ChatGPT to streamline discharge summaries, the nurse with a “smart” summarization tool, or the analyst uploading “anonymized” electronic health record (EHR) exports to a coding assistant. In every case, they’ve jumped ahead. Unfortunately, what they see as innovation, your network sees as risk. This is the uncomfortable truth: AI users in your organization may have already triggered your next biggest security incident. Here’s why, and how to fix it.

Shadow AI: The Elephant in the Room

Every healthcare leader knows that AI adoption is happening: more than 60% of organizations are already piloting or implementing enterprise AI solutions. But here’s the problem: the real number is likely much higher, because shadow AI tools (AI systems adopted by users without enterprise approval) are flying under the radar.

When one healthcare organization deployed inline WebSocket inspection, it discovered 31 unique AI tools in use within 72 hours. None of them had been approved, evaluated for compliance, or configured to safely handle Protected Health Information (PHI). AI-related traffic across enterprises has increased 3,000% over the last year, and 10–20% of that traffic already violates policies.
This widespread activity creates significant blind spots for security teams, and significant opportunities for attackers.

Shadow AI Risks Are Rising

AI has brought unprecedented opportunities, but it has also introduced unique risks. Without visibility into which tools are being used and how your people interact with AI, you risk:

- PHI exposure: shadow AI users may unintentionally upload sensitive patient data, creating major compliance risks
- Vulnerability to AI-related attacks: threat actors are using AI for sophisticated phishing campaigns, compromise tactics like prompt injection, and exploitation of organizational blind spots; AI-fueled attacks jumped 146% from 2023 to 2025, with healthcare data theft rising 92%
- Regulatory fines: with updated regulations like the proposed HIPAA Security Rule and the HITRUST AI Security Framework, compliance gaps related to AI adoption could lead to millions in penalties

Shadow AI isn’t a future problem. It’s happening in your organization now.

Why WebSocket Blindness Keeps You in the Dark

Most security teams already rely on SSL/TLS inspection for visibility. While this approach may work for traditional web traffic, it isn’t suited to generative AI platforms like ChatGPT, Microsoft Copilot, Claude, or Google Gemini. These modern platforms don’t communicate in the simple HTTPS formats you’re used to inspecting. Instead, they rely on WebSockets: persistent, bidirectional connections that continuously stream complex payloads. This creates a black box for organizations without inline WebSocket inspection.
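To make the black box concrete, here is a minimal sketch of the kind of content check inline inspection enables once a WebSocket frame has been decrypted and decoded into text. The patterns are deliberately simplified stand-ins; production detection would rely on tuned identifiers and NLP.

```python
import re

# Simplified, illustrative patterns only; real PHI detection needs far more care.
PHI_PATTERNS = {
    "mrn": re.compile(r"\bMRN[:#\s]*\d{6,10}\b", re.IGNORECASE),
    "icd10": re.compile(r"\b[A-TV-Z]\d{2}(?:\.\d{1,4})?\b"),
}

def scan_prompt(text: str):
    """Return names of PHI-like patterns found in one decoded WebSocket
    message (e.g. an AI prompt). Assumes an inline proxy has already
    terminated TLS and reassembled the frame into text."""
    return sorted(name for name, pat in PHI_PATTERNS.items() if pat.search(text))
```

A hit on a pattern like this is exactly the signal a policy engine needs to block the upload, or at least attribute it, before the data leaves the network.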
Your firewall may flag a session to an AI domain, but it won’t reveal what’s inside that session.

Without WebSocket Inspection, You Miss:

- User attribution: who sent the prompt?
- Sensitive content: PHI, MRNs, and ICD-10 codes embedded in AI requests
- Risks in action: prompt chaining, jailbreak attempts, or hallucinated clinical recommendations

With WebSocket Inspection, You Gain:

- Full prompt and response visibility in real time
- Identification and blocking of policy violations before sensitive data leaves your network
- Attribution of AI sessions to users and devices for rich audit trails
- Detection of risky or malicious prompt activity

In short, WebSocket inspection transforms AI-related blind spots into protected environments where you can allow safe use of AI without compromise.

Governance and Innovation: Striking the Balance

Blocking AI outright isn’t realistic. Your clinicians, analysts, and staff will find ways to adopt tools, often through less secure methods that increase risk. Instead, organizations need to embrace AI responsibly by anchoring their governance model in Zero Trust principles.

Step 1: Focus on Visibility First

- Deploy WebSocket inspection to see the tools and data your staff are already using
- Monitor prompts at the application level with full attribution (who, when, what)
- Flag risky patterns like jailbreak attempts or PHI-laden queries in real time

Step 2: Govern Approved AI Solutions

- Build a structured approval process for generative AI tools, defining requirements for data retention, licensing, and compliance certifications like HITRUST or HIPAA
- Explicitly block unsanctioned AI tools and browser extensions at the network level while enabling access to approved solutions

Step 3: Secure the Data

- Use contextual detection such as regular expressions or natural language processing (NLP) to identify and block sensitive data (e.g., SOAP notes, clinical codes, or names) from being transmitted accidentally
- Build immutable audit trails for all AI-related activity, enabling continuous improvement and compliance reporting

The Bottom Line: AI Can’t Come at the Cost of Safety

Your people are excited about AI, and for good reason. From saving hours on documentation to improving diagnostic processes and reducing errors, generative AI offers healthcare organizations incredible potential. But adoption must come with safety, visibility, and governance. With inline WebSocket inspection and a Zero Trust approach, you can:

- Protect PHI while enabling safe AI-driven workflows
- Identify and block shadow AI usage without stifling innovation
- Comply with emerging regulations and maintain trust with patients and stakeholders

Generative AI is inevitable. The question isn’t whether your organization will use it; the question is whether you’ll use it securely. Your first step to building a safer, AI-enabled future starts with visibility. Download our eBook to learn more about how you can secure AI while enabling innovation.]]></description>
            <dc:creator>Steven Hajny (Healthcare Principal Sales Engineer)</dc:creator>
        </item>
        <item>
            <title><![CDATA[The "Control" Trap: 3 Reasons Your Legacy Firewall Can’t Keep Up (And Why You Think It Can)]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/control-trap-3-reasons-your-legacy-firewall-can-t-keep-and-why-you-think-it</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/control-trap-3-reasons-your-legacy-firewall-can-t-keep-and-why-you-think-it</guid>
            <pubDate>Tue, 27 Jan 2026 08:18:41 GMT</pubDate>
            <description><![CDATA[There is a specific kind of psychological comfort associated with on-premises firewall appliances.&nbsp;The hum of the cooling fans, the perfectly dressed cables, and the rhythmic blinking of green LEDs creates a reassuring illusion: if traffic crosses this box, it’s controlled.I get why orgs hesitate to go all-in on a cloud-native proxy architecture. Letting go of the box feels like letting go of the wheel. But clinging to the appliance model is no longer the conservative choice, but an active acceptance of gaps.Let’s dismantle the three persistent myths that keep organizations tethered to the appliance model.The Reality: Centralized enforcement only works when traffic reliably transits that choke point. Topology drift has rendered the physical perimeter porous. Users originate from diverse remote networks, and applications reside in SaaS and public cloud VPCs/VNETs rather than a single data center.&nbsp;Consequently, the on-premises legacy firewall inspects a statistically shrinking slice of enterprise traffic. To maintain usability, operations teams are frequently forced to implement split-tunnelling and route exceptions for high-bandwidth applications - effectively removing policy enforcement from the highest-volume paths.The illusion of control further collapses under the weight of modern protocols such as TLS 1.3, HTTP/3 over QUIC, and WebSockets with persistent, multiplexed flows that demand sustained compute power, not burst capacity. The legacy firewall suffers from performance challenges:TLS interception is expensive per flow: Session setup, Key operations, Decryption/Re-encryption, Certificate validation/rewriting, plus full content scanning (IPS, malware, sandbox detonation, DLP, CASB controls) are CPU intensive tasks. 
Firewall appliances cannot scale with the needs of your organization.

- Feature stacking compounds cost: enabling SSL inspection, IPS, sandboxing, and DLP materially increases CPU cycles, memory pressure, and queue depth. As legacy firewalls hit CPU saturation, latency climbs and throughput collapses.
- Operational reality: when the appliances hit limits, your teams reduce coverage via category exclusions, app bypasses, and quick-fix exceptions. That creates predictable blind spots - exactly where attackers concentrate.

The on-premises appliance also carries inherent security risks:

- These firewalls are exposed assets: their public IP addresses are routable and continuously scanned from the internet.
- Management-plane and data-plane vulnerabilities are repeatedly weaponized in the wild. Your teams spend significant time patching software to stay ahead of security threats.
- If an appliance is compromised, the impact is high: it often sits adjacent to broad network segments and becomes a pivot point.

What "better control" looks like now

A cloud-delivered Zero Trust architecture removes the inbound attack surface entirely. Users establish outbound sessions to the service where policy is enforced, and private applications are accessed via outbound connectors without public exposure. True control today is defined by policy consistency and inspection depth, not by ownership of the box processing the packets.

The Reality: If the problem is architectural (distributed egress + encrypted traffic + fixed capacity), running the same appliance as a VM in a public or private cloud environment doesn't change the physics - it just changes the hosting location.

You still inherit the full appliance lifecycle: VM firewalls still require OS/image hardening, vulnerability management, emergency patching, upgrade testing, rollback plans, and maintenance windows.
High Availability remains stateful and fragile in public cloud environments. At cloud scale, this pattern also breeds image sprawl and configuration drift across regions and accounts.

Scaling is still engineering work, not elasticity: when traffic grows or the depth of inspection increases, you still hit performance ceilings. "Scale" with VMs means instance sizing, provisioning new nodes, tuning load balancers, and rewriting routes to preserve symmetry. When CPU cycles are saturated in individual VMs or across a cluster, you see latency, session drops, and selective inspection bypass - not a clean autoscale outcome.

The architecture stays network-centric, so lateral movement persists: appliance models enforce network boundaries. If users and workloads retain subnet/port reachability, compromise becomes inevitable: in the classic kill chain, once the network has been breached, lateral movement follows. Micro-segmentation can reduce the blast radius, but in appliance-centric designs, security often devolves into distributed access control lists, policy sprawl, and region-by-region duplication.

What changes with cloud-native security

A cloud-native enforcement fabric is delivered as a managed, multi-tenant service: the provider owns patching, scaling, and high availability. Policy decisions are identity-, device-, and context-driven and enforced consistently for internet, SaaS, and private apps. Critically, access is app-specific. There are no network-routable apps; apps are not discoverable, and lateral movement paths do not exist.

The Reality: In a distributed world, the opposite is true. Your legacy architecture is the bottleneck.

In hub-and-spoke designs, users often tunnel to a central data center for inspection, then exit to the internet - regardless of where the destination actually is.
That creates the classic hairpin path: a user in London routes to a firewall in New York, then back to a SaaS front door that might be in London. You've added distance, congestion points, and failure domains before you even start the application session.

The penalty compounds because latency isn't one number - it is how many times you pay the round-trip tax:

- TCP handshake
- TLS handshake (often multiple RTTs, plus certificate validation)
- App negotiation (HTTP/2/3, auth redirects, token exchanges)
- Long-lived flows (WebSockets, streaming, GenAI responses) that magnify jitter and loss

So the real question isn't "proxy or not." It is: where is the first security decision made relative to the user?

The Cloud Advantage

A properly built cloud edge model makes the first enforcement point local.

- Users connect to the nearest PoP, so the "security hop" is a few milliseconds away.
- Policy is enforced at that edge, then traffic rides optimized peering paths to the destination (SaaS/IaaS).
- Net result: you typically remove the backhaul hop rather than add a new one - fewer transits, fewer choke points, better p95 experience for SaaS.

Caveat (the part people confuse): if your "proxy" is just a VM cluster in one region, it will behave like the old model and be slow. That's a failure of the architecture, not an inherent property of proxying.

The Bottom Line: Redefining Control

Moving to SSE isn't surrendering control. It's shifting control from infrastructure ownership to policy enforcement. You can continue to operate legacy firewall appliances, with or without hypervisors, managing images, HA pairs, route tables, patch cycles, and capacity events. Or you can operate based on intent: who can access what, under which conditions, with inspection and logging applied the same way everywhere. One model scales your team's problems; the other scales security outcomes.

How to evaluate your legacy firewall appliances

Run three tests.
They'll tell you more than any vendor deck:

1. Encrypted reality test: increase TLS decryption/inspection coverage. Track p95 latency, breakage rates, and the number of forced exclusions needed to stay stable.
2. Operations truth test: inventory what you still own - OS/image patching, HA design, scaling events, routing symmetry, policy replication, and troubleshooting paths across regions.
3. Path and experience test: trace flows by geography and app. Measure RTT and p95 to your top SaaS/private apps with security on and off, and confirm where the first enforcement decision is made (local edge vs. centralized backhaul).

The real question is not "cloud vs. on-prem." It is whether your architecture can inspect encrypted traffic at scale, minimize exposed attack surface, and enforce policy close to users - without turning security into an infrastructure maintenance job.]]></description>
            <dc:creator>Nishant Kumar (Senior Manager, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Accelerating AI Initiatives with Zero Trust]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/accelerating-ai-initiatives-zero-trust</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/accelerating-ai-initiatives-zero-trust</guid>
            <pubDate>Tue, 27 Jan 2026 08:00:10 GMT</pubDate>
            <description><![CDATA[Act Fast. Stay Secure. This is the critical mission for enterprise organizations in the rapidly evolving world of AI. Today we launch exciting new innovations in Zscaler's AI security portfolio, paving the way for accelerating AI initiatives with confidence.

Since ChatGPT's debut three short years ago, the proliferation of AI in various forms is unlike anything the tech world has ever seen. It began with several GenAI apps that greatly improved productivity. Then AI became embedded in just about every SaaS app we use today, such as Microsoft Office, Salesforce, Atlassian, and more. Today most organizations have a strategic initiative to build and deploy custom enterprise AI applications to maintain a competitive advantage. And now we are seeing the rapid emergence of agentic AI, where autonomous agents promise to greatly accelerate productivity.

The AI Security Gap: A Roadblock to Innovation

While the rapid pace of AI innovation is exciting, the reality is that traditional security has not kept pace - creating friction as organizations strive to move from prototypes to production. Security leaders face a number of challenges, including:

- AI sprawl has dramatically expanded the attack surface, increasing the risk of data exposure.
- AI introduces new classes of attacks, such as prompt injection and context poisoning, which bypass traditional controls.
- New protocols, such as MCP, A2A, and WebSockets, make AI interactions harder to inspect and secure.
- Agentic AI ushers in a new frontier, where autonomous agents with excessive permissions could wreak havoc if not kept in check.

Given the competitive landscape, the question for security teams is not whether to adopt AI, but how to do so securely, consistently, and at enterprise scale as business leaders expect AI to drive productivity, efficiency, and growth.
This requires organizations to rethink their security frameworks to align with the dynamic new AI era. Built on the Zero Trust Exchange platform, Zscaler's AI security portfolio is designed to address the full range of requirements to safeguard an organization's AI journey:

- Asset Management - Gain full visibility of your AI footprint and risks.
- Secure Access to AI - Ensure the safe and responsible use of AI.
- Safeguard AI Apps and Infrastructure - Secure the full AI lifecycle, from development through deployment.

Zscaler is unveiling innovations across all of these critical pillars.

AI Asset Management

Zscaler's existing platform provides granular visibility into the use of GenAI apps. However, the reality today is that many traditional SaaS apps are embedding AI capabilities, which creates a unique blind spot: these apps may share the URL of their parent SaaS app but are, in fact, AI - adding to the shadow AI challenge. Zscaler has enhanced its solution to provide this additional level of visibility, mitigating these new risks.

Beyond understanding the use of AI, most enterprises struggle to inventory all of the AI applications and infrastructure deployed throughout their organization. Developer tools, AI models, MCP servers, and agent platforms can quickly proliferate without proper oversight. Zscaler's new solution pulls together a 360-degree view of your entire AI footprint, leveraging a wide range of telemetry, including insights from the Zscaler platform, scanning of cloud AI platforms, code repositories, and more. From these insights, Zscaler identifies the MCP servers, agents, and models deployed throughout the organization and how they are interconnected - uncovering data and AI pipeline risks.
In addition, Zscaler uncovers hidden risks and vulnerabilities such as posture misconfigurations, model risks, supply chain risks, and more.

Secure Access to AI Apps and Models

Zscaler pioneered the Zero Trust Exchange to secure users, workloads, and branches while eliminating risks such as lateral threat movement. Now, with the AI security platform, we have extended the Zero Trust Exchange to secure access to AI apps and models everywhere. Secure access to AI includes the following:

- Access controls: identify and secure access to AI apps, including embedded AI apps, with inline DLP.
- Advanced intent-based detectors: safeguard user interactions with AI apps to moderate content (e.g., prevent off-topic prompts) and prevent threats (e.g., responses with malicious content).
- Prompt extraction and classification: extract and classify prompts from the requests and responses of dozens of GenAI apps for insights into usage patterns.
- Secure access to AI development environments: ensure zero trust access to development environments, enforcing access controls for IDE applications accessing AI infrastructure to prevent data and PII leakage as well as security threats.

Secure AI Apps and Infrastructure

The dynamic nature of AI has radically impacted the app development process. Frequently updated models, rapidly expanding attack surfaces, and new attack methods outpace traditional scanning and posture management tools. With our recent acquisition of SPLX, Zscaler now has one of the most advanced AI red teaming solutions on the market, specifically designed to address these new challenges. Harnessing over 5,000 simulated attacks across a range of categories, our red teaming solution helps uncover and remediate vulnerabilities in real time. Insights can be leveraged to harden system prompts, improving system performance across a number of dimensions.
This approach provides value throughout the lifecycle of an AI system - from build to deploy to runtime - ensuring continuous protection. Once applications are deployed, Zscaler offers robust ongoing runtime protection, including:

- AI Guard: Zscaler is announcing general availability of its AI Guard solution. With a deep bench of prompt and response detectors, AI guardrails safeguard interactions between AI apps and models. The solution blocks malicious attacks, such as prompt injections and jailbreaks. It also moderates prompt responses to ensure your applications are aligned with corporate policies, covering factors such as toxicity, competition, or brand and reputation.
- Policy Generator for Automated AI Guardrails: Zscaler is also introducing a new integration between our red teaming and AI Guard solutions. This feature leverages red team findings to automatically generate guardrail policies, closing the loop between testing and enforcement.

Zscaler's AI security portfolio also addresses governance and compliance, with built-in support for the EU AI Act, NIST AI RMF, OWASP Top 10, and other widely adopted frameworks and regulations. This enables organizations to quickly test and assess for compliance and remediate any gaps.

The way forward

For almost twenty years, organizations have relied on Zscaler to streamline and secure digital transformation, transitioning from legacy infrastructure to a cloud-native platform. A similar paradigm shift is now occurring with the adoption of AI. Just as Zero Trust architecture established the cornerstone for a new era of security, enterprises must now extend this fundamental principle to safeguard their AI transformation.
Zscaler's proven scalability, unified platform approach, and ability to address the full range of AI requirements make us an ideal partner for your AI journey.

Ready to See It in Action?

We invite you to learn more about our AI Security portfolio and request a demo to see how Zscaler can help you accelerate your AI initiatives.

Forward-Looking Statements

This blog post contains forward-looking statements that are based on our management's beliefs and assumptions and on information currently available to our management. These forward-looking statements include the expected benefits of the expansion of our AI Security portfolio and the solutions and protections offered to our customers. These forward-looking statements are subject to the safe harbor provisions created by the Private Securities Litigation Reform Act of 1995. A significant number of factors could cause actual results to differ materially from statements made in this blog post, including those factors related to our ability to successfully integrate new features of our product offerings into our AI Security portfolio and the business impact additional offerings may have for our customers. Additional risks and uncertainties are set forth in our most recent Quarterly Report on Form 10-Q filed with the Securities and Exchange Commission ("SEC") on November 25, 2025, which is available on our website at ir.zscaler.com and on the SEC's website at www.sec.gov. Any forward-looking statements in this blog post are based on the limited information currently available to Zscaler as of the date hereof, which is subject to change, and Zscaler will not necessarily update the information, even if new information becomes available in the future.]]></description>
            <dc:creator>Eric Andrews (VP, Product Marketing - Data Security)</dc:creator>
        </item>
        <item>
            <title><![CDATA[2025 ZDX Recap: Elevating IT Operations with Customer-Driven Innovations ]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/2025-zdx-recap-elevating-it-operations-customer-driven-innovations</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/2025-zdx-recap-elevating-it-operations-customer-driven-innovations</guid>
            <pubDate>Fri, 23 Jan 2026 18:54:45 GMT</pubDate>
            <description><![CDATA[As we ring in 2026, it's a great moment to reflect on the significant advancements Zscaler Digital Experience (ZDX) delivered throughout 2025. While ZDX made headlines with major, groundbreaking capabilities that redefined device, network, and application monitoring and troubleshooting - read the launch recap blog and the new innovations blog to learn more - ZDX also delivered a continuous stream of impactful innovations. These customer-driven advancements demonstrate ZDX's commitment to ongoing product velocity, delivering enhancements that streamline workflows, sharpen insights, and empower IT teams to deliver an even more seamless digital experience for users. They are not just features; they are strategic innovations built to solve real-world challenges.

Let's take a moment to shine a light on some of these ZDX innovations from 2025.

Enhanced Visibility: See the Full Path, Prove the Performance

In 2025, ZDX dramatically expanded its capabilities to provide more granular, actionable visibility across the digital landscape. These enhancements are critical for rapid root cause analysis and a precise understanding of performance bottlenecks.

Managed Monitoring Companion Probe and Data Explorer Views

This year brought a significant innovation with Managed Monitoring through the Companion Probe, dramatically extending your network visibility and troubleshooting capabilities.

What's New: The Managed Monitoring Companion Probe functionality, paired with Data Explorer views, significantly extends network visibility. This enhancement introduces a cloud-deployed, outbound-only probe for actively monitoring connectivity and performance to any target application on any port, using TCP and ICMP. Crucially, the Companion Probe runs its cloud path monitoring against the exact same DNS-resolved IP address as the Zscaler Web Probe.
This ensures that both the web probe (for application performance) and the cloud path probe (for network performance) target the same destination, providing a highly correlated and comprehensive view. This empowers NetOps to:

- Gain unparalleled insight into the exact network path to an application, including public internet segments and third-party ISPs.
- Compare application performance from multiple Zscaler locations to pinpoint congestion or degradation.
- Deliver concrete evidence to determine whether a bottleneck is on the corporate network, the public internet, or a cloud provider.
- Drastically reduce Mean Time To Innocence (MTTI).

Wi-Fi Dashboard Enhancements

User complaints about "bad Wi-Fi" are common, whether in the office or working remotely. ZDX's Wi-Fi capabilities received significant enhancements to address this.

What's New: The Wi-Fi Dashboard now includes advanced capabilities such as the Wi-Fi Performance by Locations List View. This enhanced view provides a comprehensive list of access points and their connected devices, alongside their ZDX Score. These Wi-Fi dashboard enhancements empower NetOps and the Service Desk to:

- Quickly diagnose user-reported slowness related to local Wi-Fi conditions by filtering and sorting by best- and worst-performing locations.
- Identify whether an issue is local to the user's Wi-Fi environment using granular metrics.
- Provide faster, more accurate guidance to users, preventing unnecessary corporate network troubleshooting.

Proactively Solve Issues Before They Escalate

Reducing MTTR and catching issues before they impact productivity is paramount for both NetOps and the Service Desk. ZDX's 2025 advancements deliver on this promise with smarter diagnostics and comprehensive alerting.

UCaaS Application Support for User-Level Analysis

Unified Communications as a Service (UCaaS) applications like Zoom, Microsoft Teams, and Webex are critical for modern organizations, making performance issues particularly disruptive.
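Call-quality metrics of this kind are typically derived from per-packet measurements. As a minimal, generic sketch (illustrative only, not ZDX's implementation; real RTP receivers per RFC 3550 use an exponentially smoothed jitter estimator rather than a plain mean):

```python
# Generic derivation of two core call-quality metrics from raw samples.
# Simplified for illustration: RFC 3550 receivers smooth the jitter
# estimate exponentially instead of taking a plain average.

def jitter_ms(latencies_ms: list) -> float:
    """Mean absolute difference between consecutive per-packet latencies."""
    if len(latencies_ms) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

def packet_loss_pct(sent: int, received: int) -> float:
    """Percentage of packets sent that never arrived."""
    return 100.0 * (sent - received) / sent if sent else 0.0

samples = [20.0, 22.0, 21.0, 35.0, 23.0]  # per-packet latency in ms
print(f"jitter: {jitter_ms(samples):.2f} ms")  # one spike dominates
print(f"loss:   {packet_loss_pct(1000, 988):.1f} %")
```

Note that a stream with a single latency spike produces high jitter even when average latency looks healthy, which is why jitter and loss, not just latency, drive perceived call quality.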
ZDX now provides deeper, more actionable insights into these critical communication tools.

What's New: ZDX introduced AI/ML-driven, user-level UCaaS analysis for individual meeting sessions. This provides detailed meeting metrics such as the ZDX Score for the call, audio quality, audio latency, audio jitter, and audio packet loss, as well as video latency and jitter. ZDX now identifies specific contributing factors impacting a meeting's ZDX Score, such as "High Local Network Latency" or "Device Resource Exhaustion", with a clear confidence level. This intelligent root cause analysis empowers NetOps and the Service Desk to:

- Pinpoint the exact source of poor meeting quality for specific users and individual meetings, whether it's a device issue, network latency, or an application problem.
- Gain clear, data-backed insights into the user's experience for rapid resolution.
- Significantly reduce user frustration during critical communications.

Enhanced Proactive Alerting (Custom Apps, Call Quality, Any Incident Type)

Waiting for a user complaint is reactive.
ZDX's expanded alerting capabilities enable early intervention, tackling issues before they escalate into user complaints.

What's New:

- Expanded Alert Rule Support for Custom Applications: Configure alerts for critical custom applications based on network metrics like packet loss, number of hops, and packet count, extending alerting beyond predefined SaaS apps.
- New Alert Support for UCaaS Call Quality Metrics: Receive immediate notifications when voice or video call quality dips below acceptable thresholds for any user or group.
- Enhanced Alert Support for Any Incident Type: Configure alerts for virtually any performance anomaly ZDX detects, including specific incident types such as last-mile ISP blackouts, brownouts, or device resource issues.

These proactive notifications, deliverable via email, webhooks, or ServiceNow integrations, give your Service Desk and NetOps teams a crucial head start:

- Empowering them to address performance degradations or outages before users are broadly impacted.
- Preventing widespread disruption and maintaining productivity across the organization.

Device Incident Type for Windows

Often, what appears to be a "network problem" is actually a device problem. ZDX introduced an innovation to help accurately differentiate and diagnose these issues.

What's New: A new incident type for Windows devices proactively identifies device health issues impacting user experience, such as high CPU utilization, memory exhaustion, or application crashes.
ZDX provides detailed incident reports that include impacted users by geolocation and historical trends. This proactive detection helps the Service Desk to:

- Quickly determine whether user slowness or a poor experience is device-related.
- Provide clear evidence and metrics for software crashes or resource exhaustion.
- Troubleshoot directly at the source, preventing misdiagnosis and avoiding unwarranted blame on the network.

Looking Forward with ZDX

The ZDX enhancements of 2025 are more than just new features; they are strategic tools designed to empower you. They provide the deep, end-to-end visibility NetOps needs to prove network performance and pinpoint true bottlenecks, and they equip the Service Desk with the diagnostic muscle for faster, data-backed issue resolution. From granular network path visibility to AI/ML-driven UCaaS analysis and proactive device health monitoring, ZDX empowers you to move beyond reactive troubleshooting. These innovations show our commitment to continuous improvement, building on ZDX's foundational strengths to provide an even more refined, responsive, and robust monitoring solution.

We look forward to continuing this journey of innovation and delivering even more transformative capabilities in 2026!

To learn more, sign up for a demo.]]></description>
            <dc:creator>Cynthia Tu (Sr. Product Marketing Manager, DEM)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Rethinking Branch Security: Embracing Zero Trust Branch for the Modern Enterprise]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/rethinking-branch-security-embracing-zero-trust-branch-for-the-modern-enterprise</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/rethinking-branch-security-embracing-zero-trust-branch-for-the-modern-enterprise</guid>
            <pubDate>Wed, 21 Jan 2026 09:58:24 GMT</pubDate>
            <description><![CDATA[This two-part series explores why a traditional network-centric security approach, with its reliance on implicit trust, is no longer adequate for today's cloud-centric, high-threat environment, and introduces Zscaler's Zero Trust Branch (ZTB) as a transformative solution.

Part 1 explores the current state of enterprise branch networking, highlighting its fundamental flaws, including implicit trust models, broad network reachability, and persistent vulnerabilities to lateral movement and ransomware.

Part 2 presents how Zero Trust Branch addresses and overcomes these limitations, delivering a fundamentally more secure, agile, and cost-effective architecture that extends true Zero Trust principles to all branch devices, workloads, and connections.

Part 1 - The Limits of Traditional Network Thinking

For decades, the foundation of enterprise connectivity followed a fundamentally network-centric approach. This traditional perimeter-based security model operated on a deeply flawed premise: that trust was inherent to the network itself. The core mechanism was to grant users full access to the corporate network first, after which various security controls such as firewalls, VRFs, access lists, and antivirus software were layered on top. This "castle-and-moat" strategy had significant consequences for security and operational efficiency.

By its very design, it provided broad, general network access to anyone who could authenticate to the network, effectively making the network the primary security domain. The outcome was a system that failed to grant least-privilege access to individual corporate resources and applications. If an attacker managed to breach the perimeter, or if an internal user's credentials were compromised, they were often allowed almost unrestricted lateral movement - a direct consequence of the initial generalized network access.
Connecting business partners, merger-and-acquisition (M&A) entities, and contractors via VPNs or jump hosts means inheriting their attack surface, increasing business risk and operational complexity: if they are compromised, you are compromised.

This model's inherent trust in the network meant that once a user was "inside," security enforcement became significantly weaker, allowing for easy reconnaissance and data exfiltration across the organization. Traditional network segmentation techniques (firewalls, VLANs, ACLs, VRFs, agent-based segmentation) only mitigate the risk of lateral movement; they do not eliminate the underlying network reachability that attackers exploit. Persistent breaches show these legacy controls are inadequate, increasing complexity, cost, and business risk. Additionally, traditional solutions like internet-facing firewalls, VPNs, and SD-WAN routing overlays increase costs and complexity and, crucially, expand the attack surface.

The fundamental issue is very simple: accessibility equals vulnerability. Any part of your infrastructure that is reachable is, by definition, breachable. If a legitimate VPN client or an SD-WAN device can locate your VPN concentrator or another SD-WAN device on the public internet, so can a malicious actor.

The proliferation of AI is now dramatically intensifying these problems. Malicious actors, who once needed weeks or months to complete steps like discovering an attack surface or pinpointing exploitable vulnerabilities, can now accomplish the same feats in minutes using rogue AI engines. The rise of AI, cloud, IoT/OT, and the increasing convergence of IT and OT necessitate a fundamental reevaluation of legacy architectures.
These trends necessitate a shift away from providing extensive, network-level access.

The Fundamental Flaw: Implicit Trust

Oftentimes the concept of a perimeter is still used to set trust boundaries: everything outside the perimeter is deemed untrusted, while everything inside is implicitly considered trusted. Once inside the perimeter:

- Everything is reachable.
- Security controls filter after connectivity is already granted.
- Applications determine authorization, but the network allows the attempt: the application may refuse access, but the network still delivers the attacker to the door.

A traditional architecture based on such principles is like an office building where visitors are allowed to roam the corridors without restriction, and security checks are performed only at individual office doors. This is not Zero Trust. Least privilege requires the opposite: if a user or device is not entitled to a resource, they should not be able to reach it in the first place.

Zscaler introduced Zero Trust Access for users many years ago, enforcing context- and identity-driven policy and continuous risk evaluation, and connecting users to applications, not to networks. This addresses the need to secure individual users and managed devices. Zero Trust Branch extends these principles to ALL devices: IoT, OT, servers, and unmanaged endpoints. By extending Zero Trust to the branch, organizations can achieve a unified, consistent security posture across their entire distributed environment, ensuring that every connection, regardless of the connecting entity, is explicitly verified and secured.
This eliminates implicit trust for everything in the branch, significantly shrinking the overall attack surface and enhancing resilience against sophisticated threats targeting non-user devices.

In Part 2, we will explore how Zero Trust Branch redefines branch connectivity by decoupling connectivity from trust, reducing risk and complexity by design, and enabling a more scalable, efficient model for securing the enterprise edge.]]></description>
            <dc:creator>Andrea Polesel (Principal Transformation Architect)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Top 5 Considerations for Effective AI Runtime Protection]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/top-5-considerations-effective-ai-runtime-protection</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/top-5-considerations-effective-ai-runtime-protection</guid>
            <pubDate>Tue, 20 Jan 2026 17:00:06 GMT</pubDate>
            <description><![CDATA[AI is quickly becoming the new norm for business innovation. AI apps and agents now power customer and employee experiences and streamline business processes. But as adoption accelerates, security remains a top concern - especially as agents gain access to sensitive data and enterprise resources. This creates a new attack surface that adversaries can exploit to exfiltrate data, trigger unintended actions, and disrupt the business.

Legacy firewall-based systems are not built to protect AI, and though there are numerous up-and-coming security solutions on the market, none of them addresses the full breadth of threats or is built for enterprise scale. AI runtime protection, in particular, is a critical piece of a comprehensive security solution. Without effective AI runtime protection, businesses are left exposed to numerous threat vectors that can damage their business and compromise their company and customer data.

At Zscaler, we help 45% of Fortune 500 companies secure their businesses. Many of our customers are AI innovators. CTOs, CISOs, and CAIOs tell us that while AI is transforming their organizations, securing their AI initiatives remains a top concern. Based on our experience, here are the top five considerations that AI and security professionals should evaluate for effective AI runtime protection:

- Deep visibility into prompts and responses: AI apps and agents converse with LLMs to process queries. Malicious actors can trigger prompts for unintended responses that can lead to data leaks or unintended actions. Gaining visibility into prompts and responses is the first step to securing those interactions.
- Guardrails that cover the full breadth of AI safety and security risks: The interactions between AI apps and agents are exposed to a variety of threats, including security threats such as prompt injections, malicious code insertion, and jailbreaks.
Content safety issues and compliance requirements such as toxic and off-topic prompts, undesired responses, and PII pose additional risk. Effectiveness of detection and data protection: A high number of false positives can distract from real vulnerabilities, while a high rate of false negatives can increase risk. A guardrails solution needs high accuracy in order to be effective. Further, many off-the-shelf, open-source-based data loss prevention engines are not effective at detecting sensitive information across AI apps and LLMs. Ease of integration and enforcement: AI apps, LLMs, and the data they access are dynamic, continuously learning and evolving. Runtime protection is not a one-time action, but an ongoing process that needs to evolve with your AI apps and infrastructure. For this reason, it needs to integrate seamlessly with your AI app and security infrastructure so it can effectively block threats while reducing management overhead and risk. Audit and compliance: A guardrails solution needs to secure AI apps while maintaining auditable logs for compliance and troubleshooting. While visibility is key, the privacy of prompts/responses and of the data collected to enforce security is also critical, so it’s not exposed to third parties. Accelerate your AI initiatives with Zero Trust: To help our customers protect their enterprise AI, we introduced Zscaler AI Guard. It is a high-fidelity AI runtime protection solution that secures enterprise AI applications so organizations can adopt AI with confidence. It delivers end-to-end inline visibility and control into prompts and responses across AI apps, agents, and LLMs, along with inline allow/block/coach enforcement to reduce data leakage and policy violations. AI Guard has a broad set of detectors for AI security threats (such as jailbreaks, prompt injection, and malicious code), sensitive data leakage (such as PII and source code), and content moderation risks (such as toxicity, off-topic content, and competition). 
It also supports centralized governance and audit-ready reporting aligned to leading frameworks (including NIST, the EU AI Act, and OWASP Top 10 for LLM apps), integrates with major AI platforms and frameworks, and is designed for privacy. Zscaler helps more than 8,000 enterprises secure their digital transformation journeys. Zscaler’s own IT team serves as customer zero to enable delivery of our security technologies to customers. Watch this video to learn how the Zscaler IT team uses AI Guard to enable AI guardrails for AI adoption at Zscaler.]]></description>
            <dc:creator>Neelay Thaker (Director of Product Marketing - AI Security)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Digital Experience Predictions in 2026]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/digital-experience-predictions-2026</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/digital-experience-predictions-2026</guid>
            <pubDate>Fri, 16 Jan 2026 20:03:59 GMT</pubDate>
            <description><![CDATA[AI is now table stakes, but scaled value is still rare. McKinsey’s latest State of AI survey shows 88% of organizations are using AI in at least one business function, yet nearly two-thirds haven’t begun scaling AI across the enterprise and only 39% report EBIT impact at the enterprise level. At the same time, AI agents are moving quickly from curiosity to trials: 62% of respondents say their organizations are experimenting with agents, and 23% report scaling an agentic AI system somewhere in the enterprise. For IT, the takeaway is straightforward: AI won’t compress time-to-resolution if experience data remains fragmented across endpoint tools, network tools, application monitoring, and service workflows. As we look toward 2026, the organizations that move fastest will consolidate first, creating end-to-end experience visibility with a single endpoint agent, and then use AI to turn that unified data into instant expertise for every operator. (Source: McKinsey & Company, QuantumBlack, “The State of AI,” November 5, 2025.) Over the past year, we’ve learned that the future of digital experience isn’t about adding more dashboards or generating more alerts. It’s about reducing the time and effort required to get to the right answer, across the entire organization. We’ve seen what happens when experience data becomes immediately usable in real environments: a global IT consulting firm avoided a significant productivity hit across 1,000 employees by using network intelligence to identify an ISP-level issue in minutes—not hours—and reroute users quickly. A large U.S. 
healthcare system uncovered thousands of endpoint failures (including blue screens, audio failures, and browser crashes), helping protect productivity for clinicians and staff. The takeaway: when experience data is unified and actionable, teams don’t just respond faster—they prevent downstream impact. See the Zscaler Digital Experience launch event for more information. Predictions for 2026. Prediction 1: Consolidation becomes the execution advantage—not just a cost play. By 2026, consolidation will be driven less by license rationalization and more by a simple operational requirement: speed to clarity. Tool sprawl forces operators to swivel between consoles, reconcile conflicting signals, and escalate issues simply to gather context. The winning model will start with a consistent foundation: a single endpoint agent that captures user experience signals across devices, networks, and applications—so teams can correlate what’s happening without manual stitching. Why it matters: consolidation is the prerequisite for faster Zero Trust rollouts, actionable device health, and AI that can deliver precise answers. Prediction 2: Zero Trust rollouts will accelerate when experience leads the rollout. Zero Trust adoption will continue to accelerate—but the differentiator won’t be policy ambition. It will be whether teams can prove and protect user experience through the rollout. Organizations replacing legacy VPNs are already learning that the biggest obstacles often aren’t access controls. 
They’re the reality of distributed work: device instability, Wi‑Fi degradation, last-mile ISP issues, and SaaS path variability. By 2026, successful Zero Trust programs will operationalize experience insights to baseline performance before changes, pinpoint friction during cutovers, and validate performance continuously after policy updates. Bottom line: experience becomes the accelerant for Zero Trust because it provides the evidence to move fast without breaking productivity. Prediction 3: Device health becomes a first-class signal and remediation becomes a requirement. Devices are no longer passive endpoints. They’re complex systems that directly shape productivity and are frequently the hidden root cause behind “the network is slow” or “the app is down.” But by 2026, visibility alone won’t be enough. Leading IT organizations will require closed-loop device operations: detect → explain → remediate → verify. That means expecting digital experience solutions to support safe, role-appropriate remediation such as: approved endpoint actions to address common degraders (e.g., disk cleanup, clearing browser/DNS caches, restarting specific Windows services); posture/readiness validation signals to isolate configuration-related friction; standard endpoint network diagnostics (DNS lookup, latency/packet-loss tests, route/path checks); and verification loops that confirm whether the action improved experience. Why it matters: this is how service desks reduce escalations by resolving more issues at first touch with guardrails. Prediction 4: Real-user experience becomes the primary truth; synthetic becomes supporting coverage. Synthetic monitoring still has value, but it doesn’t reflect reality at scale, especially in highly distributed environments. By 2026, teams will rely more on real-user experience signals from actual devices on real networks inside live applications. The challenge won’t be data collection. 
It will be interpretation, correlating endpoint behavior, network path changes, and application performance without overwhelming teams. Winning solutions will prioritize correlation and impact: who is affected, where the issue sits, what changed, and what to do next. Prediction 5: The service desk becomes an intelligence layer, measured by prevented disruption. By 2026, service desk performance won’t be judged solely by ticket closure speed. It will be measured by how effectively teams prevent escalations, reduce user downtime, and resolve issues at first touch. This shift requires two things: instant access to cross-domain context (device, network, app, and access-path signals) and dramatically lower cognitive load for first-line responders. And it must show up where teams work. Increasingly, customers will expect experience context and guided insights to be embedded directly into ServiceNow workflows, not trapped in separate tools. Prediction 6: AI agents move into workflows, but only unified data makes them precise. Chat-based AI is a starting point, not the destination. By 2026, organizations will expect AI-powered troubleshooting to be embedded in workflows like ServiceNow, callable via APIs and automation, and integrated into operational views—not isolated conversations. But practitioners will demand technical fidelity. AI must be able to ground answers in concrete evidence like endpoint failures, path changes, and network quality signals without turning every responder into a specialist. This is the unlock: AI becomes “instant expertise” only when it can reason over complete, end-to-end experience data. Without that foundation, AI scales guesswork. Prediction 7: ISP performance incidents become a top-priority category because “the internet” is now part of your stack. More enterprise traffic will traverse public internet segments and Zero Trust overlays, meaning user experience will increasingly depend on paths IT doesn’t directly control. 
The operational problem isn’t just performance variability; it’s proving where the variability lives (endpoint, Wi‑Fi, ISP, intermediate carrier, or application) fast enough to act. This is why ISP performance will become a first-class incident category. Gartner reports that 70% of organizations struggle with network complexity and lack of end-to-end visibility, which is exactly what turns routine degradations into drawn-out war rooms. The winning model will look less like reactive troubleshooting and more like continuous, route-aware measurement: lightweight, frequent probing and telemetry (latency, packet loss, jitter) along the user’s actual path to the app; baselines and automatic deviation detection to flag “what changed” immediately; and aggregation by ISP/intermediary (e.g., ASN) and geography to pinpoint bottlenecks and quantify blast radius. Why it matters: when teams can rapidly identify ISP and carrier-driven issues with evidence, they reduce MTTR, avoid unnecessary escalations, and protect productivity at scale. Closing: In 2026, the advantage won’t come from adding more AI on top of fragmented tools. It will come from consolidating experience signals end-to-end with a single endpoint agent, accelerating Zero Trust with evidence, and enabling every operator to act with expert-level context—directly in the workflows where work happens.]]></description>
            <dc:creator>Rohit Goyal (Sr. Director, Product Marketing - ZDX)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Operationalizing Threat Intelligence with Zscaler Integrations MCP Server]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/operationalizing-threat-intelligence-zscaler-integrations-mcp-server</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/operationalizing-threat-intelligence-zscaler-integrations-mcp-server</guid>
            <pubDate>Thu, 15 Jan 2026 15:33:31 GMT</pubDate>
            <description><![CDATA[The Threat Intelligence Problem: Every security professional faces the same challenge: threat intelligence overload. Your inbox fills with advisories from CISA, industry ISACs, vendor bulletins, and security blogs. Each contains critical Indicators of Compromise (IOCs): malicious IPs, domains, and file hashes that should be blocked immediately. But translating these text-heavy PDFs, RSS feeds, field advisories, and blog posts into actionable security policies takes hours. Zscaler has always been a threat intelligence-driven company. Our Zero Trust Exchange is powered by real-time analysis of 500+ billion transactions daily, feeding into continuously updated threat intelligence that protects customers automatically. But what about the intelligence that's specific to your organization? The regional threats from your local CERT, the industry-specific campaigns targeting your vertical, or the emerging threats your security team discovers through threat hunting? The Zscaler Integrations MCP Server represents a new paradigm: AI-assisted threat intelligence operationalization that augments Zscaler's existing protections with your organization's unique intelligence requirements. Using the Zscaler Integrations MCP Server, you can transform multi-hour policy creation workflows into conversational, minutes-long exchanges. What is the Zscaler Integrations MCP Server? The Zscaler Integrations MCP Server is an open-source integration that connects AI assistants (like Claude or ChatGPT) to Zscaler's extensive API ecosystem. 
It provides access to a growing list of tools across Zscaler's portfolio: ZIA (firewall rules, URL categories, IP groups, etc.), ZPA (application segments, access policies, etc.), ZDX (device and network health monitoring, etc.), ZCC (Client Connector management), and ZIdentity (user and group management). Instead of clicking through consoles or writing scripts, you simply converse with your preferred chatbot. Deploying the Zscaler Integrations MCP Server: Setting up the Zscaler Integrations MCP Server takes about 10 minutes. You can deploy it in your choice of container framework (e.g., Docker, AWS Bedrock AgentCore). For detailed setup instructions, check out the following guides: Setup Guide: https://zscaler-mcp-server.readthedocs.io/en/latest/getting-started.html; GitHub README: https://github.com/zscaler/zscaler-mcp-server. Once configured, the server integrates directly with your preferred chatbot, giving you conversational access to your Zscaler environment. In this blog, we’ll demonstrate the integration using Claude Desktop. How the LLM Works with Threat Intelligence: When you provide a research-focused prompt, the LLM follows a workflow that mirrors how a human analyst would approach threat research (but at machine speed). Research & Contextualization: The LLM begins by searching authoritative threat intelligence sources based on your prompt criteria (e.g., government sources like CISA advisories and HHS HC3 alerts, vendor research from security blogs, and sector-specific intelligence feeds when relevant). Once it locates relevant threat intelligence, it builds context around the campaign: identifying threat actor attribution (ransomware groups like RansomHub or LockBit, APT groups, financially-motivated actors), understanding attack patterns and TTPs, and analyzing the timeline of events. For instance, the LLM might discover that a threat actor was the target of a major disruption operation in May 2025, only to resurface with new infrastructure in July. 
It may also examine victim demographics (e.g., which sectors are being targeted, geographic focus, and whether attacks target specific organization types). IOC Extraction & Attribution: Once threat intelligence has been collected, the LLM extracts network indicators from the narrative that can be used inside Zscaler policy, such as: infrastructure (C2 server IPs, domains, URLs, etc.); distribution (malware hosting sites, phishing domains, exploit kit URLs, etc.); and impersonation (spoofed portals mimicking legitimate services such as MyChart, Epic EMR, and insurance sites). With effective prompting, each IOC can be linked back to its originating source (a blog post, advisory, or campaign analysis). This attribution enables you to validate the LLM's research and assess the legitimacy of each indicator. Policy Proposal: Once complete, the LLM presents ZIA policy recommendations ready for review and activation. These typically include IP Destination Groups for C2 infrastructure, URL Categories for phishing domains and malicious infrastructure, and Firewall Rules with appropriate actions and logging configurations. These policy proposals are then translated to API calls and implemented using the MCP Server. Example: Emerging Threat Campaigns. The Scenario: Security vendors and government agencies regularly publish threat intelligence on active malware campaigns. For example, Lumma Stealer (LummaC2), a prolific infostealer-as-a-service, recently rebounded after a major May 2025 takedown, with new C2 infrastructure appearing within days. Let's analyze this emerging threat intelligence and create a policy to defend against it. The Prompt. Copy this prompt to try it yourself: Today's date is December 11, 2025, 2:30 PM EST. Research the Lumma Stealer (LummaC2) malware campaign. 
Search for: security blog posts; CISA advisories; and recent security vendor analyses. Extract all network IOCs mentioned (C2 IP addresses, domains, infrastructure) and create ZIA policy recommendations to augment Zscaler's existing protections: IP destination groups for C2 infrastructure; custom URL categories for malicious domains; and firewall rules to block access. For each policy, include in the description: Source: [Blog post or advisory]; Created: [Today's date]; Review: [Today's date + 90 days]; Threat: Lumma Stealer (LummaC2) infostealer. Create the necessary ZIA policy to block these threats, but DO NOT activate anything yet. Use concise bullets in your summary. Under 500 words. Managing the IOC Lifecycle: Likewise, three months later, you may choose to revisit this policy and clean up old IOCs. Copy this follow-up prompt to try it yourself: Today's date is March 11, 2026. Please review all ZIA IP destination groups, URL categories, and policies that have a Review Date of March 11, 2026 (or earlier) in their descriptions. For each one, research the current state of the malware campaign listed in the description: is it still active? If no (the campaign has been contained), recommend removing the rule (stale IOC). If yes (the campaign is still a threat), recommend extending the review date by 90 days. Show me what you'd remove vs. keep, and explain your reasoning. The result? Automated IOC lifecycle management prevents "threat intel bloat" while ensuring active threats remain blocked. The 90-day review cycle aligns with research showing most C2 servers have short operational lifespans.
Example: Sector-Specific Threat Intelligence (Healthcare). The Scenario: Healthcare organizations face unique threats. Ransomware groups specifically target medical facilities, knowing downtime can be life-threatening. In our next example, let’s augment our policy with healthcare-specific intelligence. Note the addition of a priority field to the prompt so that we can easily implement or withdraw policy suggestions when the time comes. The Prompt. Copy this prompt to try it yourself: Today's date is December 11, 2025, 3:15 PM EST. You work in healthcare cybersecurity. Research recent cyber threats specifically targeting healthcare organizations: search the HHS HC3 website for recent healthcare alerts; search for "ransomware healthcare 2025" campaigns; look for security vendor research about healthcare-targeted attacks; and find any campaigns impacting healthcare (credential theft is often initial access for ransomware). Extract network IOCs (C2 IPs, phishing domains, malware distribution sites) from these articles and create ZIA policies to augment Zscaler's protections with healthcare-specific threat intelligence: IP destination groups for healthcare-targeted C2 infrastructure; custom URL categories for spoofed medical portals; and firewall rules to block these threats. For each policy, include: Source: [Blog post or advisory]; Created: [Today's date]; Review: [Today's date + 90 days]; Sector: Healthcare; Threat: [Campaign name]; Priority: [Critical/High/Medium]. Consider that: we already have Zscaler's threat intel active; users need access to legitimate medical sites (.nih.gov, .mayoclinic.org, EHR vendors); the policy should not break critical healthcare SaaS apps; and the policy should implement full logging for HIPAA compliance. Create the necessary ZIA policy to block these threats, but DO NOT activate anything yet. Use concise bullets in your summary. 
Under 500 words. Managing the IOC Lifecycle: And here again, three months later, you can easily review and clean up old IOCs. Copy this follow-up prompt to try it yourself: Today's date is March 11, 2026. Please review all ZIA IP destination groups, URL categories, and policies that have a Review Date of March 11, 2026 (or earlier) in their descriptions AND a Sector of Healthcare. For each healthcare-specific policy, research the current state of the malware campaign listed in the description: is it still active? If no (the campaign has been contained), recommend removing the rule (stale IOC). If yes (the campaign is still a threat), recommend extending the review date by 90 days. Show me what you'd remove vs. keep, and explain your reasoning. Considerations. Risk Prioritization: Not all threats are equally urgent. You can prompt the LLM to categorize threats based on relevance and immediacy. Critical-priority threats are active campaigns with confirmed victims in your sector and may demand immediate action. High-priority threats come from threat groups with documented targeting history for your industry, even if no active campaign is underway. Medium-priority threats are opportunistic malware that may have some mention of your sector but lack evidence of targeted campaigns. Copy this prompt to try it yourself: For each policy recommendation, assign a priority level: Critical: active campaigns targeting [your sector]; High: threat groups with [your sector] targeting history; Medium: opportunistic malware with sector mentions. Show priority in the policy description. Customizable Execution: Keep in mind that suggested policies don’t have to be executed en masse. In fact, you may decide to execute only on the high-priority or critical suggestions while leaving the medium- and low-priority policies for further deliberation. Operational Validation: Before implementing any policy changes, prompt the LLM to validate that proposed blocks won't disrupt legitimate operations. 
This includes ensuring that legitimate sites (such as vendor portals, SaaS applications, EHR systems) aren't caught in the block lists. For regulated industries, the LLM can also confirm that logging configurations meet compliance requirements like HIPAA, PCI-DSS, or SOX. Copy this prompt to try it yourself: Before creating these policies, validate: no legitimate [vendor/SaaS/EHR] sites are blocked; logging meets [HIPAA/PCI-DSS/SOX] requirements; no conflicts with existing ZIA policies. Show validation results before proceeding. Monitor Before Blocking: Always test new policies in log-only mode (24-48 hours) before full blocking. Copy this prompt to try it yourself: Create this policy but set action to ALLOW with full logging enabled. We'll monitor for false positives before converting to BLOCK. Lifecycle Management: Make use of date-stamped descriptions to automate IOC aging. This makes clean-up a breeze, even for policies that weren't created with AI assistance! Copy this prompt to try it yourself: Show me all policies with review dates older than [current date]. For each, research recent activity and recommend keep vs. remove. Conclusion: Zscaler's Security Cloud provides unmatched threat intelligence that automatically protects customers worldwide. The Zscaler Integrations MCP Server augments that foundation with intelligence unique to your organization: sector-specific threats from your industry; regional threats from your local CERT; emerging threats not yet in mainstream feeds; organization-specific IOCs from threat hunting; and intelligent lifecycle management with automated aging. The analyst remains in control, making critical decisions about what to block and when. 
The AI handles the mechanical translation from intelligence to policy, the correlation across sources, and the lifecycle management. The result? More comprehensive coverage, faster response times, and more time for the proactive security work that truly matters.]]></description>
            <dc:creator>Aaron Rohyans (Sr. Principal Solutions Architect - Business Development)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building Trust Through Identity: Addressing Security Challenges in Modern Healthcare]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/building-trust-through-identity-addressing-security-challenges-modern</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/building-trust-through-identity-addressing-security-challenges-modern</guid>
            <pubDate>Thu, 15 Jan 2026 04:50:38 GMT</pubDate>
            <description><![CDATA[In healthcare, patients are always the priority. But behind the scenes, the quest to secure sensitive data while empowering clinicians remains a delicate balancing act. In a recent episode of the radio show We Have Trust Issues, we (Tamer and Steven) sat down with Joel Burleson-Davis, CTO of Imprivata, to tackle one of the industry’s most pressing challenges: improving clinician efficiency while maintaining modern identity security. With regulated industries such as healthcare experiencing rapid digital transformation, this episode shed light on key strategies and technologies designed to build trust, secure workflows, and eliminate friction between clinical and IT teams. Why Healthcare Security is Unique: Joel opened the conversation with an important point: healthcare workflows are vastly different from those in other industries. Unlike professionals such as accountants or engineers, clinicians spend the bulk of their time focused on patient care, not interacting with technology. "The primary concern of these end users is very different," Joel explains, "and so the way they work is very different." This focus on care creates unique challenges. Healthcare professionals often share workstations, devices, and data, making it harder to track identity in real time. Meanwhile, hospitals remain prime targets for ransomware attacks because of their mission-critical operations. The combination of shared assets, constant workflow changes, and heightened regulatory requirements has led to friction between clinical care teams and IT security departments. The "No Balance" Mindset Shift: One of the standout moments from the discussion came when Joel challenged the common notion of balancing security and productivity. According to him, framing the relationship as a balance implies that one side must lose for the other to win. 
Instead, he emphasized the need to pursue both ends simultaneously. Joel argues that by innovating with "workflow-aware" solutions, healthcare systems can achieve superior security without burdening clinicians. "Technology teams need to embrace the hard problems," he said, "and eliminate the perception that security improvements must come with sacrifices on the clinical side." Innovative Solutions Driving Transformation: Healthcare organizations are tasked with solving both productivity and security issues simultaneously—and technological innovation is key. Joel laid out multiple practical examples of how identity security can empower care teams while enhancing protection. Passwordless Authentication: Passwordless authentication was highlighted as a powerful "win-win" solution. Joel explained how integrating biometric logins, behavioral analytics, and intelligent PIN systems can replace the cumbersome, time-consuming process of typing in lengthy credentials. Without passwords to remember—or to reset—clinicians can reclaim more time for patient care, while IT departments benefit from enhanced security and a reduced risk of human error. For Joel, the potential savings go far beyond seconds shaved off workflows. "If you calculate the time lost worldwide to typing passwords, killing them could change the game entirely," he remarked. Mobile Workflows: Another transformative technology discussed was the growing use of mobile devices in clinical settings. Joel described phones and tablets as tools that could replace traditional workstations, enabling more flexible, streamlined workflows. These devices can empower clinicians to ditch rolling carts or desktop logins in favor of a smartphone that connects them directly to critical systems and apps. However, Joel cautioned that mobile integration requires careful execution. "Mobile devices are mobile—it’s in the name," he explained. 
For a successful rollout, healthcare organizations must address challenges such as device sharing, fleet management, and initial setup hurdles. For example, shared devices should easily transition between users with minimal effort—using badge scans or face recognition for quick personalization. AI-Powered Efficiency: It wouldn’t be a modern conversation about technology without discussing artificial intelligence. Joel sees incredible potential for AI to make security an "invisible" part of clinician workflows. Using AI, healthcare institutions can automate identity verification and policymaking tasks that currently burden IT teams and distract clinicians. Beyond security, AI also offers opportunities to elevate workflows. For example, predictive algorithms can anticipate a clinician's needs, delivering key patient information exactly when it’s required and reducing time spent searching for critical data. However, Joel warned that the efficacy of AI solutions depends entirely on the quality, protection, and curation of the underlying data they use. The Danger of Poor Execution: Even the best technologies can fail if they’re deployed without clinician input. In shared healthcare environments, it’s crucial for IT teams to consider factors like ease of use, device accessibility, and workflow compatibility. Joel recounted failed device rollouts where clinicians abandoned state-of-the-art workstations, not due to flawed hardware, but because boot times and added clicks slowed them down. "Doctors have literally timed how many seconds new processes take and calculated the number of patients they miss during a shift," Tamer emphasized. "It’s not something we can afford to ignore—not when clinician burnout and patient satisfaction are at stake." From Trust Issues to Trust Building: For Joel, the solution to these longstanding issues is to rebuild trust through technology. 
Features that prioritize speed, simplicity, and clarity are essential to making security "invisible," giving clinicians one less thing to worry about in their often-stressful settings. Removing friction, streamlining identity verification, and reducing cognitive load are all part of a broader strategy to align IT and clinical goals. Joel’s message is clear: security teams must collaborate with clinical teams to design systems that prioritize both care delivery and regulatory compliance—and never sacrifice one for the other. Looking Ahead: What's Next for Healthcare Identity? We closed the episode by looking to the future of identity security in healthcare—and AI took center stage. Joel predicted that advances in AI-powered automation would enable healthcare systems to reduce manual tasks, enhance user experiences, and improve security postures. However, with AI's reliance on data, Joel emphasized that organizations must invest heavily in data protection and governance. "Without data, AI is a worthless piece of technology," Joel stated bluntly. "We must ensure its accuracy, security, and integrity if we’re going to depend on it." Continue the Conversation: To hear more about strategies for fostering trust while addressing modern security concerns in healthcare, check out Imprivata’s podcast, Access Point. For even more insights, don’t miss the upcoming CHIME State of Cyber Summit on January 20th, where AI’s role in healthcare security will be further explored. And of course, join us for more episodes of We Have Trust Issues, where the most important (and sometimes controversial) topics in cybersecurity are always on the table.]]></description>
            <dc:creator>Tamer Baker (Healthcare CTO)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Beyond Patient Zero: Why Detection is Dead and Quarantine is King]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/beyond-patient-zero-why-detection-dead-and-quarantine-king</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/beyond-patient-zero-why-detection-dead-and-quarantine-king</guid>
            <pubDate>Wed, 14 Jan 2026 12:24:18 GMT</pubDate>
<description><![CDATA[A recent survey found the median ransomware variant can encrypt nearly 100,000 files (about 53.93GB) in 43 minutes.This is why “Time to Detect” is starting to feel like a comforting statistic from a slower decade.In an era when ransomware can encrypt 300 files in under a minute, detection is a consolation prize, not a strategy. If your security tool alerts you five minutes&nbsp;after a user has downloaded a malicious file, the damage is already in motion.This is the&nbsp;"Patient Zero" Paradox: Traditional security tools often allow the first user to download a file while analyzing it in the background. They sacrifice the security of that first user to maintain speed for everyone else.It’s time to retire the "detect and remediate" model. To stop modern threats, we must move to a "quarantine and prevent" architecture.The Flaw in "Allow and Scan"Legacy sandboxing solutions (and even some modern firewalls) operate on a pass-through architecture. They inspect traffic, but to avoid latency, they often allow a file to pass through to the endpoint&nbsp;before the verdict is ready.If the file turns out to be malicious, the alert comes too late. The code has already been executed. The endpoint is compromised, the blast radius spreads across your data, and the organization is now in a reactive state of breach containment.This approach treats the first victim (Patient Zero) as a sacrificial lamb.The Solution: AI-Driven QuarantineZscaler Advanced Cloud Sandbox isn't just about scanning more files; it's about fundamentally changing when the verdict is applied.1. Hold the File, Not the VerdictAdvanced Cloud Sandbox utilizes AI-Driven Quarantine to hold suspicious files in the cloud environment while they are analyzed. 
The user does not receive the file until it is verified as safe.This protects the first user (Patient Zero) from infection, rather than just alerting you after the fact.It eliminates the "race condition" where malware races to encrypt files before the sandbox finishes its analysis.&nbsp;Closing the Resilience GapAdopting a quarantine-first model is about more than technical efficacy; it’s about business continuity.Eliminate the "Safe Site" Blind Spot: The 'Developer Blind Spot' was the defining theme of late 2025. Campaigns targeting the npm and PyPI ecosystems (such as the 'Shai-Hulud' malicious packages) proved that developers are the new high-value targets. These attacks didn't come through sketchy websites; they came through 'trusted' repositories and legitimate-looking scripts. Because Basic Sandbox often ignores script files or archives from 'neutral' URLs, these supply chain attacks walked right past the perimeter.Prevent Supply Chain Poisoning: By stopping "Patient Zero," you prevent the initial foothold that attackers use to move laterally. You aren't just saving one laptop; you are protecting the integrity of the wider network.Regulatory & Compliance Maturity: For regulated industries, proving that you have controls in place to prevent malware—rather than just detect it—is a cleaner, stronger narrative for compliance frameworks and Zero Trust maturity.The Bottom LineIf your sandbox policy is set to "Detect," you are operating on a probability model that assumes you can clean up a mess faster than an attacker can make one.But true security goes beyond just blocking threats; it must also accelerate your operations. By leveraging the Zscaler Sandbox API, you can evolve your SOC from a reactive cleanup crew into a proactive intelligence hub. 
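Conceptually, the difference between "allow and scan" and "quarantine and prevent" comes down to when the verdict gates delivery. A minimal, vendor-neutral sketch (the `sandbox_verdict` function and payloads below are purely illustrative assumptions, not the Zscaler Sandbox API):

```python
# Illustrative sketch only: the verdict function and payloads are hypothetical,
# standing in for whatever analysis a real sandbox performs.

def sandbox_verdict(file_bytes: bytes) -> str:
    """Stand-in for sandbox analysis; returns 'benign' or 'malicious'."""
    return "malicious" if b"EVIL" in file_bytes else "benign"

def pass_through_delivery(file_bytes: bytes):
    # Legacy model: the user receives the file immediately;
    # the verdict arrives after the fact, so Patient Zero is exposed.
    delivered = file_bytes
    verdict = sandbox_verdict(file_bytes)  # too late if malicious
    return delivered, verdict

def quarantine_first_delivery(file_bytes: bytes):
    # Quarantine model: hold the file until the verdict is known;
    # only verified-safe files ever reach the endpoint.
    verdict = sandbox_verdict(file_bytes)
    delivered = file_bytes if verdict == "benign" else None
    return delivered, verdict

malware = b"EVIL payload"
print(pass_through_delivery(malware)[0] is not None)     # Patient Zero got the file
print(quarantine_first_delivery(malware)[0] is None)     # the file was held
```

The only structural change is reordering two steps, but it moves the verdict from an alert into an enforcement point.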
This integration empowers your team to automate analysis, enrich investigations, and operationalize intel.To truly secure the modern enterprise, you must transition to Advanced Cloud Sandbox.Stop relying on finding the needle in the haystack after it pricks you. Insist on a system that keeps the needle out of your hand entirely.Want to talk to an expert?&nbsp;Click here.]]></description>
            <dc:creator>Nishant Kumar (Senior Manager, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Fortify Your Future: How Zscaler Drives a Modern Defensible Architecture for Supercharged Cyber Resilience]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/fortify-your-future-how-zscaler-drives-modern-defensible-architecture</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/fortify-your-future-how-zscaler-drives-modern-defensible-architecture</guid>
            <pubDate>Mon, 12 Jan 2026 22:15:03 GMT</pubDate>
<description><![CDATA[In today's hyper-connected world, cyber threats are not just a possibility; they're a relentless reality. From sophisticated nation-state actors to organised criminal groups, the adversaries are more advanced and persistent than ever. Traditional perimeter-based security is simply inadequate, leaving organisations with an increased attack surface and vulnerable to breaches.This escalating complexity demands a fundamental shift in how we approach cybersecurity. Enter the Australian Cyber Security Centre's Foundations for Modern Defensible Architecture (MDA), a critical framework offering a strategic, layered blueprint for building contemporary cyber resilience. The MDA champions three core pillars: a Layered Architecture, comprehensive Zero Trust, and Secure-by-Design methodologies. At its core, Zscaler's Zero Trust Exchange was developed to support the implementation of these vital principles.The MDA Blueprint: Beyond the PerimeterThe Modern Defensible Architecture isn't just about replacing existing perimeter controls with new, stronger perimeter controls; it's about building a fortress from the inside out. 
It acknowledges that breaches are inevitable and focuses on minimising impact, containing threats, and ensuring business continuity.&nbsp;&nbsp;Its three pillars are:Layered Architecture and Traceability: Ensuring security controls are directly linked to business objectives and provide deep visibility.Comprehensive Zero Trust: Embracing "never trust, always verify", "assume breach", and "verify explicitly" principles for every interaction.Secure-by-Design: Integrating security from the outset of all development and operational processes, making it an inherent quality, not an afterthought.These pillars are further supported by ten foundational capabilities, designed to create an environment of continuous security validation and adaptation.Zscaler's Zero Trust Exchange: The Engine of MDAZscaler's cloud-native Zero Trust Exchange platform is uniquely positioned to help organisations achieve the MDA vision. It shifts traditional network security by connecting users directly to applications, not the network. 
This "never trust, always verify" model transforms security from a static perimeter defense to continuous verification of every user, device, and application interaction.Here's how Zscaler provides the necessary components to build a robust and defensible architecture:Zscaler Internet Access (ZIA): For secure internet and SaaS access, inspecting all outbound traffic for threats and policy violations.Zscaler Private Access (ZPA): Providing zero trust access to internal private applications, making them "dark" to the public internet.Zscaler Digital Experience (ZDX): Offering end-to-end monitoring of user experience and application performance.Zscaler Client Connector (ZCC): The intelligent agent on endpoints that enforces policies and gathers crucial device posture.Zscaler Security Operations - Providing external attack surface management and continual risk, vulnerability and control context.Red Canary a Zscaler company: Provides 24x7 continuous security monitoring.Powering MDA: Zscaler in ActionLet's look at how Zscaler directly contributes to some of the MDA's most critical foundations:High Confidence Authentication (MDA Foundation 2)Phishing-resistant and cryptographically bound authentication is crucial. Zscaler integrates seamlessly with your existing IdP, enforcing strong MFA for all user authentications. Beyond the user, the ZCC provides a critical layer of device authentication, establishing a secure, device-bound tunnel to the Zscaler cloud. This dual-layered approach ensures access is granted only to an authenticated user from an authenticated and compliant device.Contextual Authorisation (MDA Foundation 3)The MDA demands dynamic, real-time validation for every access request, factoring in user identity, device posture, location, and threat intelligence. Zscaler's platform excels here. 
Its policy engine acts as the central Policy Decision Point (PDP), aggregating "confidence signals" from your Identity Provider (IdP), the ZCC (device health), and real-time threat intelligence. This allows Zscaler to enforce granular, adaptive policies, adjusting access privileges based on the evolving risk profile, truly "never trust, always verify."Reliable Asset Inventory (MDA Foundation 4)Having a reliable asset inventory drives better management, visibility and decision making. A number of Zscaler modules provide near real time asset inventory information and/or the scanning of external assets to supplement existing approaches to CMDB management. This provides more accurate information regarding assets deployed within the organisation.Reduced Attack Surface (MDA Foundation 6)The MDA emphasises minimising exploitable entry points. ZPA dramatically shrinks your attack surface by making private applications invisible to the internet. Instead of exposing services via inbound firewall ports, ZPA establishes secure, outbound-initiated microtunnels, preventing reconnaissance and direct attacks. ZIA complements this by securing all internet-bound traffic, blocking malicious sites and preventing exploitation at the web gateway. Universal vulnerability management provides up to date context to support vulnerability and patching processes.Continuous and Actionable Monitoring (MDA Foundation 10)All Zscaler components provide detailed logging which can feed into an organisation's security analytics systems. Zscaler also has a 24x7 security monitoring service provided by Red Canary, a Zscaler company.Zscaler's capabilities extend across all ten MDA foundations, from supporting Centrally Managed Enterprise Identities and Reliable Asset Inventory to enabling Resilient Networks, promoting Secure-by-Design practices, and facilitating Comprehensive Validation and Continuous and Actionable Monitoring. 
Its inline, cloud-native architecture generates a wealth of high-fidelity logs, seamlessly integrating with SIEM/SOAR platforms to provide unparalleled visibility and rapid response capabilities.&nbsp;Beyond Security: The Zscaler AdvantageAdopting Zscaler for your Modern Defensible Architecture offers a cascade of benefits:Risk Reduction: Implementation of controls that align with the MDA principlesAccelerated Zero Trust Adoption: Rapidly deploy "never trust, always verify" principles without complex network overhaulsEnhanced Threat Prevention and Data Protection: Inline inspection, advanced threat detection, and robust Data Loss Prevention (DLP) stop sophisticated attacks in their tracksImproved User Experience: Direct-to-app connections mean faster, more seamless access for users, regardless of their locationOperational Simplicity and Scalability: As a cloud-native platform, Zscaler removes the burden of managing security hardware, simplifying operations and scaling globally on demandUnrivaled Visibility and Auditability: Granular insights into every user activity and security event empower compliance, incident response, and continuous security validationBeyond an IT project, the journey to a truly Modern Defensible Architecture is a strategic imperative for every organisation. Zscaler doesn't just check the boxes for MDA requirements; it embodies them. By shifting security to a cloud-native, Zero Trust model, Zscaler empowers you to build a future-proof, resilient security posture that can withstand and mitigate the cyber challenges of today and tomorrow.Ready to transform your security from reactive defense to proactive resilience? Contact us to explore how Zscaler can be the foundational enabler for your Modern Defensible Architecture. Stay tuned for Zscaler’s whitepaper covering in depth how Zscaler maps to the MDA foundations.]]></description>
            <dc:creator>Nick Clark (Zscaler)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Top 7 Requirements for Effective AI Red Teaming]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/top-7-requirements-effective-ai-red-teaming</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/top-7-requirements-effective-ai-red-teaming</guid>
            <pubDate>Mon, 12 Jan 2026 17:00:00 GMT</pubDate>
<description><![CDATA[Enterprises across the globe are racing to deploy AI across every business workflow, but with accelerated adoption comes a completely new set of risks – one that conventional security tooling was never designed to mitigate. LLMs hallucinate, misinterpret intent, overgeneralize policies, and behave unpredictably under adversarial pressure. Today, most organizations deploying LLM-powered systems at scale have little visibility into how their models fail or where real vulnerabilities are emerging.This is the reality customers now face: dozens of AI apps in production, hundreds more being developed, and virtually no scalable way to understand or mitigate the risks. This is where AI red teaming becomes essential – and where Zscaler differentiates itself from every available solution in the market.The Hidden/Unknown Risks Behind LLM-Powered SystemsLLMs have introduced a range of vulnerabilities that cannot be uncovered through static code scanning or manual testing efforts. Organizations today struggle with:&nbsp;Undiscovered exposure to prompt injection, jailbreaks, bias, and harmful outputsHallucinations and trust failures that impact business decisionsNo repeatable process to validate behavior across scenariosLack of on-domain testing coverage that reflects real user behaviorManual red teaming that takes weeks to complete and still misses critical failure modesAs enterprises deploy AI globally and across different languages, modalities, and business units, the risks multiply. AI red teaming must be proactive, continuous, scalable – and deeply contextual.&nbsp;Top 7 Requirements for Effective Enterprise AI Red TeamingEarly red teaming solutions have suffered from a number of limitations, including lack of depth, limited operational scale, and tools that fail to reflect real-world threats. Here are some key requirements to look for when building a modern, enterprise-grade AI red teaming solution:&nbsp;1. 
Domain-Specific Testing with Predefined Scanners (Probes)AI red teaming solutions should include a large number of predefined probes that test across major categories, such as security, safety, hallucination and trustworthiness, and business alignment. These should not be simply generic tests; instead, they should be modeled after real enterprise scenarios and reflect how regular users, employees, and adversaries interact with AI systems.&nbsp;2. Full Customizability for Comprehensive Testing DepthUsers should be able to provide structured context about their AI system and create fully customized probes:Create custom probes through natural languageUpload custom datasets with predefined test cases (Bring your own dataset)Simulate business-specific attack pathsBasic red teaming solutions lack this close alignment with enterprise environments.&nbsp;&nbsp;&nbsp;3. A Large, Continuously Updated AI Attack DatabaseA robust AI attack database is critical to a successful red teaming solution. This includes continuously updating the database through:AI security researchReal-world exploitation patternsA comprehensive attack database ensures organizations can always test against the current AI threat landscape.&nbsp;4. Scalability – Simulate Thousands of Test Cases in HoursA robust AI red teaming platform should be able to run thousands of on-domain test simulations in hours, not weeks. This makes enterprise-wide AI risk assessments across hundreds of different use cases achievable.&nbsp;5. Multimodal and Multilingual Testing CoverageAI red teaming solutions should test across:Text, voice, image, and document inputsMore than 60 supported languagesGlobal deployments require global testing standards and multilingual support.&nbsp;6. Modular Out-of-the-Box Integrations for any Enterprise AI StackRobust AI red teaming solutions should support a wide range of built-in connector types (REST API, LLM providers, cloud platforms, enterprise communication platforms). 
This enables seamless integration into any enterprise AI architecture.&nbsp;7. AI Analysis with Instant Remediation GuidanceIdentifying issues is only the start. AI red teaming solutions should also provide analysis that explains extensive testing results in plain language, highlights the most critical jailbreak patterns, and generates actionable remediation guidance.&nbsp;Accelerate Your AI Initiatives with Zero TrustAI red teaming isn't just about showing failures – it’s about understanding them, learning from them, and operationalizing AI protection at scale. With its recent acquisition of SPLX, Zscaler delivers the most complete, scalable, and deeply contextual platform, turning AI risk into something measurable, manageable, and, most importantly, fixable.&nbsp;Learn more about Zscaler’s newest addition to its AI security portfolio, including the unveiling of exciting new capabilities in our exclusive launch:&nbsp;Accelerate Your AI Initiatives with Zero Trust&nbsp;&nbsp;This blog post has been created by Zscaler for informational purposes only and is provided "as is" without any guarantees of accuracy, completeness or reliability. Zscaler assumes no responsibility for any errors or omissions or for any actions taken based on the information provided. Any third-party websites or resources linked in this blog post are provided for convenience only, and Zscaler is not responsible for their content or practices. All content is subject to change without notice. By accessing this blog, you agree to these terms and acknowledge your sole responsibility to verify and use the information as appropriate for your needs.]]></description>
            <dc:creator>Dorian Granosa (Director, AI Research)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Threat Intel, SSL Inspection and Other Considerations: A Real-World Checklist for SSE]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/threat-intel-ssl-inspection-and-other-considerations-real-world-checklist</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/threat-intel-ssl-inspection-and-other-considerations-real-world-checklist</guid>
            <pubDate>Mon, 12 Jan 2026 14:39:39 GMT</pubDate>
<description><![CDATA[Somewhere in the middle of your cloud-first journey, there’s a moment that doesn’t feel like progress.Despite users, apps, and data being decentralized and spread everywhere, most real-world trouble still walks through the same front door it always has: the open web.The web now hosts SaaS, partner portals, developer tooling, and a growing pile of AI assistants—almost all wrapped in TLS/SSL. Great for privacy. Brutal for visibility. Without scalable inspection, encryption becomes a cloak for lateral movement, malware delivery, and data exfiltration.Security Service Edge (SSE) adoption is the logical response to this shift. But it only wins if your Secure Web Gateway (SWG) can take a punch.So ask yourself: will your SSE-based platform hold up in production? Validate it against this five-point checklist.1. The Encryption Test: Can You Inspect Without Collapsing?With over 87% of threats now delivered via encrypted channels, SSL/TLS inspection is no longer optional—it’s baseline defense.However, the architectural challenge is not simply&nbsp;capability, but&nbsp;capacity.Legacy appliances and their virtualized equivalents are bound by fixed compute resources. When inspection load spikes, they force a choice: throttle the user or bypass security.&nbsp;A cloud-native proxy architecture eliminates this trade-off by decoupling inspection from physical hardware limits, dynamically scaling to inspect traffic without creating a bottleneck.Considerations for your SSL/TLS inspection:Decrypt coverage: What % of relevant TLS SaaS sessions do you inspect—by app and by category?Granular TLS controls: Do you have control over specific apps to be decrypted/bypassed (SNI-based)? Are these policy controls consistent over web and SaaS applications?Certificate reality: How is certificate distribution being managed across all managed and unmanaged devices? 
How are trust store updates being propagated across VDI?Performance + failure mode: What’s the p95 added latency at normal and peak, and is fail-open vs. fail-closed configurable by risk tier?Exception governance: For every application bypass, can you prove an owner, a reason, and an expiry/review cycle—with reporting?Protocol roadmap: Beyond TLS 1.3, what’s your plan for QUIC/HTTP/3 visibility and mitigation when full inspection isn’t possible?2. The Traffic Flow Test: Local Breakout vs. The HairpinIn a distributed world, network architecture equates to&nbsp;security architecture.If users in Europe have to hairpin through a U.S. hub just to reach a European SaaS endpoint, you’re paying a latency and bandwidth tax with no security upside.&nbsp;That’s not SSE. That’s hub-and-spoke with a new outfit.A true Zero Trust Exchange model inverts this. It routes users to the nearest point of presence, applies security policy instantly, and connects them directly to their destination—so the infrastructure stays invisible to attackers and users connect to apps, not networks.Considerations:Nearest enforcement point: Are you fully utilizing a truly global presence of 150+ data centers, or relying on a few “regional hubs” that are recreating choke points?Real latency evidence: Do you have traceroutes and real-user latency across multiple geos and ISPs (not vendor demo networks)? If you’re a Zscaler customer, use ZDX to baseline the user-to-app path (device → Wi-Fi/ISP → Zscaler cloud → SaaS/app) and show where the delay lives.One policy model, everywhere: Does policy follow the user—or do rules drift by geography, and do you have audit trails for what was applied?Predictable egress steering: Are you able to comply with regional and SaaS requirements such as in-country logging and dedicated IPs, with your own IPs being used if necessary? Are your users viewing content in their local language with little impact on performance?3. 
The Operational Reality: Reducing the Burden or Relocating It?Traditional appliance-based models force you to manage dozens of boxes—patching, upgrading, monitoring. That multiplies operational risk and burns scarce engineering time.A lot of “modern SWG” projects stall because they just relocate the same burden into cloud instances and call it progress.A cloud-native SWG removes the need for distributed firewalls and point products, cutting hardware spend and patch overhead—while the platform updates continuously as threats evolve, without forklift upgrades.Considerations:Ownership boundary: Do you have a clear demarcation between your service provider’s responsibilities and your own? Do you still own uptime/scaling and patching after moving to the service?No infrastructure runbooks: If you are still scheduling reboot windows or kernel patches, are you running software, or consuming a service?Elasticity under stress: Has your M&A cutover been simplified? Do you still have to plan for infrastructure to cater to office reopening spikes?4. The Data Protection Test: Inline EnforcementSWG isn’t “web filtering” anymore. It is business protection. 
Modern exfiltration doesn’t look like “upload to a sketchy site.” It looks like sanctioned SaaS uploads, mis-shared links, copy/paste into AI assistants, and normal workflows moving sensitive data to places where work actually happens.The question is: Can your SWG enforce data protection policies inline, not after the fact?Considerations:Inline controls for web + SaaS sessions: Is enforcement happening inline in the SWG path—or are you leaning on API-based, after-the-fact scanning that shows up after the damage is done?Unified DLP policy + engine: Are the same classifiers, dictionaries, and fingerprinting used across DLP/CASB/email and enforced inline for web + SaaS—or does “HR data” trigger in email but slip through the browser?Detection depth: Do you truly cover PII/PCI/PHI, exact data matching, document fingerprinting, and regional identifiers tied to your regulatory footprint—and are decisions context-aware (user + device posture + app + action)?GenAI coverage: As AI adoption grows, does your SWG inspect prompts, uploads, and browser sessions for web AI tools—inline, in real time?Proof scenarios to run (don’t skip these):Upload source code to a developer SaaSPaste customer data into a web-based AI assistantSync sensitive files to cloud storageIs your SWG able to prevent all of the above? If the result is “we detected it in logs,” you didn’t protect anything.5. The Threat Intel Test: Cloud Speed vs. Patch SpeedFinally, look at the speed of your defense. Does threat intelligence move fast enough to matter?In an appliance model, a new zero-day often means waiting on a vendor patch—then testing it, then rolling it out across your fleet.In a cloud-native platform, a threat blocked in one geography (say, an attack on a manufacturing plant in Asia) can be turned into global protection—automatically and immediately.Considerations:Propagation speed: How quickly are cloud detections enforced for your tenant? 
Does it take minutes or days?Real examples, with timelines: Does your SWG vendor share recent campaigns, what triggered each update, and how fast protections rolled out?Global consistency: Is the same protection available across geographies and user populations?Your signals at cloud scale: Do your IOCs/blocklists go live quickly without turning into policy spaghetti?Learning loop + telemetry:Third-party validation: Beyond vendor claims, what independent evidence validates security effectiveness and real-world impact—e.g., published lab testing, peer-reviewed evaluations, external audits, analyst assessments, or customer-run benchmarks with documented methodology?Public proof trail:&nbsp;A good benchmark is the kind of public, time-stamped research stream Zscaler ThreatLabz publishes—ongoing security research write-ups and annual reports that document what changed and when.Conclusion: SSE in production vs. on PowerPointSWG isn’t making a comeback because anyone is nostalgic. It’s central again because the web is where your business runs—and where risk shows up first.So the question isn’t “Do we still need SWG?” It’s whether your SWG model can:If the answer is no… your SSE strategy only looks good on a slide.Want to talk to an expert?&nbsp;Click here.]]></description>
            <dc:creator>Nishant Kumar (Senior Manager, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Zscaler Expands FedRAMP Moderate Cloud Data Plane to Support Global Operations]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/zscaler-expands-fedramp-moderate-cloud-data-plane-support-global-operations</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/zscaler-expands-fedramp-moderate-cloud-data-plane-support-global-operations</guid>
            <pubDate>Thu, 08 Jan 2026 15:03:01 GMT</pubDate>
<description><![CDATA[Zscaler has expanded its FedRAMP Moderate Authorized cloud data plane to new locations in Zurich and Singapore to better enable U.S. government agencies, Federal Systems Integrators (FSIs), and enterprises to meet the evolving demands of global operations. These strategic new locations complement Zscaler’s existing FedRAMP Moderate cloud in the U.S., providing better performance and an optimized user experience for international workforces—all powered by Zscaler’s distributed Zero Trust Exchange.Enhancing Performance and Compliance for a Global WorkforceThe expansion of Zscaler’s FedRAMP Moderate cloud platform enables government organizations and federal contractors to transform legacy IT systems into secure, high-performance environments. The data plane expansion into the new locations in Zurich, Singapore, and soon São Paulo ensures fast, secure, and compliant access for employees and stakeholders globally, while addressing the most pressing challenges for today’s government operations, including:Enabling Secure Government Missions Across BordersProviding secure, fast, zero-trust access for overseas embassies, field operations, and branch offices is critical. With local breakouts in Zurich and Singapore, agencies can reduce latency, enhance productivity, and seamlessly connect international teams to U.S. Federal systems. Sensitive communications and data remain secure under Zscaler’s industry-leading platform.Supporting an International WorkforceFederal agencies and contractors depend on non-U.S. employees to spearhead vital Federal programs globally. The globally expanding FedRAMP Moderate cloud platform enables vendors and agencies to securely access U.S. 
government environments directly in these international regions, improving performance and productivity, while maintaining FedRAMP compliance for global operations.Scaling Operations Without Local Hosting BurdensThe expanded cloud platform helps customers eliminate the need for local hosting sites, proxies, or PSEs. Using Zscaler’s distributed Zero Trust Exchange, agencies and organizations can avoid the complexity of managing regional systems while staying scalable and compliant.Why Zurich and Singapore?Zurich and Singapore were chosen for their global strategic importance:Zurich supports U.S. operations across Europe, making it easier for agencies and contractors to maintain high performance and meet stringent European regulatory requirements.Singapore is a critical hub for Southeast Asia, empowering federal and enterprise customers with low-latency performance and robust compliance infrastructure in the APAC region.Looking AheadWith FedRAMP Moderate cloud expansion now live in Zurich and Singapore, and São Paulo on the horizon, Zscaler continues to transform global government operations. These expansions ensure fast, secure, and compliant performance for international employees and contractors, while enabling government agencies and federal enterprises to confidently scale their global operations with Zscaler’s market-leading cloud security and zero-trust architecture.]]></description>
            <dc:creator>Niraj Gopal ( Head of Product Management, Federal and Sovereign Clouds)</dc:creator>
        </item>
        <item>
            <title><![CDATA[2025 Reflections and 2026 Predictions: Healthcare’s Cybersecurity Frontier]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/2025-reflections-and-2026-predictions-healthcare-s-cybersecurity-frontier</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/2025-reflections-and-2026-predictions-healthcare-s-cybersecurity-frontier</guid>
            <pubDate>Tue, 06 Jan 2026 22:17:22 GMT</pubDate>
            <description><![CDATA[As cybersecurity professionals, one of the most valuable things we can do is reflect on the lessons of the past while preparing thoughtfully for the challenges ahead. Healthcare is a uniquely complex field, and its evolving cybersecurity landscape demands fresh perspectives and intentional strategies.On the latest episode of We Have Trust Issues, we (Tamer and Steven) invited Carter Groome, CEO of First Health Advisory, to join us in dissecting 2025’s major healthcare trends and anticipating what 2026 has in store. Carter’s perspective as a seasoned consultant and industry leader revealed what healthcare cybersecurity leaders need to know to navigate pressing challenges in AI adoption, regulatory compliance, risk reduction, and operational resilience. Here are the takeaways we think every reader should consider carefully.Lessons from 2025: A Pivotal Year for HealthcareTake a deep breath—2025 was a whirlwind. Beyond a surge in AI implementation, the healthcare sector faced mounting external pressures that forced security teams to evolve rapidly.Looking back, Carter identified two major themes that dominated 2025:Delivering Measurable Value in Cybersecurity: Boards are no longer interested in hearing about risks without action plans. 2025 saw heightened calls for rationalizing technologies, streamlining tools, and proving measurable reductions in risk exposure. Security leaders need to answer questions about their stacks: Are tools overlapping unnecessarily? Is anyone addressing the noise? How can systems integrate to reduce vulnerabilities, instead of simply highlighting them?Building Resilience: Healthcare organizations shifted heavily toward operational resilience. With the assumption that a breach isn’t a matter of “if” but “when,” CISOs are investing more in continuity plans, disaster recovery strategies, and minimum viable hospital models.“Healthcare security teams aren’t just tasked with defending anymore. 
They need to recover and help organizations thrive—even when bad actors succeed,” Carter noted during our conversation. The incredible pressure to enable agility while reducing costs has left security leaders juggling priorities more intensely than ever.AI Dominated 2025: But What Was the Real Impact?Artificial Intelligence was the buzzword of the year—and while it unleashed enormous potential across healthcare, it also exposed serious risks. We’ve seen enterprises rush to adopt AI solutions across operations, clinical workflows, and cybersecurity. But this “race to innovate” often lacks governance, intentionality, or alignment with real-world challenges.“There’s been an obsessive approach to implementing AI for the sake of implementing AI,” Carter noted. “Boards push competitive advantages, efficiency, and labor replacement—but often forget the critical steps like governance and risk reviews. This pressure could lead organizations into dangerous territory if left unchecked.”The parallels with the onset of the pandemic are impossible to ignore, as organizations scrambled to enable work-from-home setups overnight, figuring out security after the fact. While AI represents progress, Carter warned against deploying solutions without thoughtfulness, transparency, or careful evaluation of real use cases.As security professionals, we agree there’s a need for balance—AI adoption doesn’t have to mean sacrificing foundational principles. Instead, let’s focus on sober assessments of AI’s utility and risks, ensuring tools solve problems rather than creating new vulnerabilities.Looking Ahead: Predictions for 2026As we turn to 2026, Carter emphasized one guiding principle: intentionality. Healthcare needs more deliberate efforts to address governance structures, data strategies, and technical infrastructure. 
Without thoughtful preparation, healthcare organizations won’t be able to keep up with the accelerating pace of threats.Here’s what Carter predicts for 2026:Identity Takes Center Stage: Identity management—including human users, devices, and AI agents—will be mission-critical as adversaries find easier ways to exploit credential-based attacks. With healthcare tied so closely to IoT and medical devices, zero trust policies will increasingly target identity-first frameworks.Organizational Extortion Intensifies: Executive extortion and class action lawsuits after breaches are likely to increase, leaving healthcare CISOs to defend both the digital and legal standing of their organizations. Carter emphasized that industry-wide adoption of baseline cybersecurity controls, such as the Cybersecurity Performance Goals (CPG), could reduce liability and improve recoverability.Malware-Free Intrusions Become Commonplace:&nbsp;Why hack systems when stolen credentials allow bad actors to log in directly? Healthcare organizations will need to rethink defenses to address this growing trend.Authenticity Becomes a Priority: AI-generated media, voice deepfakes, and sophisticated social engineering tactics will make distinguishing real from fake harder than ever. Security strategies must emphasize authenticity, ensuring trust remains intact across systems, users, and stakeholders.Risk Reduction Must Be Measurable: Platforms will need to shift from identifying risks to actively reducing them. Carter projected that organizations will cancel contracts with tools unable to demonstrate measurable risk reduction and ROI.&nbsp;Cybersecurity Strategy in ActionAs we discussed with Carter, healthcare cybersecurity leaders have their work cut out for them in 2026. 
A successful strategy will hinge on intentional planning and coordinated efforts, and there are tangible steps organizations can take right now:Rationalize Your Security "Estate": Visibility across IoT, medical devices, IT systems, and data inventory is critical. Carter highlighted that high-fidelity inventories and tools explicitly designed to consolidate visibility will offer healthcare organizations a competitive edge.Prove ROI: Security is often seen as a cost center, but boards are asking for more. Carter suggested that next year’s focus will be on demonstrating reduced costs, minimized risks, and smarter resource allocation.Lead with Zero Trust and Identity Frameworks:&nbsp;The healthcare threat landscape is evolving, placing clinical workflows and patient devices at greater risk. Aligning resources with zero trust frameworks centered on human and device identity will be essential moving forward.Adopt AI Intentionally: Thoughtful use of AI requires transparent vendors and proper risk evaluation. Avoid rushing to implement technology just because it’s available—focus instead on solutions that align with measurable outcomes.The Regulatory LandscapeOne area Carter flagged for significant 2026 growth is healthcare-specific regulation. From updates to the HIPAA security rule to sector-specific Cybersecurity Performance Goals (CPGs), policy movements will shape compliance efforts.“Regulatory updates like HIPAA’s proposed rules bring significant pain points for healthcare organizations,” he explained. “If frameworks are too demanding, security leaders will need time, consultation, and scalable solutions to avoid compounding financial strain in an already vulnerable industry.”Final Thoughts: Authenticity Sets the ToneAs we said goodbye to Carter after the episode, he left us with one important point: authenticity will be at the heart of effective cybersecurity strategy in the year ahead. 
Healthcare leadership—boards, C-Suite executives, and cybersecurity professionals alike—must create a foundation of trust across their organizations. Whether defending against adversaries or educating teams about skepticism online, setting the right tone will drive investment in security and privacy.“Nobody wants their healthcare organization to get extorted by bad actors—and nobody wants their patients to lose confidence in their care providers,” Carter remarked. “Right now, the focus needs to be on reducing risks thoughtfully and proving value in everything we do.”We couldn’t agree more—and as we enter 2026, intentional planning and prioritized solutions must be the cornerstone of every healthcare security program.]]></description>
            <dc:creator>Tamer Baker (Healthcare CTO)</dc:creator>
        </item>
        <item>
            <title><![CDATA[ShadyPanda and the Seven-Year Browser Extension Breach: How Zscaler SSPM Strengthens SaaS Supply Chain Security]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/shadypanda-and-seven-year-browser-extension-breach-how-zscaler-sspm</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/shadypanda-and-seven-year-browser-extension-breach-how-zscaler-sspm</guid>
            <pubDate>Tue, 06 Jan 2026 16:00:03 GMT</pubDate>
            <description><![CDATA[A recently uncovered campaign known as&nbsp;ShadyPanda revealed how trusted Chrome and Edge browser extensions can be quietly weaponized over time. For seven years, the attackers behind ShadyPanda used seemingly harmless extensions—some with over&nbsp;4 million installs—to manipulate browser activity, redirect searches, collect behavioral data, and inject malicious scripts into web sessions.While browser extensions cannot directly access files stored inside SaaS applications, they operate within the user’s authenticated browser environment. This allows them to observe browsing behavior, redirect users to malicious sites, interfere with session flows, and influence how users interact with enterprise SaaS applications. When extensions possess high-risk permissions such as&nbsp;cookies,&nbsp;tabs, or&nbsp;webRequest, they introduce meaningful exposure to organizations.ShadyPanda demonstrates why extensions are part of today’s&nbsp;SaaS supply chain—and why continuous visibility and monitoring are critical.Fig: ShadyPanda Attack ChainHow Zscaler SSPM helps identify and mitigate risks like ShadyPandaZscaler SSPM provides the capabilities organizations need to detect risky browser extensions early, understand their impact, and take appropriate action through governance and endpoint controls.1. 
Comprehensive visibility into browser extensionsZscaler maintains a large catalog of SaaS apps, third-party integrations, and browser extensions enriched with:Publisher and version historyRequested permissionsBehavioral and risk attributesThreat intelligence indicatorsAs soon as users install an extension—regardless of how benign it appears—it is surfaced in the third-party plugin inventory, categorized by risk (e.g.,&nbsp;Potentially Harmful,&nbsp;Over-Privileged,&nbsp;Dormant).ShadyPanda extensions exhibited high-risk permission patterns early on, which Zscaler would have highlighted for security teams to review.The following screenshot shows how the Zscaler solution identifies browser extensions such as&nbsp;“Clear Master” in the App Inventory, highlighting their permissions, risk attributes, and findings. This gives security teams immediate visibility into potentially harmful or over-privileged extensions present in their environment.&nbsp;&nbsp;2. Continuous monitoring for changes in permissions, behavior, or riskShadyPanda’s most dangerous activity began years after installation, delivered through silent updates.Zscaler SSPM continuously monitors extensions for:Increasing risk scoresNew permissions or expanded accessUpdated versions that introduce behavioral changesEmerging threat intelligence hitsIf an extension suddenly requests broader access—such as the ability to read cookies or intercept web requests—Zscaler generates an alert notifying teams that the app’s risk has increased. This early signal enables teams to investigate the extension and adjust internal controls before malicious behavior escalates.3. 
Understanding true impact through user and SaaS contextZscaler goes beyond identifying risky extensions—it correlates extension presence with:Which users installed itWhat SaaS applications those users accessPrivilege levels such as admin rolesExisting SaaS misconfigurations that could amplify exposureThis provides a clear blast-radius view:An extension installed by a low-privilege user may represent minimal riskThe same extension installed by a global admin interacting with critical SaaS apps requires immediate attentionZscaler gives organizations the context needed to prioritize action and strengthen governance.&nbsp;4. Enabling customers to take targeted, policy-driven actionWith clear risk categorization, drift insights, and user/SaaS correlations, customers can:Update browser and endpoint policiesRestrict certain categories of extensionsRequire security review for extensions requesting sensitive permissionsRemove or disable unapproved extensions through existing IT controlsEducate users and enforce internal governance policiesZscaler provides the intelligence and prioritization needed to make these actions timely and effective.Strengthen Your SaaS Supply Chain SecurityShadyPanda reinforces that browser extensions are part of the modern SaaS ecosystem—and that risks can evolve long after initial installation.&nbsp;Zscaler SSPM equips organizations with the visibility, context, and continuous monitoring required to surface these risks early and take action before attackers gain footholds.To learn how Zscaler can help assess and secure your SaaS and extension landscape, contact your Zscaler representative for a demo, or request one&nbsp;here.&nbsp;&nbsp;&nbsp;This blog post has been created by Zscaler for informational purposes only and is provided "as is" without any guarantees of accuracy, completeness or reliability. Zscaler assumes no responsibility for any errors or omissions or for any actions taken based on the information provided. 
Any third-party websites or resources linked in this blog post are provided for convenience only, and Zscaler is not responsible for their content or practices. All content is subject to change without notice. By accessing this blog, you agree to these terms and acknowledge your sole responsibility to verify and use the information as appropriate for your needs.]]></description>
            <dc:creator>Niharika Sharma (Staff Product Manager - CASB PM)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building A Better Zero Trust Culture Starts With Debunking The Myths Around Trust]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/building-better-zero-trust-culture-starts-debunking-myths-around-trust</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/building-better-zero-trust-culture-starts-debunking-myths-around-trust</guid>
            <pubDate>Mon, 05 Jan 2026 22:02:28 GMT</pubDate>
            <description><![CDATA[The term Zero Trust is everywhere in conversations around cybersecurity, from boardroom slides, project plans, and strategy documents, to architectures and technical designs. As Zero Trust Network Access (ZTNA) moves from tech jargon to mainstream lingo in Australian public sector organisations, an unexpected side effect has arisen: discomfort. The term “Zero Trust” just… sounds harsh. For many staff, it can feel like a vote of no confidence in their integrity or professionalism. But, herein lies the misconception. Let’s unpack what Zero Trust really means, why the confusion exists, and how staff play an essential role in creating a secure digital culture.What Zero Trust Actually Is...And What It Isn’tZero Trust isn’t a judgement of someone’s loyalty, values, security clearance, or intentions. It means not blindly trusting digital transactions and systems, even when the person using them is highly trusted. The core principle of Zero Trust is that every user, device, and digital request is continuously verified because the greatest vulnerabilities in today’s hyper-connected world come from the security assumptions that are made within them.Consider a well-intentioned, long-serving staff member. They have a spotless record and always follow security protocols. But what happens if their laptop picks up malware or is compromised? Suddenly, every action from that device, regardless of how well intentioned, could be a risk. Without Zero Trust controls, one click could inadvertently expose sensitive data within an entire network – VPNs can do little to protect at this stage. 
The role of Zero Trust, however, is to protect the organisation, its people and its data against these evolving threats, which can have nothing to do with staff behaviour or integrity.Zero Trust: A “Defensible Modern Architecture” for Our TimesThe Australian Cyber Security Centre (ACSC) describes Zero Trust as “a fundamental building block in creating a modern defensible architecture.” Instead of relying on a perimeter firewall and blind trust within it, Zero Trust builds verification and segmentation into every step of a digital transaction. This is typically visible to staff as the interactions from their endpoint to the applications they use.This approach doesn’t diminish the user’s role in these digital transactions. In fact, it should do the opposite. Staff, who understand why continuous verification is essential, become partners in security. In practice, this leads to faster, more reliable access, including for more than 120,000 educators and administrators at the&nbsp;Victorian Department of Education. With fewer connectivity issues and smoother lesson delivery, this has led to better outcomes for more than 680,000 Victorian students. Likewise, at&nbsp;Northern Beaches Council in Sydney, mobile and field workers have seen simpler, consistent access with fewer logins and reduced disruption to everyday work, allowing them to better service their local community.Zero Trust Culture: Trusting People, Not SystemsWithout context and leadership, the continuous verification of Zero Trust may lead to a perception among staff that they are not inherently trusted. However, a healthy Zero Trust culture is never about being suspicious of staff. It’s about creating an environment where everyone has the knowledge and tools to keep digital interactions secure. Protected transactions enable access from anywhere. 
When this is done well, staff notice the benefits in their day-to-day workflows such as quicker paths into the tools they need and fewer support requests for access problems – just as the Victorian Department of Education and Northern Beaches Council do. Empowered, informed staff normalise verification and help prevent breaches early.How leaders can support cultural change for Zero Trust:Lead with clarity and purpose: Explain that Zero Trust protects people and services by verifying digital activity. Frame changes in terms of safer, simpler work.Design for minimal friction: Prioritise user experience so secure access feels seamless (e.g., fewer VPN dependencies, intelligent access to only the apps people need). Good UX builds trust in the model.Make it practical and role-based: Provide guidance aligned to how staff work day to day – clear, role-specific access policies, simple steps for device health, and intuitive pathways to the apps they use most.Co-create policies with staff: Involve frontline teams and champions in shaping access rules, testing changes and giving feedback before broad rollout. Shared ownership reduces resistance.Communicate early and often: Use transparent updates for what’s changing, why, and how it benefits staff. Pair announcements with short “how-to” resources and quick-win tips.Invest in targeted enablement: Run brief, scenario-based sessions on topics like phishing resistance, secure collaboration, and working securely from anywhere. 
Keep training lightweight and practical.Measure what matters: Track user-centric metrics – login success rates, access times to key apps, reduction in connectivity-related tickets – and share improvements with teams.Support managers to model behaviours: Equip leaders to reinforce secure-by-default practices in team routines (e.g., verifying device health, just-in-time access) and celebrate positive outcomes.Build feedback loops: Provide fast channels to report access pain points, respond visibly, and close the loop with fixes. Visible responsiveness strengthens confidence in the change.Building Security on Trust, But The Right Kind of TrustZero Trust is a foundational cybersecurity approach built for the modern workplace, where people, devices, and applications are in constant motion. Its focus is always on digital trustworthiness, not doubting staff character. By cultivating a Zero Trust culture, organisations like those in the Australian public sector can create environments that are both highly secure and empowering for staff. When we challenge misconceptions and clarify the intent, staff become the champions of Zero Trust, driving better outcomes for everyone.]]></description>
            <dc:creator>Nick Clark (Zscaler)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Do You Really Know Your AI Landscape?]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/do-you-really-know-your-ai-landscape</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/do-you-really-know-your-ai-landscape</guid>
            <pubDate>Mon, 05 Jan 2026 18:00:03 GMT</pubDate>
            <description><![CDATA[OverviewEnterprise adoption of AI is no longer a future trend; it's a present-day reality. As organizations race to leverage AI for innovations, security teams are grappling with a new, complex, and dynamic attack surface. AI is breaking the operational silos that currently segregate Cloud, SaaS and Endpoint Security; AI is everywhere and it is consuming enterprise data and assets across these channels. Traditional security tools, designed for cloud infrastructure and SaaS applications, are fundamentally ill-equipped to handle the unique risks posed by AI.&nbsp;AI security posture management (AI-SPM) solutions can provide relief by protecting critical AI assets, but it’s important to note that not all AI-SPM solutions are created equal. Many solutions offer only basic posture checks and are focused predominantly on infrastructure and vulnerability management. In addition, most focus solely on Cloud or SaaS, leaving many blind spots when trying to get the full picture of your AI landscape.&nbsp;&nbsp;Key security challenges creating demand for advanced AI security posture managementBasic AI-SPM might identify AI models, services, and data, but it tends to stop there. 
Security teams need deeper insights as AI applications are not monolithic entities but rather a complex assembly of models, datasets, identities, code dependencies, and APIs.&nbsp;Fig: AI-SPM visibility, governance and risk management frameworkTo truly address AI risk, we need to fundamentally understand this ecosystem and be able to answer the following questions:Which models are in use, both sanctioned (managed) and unsanctioned (unmanaged)?What are the inherent risks associated with the models?Where are my AI agents, and how are they interacting with the models?What identities are being leveraged by AI?Where are my AI orchestration tools and model context protocol (MCP) servers?Which datasets are these models trained on?Can we prove data lineage for compliance?What is my AI supply chain risk? Where are the models coming from, and what is the risk of using PyTorch models?Let's take a close look at some of the top AI risks and security challenges security teams are facing today:&nbsp;How to secure the AI supply chain—Identifying and mitigating supply chain risks in AIThe AI supply chain is a complex web of dependencies that attackers are actively targeting. With&nbsp;supply chain breaches costing nearly&nbsp;$4.5 million on average, organizations cannot afford to ignore the risks embedded in their AI models and libraries.Key supply chain risks include:Missing model provenance: Without a model's "birth certificate"—a clear record of its origin, training data, and history—security teams cannot verify its integrity or ensure it's free from malicious backdoors.Vulnerable dependencies:&nbsp;Modern AI development relies on third-party models from hubs like Hugging Face and on open-source AI libraries. 
Each external component is a potential&nbsp;entry point for attackers, allowing a single compromised library to undermine an entire organization's AI posture.Mitigating these risks requires integrating deep AI supply chain visibility and validation into your core security program.&nbsp;Understanding and preventing common AI model vulnerabilitiesAI models are subject to unique and rapidly evolving vulnerabilities that attackers can exploit across the entire&nbsp;AI lifecycle—from development to deployment and operation. These risks go beyond traditional security risks and can result in&nbsp;data breaches, intellectual property theft, and operational disruptions. Key AI model vulnerability risks include:&nbsp;Direct model vulnerabilities:&nbsp;Threat actors are actively exploiting backdoored ML models by planting executable, malicious code inside Python-based serializable models such as PyTorch and Keras models.Dataset and training vulnerabilities: The security of a model is contingent on the security of its training data.&nbsp;Data poisoning attacks can subtly corrupt a training dataset to create specific, exploitable behaviors in the deployed model.&nbsp;Biased or non-compliant data introduces significant reputational and regulatory risk.Shadow AI vulnerabilities:&nbsp;"Shadow AI"&nbsp;or&nbsp;unmanaged models are those used by developers and data scientists without security oversight. These models are deployed into containers and workloads that are not easily visible to the cloud infrastructure. These unmanaged assets are often sourced from untrusted locations and operate without any security controls, creating massive blind spots.&nbsp;Model context protocol (MCP): Security risks and how to protect enterprise AI integrationsModel context protocol (MCP)&nbsp;connects AI models directly to live enterprise systems, creating a powerful but high-risk integration layer invisible to traditional security. 
A compromised MCP server is a master key to your data and APIs. Developers are adding MCP servers and capabilities to almost every Enterprise application without security oversight.Key risks include:Massive blast radius:&nbsp;As a new protocol linking disparate systems, a single compromised MCP server can disrupt operations enterprise-wide.Centralized credential risk:&nbsp;MCP servers act as a vault for access tokens. A breach grants attackers widespread&nbsp;lateral access to countless connected services.Tool poisoning:&nbsp;Attackers can embed malicious commands in tool metadata, tricking an LLM into executing unauthorized actions like data exfiltration.Implementation flaws:&nbsp;Poorly coded MCP servers are vulnerable to classic exploits like command injection, creating pathways for privilege escalation and lateral movement.Securing MCP requires a new class of security capable of monitoring and enforcing policy on this unique protocol.&nbsp;Data lineage: The missing link in AI securityData lineage is the cornerstone of&nbsp;trustworthy AI, providing the transparent audit trail—from source to consumption—needed for governance and compliance.However, traditional lineage tools and even first-generation AI-SPM solutions fall short. They can discover AI models but fail to answer the most critical question: What specific data was this model trained on? For security teams governing thousands of models, this creates a massive security and compliance gap.An advanced AI security platform bridges this gap. By correlating signals from data sources, code repositories, and the models themselves, it automatically reconstructs the data-to-model relationship. This creates a definitive, auditable trail from data origin to the final model version, providing the foundation for responsible and secure AI.&nbsp;Accelerate Your AI Initiatives with Zero TrustThe era of AI demands an evolution in our security mindset. Simply knowing you have AI is not enough. 
A truly advanced AI-SPM framework must provide comprehensive visibility into the entire&nbsp;supply chain, proactively identify and manage model vulnerabilities,&nbsp;reconstruct data lineage for compliance, and enforce&nbsp;zero trust&nbsp;controls at the point of inference. As AI becomes more integrated into the fabric of your business, investing in an advanced AI-SPM strategy is not just a security measure—it's a critical enabler of innovation and trust.Organizations can plan for advanced AI-SPM for complete visibility and protection across the entire AI ecosystem, ensuring security teams can:Discover and inventory AI models deployed across your cloud environments.Assess AI-specific risks, including data exposure, insecure model configurations, and vulnerable dependencies.Monitor the AI supply chain to identify poisoned datasets or unauthorized models.Enforce governance policies for responsible AI use and regulatory compliance.Detect misconfigurations in AI workflows that could lead to sensitive data exposure.These steps help apply the principles of a&nbsp;zero trust architecture to your use of AI applications, enabling security teams to:&nbsp;Deploy AI applications confidently: Implement AI with confidence and security assurance.Protect sensitive data: Prevent unauthorized access to data and context information.Enable secure innovations:&nbsp;Adopt new AI capabilities without compromising security.Learn more about Zscaler’s latest innovations to secure and accelerate AI adoption in our exclusive launch:&nbsp;Accelerate Your AI Initiatives with Zero Trust&nbsp;&nbsp;This blog post has been created by Zscaler for informational purposes only and is provided "as is" without any guarantees of accuracy, completeness or reliability. Zscaler assumes no responsibility for any errors or omissions or for any actions taken based on the information provided. 
Any third-party websites or resources linked in this blog post are provided for convenience only, and Zscaler is not responsible for their content or practices. All content is subject to change without notice. By accessing this blog, you agree to these terms and acknowledge your sole responsibility to verify and use the information as appropriate for your needs.]]></description>
            <dc:creator>Arnab Roy (Director, Product Management)</dc:creator>
        </item>
        <item>
            <title><![CDATA[IPv6: End-to-End Forwarding to Zscaler and Simplified Deployment]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/ipv6-end-end-forwarding-zscaler-and-simplified-deployment</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/ipv6-end-end-forwarding-zscaler-and-simplified-deployment</guid>
            <pubDate>Tue, 30 Dec 2025 18:19:01 GMT</pubDate>
            <description><![CDATA[IPv6 adoption is growing, but its uneven rollout across the internet requires enterprises to manage a long coexistence of IPv4 and IPv6. For organizations, the goal is to provide a consistent, secure experience for all their users. Initially, Zscaler Client Connector tunneled traffic over the IPv4 internet, and ZIA proxied to IPv4 or IPv6 destinations as necessary. This approach supported dual-stack and v6-only users but required reliance on NAT64/DNS64 at the edge for v6-only clients to reach v4-only services.New End-to-End IPv6 CapabilitiesTo help our customers transition to IPv6 more easily, Zscaler has introduced new capabilities to allow v6-only clients to communicate directly over IPv6 to the Zscaler service VIPs, maintaining the same inspection and policy enforcement.End-to-End IPv6 to Zscaler Service VIPs: Zscaler Client Connector now resolves and uses IPv6 VIPs for PAC and TLS/DTLS tunnels, establishing control connections directly over IPv6. On the egress path, Zscaler still prefers IPv4 towards dual-stack destinations but will forward IPv6 traffic for IPv6-only destinations.New IPv6 Service Domains: Parallel IPv6 service FQDNs, such as gateway6.<cloud>.net and pac6.<cloud>.net, have been introduced. These domains resolve to IPv6 VIPs in enabled data centers, allowing Zscaler Client Connector to use available IPv6 service endpoints and making rollouts more predictable.
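The parallel naming scheme can be illustrated with a small sketch. The helper function and the cloud name "example-cloud.net" below are hypothetical, introduced only to show the documented gateway6/pac6 convention; they are not part of any Zscaler tooling:

```python
def to_ipv6_fqdn(fqdn: str) -> str:
    """Map a service FQDN to its parallel IPv6 ("6") counterpart.

    Illustrative sketch only: assumes the documented gateway6/pac6
    naming; "example-cloud.net" stands in for a real cloud domain.
    """
    host, sep, domain = fqdn.partition(".")
    if host in ("gateway", "pac"):
        return f"{host}6{sep}{domain}"
    return fqdn  # other hosts are left unchanged

print(to_ipv6_fqdn("gateway.example-cloud.net"))  # gateway6.example-cloud.net
print(to_ipv6_fqdn("pac.example-cloud.net"))      # pac6.example-cloud.net
```

This mirrors the substitution an administrator would plan for when updating hard-coded PAC hostnames to the new "6" service FQDNs.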
Traffic Flow

The traffic flow for the new native IPv6 capability is as follows:

Client to Service Ingress: Zscaler Client Connector resolves the PAC/gateway using the new "6" domains over IPv6 and establishes a control connection to the SSL VPN (SVPN) IPv6 VIP. IPv6-enabled edge devices and load balancers direct the connection to the appropriate SVPN node at the Zscaler data center.

Inside the Cloud: The inner customer packet remains IPv6 to the Service Edge, which decapsulates the packet, enforces policy (URL, Firewall, Cloud App, etc.), and forwards the traffic toward the destination.

To the Destination: Dual-stack destinations are accessed over IPv4, which is preferred over IPv6. Zscaler forwards IPv6 traffic to destinations that are IPv6-only. IPv4-only destinations remain accessible as always, over IPv4.

DNS Observation Table

The following table shows which DNS records users will observe depending on the client and destination types:

Client  | Destination | DNS A record | DNS AAAA record
v4      | v4+v6       | Yes          | No IPv6 record
v4      | v4          | Yes          | No IPv6 record
v4      | v6          | Empty        | Native IPv6
v4+v6   | v4+v6       | Yes          | Empty
v4+v6   | v4          | Yes          | Empty
v4+v6   | v6          | Empty        | Native IPv6
v6      | v4+v6       | N/A          | Native IPv6
v6      | v4          | N/A          | DNS64
v6      | v6          | Empty        | Native IPv6

Admin Configuration: What to Enable

Forwarding IPv6 natively to Zscaler requires only a small set of administrative changes:

ZIA Tenant Prerequisites:
Ensure your tenant has the required IPv6 licenses (Advanced).
Enable IPv6 for the tenant under Administration → IPv6 in your ZIA Admin portal.

Firewall and Traffic Controls:
Allow IPv6 DNS traffic (on by default).
Change the “Block IPv6 All” rule from Block/Drop to Allow so IPv6 flows are permitted and inspected.

Zscaler Client Connector Version: Windows requires version 4.8 or later; macOS requires version 4.7 or later.

Zscaler Client Connector 
Platform Setting (New): Enable “IPv6 resolution for Zscaler domains” per platform in the Zscaler Client Connector admin portal. This controls whether Zscaler Client Connector resolves the new "6" service domains (e.g., gateway6, pac6).

Zscaler Client Connector Forwarding and App Profiles:

Forwarding Profile: Continue to use Z-Tunnel 2.0 (or 1.0) as appropriate; no change in forwarding method selection is required.

Application Profile: For Windows and macOS, include 2000::/3 in the IPv6 inclusion list to route all global unicast IPv6 flows through Zscaler Client Connector. If the IPv6 inclusion list is left blank, IPv6 traffic may flow direct (fail open). Keep standard exclusions for ULA, link-local, and multicast. For Z-Tunnel 2.0 environments forwarding DNS to ZIA, set DNS inclusion to “*” to forward all DNS traffic to Zscaler.

PAC Considerations:
If your PAC uses tokenized variables like $GATEWAY, enabling “IPv6 resolution for Zscaler domains” allows Zscaler Client Connector to automatically prefer the correct "6" domains.
If your PAC hard-codes hostnames or IPs, plan to update them to the new "6" service FQDNs in IPv6-upgraded regions. Avoid hard-coding IPs during the rollout.

Rollout Strategy and Migration

The need for "subclouds" to enforce connections to specific data centers diminishes as gateway6/pac6 are enabled and Zscaler Client Connector automatically resolves to v6-capable data centers when "IPv6 resolution for Zscaler domains" is active.

Continue to prefer service FQDNs over fixed IPs in PAC files for a smoother rollout.
Ensure all prerequisites listed in the section above are met.
Start with a small group of test users and validate all cases before rolling out the change to all users.
Validate key use cases, including dual-stack, IPv4-only, and policy controls on IPv6 traffic. 
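Under the App Profile guidance above (include 2000::/3; keep standard exclusions for ULA, link-local, and multicast), the routing decision for a given IPv6 address can be sketched with Python's standard ipaddress module. The helper name `zcc_disposition` and its return labels are hypothetical, for illustration only, and are not product configuration syntax:

```python
import ipaddress

# Prefixes from the App Profile guidance; global unicast is tunneled,
# while ULA, link-local, and multicast stay excluded (go direct/local).
INCLUDE = ipaddress.ip_network("2000::/3")      # global unicast
EXCLUDE = [
    ipaddress.ip_network("fc00::/7"),           # ULA
    ipaddress.ip_network("fe80::/10"),          # link-local
    ipaddress.ip_network("ff00::/8"),           # multicast
]

def zcc_disposition(addr: str) -> str:
    """Classify an IPv6 address against the sketched inclusion rules."""
    ip = ipaddress.ip_address(addr)
    if any(ip in net for net in EXCLUDE):
        return "excluded (direct/local)"
    if ip in INCLUDE:
        return "tunneled via Client Connector"
    return "not covered (may fail open if the inclusion list is blank)"
```

For example, a documentation address such as 2001:db8::1 falls in 2000::/3 and would be tunneled, while fe80::1 (link-local) would stay excluded.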
Once validated, you can reduce reliance on local NAT64 for cloud ingress.

Design Guidance and Caveats

Do not bypass v4-only destinations for v6-only clients unless the direct path includes NAT64.
Be explicit about IPv6 inclusions by including 2000::/3 in your Zscaler Client Connector App Profile configuration to ensure IPv6 flows are serviced by Zscaler.
Ensure IPv6 firewall controls are ordered correctly (e.g., QUIC block rules before broad IPv6 allows).
For v6-only clients, IPv4-only servers listed in Z-Tunnel 2.0 DNS Exclusions will not receive a NAT64-prefix IPv6 address and will not be resolvable.
Similarly, an IPv4-only server in Z-Tunnel 2.0 IPv4 Exclusions that receives a NAT64-prefix IPv6 address via DNS Inclusions will not be accessible without ZIA Z-Tunnel 2.0.
Additionally, v6-only clients cannot connect to literal IPv4 addresses (e.g., http://8.8.8.8/).

Conclusion

These enhancements provide a cleaner, more native experience for v6-only clients by enabling IPv6 all the way to Zscaler service VIPs. A few administrative changes - enabling IPv6 resolution for Zscaler domains, allowing IPv6 traffic, and setting appropriate IPv6 inclusions - are sufficient to deliver a seamless user experience, reduce local NAT64 reliance, and keep policy enforcement consistent.

To learn more about the native IPv6 forwarding capabilities, refer to the Zscaler help documentation.

Glossary

v4-only: A client or destination that exclusively uses Internet Protocol version 4 (IPv4) for communication.
v6-only: A client or destination that exclusively uses Internet Protocol version 6 (IPv6) for communication.
Dual-stack: A client or destination that runs both IPv4 and IPv6 simultaneously, allowing communication with both IPv4 and IPv6 systems.]]></description>
            <dc:creator>Mithun Hebbar (Principal Product Manager - SWG)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Long-Term Support for Zscaler Client Connector]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/long-term-support-zscaler-client-connector</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/long-term-support-zscaler-client-connector</guid>
            <pubDate>Thu, 18 Dec 2025 13:45:03 GMT</pubDate>
            <description><![CDATA[The Long-Term Support (LTS) initiative for Zscaler Client Connector provides a solution for customers who require stable, minimal-update software versions that remain both secure and supported. Under this program, Zscaler guarantees support for designated LTS versions for 12 months following their announcement dates. During that time, Zscaler focuses solely on addressing critical bugs and security issues; no new features are introduced for these versions.

Beginning January 31, 2026, and lasting for 12 months, the LTS versions of Client Connector supported across operating systems are:

Windows: Client Connector version 4.7
macOS: Client Connector version 4.5.2
Linux: Client Connector version 3.7.2

These LTS versions are intended for customers with strict control policies on version upgrades. They allow customers to remain secure and supported, but without any feature changes.

Using the latest Client Connector version is still advised

Despite this Long-Term Support initiative, Zscaler still encourages customers to adopt the latest versions of Client Connector whenever possible. The latest versions typically include additional vulnerability patches, bug fixes, features, and improvements that enhance both overall security and the user experience. Customers running outdated Client Connector versions therefore carry significant cyber risk, and they are ineligible for support.

As a result, Zscaler has implemented a phased plan to deprecate and block connections from older, unsupported versions. Ultimately, the goal is to require compliance with supported versions only. 
But in the near term, the most pertinent details are as follows.

Phase 1 (Effective February 1, 2026): Zscaler will not allow new device enrollment for users running Client Connector 1.x and 2.x on Windows, Mac, and mobile platforms. For Linux devices, versions prior to 1.5 will be blocked from enrollment. Users already enrolled will not be immediately impacted, but will be unable to re-enroll after logging out of Client Connector.

For additional details on this plan, refer to the Zscaler trust portal.]]></description>
            <dc:creator>Jamil Alomari (Staff Product Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Evolving Heroes: How the Role of Healthcare CISOs is Changing]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/evolving-heroes-how-role-healthcare-cisos-changing</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/evolving-heroes-how-role-healthcare-cisos-changing</guid>
            <pubDate>Tue, 16 Dec 2025 20:59:43 GMT</pubDate>
            <description><![CDATA[The perception of Chief Information Security Officers (CISOs) in healthcare has shifted dramatically over the past few years. What was once seen as a rigid, policy-focused role—“the Department of No,” as some would say—has evolved into a dynamic, strategic position at the intersection of security, technology, and business innovation.

I had the privilege of sitting down with Drex DeFord, former CIO and current thought leader at This Week Health, on a recent episode of We Have Trust Issues to discuss this evolution. Drex's experience spans decades in the healthcare landscape, from serving as CIO for major institutions like Scripps Health and Seattle Children’s Hospital to his current work with healthcare executives across the nation. What was clear from our conversation is that modern CISOs are stepping far beyond their traditional responsibilities and into exciting but complex new roles.

From “The Department of No” to the Enabler of Innovation

Once upon a time, CISOs were seen as bureaucratic gatekeepers, responsible for writing policies, enforcing rules, and building firewalls to keep cyber threats at bay. Fast forward to today, and CISOs are increasingly called upon to be business enablers, proactively driving innovation while managing risk. As Drex explained, “CISOs aren’t just trying to keep the bad guys out anymore. They’re keeping the business alive, ensuring resilience, and enabling their organizations to recover quickly when bad things happen.”

This is particularly true in healthcare, where the pandemic accelerated digital transformation and demanded unprecedented agility in responding to rapidly changing needs. 
Security leaders found themselves knee-deep in projects like enabling remote clinical workflows, telehealth readiness, and securing massive migrations to cloud-based platforms.

Bridging Silos: Bringing Security and Technology Together

One of the most striking trends Drex and I discussed is the hybridization of roles like CISO, Chief Technology Officer (CTO), and even Chief Information Officer (CIO) in healthcare. Many health systems are consolidating these roles to reduce friction and align security with overarching technology goals. The result? CISOs are increasingly stepping into merged leadership titles, like Chief Information Security and Technology Officer (CISTO).

This shift is partly a response to friction that used to exist between security and IT teams. “In some cases, the simplest way to resolve the tension was to put both responsibilities under one leader,” Drex mentioned. But more than that, these evolving roles equip organizations with leaders who inherently understand security’s critical role in supporting business objectives.

The modern CISO has also developed a deeper understanding of clinical workflows, business operations, and organizational priorities. "CISOs are learning to step out of their silos," Drex noted, "collaborating with stakeholders in clinical care, research, and operations to ensure security isn’t a limitation but a partner to progress."

The Balancing Act: Prioritizing Budgets, Innovation, and Resilience

As the role of the CISO gains complexity, so too do the challenges they face. Healthcare organizations are under immense financial pressure, meaning that CISOs are juggling cost optimization, digital transformation, and security risk management all at once. 
With the threat landscape constantly changing, Drex observed, cybersecurity is no longer just about “keeping the bad guys out” but ensuring business continuity and safeguarding patient care—even under attack.

“Innovation, modernization, application rationalization, AI, and digital transformation are now all part of the CISO’s remit,” Drex said. “They’re at the executive table, shaping strategies that touch every part of the organization—from clinical workflows to supply chain security.”

CISOs, now more than ever, must balance their role as protectors with their emerging function as enablers of innovation. This requires saying “yes, but” instead of a hard “no”—helping their peers understand that creativity and agility are possible within the guardrails of a secure framework.

The Path Forward: Advice for Aspiring CISOs

The evolving demands of the CISO role provide a unique opportunity for leadership growth. As Drex put it, many CISOs are equipped with everything they need to ascend beyond their current positions—whether it’s into the CIO role, a Chief Operating Officer role, or even to CEO someday. His advice for those looking to make the leap is simple but profound:

Think Bigger: Don’t limit yourself to being “just a CISO.” Modern CISOs have deep expertise in technology, security, and operations, which makes them natural candidates for leadership roles. Embrace this unique perspective.

Learn the Business: Understand clinical workflows, operations, tech stacks, and even how your organization gets paid. Work on speaking the language of all departments—from orthopedics to billing.

Be a Problem Solver: Saying “no” can create division, but saying “yes, but” means offering solutions while outlining requirements and resourcing challenges. Break the “Ivory Tower” stereotype and show the value security can bring.

Step Outside Your Comfort Zone: Volunteer for projects outside the security space. 
Whether it’s filling a temporary role or working on a cross-department initiative, these experiences build trust and open doors.

Building Trust in the Age of AI

Another prominent theme in our discussion was trust—or, more specifically, the growing “trust recession” in today’s digital world. Rapid advancements in AI and deepfake technology have made synthetic media commonplace, muddying the waters of what can be trusted online.

“Generative AI, voice deepfakes, and manipulated media can all be used for good—but they can also be used for malicious purposes,” Drex said. With groundbreaking tools emerging every day, healthcare security leaders must grapple with new priorities, including protecting against AI-powered threats, vetting vendors’ AI capabilities, and identifying safe use cases for large language models (LLMs).

Healthcare organizations need to stay ahead by creating sandboxes where innovation can flourish safely, bringing AI capabilities in-house where necessary, and ensuring that sensitive data is handled responsibly. As Drex succinctly put it, “We want to enable innovation and creativity, but we have to do it in a way that protects the organization.”

The Power of Shared Knowledge

One of the most compelling takeaways from my conversation with Drex was his insight on the collaborative nature of the healthcare industry. Unlike other sectors, where competitors rarely converse, healthcare is unique in its willingness to share resources, strategies, and even lessons learned from mistakes.

As Drex explained, “Nobody wants to win because another hospital is taken down by a ransomware attack. Healthcare is a team sport. The more we share knowledge, the better prepared we all are to protect patients and deliver care.”

Building a strong network with peers, mentors, and leaders from outside your own organization isn’t just valuable—it's essential. 
Whether through industry events, summits, or conversations over dinner, these connections provide the guidance, support, and perspective CISOs need to continually grow as executives.

Listen to the full conversation at HealthcareNOW Radio: https://www.healthcarenowradio.com/programs/we-have-trust-issues/]]></description>
            <dc:creator>Steven Hajny (Healthcare Principal Sales Engineer)</dc:creator>
        </item>
        <item>
            <title><![CDATA[What AI Risks Are Hiding in Your Apps?]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/what-ai-risks-are-hiding-your-apps</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/what-ai-risks-are-hiding-your-apps</guid>
            <pubDate>Mon, 15 Dec 2025 18:00:03 GMT</pubDate>
            <description><![CDATA[To learn more about how these cutting-edge AI innovations can secure and accelerate AI adoption, register now for our exclusive launch event: Accelerate Your AI Initiatives with Zero Trust.

AI is transforming business operations, offering unprecedented productivity, faster decision-making, and new competitive edges. Per Gartner, by 2028, more than 95% of enterprises will be using generative AI APIs or models, and/or will have deployed GenAI-enabled applications in production environments. At Zscaler, we have witnessed an exponential increase in AI transactions, with a 36x increase year-over-year, highlighting the explosive growth of enterprise AI adoption. The surge is fueled by ChatGPT, Microsoft Copilot, Grammarly, and other generative AI tools, which account for the majority of AI-related traffic from known applications.

However, AI adoption and its integration into daily workflows introduce novel security, data privacy, and compliance risks. For the past two years, security leaders have been grappling with shadow AI—the unsanctioned use of public AI tools like ChatGPT by employees. The initial response was often reactive—block the domains and hope for the best—but the landscape has shifted dramatically. AI is no longer just a destination tool or website; it's an integrated feature, embedded directly into the sanctioned, everyday business applications we rely on.

According to the 2025 Gartner Cybersecurity Innovations in AI Risk Management and Use Survey, 71% of cybersecurity leaders suspect or have evidence of employees using embedded AI features without going through necessary cybersecurity risk management processes.

This evolution from standalone shadow AI to embedded, pervasive AI creates far more complex and layered security challenges. Blocking is no longer a viable strategy when AI is part of your core collaboration suite. 
To safely harness the productivity benefits of AI, enterprises need a new security playbook—one that goes beyond simply blocking shadow AI and embraces a zero trust + AI security approach focused on visibility, context, and intent. This post will explore the new frontier of AI security challenges and risks, and outline a modern framework for addressing them.

Fig: Zscaler ThreatLabz Report: Top AI application usage

Emerging AI security challenges

As organizations integrate AI deeper into their operations, our findings indicate they face a growing twofold challenge:

Securing the inevitable and rapid adoption of AI within their environments; and
Recognizing and mitigating the growing vulnerabilities that come with it.

Below, we outline the five biggest AI security challenges that will shape how you protect the AI ecosystem, and how to address them.

1. Shadow AI: A silent insider threat

Shadow AI can enable innovation, but it also exposes organizations to significant risks, particularly concerning data loss and breach potential. BCG’s latest “AI at Work” study reveals that 54% of employees openly admit they would use AI tools even without company authorization. The consequences of staying blind? Per a recent IBM report, 20% of organizations experienced breaches linked to unauthorized AI use, adding an average of $670,000 to breach costs. Additionally, shadow AI incidents had serious downstream effects beyond security concerns:

44% suffered data compromise
41% reported increased security costs
39% experienced operational disruption
23% faced reputational damage

These impacts demonstrate that shadow AI isn't just a security concern—it's a business risk that affects operations, finances, and reputation.

2. Embedded AI simplifies workflows but complicates security

The new front line for AI security isn't a standalone website. It's the "AI" button inside the tools employees use every day. 
Countless SaaS applications—from CRMs to design tools—are embedding generative AI features. Enterprises significantly underestimate the security risks posed by embedded AI, which accounts for over 40% of their AI usage and often operates opaquely. Current AI TRiSM (Artificial Intelligence Trust, Risk, and Security Management) solutions and vendor-provided security assurances are largely ineffective for embedded AI, which frequently lurks within shadow AI. This leaves organizations vulnerable, relying on outdated audits and inadequate clickwrap agreements that fail to address the complex orchestration and interfaces of these embedded systems.

Fig: Gartner IT Symposium Keynote Survey, 2024

Here are a few classic examples of embedded AI security challenges:

Teams use Jira and Confluence to manage sensitive projects, track critical software bugs, and document internal processes. With some intelligence models, a user can now prompt AI with sensitive projects and data. Security teams often lose visibility into the model used—they’re left to wonder where it is hosted, whether sensitive data is being used for training, whether it’s exposed, and more.

Microsoft Copilot represents the ultimate integration of AI into the enterprise workflow. It has access to a user’s entire M365 Graph—their emails, chats, calendars, and documents. A single prompt could exfiltrate highly confidential data. Because the interaction happens within the trusted Microsoft ecosystem, traditional DLP and CASB solutions are often blind to the content and context of the AI query itself.

Each AI integration represents a new, unvetted channel for data to leave the environment. An organization might have 20 different sanctioned SaaS apps, each with its own embedded AI that communicates with a different large language model (LLM) under different data privacy terms. Manually tracking and governing this hidden mesh of AI interactions is a challenging task. 
Security teams often have no visibility into the data being exchanged in these interactions, creating a massive blind spot.

3. AI prompts and outputs: A data loss hotspot

AI prompts may contain sensitive data, including source code, unreleased financial data, customer personally identifiable information (PII), healthcare records, and strategic plans. Per the Zscaler ThreatLabz AI Security Report, 59.9% of AI transactions were blocked, signaling concerns over data security and the uncontrolled use of AI applications.

The risk isn't just with the input. The output from AI models carries its own set of dangers:

Hallucinations: AI models can confidently invent facts, statistics, or code snippets. An employee who unknowingly incorporates this fabricated information into a report, financial model, or software build introduces errors and risk into the business.

IP and copyright issues: Models trained on public data may generate outputs that include copyrighted material or even proprietary code from other organizations, creating serious legal and IP risks.

Sensitive data exposure: An AI model may regurgitate sensitive data it was trained on or expose data from another user's session, leading to an unpredictable data leak.

Security teams need to regularly sanitize and validate AI inputs and outputs and implement comprehensive prompt monitoring strategies.

4. Evolving data privacy and compliance risks

AI’s reliance on large datasets introduces compliance risks for organizations bound by regulations such as GDPR, CCPA, and HIPAA. Improper handling of sensitive data within AI models can lead to regulatory violations, fines, and reputational damage. One of the biggest challenges is AI’s opacity—in many cases, organizations lack full visibility into how AI systems process, store, and generate insights from data. 
This makes it difficult to prove compliance, implement effective governance, or ensure that AI applications don’t inadvertently expose PII. As regulatory scrutiny on AI increases, businesses must prioritize AI-specific security policies and governance frameworks to mitigate legal and compliance risks.

5. The strategic AI governance gap

Effective AI governance remains out of reach for most organizations because they fail to:

Gain comprehensive visibility: They cannot see which AI tools are being used, by whom, or what sensitive data is being exposed in user prompts and sessions.
Enforce least-privileged access: They do not identify and revoke excessive permissions across the organization that aren’t necessary for AI functionality.
Understand user intent: They fail to analyze the purpose behind an AI prompt, forcing them to rely on outdated keyword blocking instead of intelligent, intent-based policies that prevent high-risk activities.
Prevent drift: They do not consistently detect and remediate risky misconfigurations or compliance violations before these lead to breaches or fines.

Accelerate Your AI Initiatives with Zero Trust

AI is moving faster than traditional security and governance policies—and that’s exactly where risk grows. Organizations can follow the basic steps below to ensure the safe use of AI:

Identify shadow AI: Gain end-to-end visibility into all of the AI used within your organization, including AI embedded into enterprise applications. Understand how AI is being used, what data is being leveraged, and the related risks.
Enforce least-privileged access: Identify, restrict, or revoke access to AI systems based on appropriate policies and user risk profiles. Isolate user sessions to risky apps.
Control data: Use labeling and controls to ensure inappropriate data is not used to train AI applications, such as Microsoft Copilot, and enforce advanced DLP policies to safeguard the use of sensitive data in AI prompts and responses. 
Ensure responsible use of AI: Enforce AI governance best practices and guardrails to secure the design, development, deployment, and use of AI. Prevent toxic or high-risk prompts with intent-based controls.

These steps help apply the principles of a zero trust architecture to your use of AI applications, enabling organizations to stay resilient even as AI evolves at lightspeed.

Learn more about Zscaler’s latest innovations to secure and accelerate AI adoption in our exclusive launch: Accelerate Your AI Initiatives with Zero Trust

This blog post has been created by Zscaler for informational purposes only and is provided "as is" without any guarantees of accuracy, completeness, or reliability. Zscaler assumes no responsibility for any errors or omissions or for any actions taken based on the information provided. Any third-party websites or resources linked in this blog post are provided for convenience only, and Zscaler is not responsible for their content or practices. All content is subject to change without notice. By accessing this blog, you agree to these terms and acknowledge your sole responsibility to verify and use the information as appropriate for your needs.]]></description>
            <dc:creator>Steve Grossenbacher (Senior Director, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[AI for Segmentation: The Limits of AI Policy Optimizers and Private Access Co-Pilots]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/ai-segmentation-limits-ai-policy-optimizers-and-private-access-co-pilots</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/ai-segmentation-limits-ai-policy-optimizers-and-private-access-co-pilots</guid>
            <pubDate>Fri, 12 Dec 2025 23:58:54 GMT</pubDate>
            <description><![CDATA[The industry is having the wrong conversation about AI for private access. Security leaders are being presented with two limited paths, both disguised as the future of ZTNA.

On one hand, there is AI being used as a Policy Optimizer, a smarter way to manage the complex firewall rules traditionally used to segment data centers and private app environments. On the other hand, there is the rise of the AI Co-Pilot, a conversational assistant designed to make first-generation ZTNA architectures easier to administer. Both approaches share a critical flaw: they use AI to make a broken, network-centric model more manageable, not to replace it. They are renovations, not a future solution.

The true destination for AI in securing private access is not better assistance, but a clear path toward genuine autonomy: a system that can act on its own to reduce risk. That journey isn't possible with a chat interface bolted onto a firewall or a ZTNA 1.0 architecture still dependent on network segments. It requires a fundamentally different foundation.

To understand why these approaches are hitting a wall and why a different architecture is required, one must first confront the massive, hidden problem at the heart of every enterprise: the private application landscape.

The Iceberg Hiding in Your Network

An organization's private application landscape is like an iceberg. The small, visible tip represents the handful of mission-critical apps that are known and actively managed. Yet even for this small set, creating and maintaining strict access policies is a difficult, manual process that consumes significant time and resources. And in reality, this is only a tiny fraction of the total landscape.

Just below the surface lies the first layer of hidden risk: the inventoried apps. They are listed in a CMDB, but with little to no data on who actually needs access, creating significant exposure. 
The real danger lurks in the massive, submerged base of the iceberg: the unaccounted-for apps. This dynamically growing mass of shadow IT has no clear owner, no documentation, and creates a vast, undefended attack surface ripe for lateral movement.

An AI co-pilot, by analyzing logs and network data, can only offer surface-level observations about this chaos. It might generate an alert like, "New traffic detected to an unknown server," but its utility effectively ends there. Because it lacks a true understanding of the complex application landscape, and more importantly, the architectural power to actually solve the root problem, it can only ever point out a symptom. It’s a notification about a single crack on the iceberg's surface, with no ability to reveal the immense, hidden danger below. You are still navigating blind.

See the Whole Iceberg, Then Eliminate the Attack Path

The real breakthrough isn't getting a better tool to explore the problem. It's realizing you can eliminate the problem altogether. The goal isn't to become an expert navigator of a high-risk environment; it's to transform it into a low-risk, fully visible one. This is the shift from an administrative mindset to an architectural one. Instead of asking, "How can I write rules to secure this chaos?", the better question is, "How can I design a system where this chaos can't exist?" This architectural shift is the mandatory first step before any AI strategy can succeed.

Think of it like constructing a modern skyscraper. No one would install a sophisticated, AI-powered smart building system to optimize the elevators and HVAC if the structure itself had a crumbling foundation and faulty wiring. The system would just become incredibly efficient at reporting on constant failures. 
First, you must secure the foundation: an architecture that is inherently simple and safe. Only on top of that secure foundation can AI truly help you get ahead, delivering the speed, scale, and autonomous capabilities that were impossible before.

The immediate challenge, however, is that the market is now saturated with AI products all claiming to solve this very problem. On the surface, they sound alike. They promise “automated discovery” and “policy mapping,” creating significant confusion for technology leaders.

So how do you cut through the noise and identify a solution that truly eliminates the problem, rather than just managing it more efficiently? The key is to scrutinize the architectural philosophy each AI was built to serve. When you do, you'll find two dominant, yet flawed, approaches have shaped the market.

Understanding the AI Segmentation Landscape: The Optimizer, The Co-Pilot, and The Autonomous System

1. The Policy Optimizer (The Firewall-Centric Approach)

This philosophy grew out of the world of firewalls. For decades, security meant writing rules: source IP, destination IP, port. The goal of AI in this world is to make that process more efficient. This AI acts as a Policy Optimizer, sifting through logs to help you write better firewall rules. The fundamental limitation: this approach keeps you trapped in the endless cycle of managing a complex rule base. The AI helps you manage the problem, but it doesn't eliminate it.

2. The Co-Pilot for Private Access (The ZTNA 0.5 Approach)

This philosophy represents a step forward, born from first-wave ZTNA. Its AI acts as a helpful Cloud Assistant or "co-pilot," but remains tethered to network-centric concepts. Its recommendations reveal its constraints: it might suggest "narrowing a 10.0.0.0/16 subnet to a 10.0.0.0/24," helping you turn big network segments into smaller ones. The fundamental limitation: this approach still forces you to manage network topology. 
The AI helps you become a more precise network segment manager, but you are still managing segments on a network that allows for lateral movement.

3. The Autonomous System (The Zscaler Approach)

Zscaler's Autonomous User-to-App Segmentation represents the right way, and it’s crucial to understand that this is not simply a better version of the same game; it's a different game entirely. This is a true industry-first Autonomous System, built on a zero trust architecture that renders the underlying network irrelevant.

The reason it makes the work of managing rules and subnets obsolete is that it fundamentally changes the unit of security. Instead of managing network constructs (IPs, subnets, ports), our AI engine operates on a higher plane: the direct relationship between business entities (a verified user and a specific application). You cannot be stuck managing network rules when the system itself doesn't use them to determine access. This new paradigm is what allows our AI to move beyond assisting and towards full autonomy. Here's how:

1. It Sees Every Application, Not Just the Known Ones. Operating inline, the AI automatically discovers and groups all applications as they are accessed, including all the unaccounted-for shadow IT, instantly bringing your hidden attack surface into the light.

2. It Segments with Quantifiable Intelligence, Not Vague Suggestions. The AI engine analyzes live traffic to identify segmentation opportunities, presenting a data-driven blueprint like: "These 12 applications appear to be MongoDB databases accessed only by the Dev-Ops team."

3. It Proves the Impact Before You Commit. The system proves the value of any recommendation, showing you precisely how a new policy will transform your posture, for example, by moving from 6,000 potentially exposed users down to the 59 observed users who actually need access.

4. It Simplifies Action to a Single Click. Once you see the proven benefit, you aren't left with a manual task. 
With one click, an AI-powered recommendation is converted into a secure application segment and is ready to be applied as policy. The entire workflow, from discovering risk to generating a solution, proving its value, and implementing it, is seamless and eliminates the risk of human error.

5. It Delivers Continuous Insights to Strengthen Your Posture. Security is not a one-time project. Our system creates a continuous feedback loop to prevent policy rot. It helps you visualize exposure, eliminate unused "allow" policies that represent latent risk, and refine overly permissive rules over time. This is the full lifecycle of zero trust segmentation, made simple.

The choice is no longer just about which AI is better, but which architectural philosophy will truly secure your future. Your AI strategy will ultimately reflect your architectural philosophy. Will it be defined by making a complex past more manageable, or by building a radically simpler, more secure future?

See how that future works in practice:

Watch the webinar
Request a demo]]></description>
            <dc:creator>Olivia Vort (Senior Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[VPNs Are an Attacker's Front Door. Close It with Zero Trust.]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/vpns-are-attacker-s-front-door-close-it-zero-trust</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/vpns-are-attacker-s-front-door-close-it-zero-trust</guid>
            <pubDate>Fri, 12 Dec 2025 17:44:00 GMT</pubDate>
            <description><![CDATA[A fresh wave of automated login attempts against exposed VPN portals is the latest reminder of a hard truth: VPNs are an enterprise’s most visible, most targeted front door. When attackers can aim limitless credential stuffing, password spraying, and session hijacking at a single internet-facing portal, compromise becomes a numbers game: not a matter of if it happens, but when.

In the most recent series of events, threat actors are launching large waves of login attempts against publicly exposed VPN portals such as GlobalProtect. These campaigns use commodity botnets, leaked credential dumps, proxy networks, and MFA fatigue tactics to cycle through accounts until they gain unauthorized access. Once adversaries establish a foothold, they exploit the perceived trust of the VPN connection to move laterally, escalate privileges, and blend in as legitimate users.

Attackers love VPNs. VPNs pose serious security risks because their gateways are publicly exposed, making them constant, easy-to-find targets for scanning, brute-forcing, and fingerprinting. A single successful login often grants overly broad access to the internal network and numerous applications, far exceeding what the user actually needs.

Compounding this problem, attackers can easily exploit weak authentication and reused credentials to gain access through "spray-and-pray" attacks. Patching VPN appliances is often a complex, risky, and slow process. The implicit trust model of traditional networks aids attackers by making lateral movement easier.

If you still run VPNs today, you should immediately lock down the portal with strong security, limit the blast radius with least-privileged access, and set up solid monitoring and incident response in case something happens.

The problem isn’t just weak controls around VPNs: it’s the VPN model itself. 
Any solution that exposes a network entry point to the internet invites exactly the sort of automated abuse we’re seeing. Zero trust network access (ZTNA) changes the game.

- No inbound access, no exposed portals: Zscaler Private Access (ZPA) connects users to apps through brokered, outbound-only connections. Applications are hidden behind the Zscaler Zero Trust Exchange—no public IPs, open ports, or VPN concentrators to scan or brute-force.
- App-level access, not network access: Users get least-privileged access to specific apps based on identity, context, and policy. There’s no “flat” network to roam, significantly reducing lateral movement.
- Autonomous user-to-app segmentation: Powered by AI, ZPA eliminates the manual burden of defining micro-perimeters and ensures least-privileged access is dynamically enforced, a capability fundamentally missing from traditional network-centric VPNs.
- Continuous, risk-based trust: Access decisions adapt in real time using identity, device posture, user behavior, and location. 
If risk spikes, access can trigger step-up MFA, restrict sessions, or cut them off automatically.
- Phishing-resistant authentication: ZPA integrates with modern IdPs and FIDO2 to eliminate passwords for high-value workflows and stop MFA fatigue tactics.
- Strong posture and segmentation everywhere: Device checks, microtunnels per app, and double-encrypted connections protect traffic on any network without hairpinning or split-tunnel tradeoffs.
- Operational simplicity: Our cloud-delivered service removes the patching burden of fragile appliances and scales elastically under surges—legitimate or hostile.

A phased path from VPN to ZTNA:

1. Assess and prioritize: Inventory VPN use cases, app dependencies, and user groups; pick high-risk or easy-to-isolate apps to start.
2. Connect apps safely: Deploy ZPA App Connectors beside each app (data center/cloud) with outbound-only connections—no public IPs or inbound firewall changes.
3. Integrate identity and posture: Hook up your IdP (SAML/OIDC) and device posture sources (EDR/MDM); define least-privilege, app-specific policies.
4. Publish and pilot: Publish initial app segments, enable Zscaler Client Connector, and pilot with contained groups (admins/contractors); tune policies and MFA.
5. Scale and retire VPN: Expand in waves, tighten remaining VPN access during the transition, cut over cleanly, monitor and optimize, then decommission concentrators and close inbound ports.

VPNs are a liability: a conspicuous front door that adversaries will keep kicking until it opens. You can harden and monitor that door, but the safest, most sustainable answer is to remove it altogether. Zero trust with Zscaler replaces guesswork and implicit trust with app-specific, risk-aware access that attackers can’t easily see, spray, or brute-force.

Interested in learning more? Schedule a meeting with our product experts today.]]></description>
            <dc:creator>Kanishka Pandit (Director, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Zero Trust Automation: Streamlining Reporting with Zscaler OneAPI]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/zero-trust-automation-streamlining-reporting-zscaler-oneapi</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/zero-trust-automation-streamlining-reporting-zscaler-oneapi</guid>
            <pubDate>Wed, 10 Dec 2025 13:35:03 GMT</pubDate>
            <description><![CDATA[IT teams are increasingly relying on automation to streamline their workflows as they manage ever-growing numbers of solutions. Acutely aware of this need, Zscaler developed OneAPI (a single programming interface for the entire Zero Trust Exchange platform) to enable its Zero Trust Automation solution. As a result, customers can manage their Zscaler deployments programmatically (via code) without manually logging in to Zscaler’s admin interface. Thousands of security and network practitioners around the globe are already doing this, automating tasks like policy management so they can save time and focus on higher-value projects.

Zscaler is constantly listening to customers’ requests and expanding OneAPI’s capabilities in order to address evolving needs. That’s why the company recently announced new functionality that automates the retrieval of Zscaler analytics data for third-party and homegrown tools.

Why organizations need to automate their reporting

Building reports and dashboards is a significant undertaking for today’s network and security teams. That’s because they manage numerous tools that generate and process overwhelming volumes of data.

All too often, retrieving analytics is a manual process that requires admins to log in to multiple interfaces, hunt down what they need, and copy and paste information by hand. Naturally, that makes it laborious to compile accurate, usable reports. Not only does this inefficiency consume valuable time that could be better spent elsewhere, but it also invites human errors that can compromise report integrity.

The fragmented nature of these workflows slows down critical reporting processes, making it harder for organizations to identify trends, address issues proactively, and make informed decisions. 
And with every new platform or solution added to an organization’s tech stack, the burden of managing analytics becomes more pronounced and strains limited IT resources even more.

OneAPI for analytics

Zscaler’s Zero Trust Automation solution now features a powerful new innovation that enables organizations to programmatically retrieve Zscaler analytics data. In other words, customers can use OneAPI for analytics and automate their reporting; they can build the dashboards they need in the tools they want—without having to log in to the Zscaler admin interface.

Designed for flexibility, OneAPI for analytics will allow security and networking teams to pull data from any Zscaler solution (whether ZIA, ZPA, ZDX, a connector, or something else) into any API client (whether third-party or homegrown). In terms of supported data domains, it will enable the retrieval of analytics from categories like:

- Web traffic
- Cybersecurity
- SaaS security
- Zero Trust Firewall
- IoT
- Shadow IT

When using the solution, admins can take advantage of the same role-based access control (RBAC) that they leverage when using OneAPI for configurations. This ensures that analytics data can only be retrieved by the appropriate API clients. Similarly, every analytics-related API call is still logged and assigned a request ID to ensure visibility, control, and compliance. Additional functionality includes query caching, which provides faster response times for repeated or related queries, as well as built-in data transformation capabilities that convert raw data into more consistent, readable formats.

Overall, the above functionality allows admins to customize how data is collected and presented. 
This empowers them to automate the creation of regularly scheduled reports and dashboards, streamline the building of widgets that monitor their Zscaler deployments, query specific data for visibility or compliance purposes, and more.

Wrap-up

OneAPI for analytics enables IT teams to simplify and accelerate reporting processes, making it easier to draw insights from Zscaler analytics data. By automating workflows that previously required manual effort, organizations can reduce complexity, save time and money, enable admins to focus on more strategic initiatives, reduce the likelihood of human error, and take an important step toward automating the full Zscaler life cycle.

To learn more about Zscaler OneAPI and how it can automate your analytics, sign up for our upcoming webinar, Zero Trust Automation: Streamlining Reporting with Zscaler OneAPI.]]></description>
            <dc:creator>Jacob Serpa (Sr. Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Stop Half-Time Security: Unlock Full Zero Trust Coverage with ZPA Across Your Entire Organization]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/stop-half-time-security-unlock-full-zero-trust-coverage-zpa-across-your</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/stop-half-time-security-unlock-full-zero-trust-coverage-zpa-across-your</guid>
            <pubDate>Tue, 09 Dec 2025 23:06:52 GMT</pubDate>
            <description><![CDATA[Attackers don’t care where your users work, but your security should. Many organizations start their Zero Trust journey by deploying Zscaler Private Access (ZPA) as a VPN replacement for remote workers. While that’s a critical first step, stopping there leaves key parts of your workforce, like office users and third-party contractors, exposed to lateral movement threats, fragmented policies, and visibility gaps. ZPA was built to go beyond remote access. By scaling ZPA across your entire workforce, you eliminate lateral movement, reduce overhead, and deliver consistent, fast, secure access for every user, everywhere.

Misconceptions About Expanding Zero Trust

Even for security leaders who recognize the importance of Zero Trust, securing remote users often feels like “enough.” But are remote workers really the only ones at risk? Trusted office networks and unmanaged devices can still introduce vulnerabilities, especially if lateral movement threats exploit infected devices or overly permissive access to private applications. Here are some common misconceptions:

- “We don’t have enough visibility to enforce Zero Trust everywhere.” It might seem like identifying apps and users across office networks and remote environments is an unnecessary and manual task, especially if things feel “secure enough.” However, relying on fragmented tools and policies for different types of users often leads to gaps. 
Without comprehensive visibility, ensuring consistent security across your workforce becomes impractical, leaving trusted environments exposed to lateral movement risks.
- “Scaling Zero Trust will take too much time and effort.” Expanding Zero Trust might seem like a complex process, especially if manual configurations or legacy tools are still in place, leading to concerns about delays and resource-intensive deployments.
- “Zero Trust will interrupt productivity.” New controls are often perceived as roadblocks to workflows or as disruptions to user experience, creating friction and frustration for employees.

These misconceptions can hold organizations back from achieving the full potential of Zero Trust, but scaling Zero Trust doesn’t have to be hard, slow, or disruptive.

Achieve Zero Trust Network Access Everywhere, Fast and Simple

Expanding your ZPA footprint means delivering modern access everywhere employees work, simplifying IT operations, and delivering exceptional experiences for your workforce. We make this fast and easy with three capabilities:

- Automate Least Privilege With Autonomous User-to-App Segmentation: ZPA takes the heavy lifting out of scaling Zero Trust. With AI-powered app discovery and segmentation, ZPA automates the creation of least-privilege policies that connect users only to what they need.
- Reduce Overhead and Deploy Fast With Automated Infrastructure Management: With fast enrollment and automated workflows, IT teams can onboard users quickly and efficiently without operational headaches. By connecting users through the closest Zscaler service edge, ZPA eliminates delays and ensures secure, high-performance app access for employees, no matter where they work.
- Accelerate Workforce Productivity With Consistent Secure Access Everywhere: ZPA empowers your workforce with uninterrupted access to the apps they need, whether remote, in-office, or hybrid. 
Employees avoid downtime, benefiting from seamless workflows and improved collaboration, all supported by secure, consistent protections everywhere they work.

See What ZPA Can Do for Your Organization

ZPA customers have seen firsthand how expanding coverage beyond just a VPN replacement transforms their ability to secure access for ALL users.

“We’ve experienced a net gain in usability, reliability, and security…During a meeting, when I informed people that they don’t have to do MFA or connect to VPNs anymore, they erupted in applause.” - Dan Han, CISO, Commonwealth University (Case study)

“Regardless of where users are, they’re receiving the same exact security and policies that on-premises users have.” - Dan Han, CISO, Commonwealth University

“Zscaler fast-tracked our digital transformation, allowing us to modernize both our security infrastructure and our workplace, transforming the way we work daily…We are more productive, efficient, and secure than we have ever been.” - Stephen Bailey, Vice President of Information Technology, Cache Creek Casino Resort (Case Study)

Now is the time to bring Zero Trust protections to every user, app, and location across your organization. Stop settling for half-time security and unlock the benefits of ZTNA Everywhere.]]></description>
            <dc:creator>Olivia Vort (Senior Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[The Gainsight Supply Chain Attack: What it Means for SaaS Security ]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/gainsight-supply-chain-attack-what-it-means-saas-security</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/gainsight-supply-chain-attack-what-it-means-saas-security</guid>
            <pubDate>Tue, 09 Dec 2025 17:00:03 GMT</pubDate>
            <description><![CDATA[According to the official disclosures, the Gainsight incident was a classic SaaS supply chain attack in which threat actors compromised OAuth tokens used by Gainsight’s Salesforce-connected applications. Rather than targeting Salesforce directly, attackers exploited the trusted integration pathway between the two platforms—using stolen refresh tokens to inherit the same data access privileges that organizations had granted to Gainsight. This allowed unauthorized API access, data queries, and potential exfiltration of CRM information across hundreds of customer environments. The root issue was the over-privileged and long-lived OAuth trust established between organizations and a third-party vendor, creating a high-value target that adversaries could abuse at scale.

How SaaS security posture management (SSPM) could have prevented or minimized the breach

What is SSPM?

SaaS security posture management (SSPM) is a set of tools and processes designed to monitor, manage, and secure the configurations and usage of SaaS applications. SSPM plays a crucial role in SaaS security by identifying misconfigurations, enforcing compliance, and automating remediation of security issues across cloud applications. By continuously assessing third-party integrations and user permissions, SSPM helps organizations quickly detect and address vulnerabilities that could be exploited in supply chain attacks. This proactive approach significantly reduces the risk of unauthorized access and data breaches stemming from interconnected SaaS platforms.

A modern SSPM solution directly addresses the gaps exposed by the Gainsight attack. Here is how:

1. Full visibility into all third-party integrations

Most organizations don’t maintain an accurate inventory of all OAuth apps connected to Salesforce or what data they can access. SSPM provides automatic discovery of:

- All Gainsight apps connected to Salesforce
- Detailed permission scopes (API access, offline access, user data access)
- Which users authorized the apps and when
- What sensitive Salesforce objects those apps could read

This proactive inventory would have immediately highlighted that Gainsight apps had broad, persistent Salesforce access—long before attackers exploited it.

2. Continuous monitoring of OAuth token use

Attackers relied on compromised refresh tokens to maintain long-term access. SSPM monitors token behaviors such as:

- Tokens used from unusual IPs or geolocations
- Tokens being used outside normal business hours
- Large volumes of API queries or bulk record reads
- Tokens accessing Salesforce objects they never accessed before

In Gainsight’s case, early October API calls from suspicious IPs would have triggered alerts weeks before Salesforce detected the breach.

3. Detection of anomalous app behavior

The attackers used a known malicious user agent string (“Salesforce-Multi-Org-Fetcher/1.0”) and ran multi-org data-fetch operations—behaviors widely associated with previous OAuth-based exfiltration attacks. SSPM would have detected:

- New user agents never seen before
- Apps suddenly making bulk data export calls
- Apps querying Salesforce objects they weren’t intended to access
- Apps accessing a significantly larger number of records than usual

These anomalies form clear behavioral red flags for token abuse.

4. Enforcing least-privilege permissions for connected apps

SSPM identifies apps with excessive permissions—common in SaaS ecosystems. 
Gainsight apps had wide access to Salesforce objects and token privileges that far exceeded their functional needs. SSPM would have recommended:

- Reducing scopes
- Restricting object-level access
- Removing unused permissions
- Revoking long-dormant integrations

This significantly limits the blast radius even if an OAuth token is stolen.

5. Validating critical posture controls for sanctioned and connected apps

Beyond visibility and monitoring, SSPM evaluates whether the underlying SaaS tenant and its connected apps are configured securely. Misconfigurations often create the exact conditions attackers exploit during supply chain incidents. A mature SSPM solution identifies and continuously validates posture controls such as:

- Secure refresh-token policies enforced for connected apps
- Secure session-timeout settings to prevent long-lived access
- IP restrictions applied so connected apps operate only from trusted networks
- API and object-level access restrictions to limit data exposure
- MFA requirements for app authorization and approvals

If these controls had been enforced, the stolen OAuth tokens would have been far less useful to attackers, significantly restricting token reuse, long-duration sessions, untrusted IP access, and broad API querying.

Zscaler SSPM: Purpose-built to stop connected-app supply chain attacks

Zscaler SSPM delivers every control designed to prevent, detect, and contain the type of OAuth-based supply chain attack seen in the Gainsight incident—and the screenshot above illustrates exactly why. The platform automatically discovers high-risk connected apps like Gainsight, classifies them, assigns an app-risk score, and surfaces threat intelligence directly within the admin console. 
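To make the earlier "behavioral red flags" concrete (unfamiliar user agents, bulk reads, untrusted IPs, never-before-seen objects), here is a minimal sketch of baseline-relative token-abuse flagging. All field names, thresholds, and data shapes are hypothetical illustrations, not Zscaler SSPM's implementation or API:

```python
# Hypothetical sketch of OAuth token-abuse flagging against a per-app baseline.
# Names and thresholds are illustrative only.
from dataclasses import dataclass, field


@dataclass
class TokenEvent:
    """One API call made with a connected app's OAuth token."""
    source_ip: str
    user_agent: str
    records_read: int
    objects: set


@dataclass
class AppBaseline:
    """What this connected app's token activity has historically looked like."""
    known_ips: set = field(default_factory=set)
    known_user_agents: set = field(default_factory=set)
    known_objects: set = field(default_factory=set)
    max_records_read: int = 0


def flag_anomalies(event: TokenEvent, baseline: AppBaseline) -> list:
    """Return the behavioral red flags this event raises against the baseline."""
    flags = []
    if event.source_ip not in baseline.known_ips:
        flags.append("untrusted-ip")
    if event.user_agent not in baseline.known_user_agents:
        flags.append("new-user-agent")      # e.g. "Salesforce-Multi-Org-Fetcher/1.0"
    if event.objects - baseline.known_objects:
        flags.append("unexpected-objects")  # objects the app never touched before
    if baseline.max_records_read and event.records_read > 10 * baseline.max_records_read:
        flags.append("bulk-read")           # far above the app's historical volume
    return flags


baseline = AppBaseline(
    known_ips={"203.0.113.5"},
    known_user_agents={"Gainsight/2.3"},    # hypothetical normal client string
    known_objects={"Account"},
    max_records_read=500,
)
event = TokenEvent("198.51.100.9", "Salesforce-Multi-Org-Fetcher/1.0", 80000, {"Account", "Contact"})
print(flag_anomalies(event, baseline))
# ['untrusted-ip', 'new-user-agent', 'unexpected-objects', 'bulk-read']
```

The point of the sketch is that each signal is cheap to evaluate but only meaningful relative to the app's own history, which is why continuous baselining, not static rules, underpins this class of detection.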
In this case, Zscaler not only detected the Gainsight integration but also provided contextual threat intel describing the active breach, enabling immediate awareness and response.

Zscaler SSPM gives detailed visibility into each connected app’s permissions—covering OAuth privileges, API access, and token capabilities—so you can automatically detect risky or over-privileged integrations and reduce exposure if they are compromised. Continuous monitoring for suspicious API activity, combined with posture controls like IP allowlisting, token policies, API restrictions, and MFA, ensures quick detection and remediation of security gaps, stopping unauthorized access even if an OAuth token is compromised.

In short, Zscaler SSPM provides the full set of capabilities to prevent, detect, and limit supply chain attacks stemming from compromised third-party integrations—ensuring connected apps cannot become a hidden backdoor into critical SaaS environments. Request a demo to learn more.]]></description>
            <dc:creator>Niharika Sharma (Staff Product Manager - CASB PM)</dc:creator>
        </item>
        <item>
            <title><![CDATA[React2Shell and the Case for Deception in Your Vulnerability Management Program]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/react2shell-and-case-deception-your-vulnerability-management-program</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/react2shell-and-case-deception-your-vulnerability-management-program</guid>
            <pubDate>Fri, 05 Dec 2025 21:14:47 GMT</pubDate>
            <description><![CDATA[Another day, another vulnerability. Deception offers a pragmatic path to detecting and blocking zero-day exploits while augmenting vulnerability management programs to accelerate prioritization and reduce team burnout.

On Dec 3, 2025, the React2Shell vulnerability was officially disclosed by the React team. Mere hours later, customers using Zscaler Deception technology had detailed, accurate evidence of attempts to exploit this vulnerability in their environments. This fast, high-fidelity intel gave security teams in those organizations the ammunition they needed to bring a full-court press to finding and eradicating this vulnerability.

Sometimes, it’s not enough to know your house contains dangerous flammable materials that should be addressed – sometimes you need proof that your house is already on fire. Deception can provide that action-inspiring proof.

An onslaught of vulnerabilities

Every environment has vulnerabilities. A subset of those vulnerabilities are exploitable. In the last month alone, NVD shows their analysts investigated 3,223 CVE vulnerabilities. That’s about 100 vulnerabilities a day. These statistics translate to security teams being called into action almost every working day of the last month.

This deluge of new CVEs adds to the continuous, ongoing stress that SOC and vulnerability management teams already experience. Being on such a “Here we go again…” treadmill can feel like we can’t stay ahead. 
It’s the highway to burnout.

The promise of Unified Vulnerability Management

When new vulnerabilities like React2Shell emerge, a Unified Vulnerability Management program should be able to answer the following questions, quickly:

- Do we have this vulnerability?
- Does it affect assets we consider important?
- What’s the blast radius if an exploit materialized?
- Is there evidence that we are being targeted?
- Should we stop other activities to chase down this issue?

In an ideal world, teams should know these answers in minutes, with a high degree of accuracy. But in the real world, we’re faced with the reality that:

- Vulnerability information is stored across multiple tools and often in spreadsheets.
- Asset and software inventory information is stored in disparate databases.

This siloed data creates a knock-on effect in the scoping and triage of the vulnerability and its subsequent prioritization. No wonder the people responsible for mitigating these kinds of risks report feeling high degrees of stress.

The communication challenge

Oftentimes, security teams have enough tooling to realize they have the vulnerability in question, but they lack the tooling needed to demonstrate or provide evidence of the risk to the business. This gap makes it challenging to marshal the resources needed to respond with sufficient speed and force to the risk at hand.

Evidence converts theoretical risk to actual risk

Evidence is a crucial tool for communicating risk to the business. If you have evidence that a given issue in your environment is being scoped by bad actors right now, then all the other pieces needed to flesh out the story (the vulnerability, the assets, the blast radius, etc.) can take center stage. Until then, the risk is only theoretical – a company might dismiss it or not respond with the urgency the security team feels is needed. A lot of security teams and leaders have likely experienced this challenge. 
Consider the difference in impact of these two summaries of the situation:Without Evidence:&nbsp;“A React2Shell vulnerability has been disclosed. It affects 30% of our applications. Multiple partners and vendors have flagged this vulnerability as a burning issue.&nbsp; We need to fix it right now.”With Evidence: “We have evidence that our environment is actively being probed for a vulnerability named React2shell.&nbsp; It affects 30% of our applications. Multiple partners and vendors have flagged this vulnerability as a burning issue.&nbsp; We need to fix it right now.”&nbsp;Such evidence is pivotal to spurring action. So where does this evidence come from? That’s where deception comes in.&nbsp;Perimeter-facing deception: A source of Private Vulnerability Exploitation IntelligenceDeception technology deploys realistic decoys (apps, APIs, credentials, breadcrumbs) that look and behave like production assets but have no legitimate business use. Any meaningful interaction with them (e.g.,&nbsp; attempting to log in, suspicious requests, large traffic volumes) is a high-fidelity signal of adversary intent rather than generic noise.When those decoys are placed at the perimeter, including public-facing web apps advertising common React frameworks, they do two things:Convert scanner noise into targeted reconnaissance: you see who is actively looking for your stack and patterns, not just sweeping Shodan-grade scans.Produce private, first-party threat intelligence: indicators tied to your environment (IPs, tools, payloads, paths, headers) that aren’t in public feeds yet.This information is not open-source intel or paid feeds – it’s your own adversary engagement data, generated by your decoys, tuned to your attack surface, and available in minutes.React2Shell: What Zscaler customers sawWithin hours of the React2Shell public disclosure, Zscaler customers using Deception detected activity against decoy applications deployed at the perimeter consistent with exploit patterns 
reported for the vulnerability.

What we observed:

- 98 distinct source IPs probed and attempted exploitation across decoy endpoints.
- The majority of these IPs had not been published anywhere at the time of detection. Deception uncovered net-new adversary infrastructure relative to public threat intel feeds.
- Deception generated clear, high-quality intelligence on the attack, showing what targets were being sought and the payloads being used, and confirming the use of automated tools and evolving attack strategies.

This data is private threat intelligence in action: first-party indicators, tailored to your stack, appearing before or in parallel with broader ecosystem reporting.

How Deception accelerates vulnerability management decisions

- Do we have this vulnerability? Correlate decoy interaction fingerprints to real asset inventory (versions, frameworks). If a decoy that mimics AppType A is being probed, your AppType A inventory gets priority.
- Does it affect assets we consider important? Map decoy-relevant tech to business-critical services. If decoys for Customer Portal tech are hit, scope that tier first.
- What’s the blast radius if an exploit materialized? Use adversary paths captured in decoys (endpoints, parameters, headers) to simulate exploit chains across similar apps. Prioritize lateral choke points.
- Is there evidence that we are being targeted? Yes/no becomes a metric, not a guess: targeted reconnaissance against your stack is a high-fidelity signal, not a heuristic.
- Should we stop other activities to chase down this issue? Deception alerts are high fidelity, especially in scenarios like React2Shell. If they fire, orchestration is justified: block, throttle, isolate, or accelerate patching for affected tiers.

Operationalizing perimeter deception in a Vulnerability Management program

Deception technology provides a quick and easy augmentation to your VM program.
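As a rough illustration of the scoping logic described above, correlating decoy hits with asset inventory could look like the following sketch. The event schema, technology tags, and asset names are hypothetical, not a Zscaler API:

```python
# Hypothetical sketch: rank production asset groups by how actively the
# technology they run is being probed against perimeter decoys.
# Event fields and inventory structure are illustrative only.
from collections import Counter

# Decoy interactions, tagged by the framework/asset class each decoy mimics
decoy_hits = [
    {"src_ip": "203.0.113.7",  "decoy_tech": "react", "path": "/api/render"},
    {"src_ip": "198.51.100.4", "decoy_tech": "react", "path": "/api/render"},
    {"src_ip": "203.0.113.7",  "decoy_tech": "php",   "path": "/wp-login.php"},
]

# Production inventory keyed by the same technology tags
inventory = {
    "react": ["customer-portal", "partner-dashboard"],
    "php":   ["legacy-cms"],
}

def scope_targets(hits, inventory):
    """Return (tech, hit_count, matching_assets), most-probed tech first."""
    probes = Counter(h["decoy_tech"] for h in hits)
    return [(tech, count, inventory.get(tech, []))
            for tech, count in probes.most_common()]

for tech, count, assets in scope_targets(decoy_hits, inventory):
    print(f"{tech}: {count} decoy hits -> prioritize {assets}")
```

In this sketch, the most-probed technology surfaces first, so the assets matching it get patch priority; real deployments would feed enriched decoy telemetry into this step instead of a static list.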
Here’s a gameplan that can rapidly increase your protection:

1. Place realistic decoys at known tech perimeter hotspots
- Mirror your common frameworks, versions, and routing conventions.
- Advertise plausible headers, cookies, and build artifacts that attract framework-specific recon.
2. Instrument for early, actionable signals
- Log source IPs, tooling fingerprints, exploit paths, and payload deltas.
- Tag decoy interactions by framework/asset class to align with inventory.
3. Automate enrichment and routing
- Auto-enrich IPs and payloads, but treat decoy hits as first-class, high-confidence events.
- Route “React-tier decoy hits” to VM owners for immediate scoping.
4. Tie alerts to patch workflows
- If decoy X (React) is probed, open patch tasks for all matching production assets.
- Gate deployment with runtime mitigations (WAF rules, feature flags) while patch cycles run.
5. Orchestrate protective controls
- Block or throttle the specific IPs engaging decoys.
- Redirect persistent adversaries deeper into high-interaction decoys to gather more intel safely.
6. Report what matters to the business
- “We’re seeing active reconnaissance against our React stack today.”
- “We’ve prioritized patching for Tier-1 customer apps and deployed interim WAF rules.”
- “We’ve captured exploit paths, we have mitigations in place, and net-new attacker IPs are blocked.”

Reduce burnout for Vulnerability Management and SOC teams

Our security teams already work under incredible pressure. We have the tooling needed to spare them from some of the emergency conditions they often face. Deception helps by providing:

- Fast speed to signal: minutes, not days, to confirm targeting and triage the right assets.
- Decision clarity: high-fidelity alerts reduce debate and unlock action.
- Better sequencing: patch what’s probed first; defer what’s quiet without guilt.
- Fewer pivots: decoy telemetry provides context, so analysts spend less time stitching tools together.

Closing thoughts

Vulnerabilities are inevitable.
Breaches are optional: if you can see exploit attempts early and act decisively, you can avoid being compromised. Perimeter-facing deception turns your attack surface into an early-warning sensor, generating private threat intelligence that feeds vulnerability management programs effectively. With signals like “98 net-new attacker IPs probing your decoys,” you don’t plead for urgency; you prove the need and move fast.

If you’re already patching React2Shell, put decoys on the perimeter now. Let the adversary tell you which parts of your environment to fix first. Then meet them there with patches, mitigations, and blocks, on your terms.

Request a demo to learn more about Zscaler Deception.

Threat Research by: Deepak Keshav, Prajjwal Singh, and Sayan Mitra]]></description>
            <dc:creator>Amir Moin (Zscaler)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Identifying Vulnerabilities with Cloud Sandbox’s Zero Day Virtual Machine Scan]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/identify-vulnerabilities-cloud-sandbox-zero-day-virtual-machine-scan</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/identify-vulnerabilities-cloud-sandbox-zero-day-virtual-machine-scan</guid>
            <pubDate>Fri, 05 Dec 2025 00:24:59 GMT</pubDate>
            <description><![CDATA[As a pioneer of cloud-based security, Zscaler operates a network of over 160 global data centers that have the necessary infrastructure to:Intercept data flowing between client and server for 9,400+ global customersScan and analyze over 500 billion daily transactions for threatsProcess 500 trillion daily signals for our AI/ML cloud effect that continuously detects and prevents threatsZscaler’s Cloud Sandbox now provides customers an environment in which they can deploy new virtual machine (VM) types and apply the latest vendor-provided patches. Cloud Sandbox detonates sample binaries on a fully patched VM to derive a score based on a binary’s behavior: the higher the value, the greater the risk of exploitation. This blog looks at how Zscaler is able to achieve this and reduce patch deployment friction for security operations teams.Determining a Vendor-supplied Patch’s Impact with Zscaler’s Cloud SandboxOperating the world’s largest security cloud platform means we can rapidly evaluate the&nbsp; behaviour and efficacy of the latest third-party vendor patches when applied. There are multiple nodes in Zscaler’s infrastructure that comprise the infrastructure required to execute data interception and inspection – here’s a look at what each edge component does:The first hop from the customer is a Cloud Enforcement Node (CNE) node, which is an explicit proxy that an endpoint client browser or other application points to.The requests from application-to-application servers are directed to OCS (Original Content Server) via the CNE node.The CNE node is configured explicitly as a proxy on the client endpoint application and performs inline inspection on incoming data.The CNE node forwards the data based on policy to the SM Behavioral Analytics (SMBA) node. This node fronts the Cloud Sandbox infrastructure. 
- The data can take the form of various file types, such as .exe, .dll, and .pdf.
- SMBA nodes forward the data to the Sandbox Nodes (JSB) for detonation.
- The JSB nodes detonate the incoming data according to its file type, collect behavioral data, and determine the impact on the host node. This data is sent back to the SMBA node.

Every SOC Team’s Dream: Knowing the Impact of Vendor Patches Before Applying Them

Patch rollout is operationally cumbersome for IT and security teams: they have to apply each patch, verify it was applied properly, and repeat the process for every additional patch. Before rolling out patches, these teams would ideally have a rating that indicates how the patch impacts overall data flow in an organization, which could mean any of the following:

- Fixing known vulnerabilities
- Uncovering latent vulnerabilities
- Though unlikely, introducing additional vulnerabilities

All of these outcomes could impact data flow, including whether a newly detected vulnerability could be blocked by the security policies customers configure in their Zscaler tenant. With such a rating, operations teams could make better decisions about which patches can be applied immediately without causing disruption, versus those that pose more risk of interrupting workflows.

Zscaler’s Cloud Sandbox now provides customers an environment in which they can deploy new virtual machine types and apply the latest vendor-provided patches.
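As a rough illustration, comparing a binary’s behavioral score on an unpatched VM against its score on a fully patched VM classifies which of the outcomes above the patch produced. The helper function and score values below are hypothetical; the actual scores come from Cloud Sandbox detonation reports:

```python
# Hypothetical sketch: interpret Cloud Sandbox behavioral scores from a
# regular (unpatched) VM vs. a fully patched VM. Higher score = higher risk.
# Score values are illustrative, not real Cloud Sandbox output.
def patch_impact(regular_score: int, patched_score: int) -> str:
    """Classify a vendor patch's effect from the two detonation scores."""
    if regular_score > patched_score:
        # Score dropped after patching: patch addressed vulnerabilities
        return "patch fixed vulnerabilities (score decreased)"
    if regular_score < patched_score:
        # Score rose after patching: patch surfaced latent vulnerabilities
        return "patch surfaced latent vulnerabilities (score increased)"
    # Score unchanged: patch had no measurable effect
    return "no measurable change from patch"

print(patch_impact(80, 40))  # e.g., score fell from 80 to 40 after patching
```

Operations teams could use such a classification to decide which patches to roll out immediately and which need further review.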
Cloud Sandbox detonates these binaries on a fully patched VM to derive a score based on a binary’s behavior: the higher the value, the greater the risk of exploitation.

Cloud Sandbox VM Score Report Provides Threat Score with Behavioral Indicators

Comparing a binary’s score on a fully patched VM (referred to as the “Zero Day VM” from here on) against its score on an unpatched VM helps customers understand the value of the patch. For example, if the score of a binary on a regular unpatched VM is X and its score on the Zero Day VM is Y, the outcome falls into one of three cases:

- X > Y: the Zero Day VM score decreased: the applied patches fixed some of the vulnerabilities.
- X < Y: the Zero Day VM score increased: the applied patches surfaced latent vulnerabilities, e.g., zero-day detection of vulnerabilities.
- X = Y: the score is unchanged: the patch provided no measurable value against the vulnerability it was supposed to address.

Let’s look at an example of how this works: a customer submits a suspected zero-day exploit sample in two virtual machine environments: a "Regular VM" with limited patches and a "Zero Day VM" that was fully patched. Despite the Zero Day VM being fully patched, the exploit still executed successfully, confirming it was a true zero-day vulnerability. The resulting report shows a threat score of 100 for the exploit. The report also identifies behavioral characteristics that support this finding, including security bypass, networking, and stealth tactics and techniques.]]></description>
            <dc:creator>Brendon Macaraeg (Sr. Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[From Shadow AI to Trust: How Healthcare Can Secure the Future of Artificial Intelligence]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/shadow-ai-trust-how-healthcare-can-secure-future-artificial-intelligence</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/shadow-ai-trust-how-healthcare-can-secure-future-artificial-intelligence</guid>
            <pubDate>Tue, 02 Dec 2025 04:13:56 GMT</pubDate>
            <description><![CDATA[Artificial Intelligence (AI) tools like ChatGPT are disrupting industries at an unprecedented pace, and healthcare is no exception. This swift adoption has ushered in challenges, particularly “shadow AI”—the unsanctioned and unmanaged use of AI tools by employees. In healthcare, where trust, privacy, and security are critical, the stakes could not be higher.On a recent episode of We Have TRUST Issues, we had the pleasure of speaking with Nate Couture, CISO of the University of Vermont Health System (UVM Health), who shared his journey addressing shadow AI and enabling secure, meaningful experimentation within his organization. His insights reveal how healthcare can leverage AI responsibly without undermining the trust of employees, patients, or stakeholders.When Shadow AI Takes Organizations by SurpriseAI adoption often starts inconspicuously. Employees hear about tools like ChatGPT and begin experimenting on their own. What starts as simple curiosity quickly grows, becoming shadow AI—unvetted usage of AI tools without oversight or safeguards.For Nate and his team at UVM Health, this discovery was daunting. Using Zscaler logs to track activity, they found that ChatGPT and similar tools were being used thousands of times across various departments. Beyond just marketing teams, clinicians, residents, and IT employees were also entering sensitive healthcare environments to prompt AI tools.This spike in shadow AI activity presented a challenge that would resonate across industries: how do you introduce measured governance when innovation is outpacing your ability to secure it?“Discovering shadow AI in our health system was jarring,” Nate recalled. “Blocking it wasn’t a sustainable solution. 
We had to find a way to enable safe exploration while protecting sensitive data and earning the trust of our employees.”

The Right Approach: Enable, Don’t Block

Faced with shadow AI, many organizations react by blocking access to these tools entirely, leaving employees frustrated and often pushing them to find workarounds. UVM Health took a different route, choosing to foster controlled experimentation.

This decision was deeply strategic. AI in healthcare is poised to revolutionize patient care, but without foundational literacy, employees wouldn’t be equipped to assess and leverage purpose-built AI tools down the road. UVM Health’s approach was to balance innovation with security. Here’s how they succeeded:

- Data Loss Prevention (DLP): The health system implemented controls to catch and block patient data from being entered into public AI platforms like ChatGPT, preventing confidentiality breaches.
- Education campaigns: They developed guidance materials to help employees understand what was appropriate to use AI for (and what wasn’t). Ethical considerations, compliance reminders, and risk warnings were part of this training.
- Proactive prompts: Users engaging with AI tools encountered a splash page reminding them to use AI responsibly, with clear links to more detailed usage guidance.

By enabling safe experimentation, UVM Health avoided stifling curiosity while ensuring sensitive information remained secure. As Nate put it, “We created an environment where employees didn’t have to hide their usage; instead, we equipped them with the guardrails they needed to explore AI responsibly.”

Governance: The Backbone of Trust

Governance was the next logical step in transforming shadow AI into a secure, trusted tool. UVM Health launched an AI Governance Council to evaluate AI tools before adoption, ensuring risks were mitigated while still enabling innovation. Uniquely, this council’s structure focused on collaboration rather than IT-driven mandates.
It includes representatives from:

- Clinical leadership (via the Chief Nursing Officer and Chief Health Information Officer)
- Cybersecurity and IT stakeholders
- Ethics, privacy, and legal experts
- Marketing and communications

“Governance isn’t about saying ‘no.’ It’s about providing the structure organizations need to innovate safely,” Nate explained. “By gathering insights from across departments, we were able to build buy-in for AI tools and ensure every potential risk—security, ethical, or legal—was addressed.”

The governance team also helped foster a culture of trust by being clear about its processes. Breaking down silos between clinical and technical staff ensured that the council wasn’t just another "IT security team," but a resource supporting system-wide innovation.

AI in Practice: Elevating Patient Care

With shadow AI under control, UVM Health has since implemented AI tools that are transforming healthcare delivery. For example:

- Ambient AI for clinical documentation: AI listens in on patient consultations, generating draft notes for doctors to review. This allows clinicians to focus on their patients rather than their keyboards.
- Nursing support through AI: Tools monitor patient conditions, such as fall risks or bedsores, and send alerts to nursing staff, ensuring preventative measures are taken promptly.

These tools show how AI can reduce clinicians’ cognitive workloads, return them to patient-first care, and allow healthcare providers to work more efficiently without sacrificing trust or oversight.

Preparing for What’s Next

While today’s focus in healthcare AI centers on tools that assist humans, Nate underscored the need to prepare for the next phase: agentic AI.
This is where artificial intelligence evolves to make autonomous decisions, something that could bring extraordinary value but also significant risk.

Threats like “prompt injection” (where malicious inputs manipulate an AI system’s behavior and outputs) require proactive solutions, and Nate likened this phase to the early days of the internet. “We don’t want AI to repeat the mistakes of unsecured web applications,” he explained. “AI needs guardrails at this stage to avoid vulnerabilities being exploited down the line.”

Organizations need to treat agentic AI as they would human users: defining access levels, monitoring for misuse, and continuously validating security protocols. UVM Health is focused on ensuring these protections are ready before AI becomes fully autonomous in its workflows.

Additional resources:

- Listen to the full radio show on demand at Healthcare Now Radio.
- Download the eBook Securing Healthcare's AI Revolution.
- Join us for our AI Security innovation launch.]]></description>
            <dc:creator>Tamer Baker (Healthcare CTO)</dc:creator>
        </item>
    </channel>
</rss>