Products and Solutions | Blog Category Feed https://www.zscaler.com/ Zscaler Blog — News and views from the leading voice in cloud security. en Cisco ASA Firewall Breach: What to Do When Security Is a Target https://www.zscaler.com/blogs/product-insights/cisco-asa-firewall-breach-what-do-when-security-target

The Year of the Dragon has seen some notable events so far: a total eclipse, Facebook’s 20th anniversary, and another Taylor Swift streaming record. But 2024 has also become the Year of the Hardware Vulnerability, with multiple VPNs and firewalls suffering zero-day vulnerabilities that bad actors are actively exploiting.

On April 24, Cisco issued a warning that a nation-state-supported threat actor had compromised its Adaptive Security Appliances (ASA). ASA integrates a firewall and VPN with other security features. This campaign, known as ArcaneDoor, involved the exploitation of two zero-day vulnerabilities (CVE-2024-20353, CVE-2024-20359) to target government networks worldwide. The threat actor deployed two backdoors: Line Dancer allowed them to run custom malware in the memory of network appliances, spy on network traffic, and steal data. Line Runner gave them persistent access to target devices, even after reboots or updates. As of this writing, the initial attack vector is unknown. The campaign may also be targeting devices other than the ASA, exploiting other unknown flaws to reach and exploit the Cisco ASA vulnerability.

Another day, another CVE

Cisco’s disclosure and warning about the ArcaneDoor hacking campaign comes at a time when critical CVEs have been identified in Ivanti, SonicWall, Fortinet, Palo Alto Networks, and other Cisco VPN solutions. This recurring pattern highlights a concerning trend: threat actors are specifically targeting security appliances like firewalls and VPNs, exploiting their vulnerabilities to gain access to the very environments those appliances are designed to protect.
These attacks indicate that the issue is not limited to any one vendor. Rather, it is the underlying legacy architecture of the devices that makes them lucrative targets.

Decoding the architectural flaws

The big question on security and network architects’ minds today: why are perimeter-based security and hub-and-spoke network architectures susceptible to attack? Decades ago, firewalls and VPNs were vital parts of an organization’s security. Employees mainly worked in offices, there were no smart lights or smart printers, and sophisticated cyberattacks on employees were more fiction than reality.

Today’s organizations are highly distributed and dynamic. The internet is the corporate network, with users, workloads, and IoT/OT devices connecting from various locations. By design, firewalls and VPNs have public-facing IP addresses on the public internet so authorized users can traverse the web and find entry points into the organization’s environment. This is where the architectural flaw lies: anyone, including threat actors, can discover these entry points. Even more concerning, everything within a traditional network is considered "trusted." This enables threat actors to establish a foothold in the network and move laterally, compromising the entire environment.

How to protect yourself with zero trust security

The best defense against zero-day attacks is to embrace zero trust security. Zero trust architecture is inherently different from traditional architectures that rely on firewalls and VPNs. Based on the principle of least privilege, it minimizes the internal and external attack surface, terminates and fully inspects all connections, and establishes one-to-one connectivity between authenticated users and applications without exposing the enterprise network. An effective zero trust approach drastically reduces the risk of successful exploits as well as the impact of a compromise.
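To make the least-privilege model concrete: in a zero trust architecture, access decisions name a specific identity and a specific application, with context checked on every request, and everything else denied by default. The following is a deliberately simplified, hypothetical sketch of that decision logic (the policy table, identities, and function names are illustrative assumptions, not Zscaler’s implementation):

```python
# Hypothetical least-privilege policy table: each identity maps to the
# specific applications it may reach; there is no "network" to join.
POLICY = {
    "alice@example.com": {"crm", "payroll"},
    "build-server-01": {"artifact-repo"},
}

def broker_connection(identity: str, app: str, context_ok: bool) -> str:
    """One-to-one brokering: connect a named identity to a named app,
    and only when contextual risk checks (device posture, etc.) pass."""
    if context_ok and app in POLICY.get(identity, set()):
        return f"connect {identity} -> {app}"
    return "deny"  # default deny: unknown identity, unlisted app, or bad context

print(broker_connection("alice@example.com", "crm", True))     # connect alice@example.com -> crm
print(broker_connection("alice@example.com", "finance", True)) # deny
print(broker_connection("mallory", "crm", True))               # deny
```

Note the contrast with castle-and-moat: a compromised identity here can at most reach the few applications listed for it, never the network as a whole.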
A cloud native, proxy-based zero trust architecture like the Zscaler Zero Trust Exchange:

Minimizes the attack surface by eliminating firewalls, VPNs, and public-facing IP addresses. It allows no inbound connections and hides applications behind a zero trust cloud.
Stops compromise by inspecting all traffic, including encrypted traffic, at scale. This enables policy enforcement and real-time threat prevention.
Eliminates lateral threat movement by connecting entities to individual IT resources instead of the entire network.
Blocks data loss by enforcing policies across all potential leakage paths, including encrypted traffic. This ensures the protection of data in motion, at rest, and in use.

Best practices to protect against zero-day attacks

The Zscaler ThreatLabz research team recommends these best practices to protect your organization against exploits:

Minimize the attack surface. Make applications (including vulnerable VPNs) invisible to the internet, ensuring that attackers cannot gain initial access.
Prevent initial compromise. Inspect all traffic inline to automatically stop zero-day exploits, malware, and other sophisticated threats.
Enforce least-privileged access. Restrict permissions for users, traffic, systems, and applications with identity and context, ensuring only authorized users can access named resources.
Block unauthorized access. Use strong multifactor authentication (MFA) to validate user access requests.
Eliminate lateral movement. Connect users directly to applications, not the network, to limit the blast radius of a potential incident.
Shut down compromised users and insider threats. Enable inline inspection and monitoring to detect compromised users with access to your network, private apps, and data.
Stop data loss. Inspect data in motion and at rest to prevent active data theft during an attack.
Deploy active defenses. Use deception technology with decoys, and perform daily threat hunting to derail and stop attacks in real time.
Test your security posture. Obtain regular third-party risk assessments and conduct purple team activities to identify and fix gaps in your security. Ask your service providers and technology partners to do the same, and share findings with your security team.

The road ahead

The increased targeting of VPNs and firewalls by threat actors highlights the flaws of traditional perimeter-based architectures. With lucrative gains to be had, these attacks will continue. Organizations must prioritize patching critical vulnerabilities as soon as possible. However, to truly stay ahead of zero-day attacks, adopting zero trust is the most effective approach. A zero trust architecture enables organizations to minimize the attack surface, enforce strict access controls, and continuously monitor and authenticate users and devices. This proactive approach to security will help mitigate zero-day risks and ensure a more robust, resilient defense in the future.

References

https://www.wired.com/story/arcanedoor-cyberspies-hacked-cisco-firewalls-to-access-government-networks/
https://blog.talosintelligence.com/arcanedoor-new-espionage-focused-campaign-found-targeting-perimeter-network-devices/

If you’re concerned about how these vulnerabilities could affect your organization, contact us at [email protected] for a free external attack surface assessment as well as an expert consultation on how you can migrate from legacy architectures to zero trust.

Fri, 26 Apr 2024 10:16:35 -0700 Apoorva Ravikrishnan https://www.zscaler.com/blogs/product-insights/cisco-asa-firewall-breach-what-do-when-security-target

Step Into the Future of ZDX with 3 Exciting New Features: ZDX Copilot, Data Explorer, and Hosted Monitoring! https://www.zscaler.com/blogs/product-insights/step-future-zdx-3-exciting-new-features-zdx-copilot-data-explorer-and-hosted

Many organizations face challenges supporting a distributed workforce, and pressure on IT resources continues to increase.
Zscaler is relentlessly focused on enhancing our Digital Experience platform to empower IT operations and service desk teams to deliver the best end user experiences for their distributed workforces. Not long ago, we announced an ML-based Root Cause Analysis feature that helps IT teams quickly discover the root cause of issues. Using AI and ML for precise issue detection and root cause analysis helps teams swiftly resolve support tickets, reducing mean time to detection (MTTD) and mean time to resolution (MTTR). We’re very excited to introduce three new advancements that will further assist IT teams in improving efficiency, visibility, and collaboration across IT operations, service desk, and security teams.

ZDX Copilot: Revolutionizing IT Operations with AI

ZDX Copilot is an AI-driven virtual assistant designed to simplify and enhance IT operations through an advanced conversational interface. By integrating AI into the core of our network operations tools, ZDX Copilot allows users to interact with their systems using natural language. It taps into the power of pre-trained LLMs to provide a conversational interface and interpret questions, such as “Show me the user experience for Hiren,” within the context of ZDX. Copilot links symptoms (e.g., network drops) to potential causes (Wi-Fi issues, network latency, etc.) and leverages historical data to enable accurate diagnostics. The large language model (LLM) uses input from time-series data such as web probes, CloudPath probes, device events, process stats, and hundreds of other metrics to help with the analysis.

How It Works

The IT admin initiates the conversation with Copilot: "Could you please look at this user and find out the root cause of the bad score?" The LLM then collaboratively examines the data from various angles, asks clarifying questions if needed, and provides its analysis.
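The admin-to-Copilot interaction just described (natural-language query, telemetry retrieval, diagnosis) can be caricatured as a tiny pipeline. This is a hedged, illustrative sketch only: the function names, metric fields, and thresholds are hypothetical, and the `analyze` step stands in for a real LLM call over ZDX telemetry.

```python
# Hypothetical sketch of a Copilot-style pipeline:
# query -> data retrieval -> analysis. All names and data are illustrative.

def retrieve_metrics(user: str) -> dict:
    """Fetch time-series data for the user named in the query.
    Stand-in for real telemetry (web probes, device events, etc.)."""
    return {
        "user": user,
        "zdx_score": 42,         # poor score on a 0-100 scale
        "wifi_signal_dbm": -82,  # weak Wi-Fi signal
        "page_fetch_ms": 4100,   # slow page fetch
    }

def analyze(metrics: dict) -> str:
    """Stand-in for the LLM's diagnosis over the retrieved data."""
    causes = []
    if metrics["wifi_signal_dbm"] < -75:
        causes.append("weak Wi-Fi signal")
    if metrics["page_fetch_ms"] > 3000:
        causes.append("slow page fetch")
    return f"Likely causes for {metrics['user']}: " + ", ".join(causes)

def copilot(query: str) -> str:
    """The admin's natural-language query kicks off the pipeline."""
    user = query.split("for ")[-1].rstrip("?")  # naive entity extraction
    return analyze(retrieve_metrics(user))

print(copilot("Why is the experience poor for John Doe?"))
```

In the real product the heavy lifting (entity extraction, clarifying questions, correlation across hundreds of metrics) is done by the LLM; the sketch only shows the shape of the loop.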
Copilot Workflow

Initiated Query: An IT admin starts the analysis by asking the LLM (in natural language) to analyze a particular data point. For instance, "Why is John Doe having a poor SharePoint experience?"
Automated Data Retrieval: The system fetches the data pertaining to the query (in this case, for the specific user John Doe) and presents it to the LLM.
Analysis by LLM: The LLM processes the data and provides the output/analysis to the ZDX admin.
Drill Down: The admin can further ask the LLM to drill down into specifics. For example, "Tell me more about the network slowness observed.”

Here are a few examples of the types of questions a ZDX admin could ask Copilot:

"Which users are having a poor Wi-Fi experience?"
"Show me John Doe’s CPU utilization"
“Troubleshoot the user experience for Linda Lucas”

Let’s go through an example. I am asking ZDX Copilot to troubleshoot my own user experience: “Troubleshoot Vikas Srivastava’s experience for the last 24 hours.” Since I have two devices in this case, Copilot asks me which device I would like to run the troubleshooting for. After that, it fetches the data from ZDX, provides a detailed analysis of my user experience, and outlines possible causes that could be impacting my ZDX score. From here, I went to the ZDX user details page to search for the issue impacting my Wi-Fi. In doing so, I can validate what Copilot told me about my Wi-Fi and take remedial action.

Now take a step back and consider how little time it took to merely ask Copilot a question in a conversational manner and get to the root cause of the problem. The amount of time ZDX could save your IT and helpdesk teams is substantial. Take a look at this analysis we did on the financial savings it could provide to you.
https://info.zscaler.com/resources-white-papers-calculating-the-financial-value-of-zdx

ZDX Hosted Monitoring: Continuous Network Performance Monitoring

Expanding on our robust monitoring solutions, Zscaler Hosted Monitoring offers a service that operates continuously across multiple vantage points worldwide. This feature is designed to monitor and benchmark network performance, providing a seamless, comprehensive overview of your network's health and activity.

Zscaler Hosted Monitoring

The Zscaler Zero Trust Exchange is distributed across more than 150 data centers on six continents, enabling users to access services securely from any device, any location, and over any network. You can now monitor the performance of your business-critical and customer-facing services 24/7 from these locations. With this continuous monitoring, you can apply:

Application availability monitoring: Ensure that your external applications perform at their best, no matter where your customers are located
Circuit monitoring: Ensure SLAs for applications and services you purchase from SaaS, cloud, data center, or network providers
Performance monitoring: Confidently roll out new applications or expand into new regions as your business grows organically or by M&A

Vantage Point

A geographical location from which monitoring probes originate. At launch, the available vantage point locations include: San Jose, Washington DC, Chicago, Frankfurt, Zurich, and Amsterdam. With more to be added in the future, these strategic locations ensure that Zscaler Hosted Monitoring covers a broad spectrum of the network landscape, offering diverse insights into global network performance.

Getting started with Hosted Monitoring is straightforward

Collections: Under the Configuration section of ZDX Hosted Probes, we have Collections, a grouping mechanism for the probes you configure for the monitored destinations.
For example, you could have a collection dedicated to mission-critical applications, one for HR, another for Finance, and so on.

Hosted Monitoring Configuration

Looking at the Metrics

From the Zscaler Hosted Monitoring dashboard, you can analyze the time-series data collected from various vantage points. You can even select a specific vantage point of interest and see the metrics from that vantage point’s perspective.

Zscaler Hosted Monitoring Dashboard

Once you select a data point on the scatter plot, you get detailed insights such as DNS response time, TCP connect time, SSL handshake time, server response time, time to last byte (TTLB), and page fetch time (PFT), along with the time-series data for the web and CloudPath probes. From the waterfall details below, you can see exactly how time is distributed across the different measured metrics (page fetch time, SSL handshake time, etc.) and easily understand which attribute of the page load is taking the most time.

Zscaler Hosted Monitoring Metrics

Now let's look at the CloudPath data. Below are all the ISP paths detected to the configured destination (on different DNS resolutions). You can see the ISP information and the latency between different hops to quickly pinpoint bottlenecks (highlighted in yellow).

ZDX Data Explorer: Advanced Data Querying and Reporting

ZDX Data Explorer is a sophisticated tool that enables detailed data analysis and reporting. Users can customize queries and generate reports based on various selectable fields such as applications, metrics, and grouping or aggregation preferences. This flexibility supports a detailed examination of data to uncover operational insights and trends.
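Conceptually, a Data Explorer query is a group-and-aggregate over metric records: pick a metric, pick a grouping field, and reduce. A minimal illustrative sketch with hypothetical data (not the actual ZDX query engine; the record fields and values are invented for the example):

```python
from collections import defaultdict

# Hypothetical metric records; in ZDX these come from probe telemetry.
records = [
    {"app": "SharePoint", "department": "Finance", "page_fetch_ms": 4200},
    {"app": "SharePoint", "department": "HR", "page_fetch_ms": 900},
    {"app": "Salesforce", "department": "Finance", "page_fetch_ms": 650},
    {"app": "SharePoint", "department": "Finance", "page_fetch_ms": 3800},
]

def aggregate(rows, group_by, metric):
    """Group records by a field and average the chosen metric."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[group_by]].append(row[metric])
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

# Average page fetch time per application, as a report might show it.
report = aggregate(records, group_by="app", metric="page_fetch_ms")
print({k: round(v, 1) for k, v in report.items()})
# {'SharePoint': 2966.7, 'Salesforce': 650.0}
```

Swapping `group_by="department"` would regroup the same data by department, which is essentially what changing the grouping preference in the UI does.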
While building your query, you can select the applications you are interested in and the specific metrics you would like to report on, such as: ZDX Score, Device Count, User Count, DNS Time, Page Fetch Time, Web Request Availability, Latency, Packet Count, Packet Loss, and Number of Hops. You can group these by Applications, Zscaler Locations, Geolocations, or Departments.

Data Explorer is valuable for engineers and managers alike: Engineers can troubleshoot problems by comparing similar services or applications to expose differences and anomalies across time. Managers and leaders can analyze trends that show how the team has achieved their KPIs, or uncover areas for optimization.

Conclusion

With these new capabilities, your teams can rely on Copilot as their AI assistant to ask and get answers to all your app, network, and device performance questions; use Hosted Monitoring to ensure that no customer or employee suffers from a poor digital experience; and visualize trends with Data Explorer to troubleshoot issues or quantify IT’s contributions to optimizing digital experience, thereby improving business results. To learn more about these innovations, watch our webinar, sign up for a demo, and review the latest features today!

Thu, 25 Apr 2024 04:26:01 -0700 Vikas Srivastava https://www.zscaler.com/blogs/product-insights/step-future-zdx-3-exciting-new-features-zdx-copilot-data-explorer-and-hosted

How the Zscaler SaaS Security and Data Discovery Reports Are Healthcare’s Superheroes https://www.zscaler.com/blogs/product-insights/how-zscaler-saas-security-and-data-discovery-reports-are-healthcare-s

SaaS Security Report: A Not-So-Secret Identity

Imagine a world where the heroes don't wear capes, but wield reports. And not just any reports: we're talking about the Zscaler SaaS Security Report, a document powerful enough to illuminate the darkest corners of healthcare organizations’ IT environments.
Picture shadow IT as a quirky sidekick who means well but always ends up downloading rogue software that promises to “make work easier” or “automatically order pizza on Fridays.” Shadow IT is like the friendly custodian who “helps” organize your supply closet, and then you suddenly can't find your gloves or the superhero bandages for the pediatric patients. They’re there somewhere, hidden behind the paper towel rolls on the top shelf. Enter Zscaler, swinging in to reveal these well-intentioned but potentially hazardous endeavors. By identifying unsanctioned apps and services, the SaaS Security Report helps healthcare organizations wrangle the chaos, securing the network while still allowing for innovation (and maybe the occasional pizza).

Data Discovery Report: The Unsung Hero

On the other side, we have the Data Discovery Report. This is our answer to the eternal question: “Where did I leave that incredibly sensitive patient data?” Think of it as the healthcare organization's memory enhancer, ensuring that no piece of critical information ends up in the wrong hands or, worse, on a USB stick in the washing machine. This report is like an organizational expert for data, categorizing and securing it in ways that would make any mom proud. It tells you exactly where your data lives, breathes, and occasionally goes out for a walk, making sure it's always safe and sound. It's particularly adept at flagging data that's decided to take an unscheduled vacation outside the secure confines of the healthcare network—like when your staff decide “Print to PDF” isn’t working, so they use a free converter they find on the internet. With the Data Discovery Report, you can see exactly who did that and which files they uploaded.

The Dynamic Duo's Adventures

Together, the SaaS Security Report and the Data Discovery Report are the dynamic duo of the healthcare IT world, fighting data breaches and compliance issues with the power of insight and analysis.
They roam the digital corridors of hospitals and clinics, doing their part to keep patient data as secure as the pharmacy.

Episode 1: The Case of the Vanishing Patient Records

In this thrilling adventure, our heroes face mysteriously disappearing patient records. The SaaS Security Report, with its keen eye for detail, finds that a well-meaning staff member has been using an unsanctioned cloud storage service to make their work “more efficient.” Meanwhile, the Data Discovery Report, always the detective, pinpoints exactly which files went on this unauthorized excursion.

Episode 2: The Saga of the Shadowy Software

This time, a shadowy figure has infiltrated the network with software promising to “revolutionize patient care.” Spoiler alert: it created a gaping security hole instead. But fear not! With the help of the SaaS Security Report and Data Discovery Report, IT staff quickly unmask the rogue application.

The Moral of the Story

In the end, the Data Discovery Report and SaaS Security Report don't just increase a healthcare organization's security posture—they do it with flair, bringing a smile to even the most overworked IT professional's face. You can now see exactly which apps your users are using so that you can create policy, or see if someone is taking data they shouldn’t and uploading it to a mysterious source. With these two reports, your IT team can perform even greater feats of heroism. Want to know more? Visit our Healthcare page!

Tue, 16 Apr 2024 13:59:17 -0700 Steven Hajny https://www.zscaler.com/blogs/product-insights/how-zscaler-saas-security-and-data-discovery-reports-are-healthcare-s

Why You Need a Proven Platform for Zero Trust https://www.zscaler.com/blogs/product-insights/why-you-need-proven-platform-zero-trust

Organizations need a proven platform for zero trust. But before we dive into why that is the case, we must first answer two important questions.

What is zero trust?
Zero trust is a distinct architecture that provides secure connectivity based on the principle of least-privileged access. It inherently prevents excessive permissions, giving users and entities access only to the specific IT resources they need in order to do their jobs. On top of that, zero trust means analyzing context to assess risk and determine whether or not to grant access, rather than using identity alone to do so. This is all achieved through a cloud platform that delivers zero trust connectivity as a service at the edge—meaning from as close to the end user as possible. In short, think of a zero trust platform as an intelligent switchboard.

Figure 1: Zero trust architecture with Zscaler

What is zero trust not?

Yesterday’s perimeter-based architectures are built on firewalls and VPNs, which connect users to the networks that house resources rather than connecting them directly to the resources themselves. A commonly used name for such an architecture—castle-and-moat—illustrates the way it is designed to function: establishing a moat (perimeter) around a castle (network) in order to keep bad things out and good things in. However, if a threat makes it past the moat, there’s no second line of defense to prevent it from entering the castle and having free rein to move about within it. In security terms, we call this lateral movement—a threat moving across network resources unrestricted. To read more about lateral threat movement and other shortcomings of perimeter-based architecture, you can read this ebook.

Figure 2: Perimeter-based architecture

Now that we understand zero trust as a distinct, cloud-delivered architecture, let’s return to our original point that organizations need a proven platform for zero trust. Namely, a vendor’s zero trust offering must be proven across the three key areas described below.
Scalability

When all of an organization’s traffic is routed through a zero trust vendor’s cloud for security and connectivity, that cloud platform becomes a mission-critical service that must have the scalability to keep up with customers’ evolving traffic volumes in real time. Without it, organizations’ security and connectivity grind to a halt, taking productivity down with them. Additionally, a lack of scalability means that encrypted traffic typically goes at least partially (and sometimes completely) uninspected. This is because inspecting encrypted traffic is a resource-intensive process that requires a high level of performance. With 95% of web traffic now encrypted—and cybercriminals hiding 86% of their attacks within it—organizations must be able to inspect encrypted traffic at scale if they are to stop threats and data loss.

One may assume that these scalability challenges only arise for larger organizations, but that is untrue. Without a proven zero trust platform that can scale, smaller organizations can also face these challenges, particularly as their businesses grow and their vendors need to ramp up services seamlessly. In other words, organizations of all sizes need a zero trust platform built on a cloud with proven scalability.

Something you may not know about Zscaler is that our name stands for “zenith of scalability.” Since our company was founded, we’ve been committed to delivering unrivaled performance. The Zero Trust Exchange, Zscaler’s zero trust platform, is the world’s largest inline security cloud.
It boasts a variety of statistics and proof points that demonstrate its massive capacity for scale:

150 data centers worldwide (not merely on-ramps or vPoPs)
400 billion requests processed each day
500 trillion telemetry signals analyzed daily
9 billion incidents and policy violations prevented each day
150 million cyberthreats blocked daily
250,000 unique security updates implemented each day

So, when it comes to choosing a zero trust platform, why settle for anything less than the zenith of scalability?

Figure 3: A snapshot of some of Zscaler’s data centers around the world

Resilience

Business continuity planning for mission-critical services is a board-level priority for IT leaders. As mentioned previously, a zero trust platform’s strategic inline position between users, workloads, apps, and more makes it a mission-critical service. As such, organizations need to know that unforeseen events won’t disrupt their vendor’s services; otherwise, security, connectivity, and productivity will all suffer.

Zscaler Resilience is a core component of the Zero Trust Exchange. It is a complete set of resilience capabilities that offers high availability and serviceability at all times. Customer-controlled disaster recovery features and other robust failover options ensure uninterrupted business continuity, even during catastrophic events. Zscaler offers the following capabilities for these scenarios:

For minor failures, such as node crashes or software bugs, Zscaler can effectively handle the issues with minimal customer interaction.
In the event of brownouts or service degradation, Zscaler Resilience offers dynamic, performance-based service edge selection, customer-controlled data center exclusion, and other failover mechanisms to maintain seamless experiences for users.
For blackouts or severe connectivity issues, Zscaler provides failover options to redirect traffic to nearby secondary Zscaler data centers, ensuring that users can continue to access mission-critical applications.
For catastrophic events, Zscaler Resilience provides customer-controlled disaster recovery capabilities, allowing organizations to keep operations running by routing traffic to private service edges and restricting access to critical applications.

Figure 4: Zscaler Resilience functionality

A history of customer success

In addition to scalability and resilience, zero trust platforms must have demonstrated success with actual customers using their services. Organizations need to see the success stories of customers that are similar to them in terms of size, industry, and security and connectivity challenges—only then should they trust their vendor of choice. This is particularly true for bigger organizations, which need evidence that a zero trust platform can handle larger volumes of traffic and more rigorous performance requirements.

At Zscaler, we have a litany of customer success stories available on our website in the form of videos, blogs, case studies, and press releases. Our company has demonstrated success with organizations of all sizes and in all geographies—from small, 100-user organizations like the Commonwealth Grants Commission in Australia, to those with hundreds of thousands of users, like Siemens in Germany, and beyond, to the New York City Department of Education and the 1 million users it secures with the Zero Trust Exchange.
Here are some more facts and figures that demonstrate our customers’ trust and belief in our platform:

Nearly 8,000 customers of all sizes, industries, and geographies
Over 41 million users secured by the Zero Trust Exchange
A Net Promoter Score of more than 70 (the average SaaS company’s is 30)
More than 40% of the Fortune 500 are customers
More than 30% of the Global 2000 are customers

Below are some of our customers across a variety of industries. Each logo is associated with a public-facing customer success story that can be found on our website by typing the customer’s name into the search feature.

Figure 5: A snapshot of some Zscaler customers

Where to go from here

If you are still getting your feet wet with zero trust and would like to listen to an entry-level discussion on the subject, register for our monthly webinar, Start Here: An Introduction to Zero Trust. You may also want to read our ebook, 4 Reasons Firewalls and VPNs Are Exposing Organizations to Breaches. Or, if you would like to learn more about Zscaler Resilience and how the Zero Trust Exchange provides uninterrupted business continuity to customers, read our solution brief.

Tue, 16 Apr 2024 07:30:01 -0700 Jacob Serpa https://www.zscaler.com/blogs/product-insights/why-you-need-proven-platform-zero-trust

Join Zscaler for the Future of Digital Experience Monitoring Event https://www.zscaler.com/blogs/product-insights/join-zscaler-future-digital-experience-monitoring-event

It's time to register for the Future of Digital Experience Monitoring event. Here are a few reasons why you can’t miss it!

See what it takes to keep end users productive no matter the device, network, or application.

Zscaler Digital Experience (ZDX) is built on a foundation that goes beyond siloed monitoring solutions to provide full end-to-end visibility across devices (CPU, memory, Wi-Fi), networks (corporate or public internet), and applications (public or private).
Zscaler Digital Experience end-to-end data path

Find out what’s behind the high-confidence results delivered by advanced machine learning models.

Machine learning models that provide high-confidence results require an immense amount of data and training, which can't be built overnight. ZDX is powered by the industry’s largest inline security cloud, with 500 trillion daily signals feeding high-quality data to sophisticated AI models.

Zscaler Zero Trust Exchange

Learn about key ZDX AI capabilities already making an impact for many service desk and network operations teams.

The Incident Dashboard includes machine learning models to detect issues across last-mile and intermediate ISPs, applications, Wi-Fi, Zscaler data centers, and endpoints, with correlation. This enables network operations to quickly and efficiently find root cause and focus on restoring reliable connectivity.

ZDX Incidents Dashboard

ZDX Self Service empowers users to fix problems that impact their digital experience, if the causes are under their control. A lightweight AI engine runs in Zscaler Client Connector and notifies the user of issues such as poor Wi-Fi or high CPU utilization, then offers ways to resolve the issue, reducing help tickets.

ZDX Self Service Notifications

Automated Root Cause Analysis reduces strain on service desk and operations teams by identifying root causes of issues—such as high CPU usage, Wi-Fi latency spikes on local routers, slow application response times, and more—that would typically require expert IT knowledge and multiple dashboards. Users can get back to work more quickly and with fewer IT tickets, which tend to spike as users increasingly connect from anywhere using various devices, Wi-Fi access points, ISPs, zero trust environments, and applications.
ZDX Automated Root Cause Analysis

Join our upcoming webinar event

As organizations strive to optimize digital experiences and ensure secure access to applications and data, the future of digital experience monitoring lies in leveraging advanced AI capabilities. With consolidated digital experience monitoring integrated into a zero trust architecture, IT teams can resolve issues more quickly to enhance performance, reduce costs, and deliver exceptional user experiences. Join our upcoming webinar to discover how ZDX can transform your organization's digital experience monitoring strategy and drive superior business results. Register now.

Dates and times:
Americas: Thursday, April 25 | 11 a.m. PT
EMEA: Tuesday, April 30 | 10 a.m. BST
APAC: Tuesday, April 30 | 10 a.m. IST

Featured speakers:
Dhawal Sharma, SVP & GM, Product Management, Zscaler
Javier Rodriguez, Sr. Director, Product Management, Zscaler

Wed, 03 Apr 2024 08:03:01 -0700 Rohit Goyal https://www.zscaler.com/blogs/product-insights/join-zscaler-future-digital-experience-monitoring-event

Betrayal in the Cloud: Unmasking Insider Threats and Halting Data Exfiltration from Public Cloud Workloads https://www.zscaler.com/blogs/product-insights/betrayal-cloud-unmasking-insider-threats-and-halting-data-exfiltration

Introduction

In today’s digital world, safeguarding sensitive data, such as source code, is crucial. Insider threats are a worthy adversary, posing significant risk, especially when trusted employees have access to valuable repositories. This article explores how a fictitious software development company could use Zscaler security solutions to stop insider attempts to upload source code. By using Zscaler Workload Communications, the fictitious company detects and prevents unauthorized uploads, ensuring the security of its intellectual property.

Insider Threats in the Cloud and How to Stop Them

A fictitious software development company relies on its source code repository as the lifeblood of its operations.
Trusted employees have access to this repository to facilitate collaboration and innovation. To mitigate the risk of insider threats, the fictitious company implements Zscaler security solutions. Let’s explore how our products thwart an insider’s attempt to upload source code to an unauthorized destination. Attack Chain Use Case Steps Trusted employee access: A trusted employee (insider) has access to the source code repository, enabling them to complete their job responsibilities. A simplified example of source code is shown below: Insider threat incident: The trusted employee with legitimate access decides to misuse their privileges by attempting to upload source code files to an unauthorized destination—an AWS S3 bucket—with the intention of unauthorized sharing: user:~$ aws s3 cp sourcecode.c s3://bucket/uploads/sourcecode.c Figure 1: This diagram depicts how Zscaler blocks insider threats Integration with Zscaler Workload Communications: The fictitious company’s source code repository is configured to route all outbound traffic through Zscaler Workload Communications, ensuring that data transmissions undergo rigorous inspection and security policies are enforced. ZIA DLP engine implementation: ZIA leverages its powerful inline data loss prevention (DLP) engine to analyze data traffic in real time. ZIA’s DLP policies are designed to identify and prevent unauthorized attempts to upload source code files to external storage spaces. An example of DLP configuration options is shown below. Figure 2: An example of DLP configuration options Detection and prevention of file upload attempts: As an insider attempts to upload source code files to the unauthorized AWS S3 bucket, ZIA’s DLP engine detects it as a violation of security policies. Leveraging advanced pattern recognition and behavior analysis, ZIA blocks the upload attempt in real time, preventing the exfiltration of company data. 
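To make the detection step concrete, the sketch below shows content-based pattern matching of the kind a DLP policy relies on. The patterns are hypothetical and vastly simpler than ZIA's actual DLP engine; they only illustrate the idea of classifying an outbound payload as source code:

```python
import re

# Illustrative patterns a DLP policy might use to flag source code in an
# outbound payload (hypothetical; a real DLP engine uses far richer classifiers).
SOURCE_CODE_PATTERNS = [
    re.compile(r"#include\s+<\w+\.h>"),       # C/C++ header include
    re.compile(r"\bint\s+main\s*\("),         # C-style entry point
    re.compile(r"\bdef\s+\w+\s*\(.*\)\s*:"),  # Python function definition
]

def violates_dlp_policy(payload: str) -> bool:
    """Return True if the outbound payload looks like source code."""
    return any(p.search(payload) for p in SOURCE_CODE_PATTERNS)

# The insider's upload from the walkthrough would be flagged:
snippet = "#include <stdio.h>\nint main(void) { return 0; }"
print(violates_dlp_policy(snippet))                  # True
print(violates_dlp_policy("quarterly sales notes"))  # False
```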
The figure below shows the source code file upload attempt failing in real time. Figure 3: The source code file upload command receives an error when executed The upload attempt, which was in violation of company policy, appears in descriptive log records, as shown below. Figure 4: A log showing the failed source code file upload, along with important details like user, location, and destination Alerting and response: The Zscaler security platform generates immediate alerts upon detecting the unauthorized upload attempt. How Zscaler Can Help Zscaler’s security products offer effective solutions against insider threats aimed at source code repositories: Outbound Data Violation Trigger By routing through Zscaler’s Cloud Connector, organizations can enforce security policies on all outbound data transmissions, including those from source code repositories. This integration ensures that every upload attempt undergoes security checks, regardless of the destination. Data Breach Prevention Zscaler Internet Access (ZIA) features a powerful data loss prevention (DLP) engine that analyzes data in real time. Leveraging advanced DLP policies, ZIA can detect patterns indicative of unauthorized source code uploads. This approach enables organizations to prevent data breaches before they occur. Instant Alerts The Zscaler platform provides real-time monitoring of all network activity, including access to source code repositories. Any suspicious behavior, such as attempts to upload source code to unauthorized destinations, triggers immediate alerts. This allows security teams to respond promptly and prevent potential data exfiltration. Conclusion With cybersecurity threats on the rise, organizations must combat insider risks effectively. Zscaler solutions offer proactive measures against insider threats, as demonstrated by the hypothetical use case outlined above. 
By implementing robust DLP policies and real-time monitoring, organizations can protect their critical data from unauthorized access and maintain data integrity. The Zscaler platform equips organizations to tackle insider threats confidently, securing their digital assets effectively. Tue, 02 Apr 2024 13:31:17 -0700 Sakthi Chandra https://www.zscaler.com/blogs/product-insights/betrayal-cloud-unmasking-insider-threats-and-halting-data-exfiltration Exposing the Dark Side of Public Clouds - Combating Malicious Attacks on Workloads https://www.zscaler.com/blogs/product-insights/exposing-dark-side-public-clouds-combating-malicious-attacks-workloads Introduction This article compares the cybersecurity strategies of a company that does not use Zscaler solutions with one that has implemented Zscaler's offerings. By exploring two different scenarios, we will highlight the advantages of Zscaler zero trust for workload communications and its specific use of data loss prevention. Threat Propagation Without Zscaler Integration Lateral Movement Between Workloads In the following scenario, you’ll see that without Zscaler’s integration, the organization is unable to detect or prevent threats effectively. This allows attackers to move laterally and exfiltrate data undetected, leading to significant security risks. Workload 1 in Azure West sends an HTTP GET request to GitHub for a patch update: Workload 1, deployed in Azure West, initiates an outbound connection to GitHub to fetch a required patch update. This HTTP GET request is sent to GitHub to download the patch: An HTTP response containing malware from GitHub: Unbeknownst to the organization, the HTTP response received from GitHub contains embedded malware. Attacker’s lateral movement to Workload 2: By exploiting the malware present in the HTTP response, an attacker gains access to Workload 1 and subsequently moves laterally to Workload 2 within the Azure West environment. 
From here, the attacker exploits vulnerabilities or misconfigurations in Workload 2 to gain a network foothold and establish persistence, furthering their malicious objectives. Data Exfiltration to a command-and-control (C2) server: With access to Workload 2, the attacker exfiltrates sensitive data from the organization’s environment to a remote C2 server. Threat Containment with Zscaler Integration In the following scenario, Zscaler’s integrated security platform provides comprehensive protection against various stages of the attack life cycle. Organizations can use Zscaler Internet Access (ZIA), coupled with Zscaler Data Loss Prevention (DLP) and Zscaler Workload Communications, to implement: Strict access controls Malware detection and prevention measures Workload segmentation Enhanced outbound security measures to GitHub (internet): With Zscaler integrated into the organization’s infrastructure, outbound traffic from Workload 1 to GitHub is subjected to stringent access control policies. Only approved URIs are permitted, which ensures communications are limited to trusted destinations. Any attempt to access unauthorized URIs is blocked. Malware detection and prevention: Zscaler’s security layers, including content inspection and advanced cloud sandbox features, intercept and inspect the HTTP response from GitHub in real time. Upon detecting malware, Zscaler halts transmission, preventing Workload 1 from being compromised. Workload segmentation to prevent lateral movement: Zscaler enforces strict segmentation policies, ensuring that Workload 1 and Workload 2, which are deployed across two different regions, are treated as private applications with no direct communication allowed between them. Such segmentation effectively isolates these workloads, preventing any lateral threat movement between them. 
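The "only approved URIs are permitted" control can be illustrated with a minimal allowlist check. The host names and the policy model here are hypothetical, intended only to show the access-control idea, not Zscaler's actual configuration:

```python
from urllib.parse import urlparse

# Hypothetical egress policy for Workload 1: only these destinations
# are approved (illustrative, not Zscaler's actual policy model).
APPROVED_HOSTS = {"github.com", "objects.githubusercontent.com"}

def egress_allowed(url: str) -> bool:
    """Permit outbound requests only to approved destinations."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_HOSTS

print(egress_allowed("https://github.com/vendor/patch-1.2.tar.gz"))  # True
print(egress_allowed("https://attacker-c2.example.net/exfil"))       # False
```

With such a policy in place, the exfiltration step to the attacker's C2 server described above would be denied at the egress point.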
Egress traffic security from Workload 2 with advanced data protection: Egress traffic from Workload 2 is safeguarded using ZIA’s advanced data protection capabilities. Zscaler ensures that sensitive data is not exfiltrated from the organization's environment. By enforcing DLP policies, Zscaler prevents unauthorized data transfers. Conclusion The deployment of Zscaler’s solutions significantly enhances the organization’s ability to combat cyberthreats and safeguard public cloud workloads. Without Zscaler, companies face unmonitored outbound traffic, susceptibility to malware infiltration, and the risk of lateral movement and data exfiltration. With Zscaler zero trust for workloads, organizations enjoy comprehensive protection, including access control policies, malware detection and prevention, segmentation to prevent lateral movement, and advanced data protection measures. Implementing Zscaler solutions enables organizations to bolster their cybersecurity defenses, mitigate risks, and protect their intellectual property from evolving threats in an interconnected digital environment. Tue, 02 Apr 2024 19:14:07 -0700 Sakthi Chandra https://www.zscaler.com/blogs/product-insights/exposing-dark-side-public-clouds-combating-malicious-attacks-workloads The Best Medicine for Healthcare Data Is Integrated DLP https://www.zscaler.com/blogs/product-insights/best-medicine-healthcare-data-integrated-dlp You could argue that the challenges of securing medical data are more imposing than those of securing any other form of data. Electronic health records (EHR) are often transferred and shared between providers on a regular basis, and these records contain personal, in-depth patient data. These transfers put protected health information (PHI) at high risk as it moves from location to location. 
Additionally, the stringent regulations and compliance requirements for PHI force providers to learn how to construct the best data protection strategy for their needs—although this has been a necessary evil for some time now. To this end, our friends within the Health Information Management Working Group at CSA have put together a great discussion on the task of securing patient data and the development of best practices. For providers looking for guidance from an expert who has made the data protection journey, this content can be extremely valuable: Cloud Security Alliance Working Group: Health Information Management Research Publication: Data Loss Prevention in Healthcare One of the main topics of this publication is the architecture from which you should deliver data loss prevention (DLP) and data protection. While it’s important to understand best practices on how to implement data protection in the healthcare industry, it’s also valuable to know what the right architecture for a unified data protection platform should look like. With that, let's read a few paragraphs on how Gartner defines Security Service Edge and how it can help providers deliver better protection for data in motion and at rest. Securing Data in Motion In the medical and health industries, protecting sensitive data during transit is crucial. With the increasing reliance on digital platforms and the internet, organizations often face the challenge of safeguarding data over untrusted networks. The core building block for securing this sensitive data is DLP. Inline DLP combined with SSL inspection enables sensitive data in transit to be identified and classified. This ensures that data leaks to the internet or via email are prevented, maintaining the confidentiality of patient information. To this end, inline visibility into cloud apps such as electronic health record systems is also essential. 
By leveraging inline CASB technology, organizations can detect shadow IT and block risky apps, ensuring data security without hindering the use of critical cloud applications. In the healthcare industry, the use of personal devices by medical professionals and contractors poses a unique challenge. Implementing browser isolation technology allows for seamless data access on personal devices without compromising security. By hosting browser sessions in a secure cloud environment, sensitive data remains protected, even on unmanaged devices. Better yet, users get the specialized power of a purpose-built enterprise browser, only when needed, without having to change which browser they use. Perhaps the biggest benefit of SSE is that all of these unique features are integrated into a centralized, cloud-delivered platform. When hosted via the cloud, DLP is not only easier to deploy, but also more accurate in detection. Rather than dealing with multiple policies that could trigger differently and at different times, SSE gives you a singular view across your landscape, so decisions can be made on a holistic basis. Securing Data at Rest in the Medical Industry When it comes to securing medical data at rest, it’s worth learning and remembering a few key capabilities that have helped healthcare organizations do so with greater ease: SaaS Data Security lets you prioritize securing sensitive data in SaaS platforms, as it can be easily shared in risky ways. To prevent this, providers often consider adding CASB to their data protection strategy. By using a CASB that applies the same DLP policy to data at rest as to data in motion, you can reduce alert fatigue and streamline response times. Since DLP engines will trigger the same way for data inline and at rest in SaaS, visibility becomes consistent across channels. This is one of the main advantages of standardizing on a Security Service Edge architecture. 
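As an illustration of how an inline engine identifies and classifies PHI in transit, here is a minimal sketch. The patterns are hypothetical; a production DLP engine uses much richer dictionaries, validators, and exact-data-match techniques:

```python
import re

# Illustrative PHI detectors an inline DLP policy might apply to decrypted
# traffic (hypothetical patterns, not a production classifier).
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def classify_phi(text: str) -> list:
    """Return the PHI categories found in a chunk of in-transit data."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]

record = "Patient MRN: 00482913, SSN 123-45-6789, discharge summary attached"
print(classify_phi(record))  # ['ssn', 'mrn']
```

Because the same detectors can be applied to traffic inline and to files at rest in SaaS, alerts trigger consistently across channels, which is the consolidation benefit described above.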
SaaS Security Posture Management (SSPM) helps to identify and address misconfigurations in SaaS platforms, such as enabling multifactor authentication and closing risky open shares. Look for SSPM platforms that align with compliance frameworks like NIST or HIPAA to establish and maintain the required security posture. SaaS Supply Chain Security helps address the risks associated with third-party applications that may connect into your SaaS platforms. You can scan SaaS platforms for risky connections from third-party applications that may have known vulnerabilities or allow unauthorized access to sensitive medical data. You’ll then get guidance on how to revoke these connections to ensure data hygiene and maintain a strong posture overall. Endpoint DLP protects sensitive data stored on endpoints such as removable media or employee devices. Implement endpoint DLP with a unified agent that works alongside an SSE platform and enforces a unified DLP policy through inline inspection. This helps prevent data leaks and ensures the security of patient information. A word on Zscaler and shared workstation security: Securing data on shared workstations can be a challenge, as implementing and managing user-level policy controls across multiple logins on a single device is often difficult. Zscaler integrates with the Imprivata Digital Identity platform, allowing providers to easily support these multi-user workstation environments. Clinicians can easily and securely authenticate in and out of devices and only access applications for which they’ve been authorized. Bringing It All Together Unifying data protection into one platform is extremely powerful and can drastically simplify how you secure data. When delivered from an always-on cloud, you get one single DLP policy that follows users everywhere, as well as consistent alerting, no matter where data is located. 
It’s helpful to gain a variety of perspectives on how to secure data, especially when it comes to a task as tricky as protecting medical data. While there are a multitude of different approaches to this task, understanding best practices can make all the difference for providers looking to begin their journey. All of this said, building the right architecture is equally important. If you’re interested in learning more about Security Service Edge and how Zscaler can help you secure your patient data, we’re here to chat or show you a demo. Photo Credit: Image by https://www.freepik.com/free-photo/medical-banner-with-doctor-working-laptop_30555907.htm Tue, 26 Mar 2024 05:33:13 -0700 Tamer Baker https://www.zscaler.com/blogs/product-insights/best-medicine-healthcare-data-integrated-dlp Protecting Identity Becomes Pivotal in Stopping Cyberattacks https://www.zscaler.com/blogs/product-insights/protecting-identity-becomes-pivotal-stopping-cyberattacks As today’s workplace transforms, data is no longer centralized; it is spread across clouds, increasing the attack surface. Attackers are constantly looking for vulnerabilities to exploit and searching for the Achilles’ heel in identity systems that could grant them entry into your IT environment. Cyber actors are now using sophisticated methods to target identity and access management infrastructure, and credential misuse is the most common attack method. According to Gartner, “Modern attacks have shown that identity hygiene is not enough to prevent breaches. Multifactor authentication and entitlement management can be circumvented, and they lack mechanisms for detection and response if something goes wrong.” Prioritize securing identity infrastructure with tools to monitor identity attack techniques, protect identity and access controls, detect when attacks are occurring, and enable fast remediation. 
Zscaler ITDR detects credential theft and privilege misuse, attacks on Active Directory, and risky entitlements that create attack paths. With identity-based attacks on the rise, today’s businesses require the ability to detect when attackers exploit, misuse, or steal enterprise identities. Identifying and detecting identity-based threats is now crucial due to attackers' propensity for using credentials and Active Directory (AD) exploitation techniques for privilege escalation and lateral movement across your environment. Zscaler ITDR helps you thwart identity-based AD attacks in real time and gain actionable insight into gaps in your identity attack surface. The solution continuously monitors identities, provides visibility into misconfigurations and risky permissions, and detects identity-based attacks such as credential theft, multifactor authentication bypass, and privilege escalation. Gain Full Visibility Uncover blind spots and understand hidden vulnerabilities that leave your environment susceptible to identity-based attacks, such as exposed surfaces, dormant credentials, and policy violations. Real-Time Identity Threat Detection and Response Zscaler Identity Protection uses identity threat detections and decoys that raise high-fidelity alerts, helping your security teams swiftly remediate with targeted responses. The same endpoint agent that runs deception also detects identity attacks on the endpoint. These include advanced attacks like DCSync, DCShadow, LDAP enumeration, session enumeration, Kerberoast attacks, and more. Reduce Identity Risk With deep visibility into identity context, Zscaler Identity Protection helps your security teams identify, address, and purge compromised systems and exposed credentials quickly. Often, security teams struggle to collect the context and correlations needed to investigate threats. 
Zscaler ITDR solves this problem by consolidating all risk signals, threats detected, failed posture checks, Okta metadata, and policy blocks (ZIA/ZPA) into a single view for each identity. You can now quickly investigate risky identities for indicators of compromise and potential exploitation. Prevent Credential Misuse/Theft Attackers use stolen credentials and attack Active Directory to escalate privileges and move laterally. Zscaler Identity Protection helps detect credential exploits and prevent credential theft or misuse. Spot Lateral Movement Stop attackers who have gotten past perimeter-based defenses and are attempting to move laterally through your environment. Zscaler ITDR enhances security by identifying misconfigurations and credential exposures that create attack paths attackers can use for lateral movement. Zscaler ITDR: Beyond just prevention – Monitor, detect, & respond to identity threats Monitor: Identity systems are in constant flux with configuration and permissions changes. Get alerts when configuration changes introduce new risks. Organizations lack visibility into credential sprawl across their endpoint footprint, leaving them vulnerable to attackers who exploit these credentials to access sensitive data and apps. Zscaler ITDR addresses this by auditing all endpoints to identify credentials and other sensitive material in sources such as files, the registry, memory, caches, configuration files, credential managers, and browsers. This visibility into endpoint credential exposure helps identify lateral movement paths, enforce policies, and clean up credentials to reduce the internal attack surface. Detect: ITDR automatically surfaces hidden risks that might otherwise slip through the cracks. Zscaler ITDR pulls together all risk signals, threats detected, posture checks failed, metadata from Okta, and policy blocks from ZIA/ZPA into a single unified view to provide a complete picture of risk for an identity. 
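The consolidation idea (folding risk signals from several sources into a single per-identity view) can be sketched as follows; the source and field names are hypothetical illustrations, not the actual ITDR schema:

```python
from collections import defaultdict

# Sketch of per-identity risk consolidation: group findings from multiple
# sources under one identity (field names are hypothetical, not the real schema).
def consolidate(signals: list) -> dict:
    view = defaultdict(list)
    for s in signals:
        view[s["identity"]].append((s["source"], s["finding"]))
    return dict(view)

signals = [
    {"identity": "alice", "source": "endpoint", "finding": "cached domain-admin credential"},
    {"identity": "alice", "source": "okta", "finding": "MFA factor reset"},
    {"identity": "bob", "source": "zia", "finding": "policy block: file upload"},
]
for identity, findings in consolidate(signals).items():
    print(identity, findings)
```

A single view like this is what lets an analyst investigate one risky identity without pivoting across separate consoles.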
This helps to identify and detect unmanaged identities, misconfigured settings, and even credential misuse. Respond: When ITDR spots attacks targeting your identity store, you can take immediate action. Restrict or terminate the identities causing trouble and shut down threats before they have a chance to wreak havoc. Zscaler ITDR Benefits Minimize the Attack Surface Reduce the attack surface by gaining continuous visibility into attack vectors and identity misconfigurations. Stop adversarial advances—including ransomware attacks—in their tracks with traps set to identify them. Real-Time Identity Threat Detection Thwart sophisticated attacks on Active Directory using identity threat detections on endpoints. Accelerate Incident Response Built-in threat detection and response speeds up detection and expands coverage to significantly reduce mean time to respond (MTTR). ITDR helps security teams drive down their mean time to respond and prioritize what matters most through risk scoring. Conclusion No matter what, breaches are inevitable, and preventive security measures alone aren’t sufficient to thwart them. Though staying upbeat while fighting cyberthreats, shrinking budgets, and staff turnover is a tall task, how we respond today dictates how we perform tomorrow. Choosing and adopting identity protection solutions like ITDR helps your company evolve its zero trust security and compliance posture in response to the changing threat landscape. Zscaler ITDR strengthens your zero trust posture by mitigating the risks of user compromise and privilege exploitation. Fri, 22 Mar 2024 02:39:16 -0700 Nagesh Swamy https://www.zscaler.com/blogs/product-insights/protecting-identity-becomes-pivotal-stopping-cyberattacks Eliminate Risky Attack Surfaces https://www.zscaler.com/blogs/product-insights/eliminate-risky-attack-surfaces Many moons ago, when the world wide web was young and the nerd in me was strong, I remember building a PC and setting it up as a web server. 
In those exciting, pioneering days, it was quite something to be able to have my very own IP address on the internet and serve my own web pages directly from my Apache server to the world. Great fun. I also remember looking at the server logs in horror as I scrolled through pages upon pages of failed login, and presumably hacking, attempts. I’d buttoned things up pretty nicely from a security standpoint, but even so, it would only have taken a vulnerability in an unpatched piece of software for a breach to occur, and from there, all bets would have been off. Even today, many internet service providers will let you provision your own server, should you feel brave enough. Of course, the stakes were not high for me at home, but knowing what we know now about the growth of ransomware attacks and how AI is facilitating them, no organization would dare do such a thing in 2024. Back then, I’d created an obvious and open attack surface. Tools were (and still are) readily available to scan IP address ranges on any network and identify open ports. In my case, ports 22, 80, and 443 were open to serve web pages and enable me to administer my server remotely. Every open port is a potential conduit into the heart of the hosting device, and so these should be eliminated where possible. Open ports, VPNs, and business Since online remote working became a real possibility in the early 2000s, organizations have tried to protect themselves and their employees by adopting VPN technology to encrypt traffic between a remote device and a VPN concentrator at the head office, allowing employees access to services like email, file, and print servers. Even when these services became cloud-based solutions like Gmail and Dropbox, many organizations pulled that traffic across a VPN to apply IT access policies. Not only did this often lead to an inefficient path from a remote worker to their applications, it also presented a serious security risk. 
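The kind of port discovery described above takes only a few lines of code, which is exactly why exposed services are found so quickly. Here is a minimal TCP connect probe (run it only against hosts you own or are authorized to test):

```python
import socket

# Minimal TCP connect probe of the kind widely available scanners perform.
# Only scan hosts you own or are explicitly authorized to test.
def open_ports(host: str, ports: list, timeout: float = 1.0) -> list:
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# The ports from the story above: SSH, HTTP, HTTPS
print(open_ports("127.0.0.1", [22, 80, 443]))
```

Anything this probe reports as open is a public entry point an attacker can start working on, which is the core argument for eliminating exposed ports altogether.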
As the performance and dependability of the internet grew, we also saw the advent of site-to-site VPNs, which made for an attractive alternative to the far more expensive circuit-based connections, such as MPLS, that had been so prevalent. A vast number of organizations continue to rely on a virtual wide area network (WAN) built on top of VPNs. Unfortunately, as the old saying goes, there’s no such thing as a free lunch. Every VPN client or site using the internet as its backbone needs an IP address to connect to, an open port to connect through, and, well, you can see where this is going. Not every VPN solution has an active flaw, just as—luckily—my Apache server didn’t at the time I was running it. That said, software is fallible, and history has demonstrated this fact in numerous instances in which vulnerabilities are discovered and exploited in VPN products. Just last month, a fatal flaw was discovered in Ivanti’s VPN services, leaving thousands of users and organizations open to attack. Hackers are scouring the internet day and night for vulnerabilities like these to exploit—and AI is only making their lives easier. “Without proper configuration, patch management, and hardening, VPNs are vulnerable to attack.” From Securing IPsec Virtual Private Networks by the National Security Agency (NSA) Zscaler is different The Zscaler Zero Trust Exchange™ works in a fundamentally different way—no VPN is required to securely connect. Instead, connections via the internet (or even from within a managed network) are policed on multiple levels. An agent on your device creates a TLS tunnel to the Zscaler cloud, which accepts connections only from known tenants (i.e., Zscaler customers). This tunnel is mutually authenticated and encrypted between the agent and the Zscaler cloud. The individual and their device(s) must additionally be identified as part of the process. In short, it’s not possible to simply make a TLS connection to Zscaler. 
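The mutual authentication idea can be sketched with Python's ssl module: a TLS endpoint that requires client certificates rejects anonymous handshakes outright. This is illustrative only; Zscaler's enrollment and tunnel protocol are more involved, and the commented file paths are hypothetical:

```python
import ssl

# A TLS endpoint that insists on a client certificate: no cert, no connection.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED

# The client presents its device identity and verifies the service in turn.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# client_ctx.load_cert_chain("device.pem", "device.key")    # hypothetical paths
# client_ctx.load_verify_locations(cafile="cloud-ca.pem")   # hypothetical CA

print(server_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Contrast this with an exposed VPN port, where any host on the internet can at least begin a handshake and probe the listening software for flaws.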
Once an approved user from a known customer with a recognized device connects to Zscaler, they’re still prevented from moving laterally over the network, unlike with VPNs. With Zscaler, there is no IP range to which the user has access. Instead, every connection attempt has to be authorized, following the principles of zero trust. A user has access only to the applications for which they’ve been authorized. With this framework, even if an organization were to be successfully attacked, the blast radius would be limited. The same cannot be said for network-based security. Here’s the bottom line: VPNs and the firewalls behind them served us well for a long time, but the challenges that come with maintaining a security posture built on these legacy technologies are so great that it’s now a material business risk to use them. You need only turn on the news for a few minutes to be reminded of this. Networks were built fundamentally to enable connectivity, and adding security to these networks is an uphill battle of putting the right obstacles in the way of that connectivity. This is why more and more public bodies and private organizations are turning this idea on its head and embracing a zero trust architecture that provides access only for an approved entity, on an approved device, to the applications to which they are entitled. At Zscaler, we have built tools to help you assess the potential risk your own organization faces, some of which are free to access. Test your own defenses by visiting https://www.zscaler.com/tools/security-assessment and when you’re ready to learn more, get in touch! Tue, 02 Apr 2024 01:00:01 -0700 Simon Tompson https://www.zscaler.com/blogs/product-insights/eliminate-risky-attack-surfaces Break Free from Appliance-Based Secure Web Gateway (SWG) https://www.zscaler.com/blogs/product-insights/break-free-appliance-based-secure-web-gateway-swg The way we work today is vastly different from a few years ago. 
McKinsey & Company’s State of Organization 2023 report identified that before the COVID-19 pandemic, most organizations expected employees to spend more than 80% of their time in the office. But as of 2023, says the report, 90% of employees have embraced hybrid models, allowing them to work from home or other locations some (if not most) of the time. On a similar note, applications previously hosted in on-premises data centers are increasingly moving to the cloud. Gartner predicted that SaaS application spending would grow 17.9% to total $197 billion in 2023. With employees and apps both migrating off-premises, security controls logically must do the same. It’s no exaggeration to state that cloud and mobility have broken the legacy way of approaching security—so why should the castle-and-moat security approach, heavily reliant on hardware such as appliance-based proxies/SWGs, still exist? Users need fast, reliable, secure connectivity to the internet and cloud apps, with the flexibility to connect and work from anywhere. However, traditional SWGs have certain limitations, leading to security challenges, poor user experience, constant maintenance, and scalability issues. Let’s take a look at why it’s time to break free from appliance-based SWG. Security challenges In December 2013, the Google Transparency Report showed just 48% of World Wide Web traffic was encrypted. Today, the same report shows at least 95% of traffic is encrypted. So, it’s no surprise that the Zscaler ThreatLabz 2023 State of Encrypted Attacks report showed 85.9% of threats—malware payloads, phishing scams, ad spyware sites, sensitive data leaks, and more—are now delivered over encrypted channels. While most organizations have some form of protection against malware, attackers are evolving their techniques, creating new variants able to bypass reputation-based detection technologies. 
As threat actors increasingly rely on encrypted channels, it’s more crucial than ever to inspect 100% of TLS/SSL traffic. This is the biggest way appliance-based proxies weigh down organizations: most SWG appliances lack the capacity to perform 100% inspection. Our 2023 State of Encrypted Attacks report surveyed 284 IT, security, and networking professionals and found that they mainly use legacy tools like web application firewalls and network-layer firewalls to scan traffic. However, respondents agreed that complexity, cost, and performance degradation are the biggest barriers to inspecting all TLS/SSL traffic. Furthermore, certain regulations require different policies for distinct data types, making inspection an arduous task. Poor user experience Compared to only a few years ago, the meaning of “fast” is very different for today’s internet users. Instant access and connectivity have become the norm at home. Employees juxtapose the great digital experiences in their personal lives with the poor connectivity and performance issues that plague their digital work lives. Appliance-based SWGs are among the main culprits of poor user experience because they can’t scale quickly to handle traffic surges, and they require traffic to be backhauled to a central data center, leading to high latency and lost productivity for users trying to access the internet or SaaS applications. And all this inevitably affects revenue. Maintenance and scalability issues Apart from complexity and tedious management, other challenges of appliance-based SWGs are maintenance and scalability issues. To account for traffic surges and future growth, security teams are forced to overprovision, leading to expensive appliances sitting unused. At other times, they may need to wait multiple months for appliances/upgrades to arrive. With appliance-based SWG, security teams are always spread too thin, having to constantly update SWGs to account for changes to the organization and/or the threat landscape. 
The Zscaler difference

Overcome the limitations of appliance-based SWG with Zscaler.

Better security: Inspect 100% of TLS/SSL traffic to find and stop threats—86% of which are delivered over encrypted channels.
Better user experience: Stop backhauling internet/SaaS traffic with AI-powered Zscaler SWG, delivered from 150+ points of presence worldwide, close to your users and their cloud destinations for lower latency.
No hardware to maintain: Move to a cloud native proxy architecture and eliminate the hardware headaches of maintenance, updates, patches, and upgrades.
Platform approach: Extend comprehensive security functions, such as cloud firewall, sandbox, CASB, and data loss prevention, as well as end-to-end experience monitoring from a single unified platform and agent.

If you’d like to know more about the reasons to break free from appliance-based proxies, check out this on-demand webinar. Wed, 20 Mar 2024 07:04:23 -0700 Apoorva Ravikrishnan https://www.zscaler.com/blogs/product-insights/break-free-appliance-based-secure-web-gateway-swg 2024 Zscaler Public Sector Summit in Washington DC https://www.zscaler.com/blogs/product-insights/2024-zscaler-public-sector-summit-washington-dc In March 2023 Zscaler held its inaugural Public Sector Summit, bringing together over 500 government and industry leaders to separate zero trust fact from fiction. The exchange last year was enlightening and energizing! We captured highlights from the event in an eBook, The Power of Zero Trust, including the challenges agencies are facing, some of our best practices for developing a robust zero trust architecture, and a use case demonstrating how zero trust can integrate into every part of your agency’s operations. As we prepare for the 2024 Public Sector Summit on April 4th, I am excited that this year’s lineup will be bigger and even more engaging.
With more than 22 guest speakers from across government, education, and the private sector, the audience will hear about top-of-mind topics and discuss current threats and challenges facing agencies and the supporting community, such as AI, funding zero trust initiatives, safeguarding critical infrastructure, SD-WAN, and much more.

Distinguished Speakers

The power of the public sector community is in the forward-thinking individuals across agencies who have dedicated their careers to transforming our nation securely. We’ve built a program for the day with a stellar lineup of speakers including:

Dr. Kelly Fletcher, CIO, Department of State; Luis Coronado, CIO, State Consular Affairs; and Eric Hysen, CIO/Chief AI Officer, Department of Homeland Security, who will join Zscaler CEO Jay Chaudhry during his keynote
Chris DeRusha, Federal CISO and Deputy National Cyber Director, OMB
A panel on resources to fuel government modernization with Jessie Posilkin, Technical Lead at the Technology Modernization Fund; Maria Roat of MA Consulting; and Eric Mill of GSA
Suneel Cherukuri, CISO, DC Government
Zach Benz, Sr. Mgr for Cyber Operations/DCISO with Sandia National Laboratories, to talk about AI/ML
A panel discussing zero trust implementations with Gerald Caron, CIO, ITA/Commerce; Dan Han, CISO of VCU; Bob Costello, CISO of CISA; and Dr. Gregory Edwards, CISO of FEMA
DoD leaders including Winston Beauchamp, DCISO with the Department of the Air Force, and General Les Call, Director of the Zero Trust portfolio management office
A systems integrator panel with Justin DePalmo, CISO and VP of IT at GDIT, and Bob Ritchie, SVP & CTO at SAIC
Nelson Sims, Cyber Architect, DC Water, and Dustin Glover, Chief Cyber Officer, State of Louisiana, to discuss securing critical infrastructure
From Revolution to Evolution

Our CEO and founder, Jay Chaudhry, will keynote the event, setting the stage with his perspective on the zero trust revolution that began over a decade ago and how it has now surpassed the tipping point in adoption thanks to the dedication of IT leaders across government. He will be joined on stage by Dr. Fletcher, Luis Coronado, and Eric Hysen, followed by many more innovators within the public sector speaking to a number of current cybersecurity issues, including:

Using AI to combat AI-based threats
OMB’s perspective on the state of zero trust
Unlocking resources to continue modernizing
How agency leaders are taking the next steps in their zero trust implementations
New innovations in predictive cybersecurity to identify and resolve vulnerabilities

View the full agenda here to see the range of topics to be addressed during this year’s summit.

Hands-On Zero Trust

In addition to the informative sessions, we will also have hands-on solution stations this year for attendees to dive deeper into areas including:

From Zero Access to Zero Trust in 10 Minutes: A joint solution with our integration partners AWS, Okta, and CrowdStrike
Your Network Transformed: Zero trust for cloud and branch
ThreatLabz: Global internet threat insights from Zscaler's research team
CMMC: Empowered by zero trust
Customer Success Center
Zscaler Digital Experience

We’re excited to welcome the public sector community in person for a full day of learning, networking, and experiences from the most forward-thinking government IT leaders. Register today to learn more about how you can Simplify, Secure, and Transform your agency. Space is limited for this live event, so we’ll be in touch to confirm your invitation. There is no charge for the event.
Tue, 19 Mar 2024 08:03:31 -0700 Peter Amirkhan https://www.zscaler.com/blogs/product-insights/2024-zscaler-public-sector-summit-washington-dc Zscaler Selects Red Hat Enterprise Linux 9 (RHEL 9) as Next-Gen Private Access Operating System https://www.zscaler.com/blogs/product-insights/zscaler-selects-red-hat-enterprise-linux-9-rhel-9-next-gen-private-access

What’s new?

On June 30, CentOS 7 will reach end of life, requiring migrations in many software stacks and server environments. In advance of this, Zscaler has selected Red Hat Enterprise Linux 9 as the next-generation operating system for Zscaler Private Access™ (ZPA). RHEL 9 is the modern enterprise equivalent to CentOS 7, backed by Red Hat, and supported through 2032. This continues ZPA’s proven stability and resiliency on open source Linux platforms and builds on 10 years of maturity on Red Hat Enterprise Linux-based derivatives. What’s more, this transition can be done with no impact to operations or user access.

When will it be released?

Pre-built images for all ZPA-supported platforms are targeted for release in May 2024. All ZPA images, including containers, hypervisors, and public cloud offerings, will be replaced with RHEL 9. This is the recommended deployment for all future App Connector and Private Service Edge components, and customers should begin migration immediately on release. For customers that manage their own Red Hat base images, Zscaler is targeting the end of April 2024 for release of RHEL 9-native Red Hat Package Manager (RPM) packages and repositories.

New Enterprise OS Without Licensing Fees

To ensure an excellent experience for our customers, Zscaler will provide operating system licenses for all RHEL 9 images on supported platforms. This continues our commitment to secure, open source platforms without imposing additional licensing costs on our customers.
We also understand the need for control over security baseline images that meet your security posture and will continue to provide RPM options through support of RHEL 8 and RHEL 9. These software packages are bring-your-own-license (BYOL) and won’t conflict with any existing Red Hat enterprise license agreements you may hold.

CentOS 7 End of Life

The CentOS Project and Red Hat will be ending the final extended support for CentOS 7 and RHEL 7 on June 30, 2024. While we aim to provide RHEL 9 support in advance of this date (and do currently support RHEL 8 with RPMs), we recognize that the transition is a large undertaking affecting all enterprise data centers and operations, and that moving to new operating systems and software will take time. In light of this, we want to provide ample time to migrate while considering the security implications of continuing to support an obsolete operating system. Zscaler will support existing CentOS 7 deployments, RPMs, and distribution servers until December 24, 2024. We are confident our ZPA architecture and design uniquely position us to continue to support CentOS 7 past its expiry date. See End-of-Support for CentOS 7.x, RHEL 7.x, and Oracle Linux 7.x for more details on CentOS EOL and the ZPA white paper for architecture and security design. While we have ample controls in place and the utmost confidence, there is always inherent risk in using an unsupported server operating system. Zscaler will not provide backported operating system patches during this transition, but will maintain the ZPA software and supporting security libraries.

Lightweight and Container Orchestration Ready

Following Zscaler’s cloud-native and best-in-class zero trust approach, ZPA infrastructure components are designed to be lightweight, container ready, and quickly deployed. This allows App Connector and Private Service Edge to be scaled and migrated without concern for previously deployed instances or operating system upgrade paths.
For these reasons, the migration best practice is to deploy new App Connectors and Private Service Edges. Zscaler does not provide direct operating system upgrade paths for currently deployed infrastructure components. In further support of this, we offer Open Container Initiative (OCI) compatible images for Docker CE, Podman, and Red Hat OpenShift Platform. These images, as well as the public cloud marketplace offerings, are fully ready for autoscale groups, supporting quick scale up and scale down.

Migration and Support Excellence

Zscaler understands your concerns and will fully support you throughout this transition process. Our Technical Account Managers, Support Engineers, and Professional Services are ready to address all concerns related to migration. If a temporary increase of App Connector or PSE limits is needed in your environment to complete migration, there will be no extra licensing costs. Below are the steps to help you replace CentOS 7 instances with RHEL 9. The enrollment and provisioning of new App Connectors and Private Service Edges can be automated in a few steps using Terraform (infrastructure as code) or container orchestration to simplify deployment further.

App Connector Migration Steps:

Create new App Connector Groups and provisioning keys for each location (Note: do not reuse existing provisioning keys, as doing so will add the new RHEL 9 App Connectors to the old App Connector Groups. Mixing different host OS and Zscaler software versions in a single group is not supported.)
Update the App Connector Group's version profile to "default - el9" so that it's able to receive the proper binary updates (this version profile can be set as default for the tenant once all connectors are moved to RHEL 9)
Deploy new VMs using the upcoming RHEL 9 OVAs and the newly created provisioning keys (templates can be used)
Add the new App Connector Groups to each respective Server Group
(Optional) In the UI, disable the App Connector Groups five minutes prior to the regional off-hours maintenance window to allow connections to gradually drain down
During regional off-hours, remove the CentOS 7 App Connector Groups

Private Service Edge Migration Steps:

Create new Service Edge Groups and provisioning keys for each location (Note: do not reuse existing provisioning keys, as doing so will add the new RHEL 9 PSEs to the old Service Edge Groups. Mixing different host OS and Zscaler software versions in a single group is not supported.)
Update the Service Edge Group's version profile to "Default - el9" so that it's able to receive the proper binary updates (this version profile can be set as default for the tenant once all connectors and PSEs are moved to RHEL 9)
Deploy new VMs using the upcoming RHEL 9 OVAs and the newly created provisioning keys (templates can be used)
Add trusted networks and enable “publicly accessible” (if applicable) on the new Service Edge Groups
(Optional) In the UI, disable the Service Edge Groups 15 minutes prior to the regional off-hours maintenance window to allow connections to gradually drain down
During regional off-hours, remove trusted networks and disable public access (if applicable) on CentOS 7 Service Edge Groups

Please reach out to your respective support representatives for further assistance and information as needed.
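To make the bookkeeping in the steps above concrete, here is a small Python sketch that builds a per-location migration plan. The naming scheme (e.g., "fra-el9") and the plan fields are illustrative assumptions, not ZPA product requirements; actual provisioning happens in the ZPA admin portal or via the API/Terraform.

```python
# Hypothetical sketch: model the per-location objects the migration steps
# call for (new groups, fresh provisioning keys, the el9 version profile).
# Names like "fra-el9" are illustrative conventions, not ZPA requirements.
import json

def build_migration_plan(locations):
    """One entry per location: never reuse old provisioning keys, and keep
    RHEL 9 and CentOS 7 components in separate groups."""
    return [
        {
            "app_connector_group": f"{loc}-el9",      # new group, RHEL 9 only
            "provisioning_key": f"{loc}-el9-key",     # freshly created key
            "version_profile": "default - el9",       # receives el9 binaries
            "add_to_server_group": f"{loc}-servers",  # attach new group
            "decommission_group": f"{loc}-centos7",   # removed off-hours
        }
        for loc in locations
    ]

plan = build_migration_plan(["fra", "iad"])
print(json.dumps(plan, indent=2))
```

A plan like this can then feed a Terraform variables file or a change ticket, keeping the "new group per location, never reuse keys" rule explicit.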
For more information: Zscaler Private Access Website Zscaler Private Access | Zero Trust Network Access (ZTNA) End-of-Support for CentOS 7.x, RHEL 7.x, and Oracle Linux 7.x ZPA App Connector Software by Platform ZPA Private Service Edge Software by Platform Mon, 18 Mar 2024 15:34:32 -0700 Shefali Chinni https://www.zscaler.com/blogs/product-insights/zscaler-selects-red-hat-enterprise-linux-9-rhel-9-next-gen-private-access Outpace Attackers with AI-Powered Advanced Threat Protection https://www.zscaler.com/blogs/product-insights/outpace-attackers-ai-powered-advanced-threat-protection Securing access to the internet and applications for any user, device, or workload connecting from anywhere in the world means preventing attacks before they start. Zscaler Advanced Threat Protection (ATP) is a suite of AI-powered cyberthreat and data protection services included with all editions of Zscaler Internet Access (ZIA) that provides always-on defense against complex cyberattacks, including malware, phishing campaigns, and more. Leveraging real-time AI risk assessments informed by threat intelligence that Zscaler harvests from more than 500 trillion daily signals, ATP stops advanced phishing, command-and-control (C2) attacks, and other tactics before they can impact your organization. In aggregate, Zscaler operates the largest global security cloud across 150 data centers and blocks more than 9 billion threats per day. Additionally, our platform consumes more than 40 industry threat intelligence feeds for further analysis and threat prevention. 
With ATP you can:

Allow, block, isolate, or alert on web pages based on AI-determined risk scores
Block malicious content, files, botnet, and C2 traffic
Stop phishing, spyware, cryptomining, adware, and webspam
Prevent data loss via IRC or SSH tunneling and C2 traffic
Block cross-site scripting (XSS) and P2P communications to prevent malicious code injection and file downloads

To provide this protection, Zscaler inspects traffic—encrypted or unencrypted—to block attackers’ attempts to compromise your organization. Zscaler ThreatLabz found in 2023 that 86% of threats are now delivered over encrypted channels, underscoring the need to thoroughly inspect all traffic. Enabling protection against these threats takes just a few minutes in ATP in the Zscaler Internet Access management console. This blog will help you better understand the attack tactics ATP prevents on a continuous basis. We recommend you select “Block” for all policy options and set the "Suspicious Content Protection" risk tolerance setting to "Low" in the ATP configuration panel of the ZIA management console.

Prevent web content from compromising your environment

Threat actors routinely embed malicious scripts and applications on legitimate websites they’ve hacked. ATP policy protects your traffic from fraud, unauthorized communication, and other malicious objects and scripts. To bolster your organization's web security, the Zscaler ATP service identifies these objects and prevents them from downloading unwanted files or scripts onto an endpoint device via the user’s browser. Using multidimensional machine learning models, the ZIA service applies inline AI analysis to examine both a web page URL and its domain to create Page Risk and Domain Risk scores. Given the magnitude of Zscaler’s dataset and threat intelligence inputs, risk scoring is not dependent on specific indicators of compromise (IOCs) or patterns.
Using AI/ML to analyze web pages reveals malicious content including injected scripts, vulnerable ActiveX, and zero-pixel iFrames. The Domain Risk score results from analysis of the contextual data of a domain, including hosting country, domain age, and links to high-risk top-level domains. The Page Risk and Domain Risk scores are then combined to produce a single Page Risk score in real time, which is displayed on a sliding scale. This risk score is then evaluated against the Page Risk value you set in the ATP configuration setting. Zscaler will block users from accessing all web pages with a Page Risk score higher than the value you set. You can set the Page Risk value based on your organization’s risk tolerance.

Disrupt automated botnet communication

A botnet is a group of internet-connected devices, each of which runs one or more bots (small programs) that are collectively used for service disruption via distributed denial-of-service (DDoS) attacks, theft of financial or sensitive information, spam campaigns, or brute-forcing of systems. The threat actor controls the botnet using command-and-control software.

Command & Control Servers An attacker uses a C2 server to send instructions to systems compromised by malware and retrieve stolen data from victim devices. Enabling this ATP policy blocks communication to known C2 servers, which is key to preventing attackers from communicating with malicious software deployed on victims’ devices.

Command & Control Traffic This refers to botnet traffic that sends or receives commands to and from unknown servers. The Zscaler service examines the content of requests and responses to unknown servers. Enabling this control in the ATP configuration blocks this traffic.
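The Page Risk threshold behavior described above can be sketched in a few lines of Python. The 60/40 weighting between page and domain scores is purely an illustrative assumption; Zscaler's actual score combination is AI-driven and not publicly documented.

```python
# Illustrative sketch of the scoring logic described above: a page score and
# a domain score (both 0-100) are blended into one Page Risk score and
# compared to the admin-set threshold. The 60/40 weighting is an assumption
# for illustration, not Zscaler's model.

def combined_risk(page_score: int, domain_score: int, page_weight: float = 0.6) -> int:
    """Blend the two scores into a single Page Risk score."""
    return round(page_weight * page_score + (1 - page_weight) * domain_score)

def action(page_score: int, domain_score: int, threshold: int) -> str:
    """Block when the combined score exceeds the configured Page Risk value."""
    return "block" if combined_risk(page_score, domain_score) > threshold else "allow"

print(action(80, 90, threshold=50))  # block
print(action(10, 20, threshold=50))  # allow
```

Lowering the threshold (a stricter risk tolerance) blocks more borderline pages, which mirrors the recommendation earlier to set the risk tolerance to "Low".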
Block malicious downloads and browser exploits Malicious Content & Sites Websites that attempt to download dangerous content to the user's browser upon loading a page introduce considerable risk: this content can be downloaded silently, without the user's knowledge or awareness. Malicious content could include exploit kits, compromised websites, and malicious advertising. Vulnerable ActiveX Controls An ActiveX control is a software program for Internet Explorer, often referred to as an add-on, that performs specific functionality after a web page loads. Threat actors can use ActiveX controls to masquerade as legitimate software when, in reality, they use them to infiltrate an organization’s environment. Browser Exploits Known web browser vulnerabilities can be exploited, including exploits targeting Internet Explorer and Adobe Flash. Despite Adobe sunsetting the browser-based add-on in January 2021, Flash components are still found embedded in systems, some of which may be critical for infrastructure or data center operations. Foil digital fraud and cryptomining attempts AI-Powered Phishing Detection Phishing is becoming harder to stop with new tactics, including phishing kits sold on the black market—these kits enable attackers to spin up phishing campaigns and malicious web pages that can be updated in a matter of hours. Phishing pages trick users into submitting their credentials, which attackers use in turn to compromise victims’ accounts. Phishing attacks remain problematic because even unsophisticated criminals can simply buy kits on the dark web. Threat actors can also update phishing pages more quickly than most security solutions meant to detect and prevent phishing can keep up with. But with Zscaler ATP, you can prevent compromises from patient zero phishing pages inline with advanced AI-based detection. 
Known Phishing Sites Phishing websites mimic legitimate banking and financial sites to fool users into thinking they can safely submit account numbers, passwords, and other personal information, which criminals can then use to steal their money. Enable this policy to prevent users from visiting known phishing sites.

Suspected Phishing Sites Zscaler can inspect a website’s content for indications that it is a phishing site, and then use AI to stop phishing attack vectors. As part of a highly commoditized attack method, phishing pages can have a lifespan of a few hours, yet most phishing URL feeds lag 24 hours behind—that gap can only be addressed by a capability able to stop both new and unknown phishing attacks.

Spyware Callback Adware and spyware sites gather users’ information without their knowledge and sell it to advertisers or criminals. When “Spyware Callback” blocking is enabled, Zscaler ATP prevents spyware from calling home and transmitting sensitive user data such as addresses, dates of birth, and credit card information.

Cryptomining Most organizations block cryptomining traffic to prevent cryptojacking, in which malicious scripts or programs secretly use a device to mine cryptocurrency, consuming resources and degrading the performance of infected machines. Enabling “Block” in ATP’s configuration settings prevents cryptomining from entering your environment via user devices.

Known Adware & Spyware Sites Threat actors stage legitimate-looking websites designed to distribute potentially unwanted applications (PUA). These web requests can be denied based on the reputation of the destination IP or domain name. Choose “Block” in ATP policy configuration to prevent your users from accessing known adware and spyware sites.

Shut down unauthorized communication

Unauthorized communication refers to the tactics and tools attackers use to bypass firewalls and proxies, such as IRC tunneling applications and "anonymizer" websites.
IRC Tunneling The Internet Relay Chat (IRC) protocol was created in 1988 to allow real-time text messaging between internet-connected computers. Primarily used in chat rooms (or “channels”), the IRC protocol also supports data transfer as well as server- and client-side commands. While most firewalls block the IRC protocol, they may allow SSH connections. Hackers take advantage of this to tunnel their IRC connections via SSH, bypass firewalls, and exfiltrate data. Enabling this policy option will block IRC traffic from being tunneled over HTTP/S.

SSH Tunneling SSH tunneling enables sending data over an existing SSH connection, with the traffic tunneled over HTTP/S. While there are legitimate uses for SSH tunnels, bad actors can use them as an evasion technique to exfiltrate data. Zscaler ATP can block this activity.

Anonymizers Attackers use anonymizer applications to obscure the destination and content they want to access. Anonymizers enable the user to bypass policies that control access to websites and internet resources. Enabling this policy option blocks access to anonymizer sites.

Block cross-site scripting (XSS) and other malicious web requests

Cross-site scripting (XSS) is an attack tactic wherein bad actors inject malicious scripts into otherwise trusted websites. XSS attacks occur when a threat actor uses a web app to send malicious code, usually in the form of a client-side script, to a different end user.

Cookie Stealing Cookie stealing, or session hijacking, occurs when bad actors harvest session cookies from users’ web browsers so they can gain access to sensitive data, including valuable personal and financial details they in turn sell on the dark web or use for identity theft. Attackers also use cookies to impersonate a user and log in to their social media accounts.

Potentially Malicious Requests Variants of XSS requests enable attackers to exploit vulnerabilities in a web application so they can inject malicious code into a website.
When other users load a page from the target web server in their browser, the malicious code executes, expanding the attack exponentially.

Prevent compromise via peer-to-peer file sharing

P2P programs enable users to easily share files with each other over the internet. While there are legitimate uses of P2P file sharing, these tools are also frequently used to illegally acquire copyrighted or protected content—and the same content files can contain malware embedded within legitimate data or programs.

BitTorrent The Zscaler service can block the usage of BitTorrent, a communication protocol for decentralized file transfers supported by various client applications. While its usage was once pervasive, global torrent traffic has decreased from a high of 35% of all global internet traffic in the mid-2000s to just 3% in 2022.

Tor Tor is a P2P anonymizer protocol that obscures the destination and content accessed by a user, enabling them to bypass policies controlling what websites or internet resources they can access. With Zscaler ATP, you can block the usage of the Tor protocol.

Avoid VOIP bandwidth overutilization

While convenient for online meetings, video conferencing tools can be bandwidth-intensive. They may also be used to transfer files or other sensitive data. Depending on both your organization's risk tolerance level and overall network performance, you may want to curtail employee or contractor use of Google Hangouts.

Google Hangouts While VOIP application usage may be encouraged for cost savings over traditional landline-based communications, it’s often associated with high bandwidth usage. Google Hangouts (a.k.a. Google Meet) requires a single video call participant to meet a 3.2 Mbps outbound bandwidth threshold. Inbound bandwidth required starts at 2.6 Mbps for two users and expands with additional participants. In Zscaler ATP, you can block Google Hangouts usage to conserve bandwidth for other business-critical applications.
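To gauge the aggregate impact of those per-call figures (3.2 Mbps outbound per participant; inbound starting at 2.6 Mbps for two users), here is a back-of-the-envelope Python sketch. Treating the per-participant rates as flat across concurrent calls is a simplifying assumption, since actual inbound usage grows with participant count.

```python
# Back-of-the-envelope estimate of site bandwidth consumed by concurrent
# video calls, using the per-participant thresholds quoted above.
OUTBOUND_MBPS_PER_PARTICIPANT = 3.2  # single participant, outbound
INBOUND_MBPS_TWO_USERS = 2.6         # starting inbound rate for two users

def site_load_mbps(concurrent_participants: int) -> dict:
    """Minimum aggregate load if every participant is on a two-person call."""
    return {
        "outbound": round(concurrent_participants * OUTBOUND_MBPS_PER_PARTICIPANT, 1),
        "inbound": round(concurrent_participants * INBOUND_MBPS_TWO_USERS, 1),
    }

# 25 employees on calls at once already consume roughly 80 Mbps outbound.
print(site_load_mbps(25))
```

Even this conservative estimate shows why a branch with a modest uplink may want to restrict conferencing traffic in favor of business-critical applications.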
Comprehensive, always-on, real-time protection Clearly, there’s a wide swath of protection modern organizations need to fortify their security posture on an ongoing basis. Zscaler Advanced Threat Protection delivers always-on protection against ransomware, zero-day threats, and unknown malware as part of the most comprehensive suite of security capabilities, powered by the world's largest security cloud—all at no extra cost to ZIA customers. ATP filters and blocks threats directed at ZIA customers and, in combination with Zscaler Firewall and Zscaler Sandbox, provides superior threat prevention thanks to: A fully integrated suite of AI-powered security services that closes security gaps and reduces risks left by other vendors’ security tools. Zscaler Sandbox detects zero-day malware for future-proof protection while Zscaler Firewall provides IPS and DNS control and filtering of the latest non-web threats. Real-time threat visibility to stay several steps ahead of threat actors. You can’t wait for another vendor’s tool to finish scheduled scans to determine if you’re secure—that puts your organization at risk. Effective advanced threat protection from Zscaler monitors all your traffic at all times. Centralized context and correlation that provides the full picture for faster threat detection and prevention. Real-time, predictive cybersecurity measures powered by advanced AI continuously give your IT or security team the ability to outpace attackers. The ability to inspect 100% of traffic with Zscaler’s security cloud distributed across 150 points of presence worldwide. Operating as a cloud-native proxy, the Zscaler Zero Trust Exchange ensures that every packet from every user, on or off-network, is fully inspected with unlimited capacity—including all TLS/SSL encrypted traffic. 
Learn more about how Zscaler prevents encrypted attacks and best practices to stop encrypted threats by securing TLS/SSL traffic: download a copy of the Zscaler ThreatLabz 2023 State of Encrypted Attacks Report. Mon, 11 Mar 2024 07:00:01 -0700 Brendon Macaraeg https://www.zscaler.com/blogs/product-insights/outpace-attackers-ai-powered-advanced-threat-protection LinkedIn Outage Detected by Zscaler Digital Experience (ZDX) https://www.zscaler.com/blogs/product-insights/linkedin-outage-detected-zscaler-digital-experience-zdx At 3:40 p.m. EST on March 6, 2024, Zscaler Digital Experience (ZDX) saw a substantial, unexpected drop in the ZDX score for LinkedIn services around the globe. Upon analysis, we noticed HTTP 503 errors highlighting a LinkedIn outage, with the ZDX heatmap clearly detailing the impact at a global scale. ZDX dashboard indicating widespread LinkedIn outage ZDX enables customers to proactively identify and quickly isolate service issues, giving IT teams confidence in the root cause and reducing mean time to detect (MTTD) and mean time to resolve (MTTR). ZDX dashboard showing LinkedIn global issues

ZDX Score highlights LinkedIn outage

Visible on the ZDX admin portal dashboard, the ZDX Score represents all users in an organization across all applications, locations, and cities on a scale of 0 to 100, with the low end indicating a poor user experience. Depending on the time period and filters selected in the dashboard, the score will adjust accordingly. The dashboard shows that the ZDX Score for the LinkedIn probes dropped to ZERO during the outage window of approximately 1 hour. From within ZDX, service desk teams can easily see that the service degradation isn’t limited to a single location or user and quickly begin analyzing the root cause. ZDX Score indicating LinkedIn outage and recovery (times in EST) Also in the ZDX dashboard, “Web Probe Metrics” highlight the user impact of reaching LinkedIn applications across a timeline with response times.
In this case, the server responded with 503 errors, indicating the server was not ready to handle requests. ZDX Web Probe Metrics indicating 503 errors (times in EST) ZDX can quickly identify the root cause of user experience issues with its new AI-powered root cause analysis capability. This spares IT teams the labor of sifting through fragmented data and troubleshooting, thereby accelerating resolution and keeping employees productive. With a simple click in the ZDX dashboard, you can analyze a score, and ZDX will provide insight into potential issues. As you can see, in the case of this LinkedIn outage, ZDX highlights that the application is impacted while the network itself is fine. ZDX AI-powered root cause analysis indicates the reason for the outage When there’s an application outage, many IT teams turn to the network as the root cause. However, as you can see above, ZDX AI-powered root cause analysis verified that the network transport wasn’t the issue; it was actually at the application level. You can verify this by looking at the CloudPath metrics from the user to the destination. ZDX CloudPath showing full end-to-end data path ZDX CloudPath detailed hops between the nodes With AI-powered analysis and dynamic alerts, IT teams can quickly compare optimal vs. degraded user experiences and set intelligent alerts based on deviations in observed metrics. ZDX allows you to compare two points in time to understand the differences between them. This function determines a good vs. poor user experience, visually highlighting the differences between application, network, and device metrics. The end user comparison during the LinkedIn outage vs. a known good score indicates the ZDX Score difference, highlighting the unexpected performance drop for the end user. ZDX comparison mode identifies the change in user experience According to the LinkedIn status page, the outage was reported at 12:50 PST until 14:05 PST, which correlates to the ZDX data above. 
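As an illustration of how synthetic web probes surface an outage like this, here is a minimal Python sketch that folds probe samples (HTTP status plus response time) into a 0-100 experience score. The scoring weights are invented for illustration and are not the ZDX algorithm.

```python
# Minimal sketch of a synthetic web probe scorer: repeated 5xx responses
# (e.g., the HTTP 503s seen during the LinkedIn outage) drive the score to
# zero, while rising latency erodes it gradually. Weights are illustrative
# assumptions, not the ZDX scoring model.

def probe_score(samples):
    """samples: list of (status_code, response_ms). Returns a 0-100 score."""
    if not samples:
        return 0
    ok = [ms for status, ms in samples if status < 500]
    if not ok:
        return 0  # every probe failed with a server error: outage
    availability = len(ok) / len(samples)
    # Assumed cost: every 20 ms of average latency shaves one point, capped at 50.
    latency_penalty = min(sum(ok) / len(ok) / 20, 50)
    return max(round(100 * availability - latency_penalty), 0)

print(probe_score([(200, 120), (200, 140)]))  # healthy, high score
print(probe_score([(503, 5), (503, 5)]))      # all 503s -> 0, as in the outage
```

Alerting on a sudden drop in such a score, rather than on individual errors, is what lets a monitoring service flag the incident before user tickets arrive.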
However, LinkedIn services started to recover fairly quickly, by 13:40 PST, and LinkedIn reported the issue resolved by 14:05 PST. Source: LinkedIn With ZDX alerting, our customers were proactively notified about end user problems, and incidents were opened automatically with our service desk integration (e.g., ServiceNow) long before users started to report them. From a single dashboard, customers were able to quickly identify this as a LinkedIn issue, not an internal network outage, saving precious IT time. Zscaler Digital Experience successfully detected the LinkedIn outage along with its root cause, giving our customers confidence that the problem was not a single location, their networks, or their devices, and averting critical impact to their business.

Try Zscaler Digital Experience today

ZDX helps IT teams monitor digital experiences from the end user perspective to optimize performance and rapidly fix offending application, network, and device issues. To see how ZDX can help your organization, please contact us. Thu, 07 Mar 2024 19:14:07 -0800 Rohit Goyal https://www.zscaler.com/blogs/product-insights/linkedin-outage-detected-zscaler-digital-experience-zdx From VDI replacement to complementary use: Part 2 https://www.zscaler.com/blogs/product-insights/positioning-zscaler-private-access-relative-to-vdi-part-2 In the first part of this VDI blog series, we discussed the two major use cases of access granularity and traffic inspection and how Zscaler can support these with the help of the Zero Trust Exchange platform. In this blog, we will focus on more use cases and ways to integrate Zscaler as a complementary solution to VDI to cover security-related aspects.

Data residency restriction

This use case deserves a deeper investigation because, although we can say that it is supported, there could be specific instances in which Zscaler cannot replace the VDI environment. Zscaler Cloud Browser Isolation (CBI) prevents data from leaving the corporate boundary.
We can define what level of restriction applies to the data exchange between the actual application and the isolated container. The recent introduction of Endpoint DLP capabilities can further help when stricter security requirements apply. Zscaler Cloud Browser Isolation is inherently non-persistent; the virtual machine is terminated after each session. Now, imagine the scenario of a developer working remotely on a virtual desktop that hosts their own environment, where data can't leave the company boundary. This individual would need a persistent desktop to work, and the user environment shouldn't be destroyed when they close the working session. This use case could be challenging for a VDI replacement. It could be addressed by leveraging Privileged Remote Access (PRA) and RDP. In the above example, an RDP session could be launched toward a machine where the developer can log in and work, where their development environment sits, and where communication is local, so data won't leave the company boundary. Obviously, the organisation's environment must be assessed carefully to validate the pros and cons of the proposed alternative.

Traffic localization

In this scenario, the goal is to keep the communication local to the data centre for performance reasons. From a Zscaler point of view, the area of potential replacement exists once we have validated with the customer the possibility of leveraging Privileged Remote Access (PRA) and RDP, where a remote session is launched toward a machine that interacts locally with the server.

Desktop or software license management / reduction

Discussing this scenario, under the assumption that the VDI environment stays up and running, needs a preamble. ZCC does not support multiple, simultaneous user sessions from a single host operating system. The main problem to address is supporting ZIA and ZPA services on multi-user VDI environments.
Zscaler now offers the ability to inspect all ports and protocols for multi-session, non-persistent VDI deployments in the public cloud and on-prem data centers through the use of a VDI agent. Enterprises can apply granular threat and data protection policies per individual user session, enabling them to maintain common security policies across all environments. Multi-user VDI can be hosted on a public cloud (Azure, AWS, etc.) or in private data centers (VMware, KVM, etc.). Cloud/Branch Connector can be used to direct traffic from the VDI users to the Zscaler cloud and extend ZIA and ZPA services to the VDI users. However, Cloud/Branch Connector does not have any user/VM context for the traffic and will enforce a single security policy for all the VDI users. To fix this issue, we leverage a VDI agent: a lightweight software package running in the user space of the VDI session. It is responsible for authenticating users, establishing a tunnel to the Branch or Cloud Connector (control and data), and exchanging user context with the Branch or Cloud Connector (see the diagram below). The VDI Agent maintains proprietary, lightweight tunnels (UDP encapsulation) to the local Cloud or Branch Connector. These tunnels carry both user session data in the payload and user context information in the UDP header. The tunnels are stateless, which ensures that, in the event of a Branch or Cloud Connector failure, they can fail over to other active appliances. With that said, we can now extend Zscaler services to multi-user VDI environments.

Legacy app support

Although this scenario is becoming more and more niche due to application and architecture evolution, it is an area where VDI could help customers. The Zscaler Client Connector supports the latest software version and the two previous versions of each software product. See more details on the Zscaler Help page.
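The user-context tunnels described above (session data in the payload, user context in the UDP header) can be illustrated with a toy encapsulation. The header layout below (version, session ID, user ID length, user ID) is purely an assumption for illustration; the real VDI agent format is proprietary.

```python
# Toy UDP-style encapsulation carrying per-user context alongside the
# tunneled payload, in the spirit of the VDI agent described above.
# The fixed header layout here is an illustrative assumption.
import struct

def encapsulate(user_id: str, session_id: int, payload: bytes) -> bytes:
    uid = user_id.encode("utf-8")
    # !BIB = version (1 byte), session id (4 bytes), user-id length (1 byte)
    header = struct.pack("!BIB", 1, session_id, len(uid)) + uid
    return header + payload

def decapsulate(datagram: bytes):
    version, session_id, uid_len = struct.unpack("!BIB", datagram[:6])
    uid = datagram[6:6 + uid_len].decode("utf-8")
    return version, session_id, uid, datagram[6 + uid_len:]

pkt = encapsulate("alice@example.com", 42, b"rdp-bytes")
print(decapsulate(pkt))
```

Because every datagram carries its own user context, no per-tunnel state needs to live on the connector side, which is what makes failover to another active Branch or Cloud Connector straightforward.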
At a higher level, digging into why organisations use VDI in the first place is important. Walking through their use cases and applications to explore the scope helps move customers beyond the assumptions that flow from the on-premises/on-network mindset. In some cases, Zscaler can be integrated into the existing environment to simply provide the appropriate level of security. There are two main aspects to consider: Most applications are now web-based and could be securely made available to users regardless of VDI. Even if VDI is not replaced for all users, there are multiple reasons to integrate with ZIA/ZPA. Just think about users like financial advisors and insurance agents. Many firms have moved to web-based apps, DocuSign, etc. There may no longer be a hard requirement for those thousands of users to have VDI. This requires going beyond what the network team may see, and engaging architects, app owners, etc. Focusing on the second aspect: rather than replacing the VDI infrastructure, another approach is to complement it. If we think about those use cases, organisations could still have security concerns around the user's connectivity to the VDI environment (e.g., VMware Knowledge Base). In these scenarios, ZPA could protect that user traffic: it can secure access "to" the VDI environment and access "from" the VDI environment, as shown in the diagram below. The protection of traffic aimed at Internet/SaaS is addressed by ZIA services. Connectivity to the VDI environment: Organisations may either put the VDI on the edge with its own DMZ infrastructure (firewall, security gateway, load balancer, etc.) and have users connect directly, or leverage a VPN-like technology to put the user on the network to access the VDI. ZPA, with Client Connector, enables a customer to either replace their internet-facing components or replace the VPN that is putting users onto the network.
Connectivity from the VDI environment: Organisations want to further segment what users have access to from their trusted device. ZPA with Client Connector can assist with setting up granular access at an application level. Complete alternative to VDI: Organisations can leverage alternatives to VDI, such as Zscaler Browser Isolation, to replace their traditional VDI architecture. The benefits remain the same: if a user has an enterprise-managed browser that is isolated from the endpoint, the organisation's admin remains in control of what can be egressed. A benefit of such an approach is the significantly lower overhead of managing such a capability. The Zscaler Client Connector can be installed on the user's device along with the VDI client, and ZPA carries the VDI traffic as a private application. Another option is installing ZCC on the virtual desktop instance (Citrix XenDesktop, Azure WVD, Amazon WorkSpaces) to control what the user has access to internally. Existing customers are deploying this model with WVD and Amazon WorkSpaces, using ZCC for both ZIA and ZPA. Benefits in such a scenario are centralized visibility and control and a single access control policy configuration for VDI and other forms of access, creating a consistent user experience. Finally, a hybrid approach is feasible. In this case, organisations want to offer direct ZPA access to employees, but VDI-only access to third parties, and want to extend ZPA's centralized visibility and control to VDI users accessing private applications. All these examples show that there are multiple ways to either completely replace or complement the existing VDI installation.
Fri, 08 Mar 2024 00:59:22 -0800 Stefano Alei https://www.zscaler.com/blogs/product-insights/positioning-zscaler-private-access-relative-to-vdi-part-2

Positioning Zscaler Private Access Relative to VDI: Part 1 https://www.zscaler.com/blogs/product-insights/positioning-zscaler-private-access-relative-to-vdi-part-1

What are some of the most common concerns heard from customers about virtual desktop infrastructure (VDI)? They are often related to cost, complexity, management, upkeep, and security. How can Zscaler solve these challenges? VDI is an undoubtedly complex environment, and having a clear picture of the organization's needs, and positioning the right solution to improve security, reduce complexity, and improve the user experience, is not always easy. Zscaler Private Access (ZPA) has evolved in the past few years into a direct replacement for VDI, whether on-premises or cloud-delivered. It cannot, however, yet map all the use cases supported by a VDI implementation, and doing so is, in general, not a trivial task. Sometimes Zscaler can play a role by simply integrating solutions and providing the right level of security. To understand which role ZPA can play in replacing VDI, it is crucial to first understand why the customer is using VDI. Organisations leverage VDI for various reasons. The most common use cases are:

1. Access granularity – restrict users' access to only authorized applications
2. Traffic inspection – VDI as a choke point to run all traffic through on-premises security stacks
3. Data residency restriction – a) ensure data stays within the corporate boundary and/or b) ensure data is never stored on an end user's device
4. Traffic localization – minimize latency for heavyweight client-server interactions (e.g., database calls)
5. Desktop or software license management / reduction – a) clean desktop experience, b) persistent desktop that the user can access from multiple devices, c) software deployed to a limited pool of virtual desktops, rather than to all user devices
6. Legacy app support – enable access to apps that require an older OS

VDIs are expensive, cumbersome to manage, and often hinder the user experience. But there is much more to think about: we are witnessing profound changes in the EUC (End User Computing) market. Considering most applications are now web-based, you could potentially replace VDI with isolated browser access and provide secure access to these web applications. Applications requiring access via protocols such as SSH and RDP can be easily addressed using Zscaler Privileged Remote Access (PRA). However, organisations probably still need to depend on VDI in certain scenarios, such as when they have applications requiring thick clients. In this case, they would be able to significantly reduce the size of their VDI deployment by using browser isolation. Whenever Zscaler can't replace VDI, it can still be integrated. Zscaler can secure Internet/SaaS access of VDI instances, and can additionally protect access to the VDI infrastructure. Wherever there is a VDI environment, ZIA and ZPA capabilities can play a role and improve it.

Major use cases in detail

1. Access granularity

This use case is fully supported and can be deployed by leveraging multiple capabilities of the Zscaler Zero Trust Exchange platform, like Browser Isolation. It allows you to leverage a web browser for user authentication and application access over ZPA, without requiring users to install Zscaler Client Connector on their devices. In certain cases, it might not be feasible or desirable to install Client Connector on all devices.
For example, you might want to provide third-party access to applications on devices that might not be owned or managed by your company (e.g., contractor or partner-owned devices), or control user access to applications on devices with operating systems that are not currently supported by Zscaler Client Connector. Browser Isolation (BI) enhances the ZPA experience by making applications accessible to users from any web browser, without requiring Zscaler Client Connector, browser plugins, or special configurations. Additionally, the existing Identity Provider (IdP) can be used to provide access to current users, contractors, and other third-party users without managing an internet footprint. BI is a feature that addresses needs in both cyberthreat protection and data loss prevention, and it can be leveraged for both internet/SaaS apps and private apps. BI policies can dictate whether a site should run within isolation and, if so, whether you allow cut/paste and download capabilities for the user. An isolation container is instantiated for each user in the cloud, and only pixels are transmitted to the user's browser. Sites may be isolated due to a configured URL category, a cloud app control policy, or suspicious destinations (if Smart Browser Isolation is enabled). Last, but not least, it is worthwhile to mention the recent capabilities introduced by User Portal 2.0, which allows unmanaged devices to access SaaS and private web apps. With this feature enabled, unmanaged devices can use the ZPA user portal to access sanctioned SaaS/private web apps AND have their internet-bound traffic routed through ZIA while in Isolation mode. Organisations can provide access to sanctioned SaaS applications from unmanaged devices and enforce policies using the isolation policies defined on ZPA.
The isolation containers that are created as a result of a ZPA Isolation Policy can forward non-ZPA-defined application traffic and generated internet traffic to ZIA for further processing and enforcement of the necessary policies. Any traffic generated for applications defined on ZPA will continue to be forwarded via ZPA's ZTNA service.

2. Traffic inspection

Although this use case is rare, traffic inspection is still fully supported, leveraging the inspection capability provided by the Zero Trust Exchange platform. We can use Zscaler Private Access (ZPA) AppProtection (formerly Inspection), which provides high-performance, inline security inspection of the entire application payload to expose threats. It identifies and blocks known web security risks, such as the OWASP Top 10, and emerging zero-day vulnerabilities that can bypass traditional network security controls. It can help protect internal applications from all types of attacks covered by the OWASP predefined controls, including SQL injection, cross-site scripting (XSS), and more. Additionally, it helps you understand the severity, description, and recommended default action for each type of attack related to the OWASP predefined controls. Each OWASP predefined control is identified with a unique number defining how the control operates and is associated with a level of concern. The predefined controls are organized into various categories:

- Preprocessors
- Environment and Port Scanners
- Protocol Issues
- Request Smuggling or Response Split or Header Injection
- Local File Inclusion
- Remote File Inclusion
- Remote Code Execution
- PHP Injection
- Cross-Site Scripting (XSS)
- SQL Injection
- Session Fixation
- Deserialization Issues
- Anomalies

Additionally, Zscaler recently introduced support for inspecting ZPA application segment traffic. A predefined forwarding rule, ZIA Inspected ZPA Apps, is available on the Policy > Forwarding Control page to inspect the microtunnel traffic of a ZPA application segment using ZIA.
This rule is applied automatically to the traffic from ZPA application segments with the Inspect Traffic with ZIA field enabled in the ZPA Admin Portal. As part of this feature, a predefined Auto ZPA Gateway is available on the Administration > Zscaler Private Access Gateway page. This new gateway is the default for the predefined ZIA Inspected ZPA Apps forwarding rule. We can minimize data exfiltration concerns with ZPA AppProtection by utilizing Cloud Browser Isolation (CBI), where unmanaged devices are prevented from downloading sensitive content to the local host. For corporately managed devices, organisations can leverage DLP with Source IP Anchoring (SIPA) utilizing the ZIA cloud. AppProtection customers can craft custom signatures to detect and block bulk data downloads, and use those in conjunction with other validation methods such as ZPA posture control. Organisations can rely on Zscaler Internet Access (ZIA) SSL Inspection best practices for configuring and deploying it in an organization's network environment, for example, while accessing SaaS applications. Encrypting communications helps maintain the privacy and security of information passed between sender and receiver. Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), are protocols designed for the privacy and security of internet services. While these protocols do a great job of keeping information private from prying eyes, they also conceal threats to the user, device, and organization. This is where the inspection of SSL- and TLS-encrypted traffic becomes a necessity. Inspecting encrypted SSL and TLS traffic via SSL Inspection is done by the Zero Trust Exchange at scale, allowing organizations to control risk and enforce policy.
Enabling SSL Inspection is a required first step towards:

- Controlling risk
- Inspecting traffic (malware, data loss)
- Adaptive control
- Enforcing policy
- Per-session policy decision and enforcement
- Allowing, blocking, and restricting tenant environments

Use cases 3 to 6 will be covered in part two of this series. Fri, 01 Mar 2024 02:41:49 -0800 Stefano Alei https://www.zscaler.com/blogs/product-insights/positioning-zscaler-private-access-relative-to-vdi-part-1

Securing Government Workload Communications in the Public Cloud https://www.zscaler.com/blogs/product-insights/securing-government-workload-communications-public-cloud

As government agencies continue their journey towards digital transformation, many are embracing hybrid cloud deployments to modernize their operations. A transition to a public or private cloud brings new challenges, especially when it comes to securing workload communications. In this blog, we will delve into the reality of hybrid cloud deployments and explore how Zscaler's zero trust architecture provides a comprehensive solution for securing government workloads in the public cloud.

The Expanding Definition of Hybrid Cloud

Hybrid cloud deployments have become increasingly complex as agencies expand their infrastructure across multiple regions and clouds. Rather than relying on a single cloud or region, agencies leverage different regional clouds to ensure availability and scalability. Additionally, within a specific region, agencies may need to consider availability zones to ensure business continuity. Figure 1 illustrates scenarios of hybrid cloud deployments.

Workload Communications in the Public Cloud

To illustrate the challenges of workload communications, let's consider the example of a Department of Motor Vehicles (DMV) application deployed in AWS GovCloud. This application needs to interact with other workloads or applications, such as a CRM or ERP system in the data center, to access driver records.
It may also need to communicate with scheduling applications in different regions or clouds, and even access vehicle registration information stored with a different cloud provider such as Azure. Additionally, the DMV application may require software updates and send logs to the Google Cloud Platform. Figure 2 illustrates these communication flows.

Legacy Architecture Challenges

Traditionally, agencies have extended their on-premises architecture to the cloud by deploying firewalls and VPNs. While this approach may provide initial security, it also amplifies lateral movement, increases cyberthreats, and exposes the infrastructure to data leaks. Moreover, deploying and managing multiple firewalls and VPNs across different cloud environments and regions adds complexity and operational costs.

Introducing Zscaler's Zero Trust Approach

Zscaler offers a cloud-delivered security platform based on zero trust principles to address the challenges faced by government agencies in securing workload communications. By adopting a zero trust, proxy-based architecture, Zscaler eliminates the expanded attack surface and lateral movement risks associated with legacy architectures.

Connectivity and Security

Zscaler's platform provides both connectivity and security for workloads in the public cloud. It ensures secure connectivity by allowing access only to specific URLs or APIs, preventing open access to the internet. Workload-to-workload communications are based on least-privileged access, ensuring that each workload can only communicate with authorized resources. Before any connection is established, zero trust-based authentication and authorization checks are performed, further enhancing security.

Threat Prevention and Data Protection

Zscaler's platform offers comprehensive threat prevention and data protection capabilities. It provides URL filtering, intrusion prevention, DNS protection, and behavior analysis, all backed by AI- and ML-based risk analysis.
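The "access only to specific URLs or APIs" model described above can be sketched as a per-workload egress allowlist. The hostnames and paths below are hypothetical stand-ins for the DMV example, not a real policy format.

```python
# Toy least-privileged egress check: a workload may only reach an explicit
# allowlist of destination hosts and path prefixes. All destinations here
# are illustrative assumptions based on the DMV example above.
from urllib.parse import urlparse

ALLOWED = {
    "api.dmv.example.gov": {"/records", "/scheduling"},
    "updates.vendor.example.com": {"/patches"},
}

def egress_allowed(url: str) -> bool:
    """Default-deny: permit only allowlisted host + path-prefix pairs."""
    u = urlparse(url)
    prefixes = ALLOWED.get(u.hostname, set())
    return any(u.path.startswith(p) for p in prefixes)

print(egress_allowed("https://api.dmv.example.gov/records/123"))  # True
print(egress_allowed("https://evil.example.net/exfil"))           # False
```

The important property is the default-deny posture: any destination not explicitly granted to this workload, including other paths on an allowed host, is unreachable.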
Inline data protection ensures that sensitive data does not leak from workloads, with features such as regex-based checks, exact data matching, OCR technology for file inspection, and AI/ML-based data classification.

TLS Decryption at Cloud Scale

With the increasing prevalence of encrypted traffic, TLS decryption at cloud scale becomes crucial. Zscaler's platform provides 100% inspection of traffic without compromising performance. This allows for effective threat prevention and data protection, ensuring the safety of data packets and preventing malicious intent.

Granular App-to-App Segmentation

Zscaler enables granular app-to-app segmentation, eliminating the need for expensive networking infrastructure or additional layers of segmentation software. This ensures that workloads can only access authorized resources, providing an additional layer of security.

The Common Platform Advantage

Zscaler's platform offers a common platform for securing workloads across multiple clouds. By installing lightweight cloud connectors in different clouds, agencies can benefit from standardized and consolidated security operations. This approach simplifies security management, reduces operational complexity and costs, and ensures consistent security policies across multiple clouds. It stops external threats by protecting egress traffic from any malicious payload. It protects against insider threats by eliminating the risk of a bad actor within the agency who has the credentials to inflict harm, whether by inserting a malicious payload or trying to exfiltrate sensitive data. The Zero Trust Exchange is designed to eliminate lateral movement and reduce the attack surface significantly. Moreover, Zscaler's platform is both FedRAMP and StateRAMP Authorized and GovCloud ready.
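The regex-based inline checks mentioned above can be approximated in a few lines. The patterns below are simplified illustrations; production DLP engines layer on validation (e.g., Luhn checks), exact data matching, OCR, and ML classification.

```python
# Minimal sketch of regex-based inline DLP scanning. Patterns are
# deliberately simplified illustrations, not production-grade detectors.
import re

PATTERNS = {
    # US Social Security number in the common dashed form
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # 16 digits, optionally separated by spaces or hyphens
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan(text):
    """Return the names of all pattern classes found in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(scan("license for SSN 123-45-6789"))  # ['ssn']
print(scan("nothing sensitive here"))       # []
```

An inline engine runs such checks on decrypted egress traffic and blocks or quarantines the transaction on a match, rather than merely logging it.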
For more information on Zscaler Workload Communications: Download the Datasheet Watch the Webinar: Ensuring Cloud Workload Security for Federal and State Government Request a Test Drive in AWS Wed, 28 Feb 2024 05:05:01 -0800 Sakthi Chandra https://www.zscaler.com/blogs/product-insights/securing-government-workload-communications-public-cloud

Why Haven't Firewalls and VPNs Stopped More Organizations from Being Breached? https://www.zscaler.com/blogs/product-insights/why-havent-firewalls-and-vpns-stopped-more-organizations-being-breached

Reducing cyber risk is an increasingly important initiative for organizations today. Because a single cyber breach can be financially fatal as well as disastrous for countless stakeholders, improving cybersecurity has become a board-level concern and drawn increased attention from regulatory bodies around the globe. As a result, organizations everywhere have poured massive amounts of time and money into security technologies that are supposed to protect them from cybercriminals' malicious ends. Specifically, the go-to tools deployed in an effort to enhance security are firewalls and VPNs. Despite the above, breaches continue to occur (and increase in number) at an alarming rate every year. News headlines about particularly noteworthy breaches serve as continual reminders that improperly mitigating risk can be catastrophic, and that the standard tools for ensuring security are insufficient. One need not look far for concrete examples: the security debacles at Maersk and Colonial Pipeline are powerful, salient illustrations of what can go wrong. With more and more organizations falling prey to our risk-riddled reality, an obvious question arises: Why haven't firewalls and VPNs stopped more organizations from being breached?
The weaknesses of perimeter-based architectures

Firewalls and VPNs were designed for an era gone by: when users, apps, and data resided on premises; when remote work was the exception; when the cloud had not yet materialized. And in that age of yesteryear, their primary focus was on establishing a safe perimeter around the network in order to keep the bad things out and the good things in. Even for organizations with massive hub-and-spoke networks connecting various locations like branch sites, the standard methods of trying to achieve threat protection and data protection still inevitably involved securing the network as a whole. This architectural approach goes by multiple names, including perimeter-based, castle-and-moat, network-centric, and more. In other words, firewalls, VPNs, and the architecture they presuppose are intended for an on-premises-only world that no longer exists. The cloud and remote work have changed things forever. With users, apps, and data all leaving the building en masse, the network perimeter has effectively inverted, meaning more activity now takes place outside the perimeter than within it. And when organizations undergoing digital transformation try to cling to the traditional way of doing security, it creates a variety of challenges. These problems include greater complexity, administrative burden, and cost, as well as decreased productivity and, of primary importance for this blog post, increased risk.

How do firewalls and VPNs increase risk?

There are four key ways that legacy tools like firewalls and VPNs increase the risk of breaches and their numerous, harmful side effects. Whether they are hardware appliances or virtual appliances makes little difference. They expand the attack surface. Deploying tools like firewalls and VPNs is supposed to protect the ever-growing network as it is extended to more locations, clouds, users, and devices.
However, these tools have public IP addresses that can be found on the internet. This is by design so that the intended users can access the network via the web and do their jobs, but it also means that cybercriminals can find these entry points into the network and target them. As more of these tools are deployed, the attack surface is continually expanded, and the problem is worsened. They enable compromise. Organizations need to inspect all traffic and enforce real-time security policies if they are to stop compromise. But about 95% of traffic today is encrypted, and inspecting such traffic requires extensive compute power. Appliances have static capacities to handle a fixed volume of traffic and, consequently, struggle to scale as needed to inspect encrypted traffic as organizations grow. This means threats are able to pass through defenses via encrypted traffic and compromise organizations. They allow lateral threat movement. Firewalls and VPNs are what primarily compose the “moat” in a castle-and-moat security model. They are focused on establishing a network perimeter, as mentioned above. Relying on this strategy, however, means that there is little protection once a threat actor gets into the “castle,” i.e., the network. As a result, following compromise, attackers can move laterally across the network, from app to app, and do extensive damage. They fail to stop data loss. Once cybercriminals have scoured connected resources on the network for sensitive information, they steal it. This typically occurs via encrypted traffic to the internet, which, as explained above, legacy tools struggle to inspect and secure. Similarly, modern data leakage paths, such as sharing functionality inside of SaaS applications like Box, cannot be secured with tools designed for a time when SaaS apps did not exist. Why zero trust can stop organizations from being breached Zero trust is the solution to the above problems. 
It is a modern architecture that takes an inherently different approach to security in light of the fact that the cloud and remote work have changed things forever, as described earlier. In other words, zero trust leaves the weaknesses of perimeter-based, network-centric, firewall-and-VPN architectures in the past. With an inline, global security cloud serving as an intelligent switchboard to provide zero trust connectivity (along with a plethora of other functionality), organizations can:

- Minimize the attack surface: Hide applications behind a zero trust cloud, eliminate security tools with public IP addresses, and prevent inbound connections
- Stop compromise: Leverage a high-performance cloud to inspect all traffic at scale, including encrypted traffic, and enforce real-time policies to stop threats
- Prevent lateral movement: Connect users, devices, and workloads directly to the apps they are authorized to access instead of connecting them to the network as a whole
- Block data loss: Prevent malicious data exfiltration and accidental data loss across all data leakage paths, including encrypted traffic, cloud apps, and endpoints

In addition to reducing risk, zero trust architecture solves problems related to complexity, cost, productivity, and more.
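The one-to-one, user-to-app brokering described above can be sketched as a default-deny policy decision. The identities, app names, and posture flag below are illustrative assumptions, not a real policy model.

```python
# Toy zero trust connection broker: a user is connected to an individual
# application only if explicitly entitled and the device posture checks
# pass; there is no "on the network" outcome at all.
# Identities and app names are illustrative assumptions.
ENTITLEMENTS = {
    "alice@example.com": {"crm", "payroll"},
    "bob@example.com": {"crm"},
}

def broker_connection(user: str, app: str, device_posture_ok: bool) -> str:
    # Default-deny: missing entitlement or failed posture means no connection.
    if device_posture_ok and app in ENTITLEMENTS.get(user, set()):
        return "connect-to-app"  # one-to-one user-to-app tunnel
    return "deny"                # the app stays invisible to this user

print(broker_connection("alice@example.com", "payroll", True))  # connect-to-app
print(broker_connection("bob@example.com", "payroll", True))    # deny
```

Contrast this with a VPN, whose only outcome is network-level access: here an unauthorized user never learns the application exists, which is what eliminates lateral movement by construction.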
If you would like to learn more about zero trust, join our upcoming webinar, "Start Here: An Introduction to Zero Trust." Or, if you would like to dive deeper into the weaknesses of yesterday's tools, read our new ebook, "4 Reasons Firewalls and VPNs Are Exposing Organizations to Breaches." Tue, 27 Feb 2024 08:04:02 -0800 Jacob Serpa https://www.zscaler.com/blogs/product-insights/why-havent-firewalls-and-vpns-stopped-more-organizations-being-breached

Microsoft, Midnight Blizzard, and the Scourge of Identity Attacks https://www.zscaler.com/blogs/product-insights/microsoft-midnight-blizzard-and-scourge-identity-attacks

Summary

On January 19, 2024, technology leader Microsoft disclosed that it had fallen victim to a Russian state-sponsored cyberattack that gave the threat actors access to senior management mailboxes and resulted in sensitive data leakage. While we will break down the attack step by step and explain what organizations can do to defend against similar attacks below, here's a TL;DR.

The threat actor

- Midnight Blizzard: State-sponsored Russian threat actor also known as Nobelium, CozyBear, and APT29
- Notable Midnight Blizzard breaches: Hewlett Packard Enterprise (December 12, 2023) and SolarWinds (December 14, 2020)

The facts

- Attack target: Microsoft's Entra ID environment
- Techniques used: Password spraying, exploiting identity and SaaS misconfigurations
- Impact: Compromised Entra ID environment; unauthorized access to email accounts of Microsoft's senior leadership team, security team, legal team, and more

What's unique about the attack?

- Using stealthy identity tactics that bypass existing defenses to compromise users
- Exploiting misconfigurations in SaaS applications to gain privileges
- Exploiting identity misconfigurations in Entra ID to escalate privileges

The attack sequence

- Found a legacy, non-production test tenant in Microsoft's environment.
- Used password spraying via residential proxies to attack the test tenant.
- Limited the number of attack attempts to stay under thresholds and evade blocking triggered by brute-force heuristics.
- Guessed the right password and compromised the test tenant's account.
- Generated a new secret key for the test app, which allowed the threat actor to control the app everywhere it was installed.
- The test app was also present in the corporate tenant; the threat actor used the app's permissions to create an admin user in the corporate tenant.
- Used the new admin account to create malicious OAuth apps.
- Granted the malicious app the privilege to impersonate users of the Exchange service.
- Used the malicious app to access Microsoft employee email accounts.

Microsoft's official guidance

Defend against malicious OAuth applications:
- Audit privileged identities and apps in your tenant
- Identify malicious OAuth apps
- Implement conditional access app control for unmanaged devices

Protect against password spray attacks:
- Eliminate insecure passwords
- Detect, investigate, and remediate identity-based attacks
- Enforce multi-factor authentication and password protections
- Investigate any possible password spray activity

Zscaler's guidance

- Continuously assess SaaS applications for misconfigurations, excessive permissions, and malicious changes that open up attack paths.
- Continuously assess Active Directory and Entra ID (previously known as Azure AD) for misconfigurations, excessive permissions, and malicious changes that open up attack paths.
- Monitor users with risky permissions and misconfigurations for malicious activity like DCSync, DCShadow, kerberoasting, etc. that is typically associated with an identity attack.
- Implement containment and response rules to block app access, isolate the user, or quarantine the endpoint on an identity attack detection.
Implement deception to detect password spraying, Entra ID exploitation, Active Directory exploitation, privilege escalation, and lateral movement for instances where stealthy attacks bypass existing detection and monitoring controls. Deconstructing the attack The threat actor Midnight Blizzard has a long history of pulling off highly publicized breaches. It’s Microsoft this time around, but in the past, they’ve allegedly compromised Hewlett Packard Enterprise and SolarWinds. To people who analyze attacks for a living, the Microsoft breach should not come as a surprise. Midnight Blizzard is among a growing list of nation-state and organized threat actors that rely on identity compromise and exploiting misconfigurations/permissions in SaaS applications and identity stores to execute breaches that conventional security thinking cannot defend against. Other threat groups using these strategies and techniques include Evil Corp, Lapsus$, BlackMatter, and Vice Society. In the case of the Microsoft breach, the attackers demonstrated a profound understanding of OAuth mechanics and attack techniques to evade detection controls. They created malicious applications to navigate Microsoft's corporate environment. And by manipulating OAuth permissions, they granted themselves full access to Office 365 Exchange mailboxes, enabling them to easily exfiltrate sensitive emails. Security challenges Identity-centric tactics: Midnight Blizzard strategically targeted identities, exploiting the user's credentials as a gateway to sensitive data. Conventional detection controls like EDRs are not effective against such attacks. OAuth application abuse: The adversaries adeptly abused OAuth applications, a technique that complicates detection and enables prolonged persistence. Misconfiguration blind spots: Identifying misconfigurations within Active Directory/Entra ID and SaaS environments remains a complex task, often resulting in blind spots for defenders.
Step-by-step breakdown Pre-breach Before the attack commenced, an admin within Microsoft's test tenant had created an OAuth app. For the purpose of this blog post, let’s call this app ‘TestApp.’ For reasons unknown, this app was subsequently installed in Microsoft's corporate environment with elevated permissions, likely encompassing the scope Directory.ReadWrite.all, granting it the capability to create users and assign roles. Notably, this app appeared to be dormant and possibly forgotten. ThreatLabz note: There is an unimaginable sprawl of applications, users, and associated misconfiguration and permissions that security teams often have no visibility into. More often than not, blind spots like these are what result in publicized breaches. Initial access In late November 2023, Midnight Blizzard initiated reconnaissance on Microsoft's SaaS environment. Discovering the test tenant, the attacker targeted its admin account, which, being a test account, had a weak, guessable password and lacked multi-factor authentication (MFA). Employing a password spraying attack, the attacker systematically attempted common passwords to gain access, leveraging residential proxies to obfuscate their origin and minimize suspicion. Eventually, the attacker successfully compromised the admin account. ThreatLabz note: Traditional threat detection and monitoring controls are ineffective against attacks that use valid credentials, MFA-prompt bombing, and other identity-centric techniques to compromise users. Persistence With control over the admin account, the attacker obtained the ability to generate a new secret key for TestApp, effectively commandeering it across all installations. This tactic mirrors techniques observed in the SolarWinds attack of 2020. ThreatLabz note: In the absence of continuous monitoring and high-confidence alerting for malicious changes being made to permissions in SaaS applications, attacks like these easily cross the initial access phase of the kill chain. 
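The low-and-slow password spraying described above has a recognizable statistical signature: one source touching many accounts with only a few attempts each, staying under lockout thresholds. A minimal detection sketch, assuming a hypothetical failed sign-in log format and illustrative thresholds (this is not Zscaler's or Microsoft's detection logic):

```python
from collections import defaultdict

def detect_password_spray(failed_logins, min_accounts=10, max_per_account=3):
    """Flag source IPs that fail logins against many distinct accounts
    with few attempts per account -- the low-and-slow spray pattern."""
    attempts = defaultdict(lambda: defaultdict(int))
    for src, account in failed_logins:
        attempts[src][account] += 1
    flagged = []
    for src, per_account in attempts.items():
        if (len(per_account) >= min_accounts
                and max(per_account.values()) <= max_per_account):
            flagged.append(src)
    return flagged

# A spray: one source, 2 failures against each of 12 accounts.
spray = [("203.0.113.7", f"user{i}") for i in range(12) for _ in range(2)]
# Normal noise: one user mistyping their own password 5 times.
noise = [("198.51.100.9", "alice")] * 5
print(detect_password_spray(spray + noise))  # ['203.0.113.7']
```

Per-source thresholds alone miss sprays distributed across residential proxies, which is exactly why the attacker used them; a real detector would also aggregate on the account side.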
Privilege escalation Given TestApp's permissions within Microsoft's corporate tenant, the attacker created a new user, likely an administrator, to further their access. Subsequently, the attacker deployed additional malicious OAuth apps within the tenant to evade detection and ensure persistence, leveraging TestApp to grant elevated roles, such as the Exchange role EWS.full_access_as_app, facilitating mailbox access and bypassing MFA protection. ThreatLabz note: Configuration- and permission-based blind spots extend to identities themselves. As such, it is imperative that organizations have the ability to continuously assess their Active Directory/Entra ID for misconfigurations, excessively permissive policies, and other permissions that give attackers the ability to escalate privileges from a compromised identity. They should also continuously monitor for malicious changes in the identity store that might create additional attack surfaces. Lateral movement Though specifics regarding the number and origin of installed apps remain unclear, the attacker's use of TestApp to confer privileges is evident. This culminated in unauthorized access to mailboxes belonging to Microsoft's senior leadership, security personnel, legal team, and other stakeholders. How zero trust can help A zero trust architecture provides a fundamentally secure approach that is better at protecting against the stealthy attacks used by nation-state threat actors and organized adversaries. Zero trust eliminates weaknesses in your environment that are core properties of hub-and-spoke network models. Below is a 10,000-foot reference architecture for zero trust that explains how and why it better protects against Midnight Blizzard-style attacks. Core zero trust capabilities This is the heart of a zero trust architecture, consisting of Internet Access and Private Access.
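The continuous assessment ThreatLabz recommends can start from something as simple as scanning an exported app-permission inventory for the scopes abused in this breach. The sketch below is illustrative only: the inventory format and function name are hypothetical, and the scope strings are the ones named in this post:

```python
# Scopes abused in the Midnight Blizzard attack chain:
RISKY_SCOPES = {
    "Directory.ReadWrite.All",  # can create users and assign roles
    "full_access_as_app",       # Exchange: impersonate any mailbox user
}

def flag_risky_apps(apps):
    """Return {app name: sorted risky scopes} for apps holding any
    scope from RISKY_SCOPES. Input is a hypothetical exported inventory."""
    findings = {}
    for app in apps:
        hits = RISKY_SCOPES & set(app.get("scopes", []))
        if hits:
            findings[app["name"]] = sorted(hits)
    return findings

inventory = [
    {"name": "TestApp", "scopes": ["Directory.ReadWrite.All"]},
    {"name": "HRPortal", "scopes": ["User.Read"]},
]
print(flag_risky_apps(inventory))
# {'TestApp': ['Directory.ReadWrite.All']}
```

A dormant app that shows up in a scan like this, as TestApp would have, is a candidate for secret rotation or removal before an attacker finds it.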
The Zero Trust Exchange acts as a switchboard brokering all connections between users and applications. This architecture makes your applications invisible to the Internet, thereby eliminating the external attack surface; replaces high-risk VPNs; and uses segmentation to reduce lateral threat movement and internal blast radius. To broker a connection, the Zero Trust Exchange verifies the identity, determines the destination, assesses risk, and enforces policy. ThreatLabz note: Zscaler extends core zero trust capabilities with SaaS supply chain security, Identity Posture Management, ITDR, Deception, and Identity Credential Exposure to eliminate application and identity misconfigurations, detect stealthy attacks, and provide visibility into exposed credentials on endpoints to remove lateral movement paths. Below, we break down what each of these capabilities can do. SaaS Security While the move to the cloud and SaaS applications has helped organizations accelerate their digital transformation, it has also created a new set of security challenges. Among these, the lack of visibility into dangerous backdoor connections to SaaS applications is paramount, as it creates supply chain risk — the kind that was exploited in the Microsoft breach. SaaS Security strengthens your security posture by providing visibility into third-party application connections, over-privileged access, and risky permissions, with continuous monitoring for changes that may be malicious in nature. It is a core step in securing your SaaS environment. Identity Posture Management Nine in ten organizations are exposed to Active Directory attacks, and there was a 583% increase in Kerberoasting and similar identity attack techniques in 2023 alone. These are not isolated phenomena. Misconfigurations and excessive permissions in Active Directory and other identity providers are what enable these types of attacks.
For example, an unprivileged account without MFA having the ability to control an application with privileged roles should be flagged, but most security teams do not have appropriate visibility into these types of misconfigurations. Identity Posture Management augments zero trust by giving security teams visibility into identity misconfigurations, policies, and permissions that open up potential attack paths. With periodic assessments, security teams can leverage remediation guidance to revoke permissions, limit policies, and remove misconfigurations. Identity Posture Management also alerts security teams to malicious changes in Active Directory in real time. Deception and ITDR (Identity Threat Detection and Response) As evidenced in the Microsoft breach, attackers used password spraying from a residential proxy and limited the number of tries to evade detection. Traditional threat detection and monitoring approaches just do not work here. Deception, on the other hand, is a pragmatic approach that can detect these attacks with high confidence. Decoy users created in Entra ID can detect such password spraying attacks without false positives or the need to write complex detection rules. ITDR can detect identity-specific attacks like DCSync, DCShadow, and Kerberoasting that would otherwise require detection engineering and significant triage to spot. Identity Credential Exposure While TTPs (Tactics, Techniques, and Procedures) were not reported for credential exploitation, credentials and other sensitive material (like usernames, passwords, authentication tokens, connection strings, etc.) on the endpoint in files, registries, and other caches are something that threat actors like Volt Typhoon, Scattered Spider, BlackBasta, BlackCat, and LockBit are known to have exploited in publicly reported breaches.
Identity Credential Exposure provides security teams with visibility into credential exposure across their endpoint footprint, highlighting blind spots that open up lateral movement and data access paths from the endpoint. Zero trust creates multiple opportunities to detect and stop Midnight Blizzard-style attacks: Problem: Password spraying. Solution: Zscaler Deception. How it works: Decoy user accounts in Entra ID can detect any attempt to sign in using the credentials of the decoy users; any failed or successful attempt is logged to detect attacks like password spraying. MITRE ATT&CK techniques: T1110.003 - Brute Force: Password Spraying; T1078.004 - Valid Accounts: Cloud Accounts. Problem: Existence of apps/SPNs with high privilege. Solution: Zscaler ITDR. How it works: ITDR can surface unprivileged accounts that have a path (e.g., owner rights) to apps with privileged roles. MITRE ATT&CK techniques: N/A. Problem: Creation of apps/SPNs with high privilege. Solution: Zscaler SaaS Security. How it works: Monitors for and alerts when a risky app is added, an app is created by an unverified publisher, or an app hasn’t been used in a while. MITRE ATT&CK techniques: no technique maps directly, but the closest approximations are T1136.003 - Create Account: Cloud Account and T1098.003 - Account Manipulation: Additional Cloud Roles. Problem: Creation/modification of users with high privileges. Solution: Zscaler ITDR. How it works: Monitors for and alerts on unauthorized addition of privileged permissions to principals. MITRE ATT&CK techniques: T1136.003 - Create Account: Cloud Account; T1098.003 - Account Manipulation: Additional Cloud Roles. Problem: Secret addition to apps. Solution: Zscaler SaaS Security. How it works: Flags applications with multiple application secrets. MITRE ATT&CK techniques: T1098.001 - Account Manipulation: Additional Cloud Credentials. Problem: Disabled MFA. Solution: Zscaler ITDR. How it works: Finds accounts where MFA is disabled and alerts when MFA is disabled for any account. MITRE ATT&CK techniques: T1556.006 - Modify Authentication Process: Multi-Factor Authentication. Problem: Consent grants. Solution: Zscaler SaaS Security. How it works: Monitors for the inclusion of high-risk scopes like EWS.full_access_as_app or EWS.AccessAsUser.All to alert on the app’s risk level. MITRE ATT&CK techniques: T1098.003 - Account Manipulation: Additional Cloud Roles; T1098.002 - Account Manipulation: Additional Email Delegate Permissions. What should I do next? Identity is the weakest link. Irrespective of whether you are running a zero trust architecture or not, start by getting visibility into identity misconfigurations and excessive permissions that can allow attackers to grant themselves privileges. We’re offering a complimentary Identity Posture Assessment with Zscaler ITDR. Gain visibility into your SaaS sprawl and find dangerous backdoor connections that can give attackers the ability to establish persistence. Request an assessment with Zscaler SaaS Security. Implement Deception irrespective of what other threat detection measures you have. It is one of the highest-ROI threat detection controls you can implement, augmenting controls like EDR. Zscaler Deception has a comprehensive set of decoys that can deceive and detect sophisticated attackers. If you are a Zscaler customer, contact your account manager for support on these assessments and Deception rollout.
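The deception recommendation above rests on a simple invariant: no legitimate login should ever reference a decoy identity, so any authentication attempt against one is an alert, successful or not. A minimal sketch, with hypothetical decoy names and a hypothetical sign-in event format:

```python
# Decoy identities planted in the directory (names are hypothetical).
DECOY_USERS = {"svc-backup-admin", "finance-archive"}

def decoy_alerts(signin_events):
    """Return every sign-in event that targets a decoy user.
    Both failures and successes alert -- there is no benign reason
    for anyone to attempt these accounts."""
    return [e for e in signin_events if e["user"] in DECOY_USERS]

events = [
    {"user": "alice", "result": "success"},
    {"user": "svc-backup-admin", "result": "failure"},  # spray hit a decoy
]
print(decoy_alerts(events))
# [{'user': 'svc-backup-admin', 'result': 'failure'}]
```

This is why decoys catch low-and-slow sprays that threshold-based rules miss: the detection does not depend on attempt volume at all.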
Tue, 13 Feb 2024 17:10:20 -0800 Amir Moin https://www.zscaler.com/blogs/product-insights/microsoft-midnight-blizzard-and-scourge-identity-attacks Start Your Journey in IT Support: A Beginner's Guide https://www.zscaler.com/blogs/product-insights/start-your-journey-it-support-beginner-s-guide Navigating the nuances of IT troubleshooting can be challenging, especially if you're just starting out. Our ebook, A Beginner’s Guide to Troubleshooting Devices, Networks, and Applications for Service Desk Teams, breaks down the essentials of IT support in a clear, digestible format, making it a great resource for newcomers who are eager to become influential service desk team members. It’s a practical guide even for those with limited time. Whether you're dealing with device issues, network complexities, or application troubleshooting, you’ll find step-by-step instructions that are easy to follow even with minimal IT knowledge. We’ve designed this guide to help you enhance your troubleshooting skills, gain the confidence you need to master IT problem-solving, and become a valuable asset to any service desk team. In this ebook, you'll find: An overview of service desk challenges: Understand the evolving IT landscape and the pivotal role of IT support in maintaining productivity. Step-by-step ticket resolution processes: Learn how to handle and resolve IT issues, enhancing customer satisfaction efficiently. Categorization of IT issues: Familiarize yourself with common problems in devices, networks, and applications, along with strategies to tackle them. A focus on device, networking, and application issues: Gain insights into specific challenges in these areas and learn practical solutions. Strategies to enhance troubleshooting workflows: Discover how to streamline IT support processes and use advanced technologies for better problem-solving. It’s also an excellent tool for service desk managers to expedite team onboarding. 
By equipping your team with this resource, you’ll enable them to handle a wide range of IT issues independently. It reduces the need for escalations and empowers analysts to solve problems efficiently. Ultimately, it can help not only enhance your service desk team’s capabilities, but also significantly shorten the time it takes for new analysts to become proficient. Download the ebook today and transform your service desk team! Fri, 09 Feb 2024 19:14:07 -0800 Rohit Goyal https://www.zscaler.com/blogs/product-insights/start-your-journey-it-support-beginner-s-guide IoT/OT Predictions for 2024 https://www.zscaler.com/blogs/product-insights/iot-ot-predictions-2024 How many smart home devices are you running where you live? Smart speakers, thermostats, cameras, light bulbs, etc. Have you lost count yet? You could be forgiven, because Forbes projects there could be as many as 207 billion of these devices out in the world by the end of this year! By my calculation that works out to more than 25 devices for every human on the planet! In this blog, we’ll cover some of the top IoT/OT predictions for 2024, covering everything from AI at the edge to ransomware. Let’s jump in. IoT/OT devices will see a higher degree of proliferation than ever before Losing count of how many devices you have isn’t just a nuisance in the workplace; it’s a very real problem, particularly from a cybersecurity perspective. The challenge of keeping track of your IoT devices—not to mention keeping them secure—is only going to grow harder with the proliferation of sensors, monitors, point-of-sale, and myriad other devices that are feeding our hunger for data. Fortunately we’ve been working on that. Edge AI will make these devices smarter, faster No predictions blog post for 2024 would be complete without mention of the topic on everyone’s lips: artificial intelligence. Edge AI is already finding its way onto some smartphones, and as the technology advances, its inclusion in IoT/OT is inevitable. 
It will only improve as time passes, increasing the number of autonomous decisions being made without oversight. This can easily be positioned as a benefit, especially in remote locations where humans cannot or do not want to be, but it can also be a risk if mishandled. 5G and other WAN connectivity will evolve to meet the needs of IoT/OT It seems we’ve been hearing about 5G forever, but it’s now starting to truly gain traction in the workplace as a new way to connect devices via the internet with minimal latency and without requiring a local network infrastructure. And it’s not alone—newer versions of the Wi-Fi standard, LPWAN, and even satellite connectivity are also coming to the forefront. This simply means we’re able to deploy sensors and other kinds of IoT devices in more locations, including remote and mobile ones, growing the number of potential use cases for the technology. Digital twins will still serve as proving grounds The accelerated growth in the number of sensors continues to cultivate the use of digital twins: virtual representations of physical systems that help us visualize and improve remote operations. Once again, the proliferation of IoT sensors will provide an even richer and more accurate view of what we’re monitoring. This will enable us to drive resource optimization and efficiency, and pave the way for the adoption of more sustainable systems. Taking all of these developments in aggregate, it’s plain to see that when it comes to IoT and OT growth, ‘we ain’t seen nothing yet’! As with all technological advances, there’s the potential that they will make our lives better and businesses more efficient and profitable. At the same time, it’s vital to ensure security is consideration number one when planning their deployment, especially for devices that talk to the internet. This brings us to the flip side of these predictions: the challenges they pose.
Data privacy The combination of ubiquitous sensors and the rise of AI making use of the data they collect naturally leads us to consider data privacy. Regulations around the world, perhaps most famously the EU’s GDPR, ensure that privacy is a requirement rather than a consideration. The handling of potentially sensitive data is strictly controlled, and its misuse can significantly undermine public confidence, not to mention lead to potentially huge fines. Never is this a greater problem than when such data is leaked or exfiltrated from its owner for potentially nefarious uses. Ransomware on the (continued) rise As the Zscaler ThreatLabz team recently reminded us, ransomware attacks have risen sharply over the past year, over 37% in fact. At the same time, it’s becoming easier than ever to launch such attacks, aided by readily available AI and Ransomware-as-a-Service (RaaS) kits. The firmware problem Remember earlier when I asked you if you knew how many devices you have deployed? Here’s another one for you. Of those devices, how many of them have their firmware up to date? Do you even know what firmware they’re running to be able to establish this? An IoT device may have been secure on the day it shipped, but as our own computers and smartphones have taught us, regular updates are a fact of life in the cat-and-mouse game of vulnerability. A single compromised device could be all an attacker needs to begin their hunt for more damage to cause or data to steal. The ongoing risks presented by legacy security As the cybersecurity industry continues to incessantly point out, traditional security technology practices, many still employed by IT departments around the world, are fundamentally flawed. The ongoing use of firewalls and VPNs opens the door for lateral movement across networks and geographical boundaries, allowing bad actors the opportunity to reach the countless IoT/OT devices in use. 
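The firmware questions above (what firmware is each device running, and is it current?) reduce to joining a device inventory against the latest known build per model. A minimal sketch, assuming a hypothetical inventory format; a plain equality check stands in for real version-ordering logic:

```python
# Latest known firmware per device model (hypothetical models/versions).
LATEST = {"cam-x200": "2.4.1", "sensor-t5": "1.9.0"}

def stale_devices(inventory):
    """Return IDs of devices not running the latest known firmware.
    Models with no known latest version are skipped; equality is a
    simplification -- real comparisons need proper version ordering."""
    return [d["id"] for d in inventory
            if LATEST.get(d["model"]) not in (None, d["firmware"])]

devices = [
    {"id": "cam-01", "model": "cam-x200", "firmware": "2.4.1"},
    {"id": "cam-02", "model": "cam-x200", "firmware": "2.1.0"},  # stale
    {"id": "tmp-07", "model": "sensor-t5", "firmware": "1.9.0"},
]
print(stale_devices(devices))  # ['cam-02']
```

Even a crude report like this surfaces the single out-of-date device that could become an attacker's foothold.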
Once the network is compromised, the bounty for an attacker grows ever larger. All of these challenges and more point to only one conclusion: organizations must adopt a zero trust security architecture to protect the IoT and OT devices they will inevitably deploy this year. Conclusion On the one hand, the predictions for IoT/OT in 2024 are worth getting excited about. Our world is getting smarter, and advances in devices will no doubt help us drive improvements in our personal and professional lives. But to reap those benefits, we must put security first. This doesn’t mean adding more and more roadblocks on the network highways. It means reimagining security and building a framework based on the tenets of zero trust. If you’re new to zero trust and want to learn more, we’d like to welcome you to one of our monthly introductory live webinars where you can explore the many benefits of zero trust and why Zscaler delivers it better than anyone else. Click here and search ‘start here’ to find the next session to sign up for. Tue, 06 Feb 2024 01:00:02 -0800 Simon Tompson https://www.zscaler.com/blogs/product-insights/iot-ot-predictions-2024 Why Firewalls and VPNs Give You a False Sense of Security https://www.zscaler.com/blogs/product-insights/why-firewalls-and-vpns-give-you-false-sense-security Firewalls and VPNs were once hailed as the ultimate solutions for robust enterprise security, but in today’s evolving threat landscape, organizations face a growing number of breaches and vulnerabilities that are outpacing these solutions. Today, the world we work in looks very different from the on-premises era as industries transform how and where work gets done. Firewalls and VPNs are crumbling pillars of a bygone era. They provide a false sense of security because they come with significant weaknesses that put companies at risk—weaknesses that are only realized when embracing digital transformation.
Innovation in generative AI, automation, and IoT/OT technologies across industries is set to continue breaking barriers in 2024. This innovation also opens the door for attackers to automate phishing campaigns, craft evasive malware, reduce the development time of threats using AI, and even sell Ransomware-as-a-Service (RaaS). With the growing severity and number of breaches, there’s a heightened concern that VPN vulnerabilities will leave the door open for attackers. According to a Cybersecurity Insider survey, nearly 50% of organizations experienced VPN-related attacks from July 2022 to July 2023, and 90% of organizations are concerned about attackers exploiting third-party vendors to gain backdoor access into their networks through VPNs. It’s becoming clear that even the largest organizations with advanced firewalls still fall victim to breaches. Curious to know some of the reasons that firewalls and VPNs are letting organizations down? Read more below. A thinner sheet of protection across a larger attack surface VPNs and firewalls extend the network, increasing the attack surface with public IP addresses as they connect more users, devices, locations, and clouds. Users can now work from anywhere with an internet connection, further extending the network. The proliferation of IoT devices has also increased the number of Wi-Fi access points across this extended network, including that seemingly harmless Wi-Fi connected espresso machine needed for a post-lunch boost, creating new attack vectors to exploit. Perimeter-based architecture means more work for IT teams More doesn’t mean better when it comes to firewalls and VPNs. Expanding a perimeter-based security architecture rooted in firewalls and VPNs means more deployments, more overhead costs, more time wasted for IT teams - but less security and less peace of mind. 
Pain also comes in the form of degraded user experience and satisfaction with VPN technology for the entire organization due to backhauling traffic (72% of organizations are slightly to extremely dissatisfied with their VPN experience). Other challenges like the cost and complexity of patch management, security updates, software upgrades, and constantly refreshing aging equipment as an organization grows are enough to exhaust even the largest and most efficient IT teams. The bigger the network, the more operational complexity and time required. VPNs and firewalls can’t effectively guard against today’s threat landscape VPNs and firewalls deployed to protect and defend network access behave a lot like a security guard who sits at the front of a store in order to stop theft. Security guards: Stationed at the front door of a valuable store, tasked with identifying and stopping attacks, but unable to monitor all entrances at the same time. Firewalls and VPNs: Deployed at key access points to an organization’s network, but unable to stop all the threats across every access point. Security guards: Once an attacker gets in, they get access to the entire store. Firewalls and VPNs: Permit lateral threat movement by placing users and entities onto the network. Security guards: 1:few threat detection can’t scale unless you hire a lot of security guards to monitor all entrances. Firewalls and VPNs: Can’t inspect encrypted traffic and enforce real-time security policies at scale. Security guards: Can be slow, tired, expensive to hire, late for their shift, and prone to a number of other issues that allow threats to go undetected and unanswered. Firewalls and VPNs: Suffer from a variety of other challenges related to cost, complexity, operational inefficiency, poor user experiences, organizational rigidity, and more. Much like a lone security guard, VPNs and firewalls can help mitigate some risk, but they can’t keep up with the scale and complexity of the cybercrime of today. Your network is extending exponentially as you digitally transform your organization.
With constant attacks on the horizon and a thinner cover of protection, how many million security guards can you hire? The Zero Trust Exchange delivers on the promise of security Unlike network-centric technologies like VPNs, zero trust architecture minimizes your attack surface and connects users to the apps they need directly—without putting anyone or anything on the network as a whole. Zscaler delivers zero trust with its cloud native platform: the Zscaler Zero Trust Exchange. The Zero Trust Exchange starts with the premise that no user, workload, or device is inherently trusted. The platform brokers a secure connection between a user, workload, or device and an application—over any network, from anywhere—by looking at identity, app policies, and risk. As threats grow more dangerous, we can’t rely on a single security guard to keep everybody out anymore. VPNs and firewalls were designed to make organizations feel secure, but with all the evolving threats of today highlighting the cracks in these technologies, IT and security teams are left with a false sense of security. Truly secure digital transformation can only be delivered by implementing a zero trust architecture. The Zscaler Zero Trust Exchange is the comprehensive cloud platform designed to keep your users, workloads, IoT/OT, and B2B traffic safe in an environment where VPNs and firewalls can’t. If you’d like to learn more, join our webinar that serves as an introduction to zero trust and provides entry-level information about the topic. Or, if you’d like to go a level deeper, consider registering for one of our interactive whiteboard workshops for free. Mon, 05 Feb 2024 14:26:59 -0800 Sid Bhatia https://www.zscaler.com/blogs/product-insights/why-firewalls-and-vpns-give-you-false-sense-security AI Detections Across the Attack Chain https://www.zscaler.com/blogs/product-insights/ai-detections-across-attack-chain Organizations face a constant barrage of cyberthreats.
To combat these sophisticated attacks, Zscaler delivers layered security protections for more effective security postures across the four key stages of an attack: attack surface discovery, compromise, lateral movement, and data exfiltration. Heading into 2024, with all the buzz surrounding artificial intelligence (AI) over the past year, we are asked daily by prospects and customers, "Zscaler, how do you use AI to keep us safer?" For more on where we see AI and security headed in 2024, please see the blog from our founder, Jay Chaudhry. In this blog, we will explore a handful of examples of Zscaler AI use across key stages of an attack—demonstrating how it can detect and stop threats, protect data, and make teams more efficient. Truth be told, we began to add AI detections into our portfolio some years ago to further bolster other detection methods, and it has paid off. Stage 1: Attack surface discovery While we will spend the better part of this blog discussing AI in other areas, the first stage of an attack involves attackers probing attack surfaces to identify potential weaknesses to be exploited. These are often things like VPN/firewall misconfigurations or vulnerabilities, or unpatched servers. We wholeheartedly suggest considering ways to cloak your currently discoverable applications behind Zscaler to immediately reduce your attack surface and your risk of successful attacks. Stage 2: Risk of compromise During the compromise stage, attackers exploit vulnerabilities to gain unauthorized access to employee systems or applications. Zscaler's AI-powered products help reduce the risk of compromise while prioritizing productivity. AI-powered phishing/C2 prevention: We better detect and stop credential theft and browser exploitation from phishing pages with real-time analytics on threat intelligence from 300 trillion daily signals, ThreatLabz research, and dynamic browser isolation.
This means our AI makes us even more efficient in detecting new phishing or C2 domains. File-based attacks: We use AI in our cloud sandbox to ensure there is no tradeoff between security and productivity. Historically, in the case of the sandbox, a new file arrives and users must wait as it is analyzed, interrupting productivity. Our AI Instant Verdict in the sandbox prevents patient-zero infections by instantly blocking high-confidence malicious files, eliminating the need to wait for analysis of files that are very likely malicious. Our model fidelity is a result of years of ongoing training, analysis, and tuning iterations based on over 550 million file samples. AI to block web threats: Additionally, Zscaler's AI-powered browser isolation blocks zero-day threats while ensuring employees can access the right sites to get their jobs done. URL filtering is effective in keeping users safe, but given that sites are either allowed or blocked, sometimes sites that are blocked are safe and needed for work. This is a productivity drain as users cannot access legitimate sites for work, resulting in unnecessary helpdesk tickets. AI Smart Isolation determines when a site might be risky and opens it in isolation. This means organizations don't have to overblock sites to support productivity and can also maintain a strong web security posture. Stage 3: Lateral movement Once inside an organization, attackers attempt to move laterally to gain access to sensitive data. Zscaler's AI innovation reduces the potential blast radius by employing automated app segmentation, based on analysis of user access patterns, to limit lateral movement risk. For instance, if we see only 250 of 4,500 employees accessing a finance application, we will use this data to automatically create an app segment that limits access to only those 250 employees, thus reducing the potential blast radius and lateral movement opportunity by ~94 percent.
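The segmentation arithmetic above is worth making explicit: restricting an app to its observed users removes a lateral path for everyone else. This short sketch simply reproduces the ~94 percent figure from the 250-of-4,500 example; it is illustrative math, not Zscaler's segmentation algorithm:

```python
def blast_radius_reduction(observed_users, total_users):
    """Fraction of accounts that lose a lateral-movement path to an app
    once access is limited to the users actually observed using it."""
    return 1 - observed_users / total_users

# 250 of 4,500 employees actually use the finance app.
pct = blast_radius_reduction(250, 4500) * 100
print(f"{pct:.1f}%")  # 94.4%
```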
Stage 4: Data exfiltration The final stage of an attack involves the unauthorized exfiltration of sensitive data from a company. Zscaler uses AI to help companies deploy data protections faster. With AI-driven data discovery, organizations no longer struggle with the time-consuming tasks of data fingerprinting and classification that delay deployment. Innovative data discovery automatically finds and classifies all data out of the box. This means sensitive information is classified immediately, so it can be protected right away from potential exfiltration and data breaches. Zscaler's AI-driven security products provide organizations with robust protection across the four key stages of an attack. We also rely on AI to deliver cybersecurity maturity assessments as part of our Risk360 cyber risk management product. Rest assured, we are busy thinking, building, and adding new AI capabilities every day, so there is more to come, as AI-powered security is becoming indispensable in safeguarding organizations against cyberthreats. Fri, 26 Jan 2024 08:00:01 -0800 Dan Gould https://www.zscaler.com/blogs/product-insights/ai-detections-across-attack-chain Cloud Workloads: Cybersecurity Predictions for 2024 https://www.zscaler.com/blogs/product-insights/cloud-workloads-cybersecurity-predictions-2024 The year 2023 witnessed explosive transitions in the cloud security market, with every aspect of the ecosystem—vendors, products, and infrastructure—undergoing significant change. Looking ahead to 2024, cybersecurity for workloads (VMs, containers, services) in the public cloud will continue to evolve as customers strike a balance between aggressive cloud adoption and compliance with corporate security needs. Within this, CIOs and CISOs will challenge their teams to build a security platform that consolidates point products, supports multiple clouds (AWS, Azure, and GCP), and automates to scale security operations.
As a result, we will see zero trust architecture leading the way in securing cloud workloads, real-time data protection, and centralized policy enforcement. Here are the top 5 trends we believe will unfold in 2024. 1. Lateral threat movement into clouds from on-premises environments will increase The cloud is where organizations' most valuable assets—applications and data—are heading. Attackers are employing innovative techniques that involve compromising an organization's on-premises network and laterally moving to its cloud domain. These techniques are seeing increased popularity with threat actors as inconsistencies persist between on-premises and public cloud environments. An attack detailed by the Microsoft Security research team (source: MERCURY and DEV-1084: Destructive attack on hybrid environment) exemplifies this trend. The threat actors first compromised two privileged accounts, then leveraged them to manipulate the Azure Active Directory (Azure AD) Connect agent. Two weeks prior to deploying ransomware, the threat actors used a compromised, highly privileged account to gain access to the device where the Azure AD Connect agent was installed. We assess with high confidence that the threat actors then utilized the AADInternals tool to extract plaintext credentials of a privileged Azure AD account. These credentials were subsequently used to pivot from the targeted on-premises environment to the targeted Azure AD environment. Fig. On-premises compromise pivots to the public cloud 2. Serverless services will significantly widen the attack surface Serverless functions offer tremendous simplicity, allowing developers to focus solely on writing and deploying code without worrying about its underlying infrastructure. The adoption of microservices-based architectures will continue to drive the use of serverless functions due to their reusability as well as their ability to speed up application development. 
However, there is a significant security risk associated with serverless functions, as they interact with various input and event sources, often requiring HTTP or API calls to trigger actions. They also utilize cloud resources such as blob stores or block storage, employ queues to sequence interactions with other functions, and connect with devices. These touchpoints increase the attack surface, as many of them involve untrusted message formats and lack proper monitoring or auditing for standard application layer protection. Fig. Serverless functions can access the full stack of additional services, creating a wide attack surface 3. Identity-based security policies will be redefined as they pertain to public cloud protection As workloads start to mushroom in public clouds, each CSP will bring its own disparate identity capabilities. Unlike with users, there is no one ring (Active Directory) to rule them all. IT shops will continue to deal with disconnected identity profiles across on-premises, private cloud, and public cloud for workloads. That said, in 2024, security teams will continue to deal with multiple workload attributes to write their security policies, and higher-level abstractions (like user-defined tags) will start to gain wider adoption. This will drive consistency between cybersecurity and other resource management functions (billing, access controls, authentication, reporting) for cloud workloads. Fig. User-defined tags will be used to implement zero trust architecture to secure workloads in the cloud 4. Enterprises will evaluate and deploy cloud-delivered security platforms that support multiple public clouds The burden of staffing teams and building architectures specialized to secure each public cloud will push security teams to seek out the solutions that work best for them.
Enterprises will evaluate tools from CSPs such as cloud firewall point solutions, but will increasingly look for architectures that can centralize their cloud security policy definitions, enforcements, and remediations. Only when cyber prevention is delivered from one central platform can cyber defense be applied to all workloads—not just a select few. 5. CIOs' willingness to hedge their bets across AWS, Azure, and GCP will dictate the implementation of security tools that can span multiple clouds. When it comes to vendor strategy, CIOs are looking to diversify their cloud infrastructure portfolios. Doing so allows them to reduce reliance on a single vendor, integrate infrastructure inherited from mergers and acquisitions, and leverage best-of-breed services from different public clouds, such as Google Cloud BigQuery for data analytics, AWS for mobile apps, and Oracle Cloud for ERP. Fig. AWS shared responsibility framework for protecting cloud resources. [SOURCE] Every cloud vendor preaches the notion of “shared responsibility” when it comes to cybersecurity, placing the onus on the customer to implement a security infrastructure for their cloud resources. Savvy IT shops will ensure that they pick a cybersecurity platform that can support multiple public cloud environments. Customers can’t possibly entertain the idea of separate security tools for each public cloud—rather, they will standardize on one platform for all their needs. Deploying workloads in the public cloud is not a new trend in the corporate world, but the topic of cloud workload security continues to get hotter and hotter. While there are no clear answers yet, there are a few indications of where customers will navigate in 2024. Namely, zero trust, as it provides immediate benefit in the near term and a solid framework for cloud workload security into the future. Want to learn more about zero trust for cloud workloads? Click here for more Zscaler perspectives.
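To make the tag-based policy idea from prediction 3 concrete, here is a minimal sketch of matching workload flows by user-defined tags instead of CSP-specific identities. The policy format and tag names are invented for illustration and do not reflect any vendor's actual API:

```python
# Hypothetical sketch: zero trust policy matching on user-defined tags,
# so the same rule applies to workloads in AWS, Azure, or GCP alike.

def allowed(src_tags, dst_tags, policies):
    """Permit a flow only if some policy matches both endpoints' tags."""
    return any(p["src"] <= src_tags and p["dst"] <= dst_tags for p in policies)

policies = [
    # Prod web tier may talk to the prod database tier, regardless of cloud.
    {"src": {"env:prod", "role:web"}, "dst": {"env:prod", "role:db"}},
]

# Cross-cloud flow allowed: the CSP-specific tag is irrelevant to the rule.
assert allowed({"env:prod", "role:web", "cloud:aws"},
               {"env:prod", "role:db", "cloud:azure"}, policies)
# Dev-to-prod flow denied: no policy covers it.
assert not allowed({"env:dev", "role:web"}, {"env:prod", "role:db"}, policies)
```

Because the rule is written against tags rather than cloud-native identities or IP ranges, the same policy definition can be enforced consistently across all three public clouds.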
This blog is part of a series of blogs that provide forward-facing statements into access and security in 2024. The next blog in this series covers Zero Trust predictions. Forward-Looking Statements This blog contains forward-looking statements that are based on our management's beliefs and assumptions and on information currently available to our management. The words "believe," "may," "will," "potentially," "estimate," "continue," "anticipate," "intend," "could," "would," "project," "plan," "expect," and similar expressions that convey uncertainty of future events or outcomes are intended to identify forward-looking statements. These forward-looking statements include, but are not limited to, statements concerning: predictions about the state of the cyber security industry in calendar year 2024 and our ability to capitalize on such market opportunities; anticipated benefits and increased market adoption of “as-a-service models” and Zero Trust architecture to combat cyberthreats; and beliefs about the ability of AI and machine learning to reduce detection and remediation response times as well as proactively identify and stop cyberthreats. These forward-looking statements are subject to the safe harbor provisions created by the Private Securities Litigation Reform Act of 1995. These forward-looking statements are subject to a number of risks, uncertainties and assumptions, and a significant number of factors could cause actual results to differ materially from statements made in this blog, including, but not limited to, security risks and developments unknown to Zscaler at the time of this blog and the assumptions underlying our predictions regarding the cyber security industry in calendar year 2024. 
Risks and uncertainties specific to the Zscaler business are set forth in our most recent Quarterly Report on Form 10-Q filed with the Securities and Exchange Commission (“SEC”) on December 7, 2022, which is available on our website at ir.zscaler.com and on the SEC's website at www.sec.gov. Any forward-looking statements in this release are based on the limited information currently available to Zscaler as of the date hereof, which is subject to change, and Zscaler does not undertake to update any forward-looking statements made in this blog, even if new information becomes available in the future, except as required by law. Thu, 25 Jan 2024 08:00:02 -0800 Sakthi Chandra https://www.zscaler.com/blogs/product-insights/cloud-workloads-cybersecurity-predictions-2024 Zscaler Academy: Reflecting on 2023 and Soaring into 2024 https://www.zscaler.com/blogs/product-insights/zscaler-academy-reflecting-2023-and-soaring-2024 2023 was a year of transformation and innovation for Zscaler Academy. We reimagined cybersecurity education, tailoring it to the evolving landscape of zero trust security. As we begin 2024, it's time to reflect on what we've achieved and show you what's on the horizon. 2023: Building the Pillars of Zero Trust Learning New Training and Offerings: We revamped our curriculum, introducing the Zscaler for Users learning path and specializations in Data Protection, Cyberthreat Protection, and Workloads. Hands-on labs, live virtual training, and engaging workshops became the norm, bridging the gap between theory and practice. New Approach: We embraced a learner-centric approach, catering to diverse learning styles and preferences. Self-paced e-learning, interactive webinars, and immersive workshops offered flexibility and depth, empowering individuals at all levels. Certification: We evolved our certification program, aligning it with the latest zero trust advancements, and introduced an industry-standard third-party proctored certification exam.
The Zscaler Digital Transformation Administrator (ZDTA) certification exam is the final step in the Zscaler for Users - Essentials learning path, and supports the journey of any security professional seeking to validate their understanding of deploying and implementing the Zscaler Zero Trust Exchange platform. Roadshows and Virtual Training: We took Zscaler Academy on the road, hosting virtual and in-person events like Zscaler Training Roadshows and Virtual Training workshops around the globe. These interactive sessions fostered connections, knowledge sharing, and a sense of community among Zscaler users and partners. A Year of Bridging the Cybersecurity Skills Gap Customers: We empowered customers to maximize the value of their Zscaler investments. Our training equipped administrators, security professionals, and end users with the skills to confidently navigate the Zero Trust Exchange. Partners: We supported our partners in their growth journey. The Partner Academy provided the knowledge and expertise needed to build successful Zscaler practices and deliver exceptional customer service. Workforce of the Future: We invested in the future by inspiring and equipping the next generation of cybersecurity professionals. Through the Zscaler Academic Alliance Program, our initiatives are contributing to closing the cybersecurity skills gap, ensuring a talent pool prepared for the zero trust era. The New Charter Era: What Awaits in 2024 Micro-Learning and Micro-Credentials: We're embracing bite-sized learning, offering micro-credentials for specific skills. This agile approach will allow you to stay ahead of the curve and acquire targeted knowledge on the go. New Certifications: We'll be expanding our certification portfolio, introducing new paths that validate expertise in specific Zscaler solutions and emerging security domains.
More Training Courses and Events: We'll continue to diversify our offerings, adding new training courses (like Ransomware Protection, Deception, Troubleshooting, and more), live workshops, and virtual events. Expect deeper dives into specific technologies, industry trends, and best practices. Personalized Learning: We're committed to personalization, utilizing data and insights to tailor learning recommendations and experiences to your individual needs and goals. The Future Is Zero Trust, and Zscaler Academy Is Your Guide As we step into 2024, Zscaler Academy remains your trusted partner on your zero trust journey. We'll continue to innovate, adapt, and empower you with the knowledge and skills to thrive in the dynamic security landscape. Stay tuned for exciting announcements and updates! We're dedicated to making Zscaler Academy the leading destination for zero trust education, ensuring you're always prepared to secure your future in the age of zero trust. Join us in 2024! Let's keep learning, growing, and building a safer digital world together. Wed, 24 Jan 2024 08:00:01 -0800 Prameet Chhabra https://www.zscaler.com/blogs/product-insights/zscaler-academy-reflecting-2023-and-soaring-2024 Navigating the Intersection of Cybersecurity and AI: Key Predictions for 2024 https://www.zscaler.com/blogs/product-insights/2024-predictions This article also appeared in VentureBeat. Anticipating the future is a complex endeavor; however, I'm here to offer insights into potential trends that could shape the ever-evolving cybersecurity landscape in 2024. We engage with over 40% of Fortune 500 companies, and I personally have conversations with thousands of CXOs each year, which provides me with a unique view into the possibilities that might impact the security landscape. Let's explore these potential trends and see what the future of cybersecurity might look like. 1.
Generative AI will increase ransomware attacks: The utilization of GenAI technologies will expedite the identification of vulnerable targets, enabling cybercriminals to launch ransomware attacks with greater ease and sophistication. Before, when launching a cyberattack, hackers had to spend time identifying an organization's attack surface and the potential vulnerabilities that could be exploited in internet-facing applications and services. However, with the advent of LLMs, the landscape has dramatically shifted. Now, a hacker can simply ask a straightforward question like, “Show me vulnerabilities for all firewalls for [a given organization] in a table format.” And the next command could be, “Build me exploit code for this firewall,” and the task at hand becomes significantly easier. GenAI can also help identify vulnerabilities among your supply chain partners, along with the optimal attack paths into your network. It's important to recognize that even if you strengthen your own estate, vulnerabilities may still exist through other entry points, potentially making them the easiest targets for attacks. The combination of social engineering exploits and GenAI technology will result in a surge of cyber breaches, characterized by enhanced quality, diversity, and quantity. This will create a feedback loop that facilitates iterative improvements, making these breaches even more sophisticated and challenging to mitigate. Defense Strategy: Using the Zscaler Zero Trust Exchange, customers can make their applications invisible to potential attackers, reducing the attack surface. If you can’t be reached, you can’t be breached. 2. AI will be used to fight AI: We will be witnessing a promising development where AI is harnessed by security providers to combat the ever-evolving nature of AI-driven attacks. Enterprises generate a vast amount of logs containing signals that could indicate potential attacks.
However, isolating these signals in a timely manner has been challenging due to signal-to-noise issues. With the advent of GenAI technologies, we now have the capability to identify potential avenues of attack more effectively. By leveraging GenAI, we can enhance triage and protection measures by understanding which vulnerabilities hackers are likely to exploit. Additionally, this technology enables us to detect attackers and exploits in near real time. As a result, cloud security providers will develop AI-powered tools to proactively prevent potential areas of exploitation. In addition, with the advent of AI and ML tools, we have the capability to predict and identify the vulnerabilities in an organization that are most likely to be exploited. This will help reduce cyber breaches. Defense Strategy: Zscaler is building tools, such as breach predictors powered by communication logs, that can predict and prevent breaches. Before any breach happens, there is always reconnaissance activity. Because Zscaler sits in the middle of all communications, we have visibility into potential threats. This allows us to understand if a hacker has infiltrated an enterprise, and if so, suggest steps to prevent a breach. 3. The rise of firewall-free enterprises: Organizations are coming to the realization that despite significant investments in firewalls and VPNs, their security posture remains vulnerable. They understand that a true Zero Trust architecture has to be implemented. Realizing the inherent security risks and false sense of security provided by firewall-based approaches, customers will move away from firewalls and VPNs as their main security technologies. Over the next few years, firewalls will become archaic like mainframes. Organizations are awakening to the need for a more comprehensive and effective cybersecurity strategy.
The coming years will witness a significant acceleration in the adoption and implementation of Zero Trust architecture and the rise of “firewall-free enterprises.” This transformative shift represents a crucial inflection point in the cybersecurity landscape. Defense Strategy: This shift reflects a changing approach to cybersecurity, driven by the understanding that a firewall-centric approach is ineffective in safeguarding against evolving threats, prompting customers to seek true Zscaler Zero Trust solutions. 4. Broader adoption of Zero Trust segmentation: The number one cause of ransomware attacks is a flat network. Once hackers are on the network, they can easily move laterally, find high-value assets, encrypt them, and demand ransom. Organizations have been trying to implement network-based segmentation to eliminate lateral movement. I have talked to hundreds of CISOs but have yet to meet one who has successfully completed network-based segmentation or microsegmentation. It is too cumbersome to implement and operationalize. In 2023, hundreds of enterprises successfully implemented the initial phase of Zero Trust architecture. Moving into 2024, we anticipate a broader adoption of Zero Trust-based segmentation. This approach simplifies implementation: you don’t need to create network segments; instead, you use Zero Trust technology to connect a certain group of users or applications to a certain group of applications. Defense Strategy: Zscaler offers Zero Trust segmentation in two areas: User-to-application segmentation Application-to-application segmentation 5. Zero Trust SD-WAN will start to replace traditional SD-WAN: SD-WAN has helped enterprises save money by using the internet—a cheaper transport. But SD-WANs have not improved security, as they allow lateral threat movement. Zero Trust SD-WAN doesn’t put users on the network; it simply makes a point-to-point connection between users and applications, hence eliminating lateral threat movement.
This protects enterprises from ransomware attacks. Zero Trust SD-WAN will emerge as an important technology to provide highly reliable, highly secure, and seamless connectivity. Zero Trust SD-WAN also reduces overhead, as enterprises no longer have to worry about managing route tables. Zero Trust SD-WAN makes every branch office like an internet cafe or a coffee shop: your employees can access any application without you having to extend your network to every branch office. Defense Strategy: Zscaler offers a Zero Trust SD-WAN solution that is easy to implement with a plug-and-play appliance. 6. SEC regulations will drive far more active participation of Board members and CFOs in cyber risk reduction: Recognizing the damage that cyber breaches can cause to businesses, these key stakeholders will more actively engage in cybersecurity initiatives and decision-making processes. The increased involvement of CFOs and Boards of Directors in cybersecurity underscores the recognition that it is not solely a CIO or CISO’s responsibility, but a vital element of overall organizational resilience and risk management. Newly introduced SEC disclosure requirements will serve as a catalyst for boards to become more engaged in driving cybersecurity initiatives in their companies. More companies will require at least one board member with a strong background in cybersecurity. Defense Strategy: Through Zscaler Risk360, we provide a holistic risk score for an organization, highlighting the factors contributing to your cyber risk and comparing your risk score with your peers', with trends over time. In addition, Zscaler has added SEC disclosure reports generated by GenAI, leveraging the contributing factors used to compute your company's risk score.
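As a rough illustration of what a holistic, factor-based risk score like the one described for Risk360 might look like, here is a toy model. The factor names, weights, and formula are invented for this example and are not Zscaler's actual scoring model:

```python
# Toy model (invented weights/factors): a weighted risk score over the four
# attack stages, compared against a peer-average profile.

WEIGHTS = {"attack_surface": 0.4, "compromise": 0.3,
           "lateral_movement": 0.2, "data_loss": 0.1}

def risk_score(factors):
    """Factors are normalized 0-100; higher means more risk."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

org   = {"attack_surface": 70, "compromise": 40, "lateral_movement": 55, "data_loss": 20}
peers = {"attack_surface": 60, "compromise": 50, "lateral_movement": 50, "data_loss": 30}

delta = risk_score(org) - risk_score(peers)   # positive: riskier than peers
assert abs(risk_score(org) - 53.0) < 1e-9 and abs(delta - 1.0) < 1e-9
```

Tracking such a score and its per-factor contributions over time is what lets a board compare its organization's posture against peers, as described above.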
Mon, 22 Jan 2024 15:31:59 -0800 Jay Chaudhry https://www.zscaler.com/blogs/product-insights/2024-predictions Bringing Zero Trust to Branches https://www.zscaler.com/blogs/product-insights/bringing-zero-trust-branches Over the past five years, the tech industry has undergone significant transformation. Among the myriad changes in how organizations approach technology to gain a competitive edge, three primary shifts have had profound impacts: Migration of apps from traditional data centers to the cloud (the rise of SaaS) Hybrid workforce models, where employees operate from both regional offices and remote locations Proliferation of IoT/OT devices in factories and branch offices Many enterprises are finding that limitations in their WAN infrastructure and gaps in network security impede their ability to deal with these three shifts. Traditional SD-WANs expand the attack surface and allow lateral threat movement. They connect various sites through site-to-site VPNs or routed overlays, establishing implicit trust that grants unrestricted access to critical business resources, even for compromised entities. Moreover, coarse-grained segmentation policies allow threats to move easily within the network. With the rising number of threats and the adoption of IoT/OT devices, which are often invisible to the network, organizations need to ensure their WAN infrastructure adheres to zero trust principles. Traditional WAN infrastructure consists of multiple point products such as routers, firewalls, and VPNs, which can add up to substantial management challenges. Hence, organizations undertaking branch transformation need a solution that follows a “thin branch, thick cloud” model to reduce management complexities. Zscaler Zero Trust SD-WAN securely connects branches, factories, and data centers without the complexity of VPNs, ensuring zero trust access for users, IoT/OT devices, and servers. 
Using Zero Trust SD-WAN, enterprises can build a thin branch that eliminates unnecessary devices with a simple plug-and-play appliance that can be deployed using only an internet connection. Figure 1: Traditional SD-WAN vs. Zero Trust SD-WAN Zero Trust SD-WAN eliminates business risk Unlike traditional SD-WANs that extend the network to remote sites, clouds, and third parties, Zero Trust SD-WAN connects users, IoT/OT devices, and applications to resources they are entitled to access, without using routed overlays. This creates a zero trust network that eliminates the attack surface and prevents lateral threat movement. Since all traffic is proxied through the Zscaler Zero Trust Exchange, there are no publicly exposed IP addresses or VPN ports for hackers to compromise. A recent Zscaler ThreatLabz report revealed a 400% increase in IoT and OT-based malware attacks since 2022, underscoring the need for organizations to have greater visibility and security around IoT/OT devices deployed in their networks. Often overlooked and invisible, IoT/OT is not adequately addressed when administrators design security policies for branch users, but as the ThreatLabz report shows, these devices represent a significant threat vector. Zero Trust SD-WAN provides complete device visibility, giving organizations a detailed view of all their IoT/OT devices as well as insights into the applications with which they communicate. Moreover, administrators no longer need separate policies for users and devices since the same policies can be applied consistently to both. Figure 2: IoT device discovery and classification Many organizations have server-to-client communication use cases. For instance, a print server in a data center may need to issue a print command to a remote printer in a branch location. With Zero Trust SD-WAN, organizations don’t have to worry about exposed service ports that a hacker could exploit to breach the network.
All branch communication is proxied through Zero Trust Exchange, which stitches the connection between the print server and the remote printer. Extending zero trust security to all entities, such as users, IoT/OT devices, and servers, enhances overall security. Zero Trust SD-WAN replaces site-to-site VPNs Traditional SD-WANs connect sites (e.g., branches, factories, data centers) using IPsec VPN tunnels. Routed overlays allow any device to communicate with any other device, server, or app, ensuring reachability between users, devices, and apps—reachability that hackers can exploit to easily access other resources in the network. With Zero Trust SD-WAN, branch traffic is forwarded directly to the Zero Trust Exchange, where Zscaler Internet Access (ZIA) or Zscaler Private Access (ZPA) policies can be applied for full security inspection and identity-based access control. Zero Trust SD-WAN dramatically simplifies branch communication with a zero trust network overlay that allows for flexible forwarding and simple policy management. Figure 3: Site-to-site VPN replacement Zero Trust SD-WAN simplifies mergers and acquisitions Combining two separate businesses into one entity can provide enhanced efficiency, increased market presence, and other advantages. However, integrating new systems and routing domains into the existing environment can be a slow, painful process that takes many months to complete. With Zscaler, the entire M&A integration process can be far simpler and faster. Zero Trust SD-WAN communicates only to the Zero Trust Exchange, eliminating the need to merge routing domains between existing and acquired sites. By deploying Zero Trust SD-WAN at an acquired site, enterprises can steer traffic to Zero Trust Exchange, which brokers the connection from the other end for secure communication. This results in successful day-one operation and onboarding of new sites in a matter of just weeks, or even days. Figure 4: M&A integration How does this all work? 
Apps defined in the ZPA portal are assigned a synthetic IP address. Once a user initiates a connection to the new app using the synthetic IP, Zero Trust SD-WAN at that branch site sends traffic to the Zero Trust Exchange. In the acquired site, where the app is hosted, the App Connector (built into Zero Trust SD-WAN) initiates an inside-out connection to the Zero Trust Exchange. The Zero Trust Exchange brokers the connection from the user to the app. Conclusion Organizations need a networking solution that protects them from today’s growing cyberthreats, but traditional SD-WANs increase security risk and networking complexity. In contrast, Zero Trust SD-WAN brings zero trust principles to WANs by securely connecting users, IoT/OT devices, and servers. To enhance the security of branches, factories, and data centers, organizations must transition from traditional flat networks with implicit trust to zero trust networks. Adopting Zero Trust SD-WAN offers numerous benefits, such as mitigating cyber risk, lowering cost and complexity, enhancing business agility, and implementing a single-vendor SASE solution. For more information, please visit the Zscaler Zero Trust SD-WAN webpage. Mon, 22 Jan 2024 17:50:01 -0800 Karan Dagar https://www.zscaler.com/blogs/product-insights/bringing-zero-trust-branches Introducing Zero Trust SASE https://www.zscaler.com/blogs/product-insights/introducing-zero-trust-sase The evolution of work and IT Workplaces are rapidly evolving and hybrid work has become the new normal. Legacy network architectures were designed around a static model of work where users were in fixed locations. Today’s branches look very different — with hoteling desks, co-working spaces, a mobile workforce, and internet-centric connectivity. As branches evolve, so too must the networking infrastructure used to connect them — one size no longer fits all. 
Legacy networks introduce risk & complexity The traditional model of connectivity is very network-centric — users, devices and servers connect to a network, and the network assures access to every other device on the same network. This model has too much implicit trust — any device can talk to any other device or server by default, enabling the lateral movement of threats and attacks such as ransomware. Network-centric connectivity also requires extending the network into public clouds and third parties using VPN tunnels, which can expand your attack surface into infrastructure that you do not directly control. Along with the proliferation of IoT devices in organizations, attack surface management becomes increasingly complex. Relying on routed overlays and traditional routing protocols also introduces additional complexity into networks. Traditional SD-WAN is not zero trust SD-WANs also take a network-centric approach and build routed overlays using site-to-site VPN tunnels and routing protocols. While they allow organizations to move away from expensive MPLS networks and solve many operational challenges, they introduce security risks by facilitating lateral movement. Controlling these risks requires network-based segmentation, which often necessitates additional firewall appliances at the branch and complex network-based security policies. Zero trust is a cybersecurity strategy that assumes every entity is untrusted by default — and only allows access to certain resources based on identity, context, and posture. This is fundamentally opposed to the way traditional networks work. We could limit the trust inherent in traditional networks through techniques like segmentation and admission control, but these approaches can dramatically increase complexity. It’s time for a new approach — built on zero trust principles. Introducing Zero Trust SD-WAN I previously announced our Branch Connector appliances for connecting branches through the Zero Trust Exchange. 
Today I am excited to announce Zero Trust SD-WAN — an industry-first zero trust solution for securely connecting branches, factories, hospitals, retail locations, and data centers — that eliminates the security risks of traditional SD-WANs. Using lightweight virtual machines or plug & play appliances coupled with the Zscaler Zero Trust Exchange, Zero Trust SD-WAN provides secure inbound and outbound zero trust networking for locations, without overlay routing, additional firewall appliances or policy inconsistencies. Fully integrated with our industry-leading SSE platform, Zero Trust SD-WAN enables robust security and simplifies branch network management. We are also pleased to announce general availability of our Z-Connector plug & play appliances — ZT 400, ZT 600 and ZT 800. Along with a lightweight virtual machine form factor, these appliances can support a wide range of customer requirements, ranging from 200 Mbps to multi-gigabit. With pre-provisioned config templates and zero touch provisioning, deploying a new branch can be as simple as plugging in an internet connection. New gateway capabilities The Zero Trust SD-WAN solution can be deployed in two modes: as a Forwarder, or as a Gateway. The Forwarder mode enables customers with existing WAN solutions to implement a zero trust overlay by deploying Z-connector appliances next to their existing routers and switches. Relevant traffic can be directed to the Z-connector appliances through conditional DNS resolution or policy-based routing. The Gateway mode terminates the ISP connection directly on the Z-Connector appliance, eliminating the need for additional routers or firewalls. The Z-connector acts as the default gateway for the site, forwarding all traffic to the Zscaler Zero Trust Exchange which provides secure connectivity to internet, SaaS, and private applications. 
Gateway mode supports rich WAN and LAN management capabilities, including dual ISP termination, app-aware path selection with ISP monitoring, high availability (active-active, active-passive), multiple LAN subnets, local firewall, integrated DHCP server, and DNS gateway. Zero Trust SD-WAN gateway capabilities will be available starting February 2024. Zero Trust SD-WAN reduces complexity and risk Zero Trust SD-WAN solves many critical challenges for our customers. Here are a few key use cases: Replace site-to-site VPNs: Avoid complex VPN configurations and route table management, and eliminate the risk of lateral threat movement. Accelerate M&A integrations: Connect users to apps across organizations without merging routing domains or deploying NAT gateways. Reduce integration time from months to days. Secure OT connectivity: Eliminate VPNs and exposed ports for vendor remote access to OT resources. IoT discovery & classification: Discover and secure IoT devices on the network with AI-powered classification engines. To learn more about these use cases, read our blog on bringing zero trust to branches. Industry-first SASE platform built on zero trust Secure Access Service Edge (SASE) is a term coined by Gartner to describe the convergence of networking and security to align with modern IT infrastructure and working patterns. While SASE embraces zero trust principles, many SASE solutions in the market simply bolt on traditional SD-WAN to an SSE service, with zero trust principles limited to user-to-app access. This still leaves sites exposed with too much implicit trust. With the introduction of Zero Trust SD-WAN, Zscaler is proud to deliver the industry’s first single-vendor SASE platform built on zero trust and AI. Zero Trust SASE enables organizations to extend zero trust beyond just users, to branches, factories, and data centers. 
Building on the strengths of our SSE platform — the Zero Trust Exchange — Zero Trust SASE reduces cost and complexity by eliminating traditional security and networking solutions. Transform your branch networks Legacy WAN architectures no longer work. The industry-wide disruptions around hybrid work and zero trust security present a unique opportunity to rethink and transform your network architecture. Zero Trust SD-WAN and SASE take a radically different approach to connecting users, devices, and apps without the risk of lateral threat movement. Visit our SASE resources page for additional product information, white papers, and videos, and read more about our Zero Trust SD-WAN capabilities here. Mon, 22 Jan 2024 17:50:01 -0800 Naresh Kumar https://www.zscaler.com/blogs/product-insights/introducing-zero-trust-sase How Zscaler’s Dynamic User Risk Scoring Works https://www.zscaler.com/blogs/product-insights/how-zscaler-s-dynamic-user-risk-scoring-works Access control policies aim to balance security and end user productivity, yet often fall short due to their static nature and limited ability to adapt to evolving threats. But what if there was an easy way to automate access control per user, considering individual risk factors and staying up to date with the latest advanced attacks? Zscaler User Risk Scoring takes dynamic access control and risk visibility to the next level, using records of previous behavior to determine future risk. Similar to how insurance companies use driving records to determine car insurance rates, or banks use credit scores to assess loan eligibility, user risk scoring leverages previous behavior records to assign risk scores to individual users. This allows organizations to set dynamic access control policies based on various risk factors, accounting for the latest threat intelligence. User risk scoring empowers organizations to restrict access to sensitive applications for users with a high risk score until their risk profile improves. 
By considering factors such as past victimization by cyberattacks, near-misses with malicious content, or engagement in behavior that could lead to a breach, organizations can ensure that access control policies are tailored to individual risk profiles. Organizations can set user risk thresholds to allow or deny access to both private and public applications. How does user risk scoring work? User risk scoring plays a crucial role across the Zscaler platform, driving policies for URL filtering, firewall rules, data loss prevention (DLP), browser isolation, and Zscaler Private Access (ZPA); and feeding into overall risk visibility in Zscaler Risk360. By leveraging user risk scores within each of these security controls, organizations can better protect all incoming and outgoing traffic from potential threats. URL filtering rules are one way that risk scoring can be applied to policies within Zscaler Internet Access (ZIA). The risk scoring process consists of two components: the static (baseline) risk score and the real-time risk score. The static risk score is established based on a one-week lookback at risky behavior and is updated every 24 hours. The real-time risk score modifies this baseline every 2 minutes throughout the day, updating whenever a user interacts with known or suspected malicious content. Each day at midnight, the real-time risk score is reset. Zscaler considers more than 65 indicators that influence the overall risk score. These indicators fall into three major categories: pre-infection behavior, post-infection behavior, and more general suspicious behavior. The model accounts for the fact that not all incidents are equal; each indicator has a variable contribution to the risk score based on the severity and frequency of the associated threat. 
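The two-component process just described (a daily-refreshed static baseline plus an intraday real-time score that resets at midnight) can be sketched in code. This is a minimal illustration only: the category names, weights, and the 0–100 clamp are assumptions for the example, not Zscaler's actual (unpublished) model.

```python
from dataclasses import dataclass

# Hypothetical severity weights per indicator category -- illustrative only;
# the real model weighs 65+ indicators by severity and frequency.
WEIGHTS = {
    "pre_infection": 10,
    "post_infection": 25,
    "suspicious": 5,
}

@dataclass
class UserRiskScore:
    static_score: int = 0    # baseline from a one-week lookback, refreshed every 24h
    realtime_score: int = 0  # intraday adjustments, reset each midnight

    def rebuild_baseline(self, week_of_events):
        """Daily job: recompute the static baseline from the past week's events."""
        self.static_score = min(sum(WEIGHTS[c] for c in week_of_events), 100)

    def record_event(self, category):
        """Real-time update when the user touches known/suspected malicious content."""
        self.realtime_score = min(self.realtime_score + WEIGHTS[category], 100)

    def midnight_reset(self):
        self.realtime_score = 0

    @property
    def score(self):
        # Combined score, clamped to 0-100 (the clamp is an assumption).
        return min(self.static_score + self.realtime_score, 100)
```

For example, a user whose week shows one suspicious and one pre-infection event starts the day at a baseline of 15; a post-infection detection raises the combined score immediately, and the real-time portion falls away again at the midnight reset.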
Pre-infection behavior indicators encompass a range of blocked actions that would have led to user infection, such as blocked malware, known and suspected malicious URLs, phishing sites, pages with browser exploits, and more. Post-infection behavior indicators include things like detected botnet traffic or command-and-control traffic, which show that a user/device has already been compromised. Suspicious behavior indicators are similar to pre-infection indicators but are less severe (and less likely to lead to infection), covering policy violations and risky activities like browsing deny-listed URLs, DLP compliance violations, anonymizing sites, and more. *A more detailed sampling of these indicators is included at the bottom of this article. How can Zscaler customers use risk scoring? User risk scores can be found in the analytics and policy administration menus of both Zscaler Internet Access (ZIA) and Zscaler Private Access (ZPA). They are also woven together with a range of additional inputs in Zscaler Risk360, which allows security teams to delve deeper into their organization’s holistic risk. Organizations can monitor risk scores for individuals and for the overall organization. Zscaler also has deep integrations with many leading security operations tools, allowing the same telemetry and incident alert context that feeds into risk scoring to be shared with tools like SIEM, SOAR, and XDR via a REST API to streamline workflows. These scores can be used to: Drive access control policies User risk scoring gives network and security teams a powerful tool to use to drive low-maintenance zero trust access control policies, controlling both incoming and outgoing internet and application traffic. It can be combined with other dynamic rulesets (e.g., device posture profiles) and static rulesets (e.g., URL and DNS filtering and app control policy) to protect organizations from breaches without unnecessarily restricting user productivity. 
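To make the policy idea concrete, here is a hypothetical gate that combines a user risk score with a device posture check, in the spirit of the dynamic-plus-static rulesets described above. The thresholds, the posture flag, and the "isolate" action are all illustrative assumptions, not Zscaler's actual policy engine.

```python
# Hypothetical access gate: thresholds and actions are assumptions for illustration.
def access_decision(risk_score: int, posture_ok: bool, app_sensitivity: str) -> str:
    """Return an access action for a user/app pair."""
    if not posture_ok:
        return "deny"      # failed device posture check: block outright
    if app_sensitivity == "high" and risk_score >= 70:
        return "deny"      # high-risk users lose access to sensitive apps
    if risk_score >= 40:
        return "isolate"   # medium risk: e.g., route through browser isolation
    return "allow"
```

A low-risk user on a healthy device is allowed through, while the same app request from a high-risk user is denied until their risk profile improves, mirroring the behavior the article describes.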
User risk, device posture, and other access policies work together seamlessly to optimize secure access. Monitor overall organizational risk and key factors that can be improved Admins can monitor their company risk over time to assess the top areas of overall company risk and prioritize remediation efforts. They can see how risk scores are distributed across users and locations, and can benchmark their risk score against other companies in their industry. Company risk scores can be analyzed over time against industry benchmarks. Monitor risky users on an individual basis and understand how (and why) their risk is trending If a user’s risk score spikes, admins can take action, whether that be isolating that user’s machine to deal with an active threat, or simply training a user that certain behaviors pose an unacceptable risk. Admins can analyze individual users and double-click into specific incidents. Overall, Zscaler User Risk Scoring, with its categorization of threats and aggregation of logs, offers valuable insights into an organization's security posture. By understanding the different types of risks and behaviors associated with cyberthreats, organizations can implement dynamic access control policies and proactively protect their critical assets and data. With risk scoring, organizations can navigate the ever-changing threat landscape with confidence. To learn about more of Zscaler’s unique inline security capabilities, check out our Cyberthreat Protection page. 
Sample Indicators for User Risk Scoring · Pre-infection behavior includes a range of blocked actions that would have likely led a user to be infected, such as: o Malware blocked by Zscaler’s Advanced Threat Protection or inline Sandbox o Blocked known and suspected malicious URLs o Blocked websites with known and suspected phishing content o Blocked pages with known browser exploits o Blocked known and suspected adware and spyware o Blocked pages with a high PageRisk score o Quarantined pages o Blocked files with known vulnerabilities o Blocked emails containing viruses o Detected mobile app vulnerabilities · Post-infection behavior includes a range of blocked actions that were attempted after a user was infected, such as: o Botnet traffic o Command-and-control traffic · Suspicious behavior includes policy violations and other risky sites, files, and conditions that could lead to infection, such as: o Deny-listed URLs o DLP compliance violations o Pages with known dangerous ActiveX controls o Pages vulnerable to cross-site scripting attacks o Possible browser cookie theft o Internet Relay Chat (IRC) tunneling use o Anonymizing sites o Blocks or warnings from secure browsing about an outdated/disallowed component o Peer-to-peer (P2P) site denials o Webspam sites o Attempts to browse blocked URL categories o Mobile app issues including denial of the mobile app, insecure user credentials, location information leaks, personally identifiable information (PII), information identifying the device, or communication with unknown servers o Tunnel blocks o Fake proxy authentication o SMTP (email) issues including rejected password-encrypted attachments, unscannable attachments, detected or suspected spam, rejected recipients, DLP blocks or quarantines, or blocked attachments o IPS blocks of cryptomining & blockchain traffic o Reputation-based blocks of suspected adware/spyware sites o Disallowed use of a DNS-over-HTTPS site Fri, 19 Jan 2024 05:00:01 -0800 Mark Brozek 
https://www.zscaler.com/blogs/product-insights/how-zscaler-s-dynamic-user-risk-scoring-works It’s Time for Zero Trust SASE https://www.zscaler.com/blogs/product-insights/zero-trust-sase The workplace has changed for good. According to a recent Gallup poll, 50% of US employees are working in hybrid mode and only 20% are entirely on-site. Another forecast analysis from Gartner projected that hybrid work would be the norm for almost 40% of global knowledge workers by the end of 2023. Branch offices no longer look the same, and more and more organizations are moving to a cafe-like model for their workplaces. Combined with the shift to cloud and SaaS, this is driving fundamental shifts in IT infrastructure. The way we design, build, and secure our networks needs to evolve to support this new normal. One size does not fit all The old network-centric model of connectivity and security presents challenges when users and apps are everywhere. Trying to shoehorn traditional firewall/VPN-based security into an increasingly fuzzy and complex network environment has only resulted in more cost, complexity, and risk. Cyberattacks keep rising despite the increasing spend on firewalls, fueling threats such as ransomware. According to Zscaler ThreatLabz, ransomware attacks increased almost 40% between 2022 and 2023, with the average demand being $5.3M. The current generation of networking technologies was designed to solve problems from 30 years ago, when IT systems couldn’t talk to each other. It’s no surprise that we ended up with a networking stack designed to maximize connectivity and reachability between users and computing systems globally. While this has unlocked vast amounts of productivity gains and business value, it has come at the expense of cyber risk. An attacker needs to find just one entry point anywhere in the organization and can move laterally from there to access critical crown jewel applications and data. 
With an attack surface spanning branches, retail locations, clouds, remote users, and partners, securing traditional network infrastructure has become a complex and costly endeavor. Zero trust is disrupting networking Zero trust is a cybersecurity strategy that shifts the focus from networks to entities—users, devices, apps, and services. It asserts that no entity should be trusted by default and should only be explicitly allowed to access certain resources based on identity, context, and security posture, and then continuously reassessed for every new connection. Traditional networking does not lend itself to the zero trust model since it confers implicit trust—once you’re on the network, you can go anywhere and talk to any entity. Network architects can limit the amount of trust and the extent of lateral movement by segmenting the network, but this is complex and difficult to manage—it’s like building a superhighway system and adding checkpoints at every ramp and interchange. Zero trust networking is an opportunity to fundamentally rethink the way we build enterprise networks. Instead of starting with fully trusted routed overlays, we need to start with a zero trust foundation and then connect entities into an exchange that can broker connections as needed based on context and security posture. Figure: Zero Trust Architecture Traditional SD-WAN is not zero trust Traditional SD-WAN arrived on the scene over a decade ago and was meant to give organizations an alternative to expensive MPLS WAN services. Using multiple ISP connections and active path monitoring, SD-WANs drastically improved the overall reliability and performance of internet connections and offered organizations the confidence that mission-critical apps can work over the internet. Fast-forward a decade and through a global pandemic, and we no longer need to prove that the internet is fast and reliable enough to run enterprise apps. 
Gigabit fiber connections are readily available and most SaaS apps are optimized to be consumed over the internet. SD-WAN needs to solve different problems today—like ensuring a consistent experience and security for users between home and office, securing IoT device traffic and extending zero trust security to all sites, without the use of additional firewall/VPN appliances. Secure Access Service Edge (SASE) Gartner coined the term SASE in 2019 to describe the convergence of security and networking, delivered from a common cloud native platform that is better aligned with modern traffic flows. SASE is widely understood to be a combination of security services such as FWaaS, SWG, CASB, DLP, and connectivity services such as ZTNA and SD-WAN, delivered from the cloud. The shift to SASE represents an opportunity to rethink and rebuild security services from the ground up for cloud scale. Yet many SASE solutions simply extend the firewall/VPN model to the cloud and deliver a hosted version of the traditional security appliances. With bolted-on SD-WAN integrations, these solutions fail to deliver the promise of zero trust for anything beyond users. A better way Zscaler pioneered zero trust security for remote users and eliminated clunky remote-access VPNs, reducing cyber risk for thousands of organizations globally. We built an industry-leading AI-powered SSE platform that has been a leader in the Gartner Magic Quadrant for SSE two years in a row. Now, we’re excited to bring the same zero trust security to branches, factories, retail stores, and data centers. Join us on January 23 as we announce our industry-first SD-WAN innovations that help you transform your security and networking architecture with a Zero Trust SASE platform built on zero trust AI. Hear from your industry peers about their transformation journeys and the benefits they realized. Register now and save your spot! 
Tue, 16 Jan 2024 16:39:35 -0800 Ameet Naik https://www.zscaler.com/blogs/product-insights/zero-trust-sase The Mythical LLM-Month https://www.zscaler.com/blogs/product-insights/mythical-llm-month It’s clear: 2023 was the year of AI. Beginning with the release of ChatGPT, a technological revolution took hold. What began as conversational agents quickly moved to indexing documents (RAG), and now to indexing documents, connecting to data sources, and enabling data analysis with a simple sentence. With the success of ChatGPT, many people promised last year to deliver large language model (LLM) solutions soon … and very few of those promises have been fulfilled. Some of the important reasons for that are: We are building AI agents, not LLMs People are treating the problem as a research problem, not an engineering problem Bad data In this blog, we’ll examine the role of AI agents as a way to link LLMs with backend systems. Then, we'll look at how the use of intuitive, interactive semantics to comprehend user intent is setting up AI agents as the next generation of user interface and user experience (UI/UX). Finally, with upcoming AI agents in software, we’ll talk about why we need to bring back some principles of software engineering that people seem to have forgotten in the past few months. I Want a Pizza in 20 Minutes LLMs offer a more intuitive, streamlined approach to UI/UX interactions compared to traditional point-and-click methods. To illustrate this, suppose you want to order a “gourmet margherita pizza delivered in 20 minutes” through a food delivery app. This seemingly straightforward request can trigger a series of complex interactions in the app, potentially spanning several minutes using a conventional UI/UX. 
For example, you would probably have to choose the "Pizza" category, search for a restaurant with appetizing pictures, check if they have margherita pizza, and then find out whether they can deliver quickly enough—as well as backtrack if any of your criteria aren’t met. This flowchart expresses the interaction with the app. We Need More than LLMs LLMs are AI models trained on vast amounts of textual data, enabling them to understand and generate remarkably accurate human-like language. Models such as OpenAI's GPT-3 have demonstrated exceptional abilities in natural language processing, text completion, and even generating coherent and contextually relevant responses. Although more recent LLMs can do data analysis, summary, and representation, the ability to connect external data sources, algorithms, and specialized interfaces to an LLM gives it even more flexibility. This can enable it to perform tasks that involve analysis of domain-specific real-time data, as well as open the door to tasks not yet possible with today’s LLMs. This “pizza” example illustrates the complexity hidden behind natural language processing (NLP) requests. Even this relatively simple request necessitates connecting with multiple backend systems, such as databases of restaurants, inventory management systems, delivery tracking systems, and more. Each of these connections contributes to the successful execution of the order. Furthermore, the connections required may vary depending on the request. The more flexibility you want the system to understand and recognize, the more connections to different backend systems will need to be made. This flexibility and adaptability in establishing connections is crucial to accommodate diverse customer requests and ensure a seamless experience. AI Agents LLMs serve as the foundation for AI agents. 
To respond to a diverse range of queries, an AI agent leverages an LLM in conjunction with several integral auxiliary components: The agent core uses the LLM and orchestrates the agent's overall functionality. The memory module enables the agent to make context-aware decisions. The planner formulates the agent’s course of action based on the tools at hand. Various tools and resources support specific domains, enabling the AI agent to effectively process data, reason, and generate appropriate responses. The set of tools includes data sources, algorithms, and visualizations (or UI interactions). Agent core The agent core is the “brain” of the AI agent, managing decision-making, communication, and coordination of modules and subsystems to help the agent operate seamlessly and interact efficiently with its environment or tasks. The agent core receives inputs, processes them, and generates actions or responses. It also maintains a representation of the agent's knowledge, beliefs, and intentions to guide its reasoning and behavior. Finally, the core oversees the update and retrieval of information from the agent's memory to help it make relevant, context-based decisions. Memory The memory module encompasses history memory and context memory components, which store and manage data the AI agent can use to simultaneously apply past experiences and current context to inform its decision-making. History memory stores records of previous inputs, outputs, and outcomes. These records let the agent learn from past interactions and gain insights into effective strategies and patterns that help it make better-informed decisions and avoid repeating mistakes. 
Context memory, meanwhile, enables the agent to interpret and respond appropriately to the specific, current circumstances using information about the environment, the user's preferences or intentions, and many other contextual factors. Planner The planner component analyzes the state of the agent’s environment, constraints, and factors such as goals, objectives, resources, rules, and dependencies to determine the most effective steps to achieve the desired outcome. Here’s an example of a prompt template the planner could use, according to Nvidia: GENERAL INSTRUCTIONS You are a domain expert. Your task is to break down a complex question into simpler sub-parts. If you cannot answer the question, request a helper or use a tool. Fill with Nil where no tool or helper is required. AVAILABLE TOOLS - Search Tool - Math Tool CONTEXTUAL INFORMATION <information from Memory to help LLM to figure out the context around question> USER QUESTION “How to order a margherita pizza in 20 min in my app?” ANSWER FORMAT {"sub-questions":["<FILL>"]} Using this, the planner could generate a plan to serve as a roadmap for the agent's actions, enabling it to navigate complex problems and strategically accomplish its goals. Tools Various other tools help the AI agent perform specific tasks or functions. For example: Retrieval-augmented generation (RAG) tools enable the agent to retrieve and use knowledge base content to generate coherent, contextually appropriate responses. Database connections allow the AI agent to query and retrieve relevant information from structured data sources to inform decisions or responses. Natural language processing (NLP) libraries offer text tokenization, named entity recognition, sentiment analysis, language modeling, and other functionality. Machine learning (ML) frameworks enable the agent to leverage ML techniques such as supervised, unsupervised, or reinforcement learning to enhance its capabilities. 
Visualization tools help the agent represent and interpret data or outputs visually, and can help the agent understand and analyze patterns, relationships, or trends in the data. Simulation environments provide a virtual environment where the agent can sharpen its skills, test strategies, and evaluate potential outcomes without affecting the real world. Monitoring and logging frameworks facilitate the tracking and recording of agent activities, performance metrics, or system events to help evaluate the agent's behavior, identify potential issues or anomalies, and support debugging and analysis. Data preprocessing tools use techniques like data cleaning, normalization, feature selection, and dimensionality reduction to ensure raw data is relevant and high-quality before the agent ingests it. Evaluation frameworks provide methodologies and metrics that enable the agent to measure its successes, compare approaches, and iterate on its capabilities. These and other tools empower AI agents with functionality and resources to perform specific tasks, process data, make informed decisions, and enhance their overall capabilities. Adding LLM-based Intelligent Agents to Your Data Is an Engineering Problem, Not a Research Problem People have realized that natural language makes it much easier and more forgiving (not to say relaxed) to specify the use cases required for software development. Because the English language can be ambiguous and imprecise, this is leading to a new problem in software development, where systems are not well specified or understood. Fred Brooks outlined many central software engineering principles in his 1975 book The Mythical Man-Month, some of which people seem to have forgotten during the LLM rush. For instance: No silver bullet. This is the first principle people have forgotten with LLMs. They believe LLMs are the silver bullet that will eliminate the need for proper software engineering practices. The second-system effect. 
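To make the architecture concrete, here is a minimal sketch of how the agent core, memory, planner, and tools could fit together. Everything here is illustrative: the `llm` stub, the class names, and the single `search` tool are assumptions for the example, not a real agent framework.

```python
import json

def llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned plan for this demo."""
    return '{"sub-questions": ["find pizzerias", "check delivery time"]}'

class Agent:
    def __init__(self, tools):
        self.tools = tools   # tools the core can dispatch to: name -> callable
        self.history = []    # history memory: records of past interactions
        self.context = {}    # context memory: current circumstances

    def plan(self, question):
        # Planner: ask the LLM to decompose the question, loosely following
        # the Nvidia-style prompt template quoted in the article.
        prompt = (
            "GENERAL INSTRUCTIONS\nYou are a domain expert. Break down a "
            "complex question into simpler sub-parts.\n"
            f"AVAILABLE TOOLS\n- {', '.join(self.tools)}\n"
            f"CONTEXTUAL INFORMATION\n{self.context}\n"
            f"USER QUESTION\n{question}\n"
            'ANSWER FORMAT\n{"sub-questions":["<FILL>"]}'
        )
        return json.loads(llm(prompt))["sub-questions"]

    def run(self, question):
        # Agent core: plan, dispatch each sub-question to a tool,
        # then record the interaction in history memory.
        results = [self.tools["search"](step) for step in self.plan(question)]
        self.history.append((question, results))
        return results
```

In the pizza scenario, `run("How to order a margherita pizza in 20 min in my app?")` would decompose the request into sub-questions and route each one to a backend tool, which is exactly the orchestration role the core plays.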
LLM-based systems are prone to the second-system effect: people treat LLMs as so powerful that they forget the LLMs’ limitations. The tendency toward an irreducible number of errors. Even if you get the LLM implementation correct, LLMs can hallucinate, and they can expose latent backend errors that stayed hidden because we previously had no way to exercise the backend so thoroughly. Progress tracking. I remember the first thing I heard from Brooks’ book was, “How does a project get to be a year late? One day at a time.” I have seen people assuming that if they sweep problems under the rug, those problems will disappear. Machine learning models, and LLMs in particular, inherit the same problems of ill-designed systems, with the added risk of amplifying bad data, which we will describe later. Conceptual integrity. This problem has shifted from designing the use cases (or user stories) so that they show the conceptual integrity of the entire system to assuming the LLM will magically bind any inconsistencies in the software. For example, take a user story for a food-ordering app: “I want to order a gourmet margherita pizza in 20 min.” Now change the question to: Can I get a gourmet margherita pizza delivered in 20 minutes? Show me all pizza places that can deliver a gourmet margherita pizza in 20 minutes. Show me all pizza places that can deliver a gourmet margherita pizza in 20 minutes, ranked by user preference. We can easily see that different types of data, algorithms, and visualizations are required to address each variant. The manual and formal documents. Thanks to hype, this is probably the most forgotten principle in the age of LLMs. 
It’s not enough to say “develop a system that will tell me how to order things like a gourmet margherita pizza in 20 minutes.” This requires documentation of a whole array of other use cases, required backend systems, new types of visualizations to be created, and—crucially—specifications of what the system will not do. “Things like” seems to have become a norm in LLM software development, as if an LLM can magically connect to backend systems and visualize data it has never learned to understand. The pilot system. Because of these limitations, software systems with LLM-based intelligent agents have not left the pilot stage at several companies, simply because they cannot reason beyond the simple questions used as examples of use cases. In a recent paper, we addressed the first issue, the lack of proper specification of software systems, and showed a way to create formal specifications for LLM-based intelligent systems so that they can follow sound software engineering principles. Bad Data In a recent post on LinkedIn, we described the importance of “librarians” to LLM-based intelligent agents. (Apparently, this post was misunderstood, as several teachers and actual librarians liked the post.) We were referring to the need for more formal data organization and writing methodologies to ensure LLM-based intelligent agents work. The cloud fulfilled its promise of not requiring us to delete data, just letting us store it. With this came the pressure to quickly create user documentation. This created a “data dump,” where old data lives alongside new data, where specifications that were never implemented are still alive, and where outdated descriptions of system functionality persist, never having been updated in the documentation. Finally, document authors seem to have forgotten what a “topic sentence” is. 
LLM-based systems expect documentation to have well-written text, as recently shown when OpenAI stated that it is “impossible” to train AI without using copyrighted works. This alludes not only to the fact that we need a tremendous amount of text to train these models, but also that good quality text is required. This becomes even more important if you use RAG-based technologies. In RAG, we index document chunks (for example, using embedding technologies in vector databases), and whenever a user asks a question, we return the top-ranking documents to a generator LLM that in turn composes the answer. Needless to say, RAG technology requires well-written indexed text to generate the answers. RAG pipeline, according to https://arxiv.org/abs/2005.1140 Conclusions We have shown that there is an explosion of LLM-based promises in the field. Very few are coming to fruition. To build intelligent AI systems, we need to recognize that we are building complex software engineering systems, not prototypes. LLM-based intelligent systems bring another level of complexity to system design. We need to decide to what extent we must specify and test such systems, and we need to treat data as a first-class citizen, as these intelligent systems are much more susceptible to bad data than other systems. Tue, 16 Jan 2024 19:14:07 -0800 Claudionor Coelho Jr. https://www.zscaler.com/blogs/product-insights/mythical-llm-month Unleashing the Power of Zscaler's Unparalleled SaaS Security https://www.zscaler.com/blogs/product-insights/unleashing-power-zscaler-s-unparalleled-saas-security Zscaler has made great strides in securing organizations across the board, solving real customer use cases such as protecting against ransomware, AI security, and securing data everywhere. One area that has received a lot of attention is SaaS security. 
Recently, Forrester released its latest Wave report for SaaS Security Posture Management, naming Zscaler as the only Leader in this category. The report puts a heavy emphasis on use cases that span beyond posture management, such as app governance, shadow IT, identity access controls, advanced data protection, and more. Zscaler achieved the strongest position, with a perfect score in 7 out of the 12 categories. You can get your copy of the Forrester Wave here. As organizations increasingly adopt numerous SaaS-based services, there is a growing need for a comprehensive, fully integrated data security solution that covers all channels, including web, business and personal applications, public cloud data, endpoints, and email. Platforms provide multiple benefits, such as centralized policy creation, which reduce the complexity and costs inherent in point vendor solutions. Solving Today’s Key SaaS Security Challenges Many organizations use multiple point solutions, which can create issues and headaches for IT and security teams. Here are some of the top use cases that are driving SaaS security: Identity Management and Access Control To prevent leaks, data manipulation, and insider threats, users must be authenticated and authorized in line with zero trust principles for least-privileged access, including role-based access control and continuous monitoring. Effective anti-phishing measures are also critical. Identity and access issues most often stem from: Weak or compromised identity and access management (IAM) A lack of multifactor authentication (MFA) beyond single sign-on (SSO) Inadequate or misconfigured access controls Lack of Standardization Inconsistent security policies and procedures across SaaS providers can create challenges for security teams around consistent controls and enforcement, leading to a weaker posture, potential enforcement gaps, vulnerabilities, and even data corruption. 
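The least-privileged, role-based access control described above can be illustrated with a deny-by-default check (the roles, actions, and MFA flag here are hypothetical, not part of any particular product):

```python
# Minimal RBAC sketch: each role is granted only the smallest set of
# permissions it needs (least privilege). Unknown roles get nothing.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "manage_users"},
}

def is_authorized(role, action, mfa_passed):
    """Deny by default: require MFA beyond SSO and an explicit permission grant."""
    if not mfa_passed:  # MFA beyond SSO, per the guidance above
        return False
    return action in ROLE_PERMISSIONS.get(role, set())
```

The design choice worth noting is the default: absence of a grant means denial, which is what closes the “inadequate or misconfigured access controls” gap listed above.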
Some of the major contributors to increased risk in this area include: Interoperability and integration issues between cloud providers Data transfers between environments Regulatory compliance challenges Data Residency and Governance Complying with industry and government data protection regulations can be complex when SaaS providers run widely distributed operations. It’s critical to understand how a given SaaS provider aligns with your organization’s compliance requirements, as well as to implement effective data encryption and access controls for data in transit and at rest. Common residency and governance issues arise from: Sovereignty and residency regulations (e.g., GDPR) Shared responsibilities between the customer and SaaS provider Unsanctioned apps (shadow IT) putting data outside the IT function’s purview To mitigate these risks, organizations should conduct thorough risk assessments, implement robust security policies and controls, regularly monitor SaaS applications for vulnerabilities, and stay up to date with security best practices. Furthermore, integrated solutions provide greater efficacy and context. Securing SaaS Platforms Requires Context The Power of Context In the realm of security, it’s essential to understand that protection is a matter of layers. These layers often converge, such as in the case of SSPM and data security. However, to truly grasp the significance of these layers, you need context. The ability to combine and analyze information from various security layers gives organizations a comprehensive understanding of their security posture and potential vulnerabilities. A Comprehensive, Unified Solution: Zscaler Data Protection brings together all the necessary components and functionality required for robust SaaS security. From access control and connectivity to SaaS and cloud integrations, our solution covers every aspect of securing your SaaS applications. 
Enhanced Data and Threat Security: With Zscaler, organizations can rest assured that their sensitive data is protected. Our platform offers robust data security measures to ensure sensitive information remains secure and compliant with industry regulations. Furthermore, our threat security functionality helps identify and mitigate potential threats, safeguarding your SaaS applications from malicious attacks. Contextual Understanding for Effective Security: The power of our Advanced SSPM lies in its ability to combine and analyze information from various security layers. By providing comprehensive context, organizations can make informed decisions and implement security measures that address their specific needs and vulnerabilities. Zscaler Advanced SSPM for SaaS Security We have invested substantial effort in developing and expanding our solutions to meet the evolving landscape of SaaS security. For instance, our acquisition of Canonic in 2023, now known as AppTotal, lets Zscaler better help your organization detect and secure risky third-party app integrations into SaaS. This functionality was highlighted in this year’s Forrester SSPM Wave. Our Advanced SSPM incorporates access control, connectivity, SaaS integrations, cloud integrations, and data and threat security functionalities. Our comprehensive approach ensures that organizations can leverage the full spectrum of security measures required for safeguarding their SaaS applications. Ready to secure your SaaS platforms? Zscaler's Advanced SSPM stands out from the crowd due to its unique combination of components, capabilities, and reach. With a holistic approach encompassing access control, connectivity, SaaS integrations, cloud integrations, and robust data and threat security functionality, our solution empowers organizations to achieve unparalleled security for their SaaS applications. 
By leveraging the power of context, Zscaler's Advanced SSPM enables organizations to make informed decisions and implement effective security measures. Trust Zscaler to unlock the true potential of your SaaS security and elevate your organization's overall security posture. To learn more about Zscaler’s Advanced SSPM and Data Protection offering, visit our website, register for our webinar, or reach out to us for a demo. Wed, 17 Jan 2024 00:01:01 -0800 Salah Nassar https://www.zscaler.com/blogs/product-insights/unleashing-power-zscaler-s-unparalleled-saas-security 4 Ways Enterprises Can Stop Encrypted Cyber Threats https://www.zscaler.com/blogs/product-insights/4-ways-enterprises-can-stop-encrypted-cyber-threats Want to uncover the 86% of cyber threats lurking in the shadows? Join our January 18th live event with Zscaler CISO Deepen Desai to learn how enterprises can stop encrypted attacks, as well as explore key cyber threat trends from ThreatLabz. In today's digital world, we’ve come to trust HTTPS as the standard for encrypting and protecting data as it flows across the internet — the reassuring lock icon in a browser’s address bar assures us our data is safe. Organizations worldwide have rightfully recognized this protocol as an imperative for data security and digital privacy, and overall, 95% of internet-bound traffic is secured with HTTPS. But encryption is a double-edged sword. In the same way that encryption prevents cybercriminals from intercepting sensitive data, it also prevents enterprises from detecting cyber threats. As we revealed in our ThreatLabz 2023 State of Encrypted Attacks Report, more than 85% of cyber threats hide behind encrypted channels, including malware, data stealers, and phishing attacks. What’s more, many encrypted attacks use legitimate, trusted SaaS storage providers to host malicious payloads, making detection even more challenging. 
Encrypted channels are a major blind spot for any organization that is not performing SSL inspection today, enabling threat actors to launch hidden threats and exfiltrate sensitive data under cover of darkness. As threats advance and the number of malicious actors grows, these types of attacks continue to increase. ThreatLabz analyzed more than 29 billion blocked threats across the Zscaler Zero Trust Exchange from September 2022 to October 2023, finding a 24.3% increase year over year, with notable growth in phishing attacks and significant 297.1% and 290.5% increases in browser exploits and ad spyware sites, respectively. So, what can enterprises do to thwart encrypted attacks? The answer is simple: inspect all encrypted traffic. However, the reality of this task remains a huge challenge for most organizations. To fix the problem, we must first explore and understand why this is the case. A major enterprise blind spot: SSL/TLS traffic As part of the 2023 State of Encrypted Attacks Report, ThreatLabz commissioned a separate third-party, vendor-neutral survey of security, networking, and IT practitioners to better understand their challenges, goals, and experience with encrypted attacks. We found that 62% of organizations have experienced an uptick in encrypted threats — with the majority having experienced an attack, and 82% of those witnessing attacks over “trusted” channels. However, enterprises face numerous challenges that prevent them from scanning 100% of SSL/TLS traffic at scale — the antidote to encrypted threats. The most popular tools for SSL/TLS scanning include a mix of network firewalls (62%) and application-layer firewalls (59%). These tools come with challenges at scale, the survey found; the top barriers preventing enterprises from scanning 100% of encrypted traffic today include performance issues and poor user experience (42%), cost concerns (32%), and scalability issues with the current setup (31%). 
Notably, a further barrier for 20% of respondents is that traffic from trusted sites and applications is “assumed safe” — which, our research shows, is not the case. These challenges stand in contrast to enterprise inspection plans. While 65% of enterprises plan to increase rates of SSL/TLS inspection in the next year, 65% are also concerned that their current SSL/TLS inspection tools are not scalable or future-proofed to address advanced cyber threats. This finding is reflected in enterprises’ confidence in their security setups: just 30% of enterprises are "very" or "extremely" confident in their ability to stop advanced or sophisticated cyber threats. These findings suggest that while enterprises are well aware of the risk of encrypted attacks, encrypted channels remain a prominent blind spot for many organizations — and many attacks can simply pass through without detection. Shining a light on cyber threats lurking in encrypted traffic Threat actors are exploiting encrypted channels across multiple stages of the attack chain: from gaining initial entry through tools like VPNs, to establishing footholds with phishing attacks, to delivering malware and ransomware payloads, to moving laterally through domain controllers, to exfiltrating data, often using trusted SaaS storage providers. Knowing this, enterprises should include mechanisms in their security plans to stop encrypted threats and prevent data loss at each stage of the attack chain. Here are four approaches that enterprises can adopt to prevent encrypted attacks and keep their data, customers, and employees secured. Figure 1: Stopping encrypted cyber threats across the attack chain 1. Inspect 100% of encrypted SSL/TLS traffic at scale with a zero trust, cloud-proxy architecture An enterprise strategy to stop encrypted attacks starts with the ability to scan 100% of encrypted traffic and content at scale, with zero performance degradation — that’s step one. 
A zero trust architecture is an outstanding candidate for this task for a number of key reasons. Based on the principle of least privilege, this architecture brokers connections directly between users and applications — never the underlying network — based on identity, context, and business policies. Therefore, all encrypted traffic and content flows through this cloud-proxy architecture, with SSL/TLS inspection for every packet from every user at scale, regardless of how much bandwidth users consume. In addition, direct user-to-app and app-to-app connectivity makes it substantially easier to segment application traffic to highly granular sets of users — eliminating the lateral movement risk that is the norm in traditional, flat networks. Meanwhile, a single policy set vastly simplifies the administrative process for enterprises. This is in contrast to application and network firewalls — themselves frequent targets of cyber attacks — which in practice translate to greater performance degradation, complexity, and cost at scale, while failing to achieve enterprise goals of 100% SSL/TLS inspection. In other words, stopping encrypted threats begins and ends with zero trust. 2. Minimize the enterprise attack surface All internet-facing assets with public IP addresses are discoverable and vulnerable to threat actors — including enterprise applications and tools like VPNs and firewalls. Compromising these assets is the first step for cybercriminals to gain a foothold and move laterally across traditional networks to your valuable crown-jewel applications. Using a zero trust architecture, enterprises can hide these applications from the internet — placing them behind a cloud proxy so that they are only accessible to authenticated users who are authorized by business access policy. 
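The brokering model described above reduces to a simple rule: a user-to-app connection exists only if an explicit policy allows it, based on identity and context rather than network location. A minimal sketch (the policy entries and attributes below are illustrative, not Zscaler's actual policy model):

```python
# Sketch of a zero trust policy broker: decisions are made per user/app pair
# from identity and device context -- never from network location.
POLICIES = [
    {"group": "finance", "app": "erp", "require_managed_device": True},
    {"group": "engineering", "app": "git", "require_managed_device": False},
]

def broker(group, app, on_managed_device):
    """Allow a user-to-app connection only if an explicit policy permits it."""
    for policy in POLICIES:
        if policy["group"] == group and policy["app"] == app:
            return on_managed_device or not policy["require_managed_device"]
    return False  # no matching policy means no connection (least privilege)
```

Because there is no "inside the network" concept here, an attacker who compromises one identity gains access to exactly the apps that identity's policies allow, and nothing else.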
This simple fact empowers enterprises to immediately remove vast swaths of the external attack surface, prevent discovery by threat actors, and stop many encrypted attacks from ever happening in the first place. 3. Prevent initial compromise with inline threat prevention Enterprises have numerous tools at their disposal to stop encrypted threats, and here a layered defense works best. Critically, these defenses should be inline — in the data path — so that security tools detect malicious payloads before delivery, rather than relying on the pass-through, out-of-band approaches of many traditional technologies. There are a number of core technologies that should make up a best-practice defense. These include an inline sandbox with ML capabilities: in contrast to many traditional sandboxes, which assume patient-zero risk, an ML-driven sandbox at cloud scale allows companies to quarantine, block, and detonate suspicious files and zero-day threats immediately, in real time, without impacting business. Furthermore, technologies like cloud IPS, URL filtering, DNS filtering, and browser isolation — turning risky web content into a safe stream of pixels — combine to deliver enterprises what we would term advanced threat protection. While encrypted threats can pass unnoticed by many enterprises, this type of layered, inline defense ensures that they won’t. 4. Stop data loss Stopping encrypted attacks doesn’t end with threat prevention; enterprises must also secure their data in motion to prevent cybercriminals from exfiltrating it. As mentioned, threat actors frequently use legitimate, trusted SaaS storage providers — and therefore “trusted” encrypted channels — to host malicious payloads and exfiltrated data. Without scanning their outbound SSL/TLS traffic and content inline, enterprises have little way to know this is happening. As with threat prevention, enterprises should also take a multi-layered approach to securing their data. 
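As one illustration of inline data protection, an engine in the data path can match decrypted outbound content against sensitive-data rules before it ever leaves the enterprise. A minimal sketch (the two patterns below are illustrative only; production DLP engines use far richer techniques, such as exact data match):

```python
import re

# Hypothetical inline DLP rules: each named rule is a pattern that should
# never appear in decrypted outbound traffic.
DLP_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
}

def inspect_outbound(payload):
    """Return the list of rules a decrypted payload violates; block if non-empty."""
    return [name for name, rx in DLP_PATTERNS.items() if rx.search(payload)]
```

Because the check runs inline, a violating upload can be blocked before delivery rather than merely logged after the fact, which is the distinction the section above draws against out-of-band approaches.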
As a best practice, enterprises should look for functionality like inline DLP, which inspects SSL/TLS content across all data channels, like SaaS apps, endpoints, email, private apps, and even cloud posture. As a note, in addition to exact data match (EDM), Zscaler has taken an AI-driven approach to automatically discover and classify data across the enterprise, and these categories are used to inform DLP policy. Finally, CASB provides another critical layer of security, protecting inline data in motion and out-of-band data at rest. Diving deeper into encrypted attacks Of course, these best practices are the tip of the iceberg when it comes to understanding and defending against the full range of encrypted attacks. For a deeper analysis of how enterprises can stop encrypted threats, as well as discover key trends in this dynamic landscape, be sure to register for our upcoming January 18th live webinar with CISO Deepen Desai. Moreover, to uncover our full findings, get your copy of the ThreatLabz 2023 State of Encrypted Attacks Report today. Fri, 12 Jan 2024 15:07:03 -0800 Will Seaton https://www.zscaler.com/blogs/product-insights/4-ways-enterprises-can-stop-encrypted-cyber-threats Hybrid Work and Zero Trust: Predictions for 2024 https://www.zscaler.com/blogs/product-insights/hybrid-work-and-zero-trust-predictions-2024 2023 was dubbed “the year of efficiency”. It saw many organizations work towards operational efficiencies in an effort to become nimbler. “More with less” was the mantra spoken by several C-level execs as they tightened their security posture while driving higher productivity. Moving into 2024, the proliferation of generative AI is expected to rapidly accelerate innovation, address inefficiencies, and boost productivity across the board. Such a focus on productivity has also kept the conversation around work-from-anywhere alive and well. 
From a productivity perspective, hybrid work continues to be the benchmark, allowing flexibility to hire talent from anywhere. Executives are finding the right balance between fully remote, in-office, and hybrid employees to maximize business efficiency. Irrespective of what every organization chooses to do going forward, finding the right balance between access and security is key for increasing and maintaining productivity. We at Zscaler have put together a list of the top predictions for 2024 when it comes to hybrid work trends: Return to office will peak Over the last few years, one question has echoed in everyone’s minds: What will the new workplace look like? 2023 saw many organizations test a hybrid work model, shifting away from a fully remote workforce. This trend is set to continue in 2024, with more and more companies fully embracing hybrid work and increasing the number of in-office days for collaboration. The KPMG CEO Outlook Survey found that 64% of leaders globally predict a full return to in-office work by 2026. Further research shows that in the US, 90% of companies intend to implement their return-to-office plans by the end of 2024, according to a report from Resume Builder. These trends will also see IT and security teams reaching for solutions that can support them while maintaining business growth. Third-party access requirements will grow With productivity and efficiency on the agenda for 2024, teams are extending their reach and skill sets beyond what’s available within the capacity of their full-time employees. Namely, they’re hiring contractors to aid them in creating positive business outcomes. To do so, they need to adapt to working with remote contractors and have the tools and infrastructure in place to successfully manage staff along with the right level of security. Last year, a LinkedIn study showed higher growth in contract workers than in full-time employees. 
This trend will continue into 2024 as organizations brace themselves for sudden changes in the market as well as to their own bottom lines. These third-party users—contractors, vendors, or suppliers—will demand better access to business applications in order to be impactful. This level of fast, easy access to work will drive third-party productivity. Cyberattack risk will increase With workforces and applications becoming more dispersed, the attack surface has increased as well. Of course, bad actors have jumped on the opportunity, increasing their overall cyberattack output, including the recent social engineering attack in the entertainment and gaming industry. What’s more, generative AI has seen widespread organizational adoption, which also means more potential threat vectors. Bad actors are leveraging GenAI tools on their own time to discover vulnerabilities in critical sectors and add increased personalization to their attacks, resulting in potential catastrophe for businesses across industries through unwavering ransom demands. In addition, 2024 will see increased exploitation of legacy VPN and firewall infrastructure. The cost and complexity of maintaining physical devices that support VPNs, as well as patching their vulnerabilities, has left many IT teams in a rut of infrastructure maintenance rather than improvement. As such, IT teams are looking to amp up their security stack through the cloud to avoid and respond to threats. More mergers and acquisitions will take place Despite economic uncertainty and the current wave of geopolitical challenges, the outlook for M&A appears promising, per Nasdaq. The push to consolidate or divest in certain industries has driven M&A in the past year, and this momentum is expected to continue. Organizations will need to find ways to efficiently onboard new employees and give them application access to maximize productivity amid a merger or acquisition. 
Organizations that have implemented zero trust network access (ZTNA) have seen a 50% reduction in onboarding time for new employees. Additionally, they’re able to provide consistent access policies across both organizations without compromising security. VPNs will continue to lose fans Our 2023 VPN Risk Report found that nearly 1 in 2 organizations experienced a VPN-related attack. This has been a strong reason to move away from legacy remote access solutions in favor of something more robust that can scale with the organization’s growth. With 92% of organizations considering, planning, or in the midst of a zero trust implementation in 2023, this trend will continue well into 2024. Reliance on VPNs will be reduced, and ZTNA will continue to gain traction due to its faster time to value. Indeed, a Zscaler customer reported a sub-48-hour implementation of Zscaler Private Access, effectively replacing their VPNs for remote employees. Organizations will adopt zero trust to better mitigate cyberattacks A zero trust architecture challenges threats by ensuring granular access control and multilayered network segmentation, delivering the best protection of organizations’ most critical data and communications. ZTNA is a ransomware deterrent, hiding crown jewel applications from the internet and making them virtually impossible to attack. Gartner predicts that by 2025, at least 70% of new remote access deployments will be delivered predominantly via ZTNA as opposed to VPN services. Our 2023 VPN Risk Report suggests continued growth in risk awareness among IT and security leaders as they continue their due diligence on effective zero trust solutions to replace legacy technologies. Conclusion As workforces and applications become increasingly mobile, cloud security solutions offer the means to keep them protected without harming user experience. 
Amid a dynamic, evolving threat landscape driven by artificial intelligence, the scale and agility offered through such solutions will help organizations better determine the right deployments for their needs. Learn more about how you can protect your private apps and secure your hybrid workforce by leveraging Zscaler Private Access. This blog is part of a series of blogs that provide forward-looking views on access and security in 2024. The next blog in this series covers SASE predictions. Forward-Looking Statements This blog contains forward-looking statements that are based on our management's beliefs and assumptions and on information currently available to our management. The words "believe," "may," "will," "potentially," "estimate," "continue," "anticipate," "intend," "could," "would," "project," "plan," "expect," and similar expressions that convey uncertainty of future events or outcomes are intended to identify forward-looking statements. These forward-looking statements include, but are not limited to, statements concerning: predictions about the state of the cyber security industry in calendar year 2024 and our ability to capitalize on such market opportunities; anticipated benefits and increased market adoption of “as-a-service models” and Zero Trust architecture to combat cyberthreats; and beliefs about the ability of AI and machine learning to reduce detection and remediation response times as well as proactively identify and stop cyberthreats. These forward-looking statements are subject to the safe harbor provisions created by the Private Securities Litigation Reform Act of 1995. 
These forward-looking statements are subject to a number of risks, uncertainties and assumptions, and a significant number of factors could cause actual results to differ materially from statements made in this blog, including, but not limited to, security risks and developments unknown to Zscaler at the time of this blog and the assumptions underlying our predictions regarding the cyber security industry in calendar year 2024. Risks and uncertainties specific to the Zscaler business are set forth in our most recent Quarterly Report on Form 10-Q filed with the Securities and Exchange Commission (“SEC”) on December 7, 2022, which is available on our website at ir.zscaler.com and on the SEC's website at www.sec.gov. Any forward-looking statements in this release are based on the limited information currently available to Zscaler as of the date hereof, which is subject to change, and Zscaler does not undertake to update any forward-looking statements made in this blog, even if new information becomes available in the future, except as required by law. Thu, 11 Jan 2024 08:00:01 -0800 Kanishka Pandit https://www.zscaler.com/blogs/product-insights/hybrid-work-and-zero-trust-predictions-2024 Digital Experience Monitoring Predictions for 2024 https://www.zscaler.com/blogs/product-insights/digital-experience-monitoring-predictions-2024 In 2023, we’ve seen an increase in companies focused on maximizing growth as it relates to productivity and innovation. Employers were looking to optimize employee experiences and reduce costs in hopes of driving increased revenues. According to Great Place To Work, 2023 revenue per employee for Fortune 100 Best Companies increased by 7% YoY, up from 4% from 2022. Revenue per employee increased in 2023 To ensure great employee productivity, companies need secure and fast application and data access from home, hotels, airports, and the office. 
This is reflected in Hyatt’s recent earnings, which showed a 2x increase over pandemic levels as travel surged. These trends continue to push IT teams to support employees as they securely access SaaS, public, and private cloud applications (e.g., Salesforce.com, SAP, Microsoft Office 365, ServiceNow) from anywhere. Globally distributed enterprise is today’s reality However, if organizations continue to leverage legacy network architectures that rely on VPNs and firewalls, they are more susceptible to attacks. These technologies expand an organization's attack surface as they place users directly on a routable network. In a recent VPN risk report, 45% of organizations confirmed experiencing at least one attack that exploited VPN vulnerabilities in the last year. Of those, one in three became victims of VPN-related ransomware attacks. Security does not have to be a tradeoff for fast and reliable access. In a recent post, we analyzed the last 12 months of conversations with hundreds of IT professionals about their employee experience, and they reported similar findings: they lack visibility into Wi-Fi and ISP networks. Their current tools struggle to consolidate device, network, and application details such as system processes, memory, CPU, network latencies, packet loss across network hops, and application response times (DNS, SSL handshake, HTTP/TCP connect). IT must secure and optimize experiences even when networks are out of their control. Businesses continue to rethink their digital transformation journey to ensure a flawless end user experience while securing users, workloads, and devices over any network, anywhere. As both travel and revenue per employee increase, employers are learning how to optimize costs and employee productivity across the board. As we kick off 2024, one thing is clear: understanding how employee experience impacts revenue is key to increasing profits. 
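The response-time metrics mentioned above (DNS, SSL handshake, HTTP/TCP connect) can be collected per request with standard-library tooling; a rough sketch of such a probe (the helper and field names are ours, not any vendor's API):

```python
import socket
import ssl
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed seconds) using a monotonic clock."""
    t0 = time.monotonic()
    result = fn(*args, **kwargs)
    return result, time.monotonic() - t0

def measure(host, port=443, path="/"):
    """Collect DNS, TCP connect, SSL handshake, and TTFB timings for one HTTPS request."""
    timings = {}
    _, timings["dns"] = timed(socket.getaddrinfo, host, port)
    raw, timings["tcp_connect"] = timed(socket.create_connection, (host, port), 5)
    ctx = ssl.create_default_context()
    conn, timings["ssl_handshake"] = timed(ctx.wrap_socket, raw, server_hostname=host)
    request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    conn.sendall(request.encode())
    _, timings["ttfb"] = timed(conn.recv, 1)  # wait for the first response byte
    conn.close()
    return timings
```

A real DEM agent would run probes like this continuously from the endpoint and correlate the results with device metrics (CPU, memory) and hop-by-hop network data, which is the consolidation gap the IT professionals above describe.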
To aid IT teams, organizations need a better path forward, one that is designed with security and optimized end user experience driven by actionable AI. As organizations look forward to 2024, three top digital experience monitoring trends emerge: zero trust growth will require integrated digital experience monitoring (DEM); AIOps will be a requirement, not a “nice to have,” to reduce mean time to resolution; and overall IT costs must come down. Zero trust growth will require integrated DEM As organizations look to secure their environments leveraging zero trust architectures, they need an integrated digital experience monitoring solution to ensure flawless end user experience no matter where users are located. As we found in our customer conversations, many organizations fail to gain insights into zero trust environments with existing monitoring solutions. They also lack full end-to-end visibility, such as last-mile ISP and Wi-Fi insights. Adding to the complexity, managing and correlating data across multiple tools for device, network, and application is time-consuming and frustrating. Zero trust solutions must include DEM by simplifying deployment through a single agent that combines security and monitoring. Monitoring insights should include device metrics (CPU, memory, disk, network bandwidth), network metrics (hop-by-hop latencies, packet loss, jitter, MOS scores, DNS times), and application response times (TCP connect, SSL handshake, HTTP connect, TTFB, TTLB), with intuitive correlation to help service desk and network operations teams. AIOps is a requirement, not a “nice to have,” to reduce mean time to resolution (MTTR) As we’ve seen in 2023, generative AI has completely changed the industry, and we’ve seen new applications emerge that create data at exponential rates. We are only scratching the surface of these apps’ potential. Much of this data may not be seen by humans. However, insights from this data could be critical for organizations. 
Organizations may access thousands of SaaS-based applications to create solutions (e.g., images, text, code) to increase productivity. As these applications become critical for organizations, they must ensure their availability. For example, a manufacturing company shared with us how they leverage generative AI to decrease the time required to produce website content. They take hand-drawn images and upload them into a generative AI solution to create hundreds of images based on different scenarios. This process typically took months; now it takes minutes and frees up their team to think more strategically. However, to gain efficiency, IT must play many roles regarding the security and availability of these applications. Beyond the guardrails required, IT must ensure employees have access to the tools the business needs, which adds to the cost and complexity. Monitoring these new SaaS applications wherever the user connects will keep employees productive. As organizations look to increase employee productivity, security, network, and service desk teams must collaborate closely to ensure excellent end user experiences. Providing meaningful insights for all the IT teams requires relevant data. Zero trust monitoring solutions must have machine learning models based on years of data across millions of telemetry points to be effective. As data is collected, these models must adapt and learn based on end user feedback to efficiently identify the root cause of issues. There are three key areas IT teams need to consider: Proactively identify recurring issues before users are impacted. For example, if a certain Wi-Fi router shows repeated issues, network teams can work with service desk teams and end users to proactively replace Wi-Fi routers so end users continue to have great access. Empower service desk teams to either resolve issues or escalate with confidence. 
For example, if an end user complains about an SAP issue, the service desk team must be able to identify a potential root cause in seconds and route it to the appropriate L3 team. They will need an intuitive AI solution to identify the issue in seconds and share those insights. Drive increased monitoring intelligence with continuous updates to machine learning models. Zero trust monitoring solutions must expand monitoring vantage points and collect new insights to aid IT teams. Reduce overall IT costs As we've seen in 2023, macroeconomics are forcing organizations to think about maximizing productivity and profits. In 2023, many organizations started their journey to zero trust solutions, and are ready to embark on integrating their security and monitoring stacks. In 2024, we'll see leaders at these organizations ask tough questions about monitoring zero trust environments without adding complexity to their IT architectures. This will set organizations apart: those with the right zero trust architecture will have included monitoring as part of the journey. Not only will it provide better insights for network operations and service desk teams, it will also lower overall IT costs. They will be able to retire siloed monitoring solutions to reduce costs and gain better insights. For example, if service desk, desktop, network, and application teams all leveraged the same monitoring solution, they could confidently provide IT leaders with key insights and remove finger pointing, which still occurs because teams rarely look at the same datasets. IT leaders will want a consolidated monitoring stack to answer the following questions: What's the root cause of Zoom, Teams, and Webex call quality issues, and how do I correlate it to the end user's device, network, and application? We leverage VPNs for private applications and experience application slowness. How do I identify whether it's the device's CPU or one of the hops in the network? My users blame security for application slowness. 
How can we quickly verify it's not? As we saw in 2023, organizations want to leverage existing IT investments where possible. Apart from consolidating their monitoring silos, in 2024, organizations will want to leverage existing ticketing systems. To do so, zero trust monitoring solutions must take AI-powered insights and push them into where service desk and network operations teams live. For example, many organizations have ServiceNow workflows, and smart integrations will provide IT teams with key insights to resolve issues in minutes. Summary As IT teams start planning for 2024, it's key to find digital experience monitoring solutions that effectively support the hybrid workforce, leverage AI assistance, and drive lower overall IT costs. As you embark on your 2024 initiatives, consider Zscaler's Digital Experience monitoring solution. But don't just take our word for it. See what our customers are saying: "15 minutes to resolve user experience issues, down from 8 hours" Jeremy Bauer, Sr. Director Information Security, Molson Coors Beverage Company "Zscaler helps us identify the issues that need to be addressed before they cause disruption to AMN users, so we can ensure a seamless experience from anywhere." Mani Masood, Head of Information Security, AMN Healthcare "When I open my computer, it doesn't matter if I'm in California, Arizona, Nevada, or across the globe, I get the same experience and the same level of protection." David Petroski, Senior Infrastructure Architect, Southwest Gas Interested in learning more about ensuring great digital experiences in 2024? Click here for Zscaler's perspectives. This blog is part of a series that looks ahead to what 2024 will bring for key areas that organizations like yours will face. The next blog in this series covers hybrid work predictions for 2024. 
Forward-Looking Statements This blog contains forward-looking statements that are based on our management's beliefs and assumptions and on information currently available to our management. The words "believe," "may," "will," "potentially," "estimate," "continue," "anticipate," "intend," "could," "would," "project," "plan," "expect," and similar expressions that convey uncertainty of future events or outcomes are intended to identify forward-looking statements. These forward-looking statements include, but are not limited to, statements concerning: predictions about the state of the cyber security industry in calendar year 2024 and our ability to capitalize on such market opportunities; anticipated benefits and increased market adoption of “as-a-service models” and Zero Trust architecture to combat cyberthreats; and beliefs about the ability of AI and machine learning to reduce detection and remediation response times as well as proactively identify and stop cyberthreats. These forward-looking statements are subject to the safe harbor provisions created by the Private Securities Litigation Reform Act of 1995. These forward-looking statements are subject to a number of risks, uncertainties and assumptions, and a significant number of factors could cause actual results to differ materially from statements made in this blog, including, but not limited to, security risks and developments unknown to Zscaler at the time of this blog and the assumptions underlying our predictions regarding the cyber security industry in calendar year 2024. Risks and uncertainties specific to the Zscaler business are set forth in our most recent Quarterly Report on Form 10-Q filed with the Securities and Exchange Commission (“SEC”) on December 7, 2022, which is available on our website at ir.zscaler.com and on the SEC's website at www.sec.gov. 
Any forward-looking statements in this release are based on the limited information currently available to Zscaler as of the date hereof, which is subject to change, and Zscaler does not undertake to update any forward-looking statements made in this blog, even if new information becomes available in the future, except as required by law. Tue, 09 Jan 2024 08:00:01 -0800 Rohit Goyal https://www.zscaler.com/blogs/product-insights/digital-experience-monitoring-predictions-2024 Data validation on production for unsupervised classification tasks using a golden dataset https://www.zscaler.com/blogs/product-insights/data-validation-production-unsupervised-classification-tasks-using-golden Abstract Have you ever been working on an unsupervised task and wondered, "How do I validate my algorithm at scale?" In unsupervised learning, in contrast to supervised learning, the validation set has to be manually created and checked by us, i.e., we have to go through the classifications ourselves and measure the classification accuracy or some other score. The problem with manual classification is the time, effort, and work required, but this is the easy part of the problem. Let's assume we developed an algorithm and tested it very well while manually reviewing all the classifications. What about future changes to that algorithm? After every change, we would have to check the classifications manually again. The classified data might change over time, and it might also grow to huge scales as our product evolves and our customer base grows, making the manual classification problem much more difficult. Have you started to worry about your production algorithms already? Well, you shouldn't! After reading this, you will be familiar with our proposed method to validate your algorithm's score easily, adaptively, and effectively against any change in the data or the model. 
So let's start detailing it from the beginning. Why is it needed? Algorithm modifications happen continuously. For example: Runtime optimizations Model improvements Bug fixes Version upgrades How are we dealing with those modifications? We usually use QA tests to make sure the system keeps working. At the same time, the best among us might even develop regression tests to make sure, for several constant scenarios, that the classifications do not change. What about data integrity? But what about the real classifications on prod? Who verifies how they change? We need to make sure that we won't have any disasters on prod when deploying our new changes to the algorithm. For that, we have two possible solutions: Naive solution - manually review all the classifications on prod (which is of course not possible) Practical solution - use a sample of each customer's data on prod, sized using the margin of error equation. Margin of error To demonstrate, we are going to take a constant sample from each customer's data, which will represent the real distribution of the data with minimal deviation. We will size this sample using the margin of error equation, known from election surveys, where sample sizes are often derived from it. So, how does it work? The margin of error E for a proportion p measured on a sample of size n is E = Z * sqrt(p(1 - p) / n), and we can rearrange it to extract the needed sample size. We would like a maximum margin of error of 5%, and we use the constant Z = 1.96 for a confidence level of 95% (this can be changed for another confidence level). Solving for the required sample size over a population of known size N gives: n = (Z^2 * p(1 - p) / E^2) / (1 + Z^2 * p(1 - p) / (E^2 * N)). This equation is an expansion of the one above and can be used when we know the full data size, to be more precise. Otherwise, we are left with only the numerator, n = Z^2 * p(1 - p) / E^2, which is also fine if we don't have the full data size. 
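As a sketch, the sample-size calculation above can be implemented in Python. The function name and defaults here are illustrative (p = 0.5 is the most conservative proportion), not the author's production code:

```python
import math

def required_sample_size(population_size=None, margin_of_error=0.05,
                         z=1.96, p=0.5):
    """Sample size from the margin-of-error equation, with an
    optional finite-population correction (p = 0.5 is the worst case)."""
    # Numerator: sample size for an unknown/infinite population
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population_size is None:
        return math.ceil(n0)
    # Finite-population correction when the full data size is known
    return math.ceil(n0 / (1 + (n0 - 1) / population_size))
```

For example, at 95% confidence and a 5% margin of error, a customer with 10,000 classifications needs a sample of about 370, while an unknown population size requires 385.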
We can now freeze those samples, which we call a "golden dataset," and use them as a supervised dataset for future modifications; it serves as a data integrity validator on real data from prod. Because prod data may change over time, we encourage you to update this golden dataset from time to time. The flow of work for end-to-end data integrity: manual classification to create a golden dataset; maintaining a constant baseline of prod classifications; developing a suite of score comparison tests; and integrating the quality check into the CI process of the algorithm. So, how will it all work together? You can see that in the following GIF: We may now push any change to our algorithm code and remain protected, thanks to our data integrity shield! For further questions about data integrity checks, or data science in general, don't hesitate to reach out to me at [email protected]. Fri, 05 Jan 2024 14:41:10 -0800 Eden Meyuhas https://www.zscaler.com/blogs/product-insights/data-validation-production-unsupervised-classification-tasks-using-golden Data Protection Predictions for 2024 https://www.zscaler.com/blogs/product-insights/data-protection-predictions-2024 As IT teams reflect on 2023 and look forward to 2024, we can all agree that data is the lifeblood of an organization. To that end, every organization's goal should be to have visibility and control of data, wherever it's created, shared, and accessed. New cloud apps, GenAI, remote work, and advanced collaboration approaches are driving a greater need to centralize protection controls and analytics as well as increase efficiency. Without further ado, here are five predictions on how this will come together in 2024. 1. 
SaaS data gets a new protector While CASB has been a staple of SaaS data protection for quite some time, a new kid on the block is getting popular: SaaS security posture management (SSPM). SSPM comes at the problem of cloud data protection from a different angle. Where CASB focuses on securing collaboration risks attached to data (like sharing data with open links), SSPM focuses on securing the cloud itself. Shared responsibility models put the onus on your organization to ensure your SaaS apps have airtight configuration and integration posture. Since many of the largest breaches have stemmed from cloud misconfigurations, this is a growing concern. SSPM was built to address this very issue. Via API and a shadow IT catalog, SSPM scans your SaaS apps and platforms (e.g., Microsoft 365, Google) and reveals misconfigurations or integrations that put you at risk of a breach. As SSPM begins to show up on radars worldwide, it's important not to fall into point product land. Adding yet another point product to your environment is how many organizations end up with a Frankenstein security stack. As such, security service edge (SSE) becomes a logical final resting place for this core technology. Why? Complete SaaS security needs to be more than just controlling misconfigurations and integrations—you also need to think about SaaS identity (least-privileged access and permissions) and context visibility (who, what, where, and why). SSE excels in both of these areas since it is becoming the de facto cloud security stack, which has all this information in spades. Additionally, SSE was built with extensibility for new features in mind. Pairing SSPM with the CASB, DLP, and data protection aspects of SSE delivers a fantastic platform from which to launch your SaaS security efforts. You get a unified approach to all four areas you need for airtight, holistic SaaS security: secure identity, secure data, shadow IT governance, and cloud posture. 2. Managed or unmanaged device? Who cares! 
In 2024, challenges with unmanaged (BYOD) endpoints used by your employees and partners will start to become a thing of the past. These cast-offs of the IT community have been a thorn in the side of security for some time since, to keep BYOD users productive, you still need to give them access to good stuff—like sensitive data. Since you don’t own or manage BYOD endpoints, you don’t have control over that data once it lands on the device. With managed devices, you have lots of control levers to keep data secure. You can ensure patch level and device posture are up to snuff, or even remotely wipe the machine if need be. Not so much with BYOD. With newer approaches like browser isolation, handling BYOD becomes a snap. Just throw those devices into an isolated browser before you send them off to access all that sensitive data. This way, the data remains in the isolated browser and never lands on the unmanaged device. Data is streamed to the device and appears on the screen, but you can’t cut, paste, print, or download it. Look for vendors who can deliver this game-changing functionality without the need for a software agent, and with easy-to-configure BYOD portals that make getting app access as easy as logging in and clicking on the app of choice. 3. Secure the life cycle, not just the data Another approach to posture that is gaining traction is data security posture management. While SSPM focuses on SaaS apps, DSPM focuses on the life cycle of your data to ensure it always has the right security posture. It’s about who, what, where, and why, much like SSPM in our first prediction. However, in this case, the hero of the story is your data. Why are organizations focusing on this? Pick the most sensitive, crown-jewel piece of information in your organization. Naturally, you’d like to know where it is, where it moves to, who has access to it, if there are risky behaviors attached to it, and guidance on how to close those risks. 
In essence, you want to protect and follow that data anywhere. DSPM helps you do that, at scale, across all your sensitive data, with in-depth context to make the right protection decisions. The result is a consistent safe data posture that is inherently stronger and more airtight than before. Much like SSPM, look for DSPM to become a core part of SSE. Paired with other key data protection technologies like DLP, CASB, and centralized policy control, DSPM will be an invaluable addition to data protection programs that need to up their game around control of sensitive data. 4. The lines between threat and data protection continue to blur At 2023's Black Hat conference, it was astounding how many people wanted to talk about data protection. For a conference traditionally focused on stopping cyberattacks, this was profound, and it alluded to a shift happening across the industry. After all, it’s true what they say: it’s all about data. Today’s cyberthreats are as much about stealing data as hurting company productivity. Adversaries have realized data is a gold mine, and they will continue to exploit it. So, as security architects think about building out defenses against today’s threats, data protection will become an integral part of the equation. As we blast through 2024, watch out for new data protection offerings that give you more choices on the surface—but that also risk a fragmented approach. The moral of the story is keep your eye on the prize. There’s a reason data protection is part of SSE, one of the fastest-growing security architectures in the last decade. When data protection is centralized in a high-performance inspection cloud with a single agent, things become super streamlined and unified across all channels you need for great protection. Remember that DLP is the core building block of data protection. With a centralized DLP engine, all data across endpoint, network, and at rest in clouds triggers the same way. 
This leads to a single point of truth for protection, investigations, and incident management, which is what every IT team wants. 5. Every prediction blog will have something about GenAI Our other predictions will have varying hit rates, but this one is 100% guaranteed. No 2024 prediction blog will be complete without GenAI. It’s going to revolutionize the world right before it destroys it, right? Like all new technology crazes, there will be an equilibrium process. Sure, GenAI will enable us to move faster and smarter, but there will be a learning curve around what it does well, and what it doesn’t. Companies will try to integrate it across their business stack to varying degrees of success. But one thing is for sure: data will be headed to GenAI at an alarming rate, so data protection will need to focus on controlling what data goes into GenAI while leveraging GenAI’s power to find risks faster. (I realize I just said, in essence, “using GenAI to catch GenAI leaking data to GenAI,” so apologies for that.) Basically, GenAI is just another productivity tool we need to protect against misuse. Treat GenAI like a shadow IT app. To control it, you need a platform that delivers complete visibility and the proper levers to enable it safely within your organization while ensuring sensitive data doesn’t leak to it. The other half of this is using GenAI to make security smarter. AI will continue to find its way into the ubiquity of computing. We will take for granted its power to help us deliver more powerful correlation, context, analysis, and response times. That’s the relentless pursuit of better security, which is what we’re all about. But let's avoid calling anything in the future “NexGenAI,” because as a marketer, that’s just not cool, man. Putting it all together If you’ve made it this far, you’ve probably picked up on a few themes. Great data protection requires context, integration, posture, and a platform to bring it all together. 
There’s no telling how far security service edge will take us, but it’s set up for a great year as its architecture expertly enables new features, improves on existing ones, and delivers all-around unified, high-performance data protection. If you’re looking to up your data security game in 2024, we’ve got you covered. Jump on over to read about the Zscaler Data Protection platform or get in touch with us to book a demo. Interested in reading more about Zscaler's predictions in 2024? Read our previous blog in the series about cyber predictions. Thu, 04 Jan 2024 08:00:01 -0800 Steve Grossenbacher https://www.zscaler.com/blogs/product-insights/data-protection-predictions-2024 AI: Boon or Bane to Security? https://www.zscaler.com/blogs/product-insights/ai-boon-or-bane-security Security professionals believe offensive AI will outpace defensive AI A recent Cybersecurity Insiders report found that AI is transforming security—making fundamental (and likely permanent) changes to both the attacker and defender toolkits. The “Artificial Intelligence in Cybersecurity” report surveyed 457 cybersecurity professionals online and also tapped into Cybersecurity Insiders’ community of 600,000 information security professionals to find out what CISOs and their frontline teams think about AI’s impact on cybersecurity. The report reveals some sobering findings on what security professionals most fear about AI in the hands of malicious actors. 
According to the report, 62% of security professionals believe offensive AI will outpace defensive AI. Here’s a breakdown of the report and Zscaler’s take on what to do to combat AI-driven cyberattacks. Source: 2023 Artificial Intelligence in Cybersecurity Report, Cybersecurity Insiders AI increases the sophistication of cyberattacks Unsurprisingly, 71% of respondents believe AI will make cyberattacks significantly more sophisticated, and 66% think these attacks will be more difficult to detect. Source: 2023 Artificial Intelligence in Cybersecurity Report by Cybersecurity Insiders These findings align with observations by the Zscaler ThreatLabz security research team. For instance, the 2023 ThreatLabz Phishing Report noted that AI tools have significantly contributed to the growth of phishing, reducing criminals’ technical barriers to entry while saving them time and resources. Concerningly, the use of AI in phishing campaigns is projected to grow in the coming years. Bracing for AI-enabled ransomware and cyber extortion attacks should be top-of-mind for security practitioners. Think about it: ransomware attacks typically start with social engineering, which 53% of respondents believe will grow more dangerous because of AI. For instance, attackers can use AI voice cloning to impersonate employees to gain privileged access, or use generative AI to help craft convincing phishing emails. Moreover, it will also get easier for attackers to discover and identify zero-day vulnerabilities. 
Also, the business model of encryption-less extortion—in which threat actors steal data and demand a ransom to avoid a leak, rather than encrypting files—will benefit from advancements in AI-enabled tools that can drastically speed up the development of malicious code, exacerbating the threat to both public and private organizations. Organizations plan to increase AI usage in security Zscaler strongly recommends that security practitioners prepare for more coordinated and effective attacks on larger groups of people, as threat actors will leverage AI to launch more sophisticated scams across different communication channels, such as email, SMS, and websites. As the Cybersecurity Insiders survey found, security teams plan to invest more in defensive AI capabilities to do just that. Source: 2023 Artificial Intelligence in Cybersecurity Report by Cybersecurity Insiders In another notable finding, 48% of respondents believe the use of deep learning for detecting malware in encrypted traffic holds the most promise for enhancing cyber defenses. At Zscaler, we have always advocated for inspecting most (if not all) TLS/SSL traffic and applying layered inline security controls. Today, at least 95% of traffic is encrypted (Google Transparency Report), and the Zscaler ThreatLabz 2023 State of Encrypted Attacks report shows that 85.9% of threats are now delivered over encrypted channels, underscoring the need for thorough inspection of all traffic. The Zscaler Zero Trust Exchange inspects HTTPS at scale using a multilayered approach with inline threat inspection, sandboxing, data loss prevention, and a wide array of additional defense capabilities. On top of all that, the AI-powered Zscaler cloud effect means that all threats identified across the global platform trigger automatic updates to protect all Zscaler customers. Strategies for combating AI-powered adversaries Technology has always been a double-edged sword. The age of AI has arrived, and it is just beginning. 
Accordingly, organizations should prioritize the adoption of AI for cyberthreat protection—so it is gratifying that 74% of respondents say AI is a “medium” to “top” priority for their organization. Additionally, partnering with security vendors who offer superior AI capabilities is crucial. This is easier said than done, as most vendors now claim to leverage AI. The best way forward is to educate yourself, look to vendors with a proven record of technological innovation, and engage them in proofs of concept to assess the efficacy of their solutions for yourself. To find out more about why you need an AI-powered zero trust security platform such as Zscaler’s, watch this on-demand webinar. To read the full “Artificial Intelligence in Cybersecurity” report by Cybersecurity Insiders, get your complimentary copy here. Mon, 08 Jan 2024 08:00:01 -0800 Apoorva Ravikrishnan https://www.zscaler.com/blogs/product-insights/ai-boon-or-bane-security Elevating Cybersecurity: Introducing Zscaler and Microsoft Sentinel's New SIEM & SOAR Capabilities https://www.zscaler.com/blogs/product-insights/elevating-cybersecurity-introducing-zscaler-and-microsoft-sentinel-s-new The evolution of Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) technologies has been pivotal in shaping modern cybersecurity strategies. Traditionally, SIEM systems were primarily focused on data aggregation and alert generation, often resulting in an overwhelming number of alerts for security teams to handle. However, as cyberthreats grew more sophisticated, the need for a more proactive and responsive approach became evident. This led to the emergence of SOAR solutions, which complement SIEM by adding layers of automation, orchestration, and advanced response capabilities. Microsoft Sentinel represents the culmination of this evolution. 
As a cutting-edge SIEM and SOAR solution, Sentinel offers not only comprehensive data collection and analysis but also integrates automated response mechanisms. These advancements allow for quicker, more efficient handling of security incidents, ultimately enhancing the ability of organizations to swiftly adapt and respond to the ever-changing threat landscape. Keeping pace with these advanced features, Zscaler is excited to unveil two new integrations as part of our zero trust collaboration with Microsoft Sentinel. These are: Cloud NSS for ZIA log ingestion into Microsoft Sentinel Zscaler's Cloud NSS, our innovative cloud-to-cloud log streaming service, now makes its way to Microsoft Sentinel, making it faster and easier to deploy, manage, and scale log ingestion from the Zscaler cloud to Microsoft Sentinel. Fig: Cloud NSS overview This service enables native ingestion of Zscaler’s comprehensive cloud security telemetry into Microsoft Sentinel, enriching investigation and threat hunting for cloud-first organizations without the need to deploy any infrastructure. Key benefits include: Reduced complexity: Since Cloud NSS operates in the cloud, it removes the need for additional on-premises hardware or infrastructure. This not only cuts down on physical resource requirements but also simplifies the overall security architecture. Streamlined log management: Cloud NSS facilitates the efficient management and scaling of log ingestion. It simplifies the process of collecting and analyzing security logs, making it easier for organizations to manage large volumes of data. Scalability and flexibility: Cloud NSS is inherently scalable, accommodating the growing data and security needs of an organization. This flexibility ensures that as a company grows, its security infrastructure can grow and adapt without major overhauls. 
Expanded Zscaler Playbooks for Microsoft Sentinel The expanded Zscaler Playbooks for Microsoft Sentinel mark a significant advancement in our joint capability with Microsoft Sentinel. All Zscaler Playbooks leverage OAuth 2.0 for authentication, which results in: Better security: OAuth 2.0 secures your APIs with dynamic credentials, which are time-bound and generated on demand for a client. Limited exposure of credentials: Unlike the authentication model that uses API keys and ZIA admin credentials and may involve user management outside the organization's identity provider, OAuth 2.0 does not require ZIA admin credentials for authentication. Granular access control: The Client Credentials OAuth flow employs API Roles to define the permissions required to access specific categories of cloud service APIs. Fig: OAuth 2.0 Flow Take advantage of the following Zscaler Playbooks to automate your workflows: Zscaler-OAuth2-Authentication: Authenticate using OAuth 2.0 Zscaler-OAuth2-BlacklistURL: Blacklist a URL in the Advanced Threat Protection Module. Zscaler-OAuth2-BlockIP: Block an IP using a URL category blocklist. Zscaler-OAuth2-BlockURL: Block a URL using a URL category blocklist. Zscaler-OAuth2-LookupIP: Look up the URL category an IP belongs to. Zscaler-OAuth2-LookupSandboxReport: Look up a Sandbox Report. Zscaler-OAuth2-LookupURL: Look up the URL category a URL belongs to. Zscaler-OAuth2-UnblacklistURL: Un-blacklist a URL in the Advanced Threat Protection Module. Zscaler-OAuth2-UnblockIP: Remove an IP from a URL category blocklist. Zscaler-OAuth2-UnblockURL: Remove a URL from a URL category blocklist. Zscaler-OAuth2-WhitelistURL: Whitelist a URL in our Advanced Threat Protection Module. 
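To make the client credentials flow concrete, here is a hedged Python sketch of how such a token request is assembled per the OAuth 2.0 specification (RFC 6749, section 4.4). The endpoint URL and identifiers below are placeholders, not Zscaler's actual API:

```python
import urllib.parse

def build_client_credentials_request(token_url, client_id, client_secret,
                                     scope=None):
    """Build the URL and form-encoded body for an OAuth 2.0
    client-credentials token request (RFC 6749, section 4.4)."""
    fields = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    if scope:
        fields["scope"] = scope
    return token_url, urllib.parse.urlencode(fields)

# The body is POSTed with Content-Type:
# application/x-www-form-urlencoded; the JSON response carries a
# time-bound access_token sent as "Authorization: Bearer <token>"
# on subsequent API calls.
```

Because the access token is short-lived and scoped by API Roles, a leaked token exposes far less than a static API key or admin credential would.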
Fig: Zscaler-OAuth2.0 LookupURL Playbook The new Zscaler Playbooks for Microsoft Sentinel can be downloaded now from the Zscaler GitHub repository - https://github.com/zscaler/microsoft-sentinel-playbooks Wed, 20 Dec 2023 08:00:01 -0800 Paul Lopez https://www.zscaler.com/blogs/product-insights/elevating-cybersecurity-introducing-zscaler-and-microsoft-sentinel-s-new Securing DNS over HTTPS (DoH) https://www.zscaler.com/blogs/product-insights/securing-dns-over-https-doh DNS is often the first step in the cyber kill chain. Snooping on DNS queries yields a treasure trove of information, and manipulating DNS resolution is one of the key methods of compromise. While innovations like encrypted DNS over HTTPS (DoH) help conceal queries, they can introduce new challenges for network security admins trying to implement PDNS mandates and inspect DNS traffic for signs of compromise. Fortunately, Zscaler’s DNS security capabilities built into the Zero Trust Exchange can help. DNS is a key vector in the cyber kill chain One of the first steps in any network communication is a DNS query. Given the plaintext nature of these queries, bad actors often conduct reconnaissance on the target infrastructure by snooping on DNS queries. Manipulating DNS can give attackers the ability to conduct man-in-the-middle attacks, compromise endpoints, and steal data. DNS queries are typically connectionless and easier to subvert by modifying or bypassing resolver settings on network devices or endpoints. At the same time, DNS queries can also serve as an early warning for threats and one of the best opportunities to neutralize them before any communication is established between the target and the malware site or C2 server. Attackers know this and often take great pains to conceal their DNS activity using obfuscation techniques such as cycling through many domains and subdomains. 
DNS is also often abused as the protocol for malware command and control, with TXT records used to send commands and A records used to exfiltrate data. One of the earliest and best opportunities to identify and neutralize threats, while identifying infected hosts and bad actors, is in the initial DNS requests and the ongoing DNS communications. Most legitimate web and non-web applications use DNS in at least one stage of a session, often at the very beginning (as with normal web requests or SSH sessions) and sometimes repeatedly throughout. Bad actors recognize both the need for DNS communications and the opportunity to leverage DNS at several stages of an attack. From getting targets to malicious locations, to mid-stage post-infection C2 instruction, to exfiltrating data at the end stages, DNS is often integral to an attack chain. Attackers take great pains to conceal their attacks' DNS activity: obfuscating request and response communication, cycling through many domains or subdomains, abusing the protocol for malicious purposes, or poisoning entries and pointing resolutions to attacker-controlled resolvers.

Figure 1: DNS Control Breaks the Kill Chain

New emphasis on DNS security in an encrypted world

One of the biggest changes underway is the push to encrypted DNS. DNS over HTTPS (DoH) started out purely as a privacy tool but is now increasingly recommended by national governments worldwide as a way for industries to maintain security and integrity, in addition to privacy, in what has until now been one of the last major services to remain widely unencrypted. Many of these same national governments, led by the Five Eyes intelligence alliance and their close allies, often require their national agencies to use DoH as a key ingredient in their Protective DNS (PDNS) mandates.
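Mechanically, a DoH lookup is just an HTTPS request carrying the DNS question, which is why it blends in with ordinary web traffic. Here is a minimal sketch using Cloudflare's public JSON DoH endpoint (a widely documented public resolver; any DoH resolver would look similar):

```python
# Sketch: an encrypted DNS lookup over HTTPS (DoH) instead of plaintext UDP/53.
# Uses Cloudflare's public JSON DoH endpoint; other DoH resolvers look similar.
import json
import urllib.parse
import urllib.request

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def build_doh_query(name: str, rtype: str = "A") -> urllib.request.Request:
    """Build a DoH JSON query; the DNS question travels inside TLS, invisible on the wire."""
    qs = urllib.parse.urlencode({"name": name, "type": rtype})
    return urllib.request.Request(
        f"{DOH_ENDPOINT}?{qs}",
        headers={"Accept": "application/dns-json"},  # request the JSON answer format
    )

def resolve(name: str, rtype: str = "A") -> list:
    """Send the query and return the Answer section (requires network access)."""
    with urllib.request.urlopen(build_doh_query(name, rtype)) as resp:
        return json.load(resp).get("Answer", [])
```

To an on-path observer this is indistinguishable from other HTTPS traffic to the resolver, which is exactly why inline inspection, rather than simple block/allow, is needed.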
Figure 2: National governments increasingly recommend (and in some cases require) a PDNS solution

Unfortunately, attackers are also aware of both the trend toward encrypted DNS and the opportunity it presents them, particularly since DoH traffic often goes uninspected or only partially inspected. Attackers also know that DoH is increasingly enabled by default in most browsers and is configurable by users and processes. And since DoH is increasingly recommended as a best practice in PDNS recommendations and mandates, DoH traffic is no longer unusual in a corporate network, and simply blocking it is no longer acceptable when encrypted web traffic (HTTPS generally) can be inspected.

New challenges for legacy solutions

Bad actors understand better than most that legacy-gen firewall and proxy designs, whether now "cloudified" or still trapped in virtual or physical boxes, are inherently limited to simple block/allow policies for encrypted DNS. Crime syndicates further understand that administrators are hesitant to enable TLS decryption, since this often results in noticeable performance degradation on legacy-gen appliance-based firewalls. Inspecting SSL/TLS brings a step function of added hardware spend, complicated user and traffic segmentation, or added network administration complexity (usually all three at once).

Figure 3: Using only a pure-play DNS resolver service may mean some DNS queries bypass DNS security controls

Some of the more advanced legacy vendor solutions extend their general DoH block/allow policies to known resolver services. This means they trust certain third-party DoH resolvers but leave the content of the DNS over HTTPS communication uninspected and policy unenforced. Because of this inspection gap, companies often need to concurrently engage and manage unintegrated third-party DNS service providers and manage policies across multiple platforms.
All the while, they must ensure that no attacker malware or insider circumvents this by reaching a new, unsanctioned, or malicious DoH service provider.

Figure 4: Legacy-gen firewall-only and standard proxy solutions have limitations on DNS inspection and may miss DNS transactions entirely. They also usually require another vendor to deliver a complete DNS solution.

Securing DNS with Zscaler using zero trust controls in the cloud

Zscaler provides a proxy and security control layer in our Zero Trust Exchange for all traffic, including DNS. All DNS over HTTPS and standard DNS traffic is fully inspected regardless of what DNS resolver service the endpoint uses. Zscaler also secures recursive and iterative requests. Using Zscaler for DNS brings the zero trust approach to all DNS for complete security. This means that all DNS transactions are inspected and secured according to security policy for all users, workloads, and servers, all the time. This not only fulfills the security demands of Protective DNS and other DNS security best practices, but extends corporate security to all DNS over HTTPS and to all corners of the customer estate, from mobile users to cloud workloads to guest Wi-Fi access points.

Figure 5: Complete zero trust DNS security

Complete zero trust DNS security applies steps to every DNS transaction that uphold NIST 800-207 principles, including:

What is the identity of the user? Endpoint, workload, server?
Which DNS protocol is being used (DoH, UDP:53, TCP:53)?
What is being communicated or requested? Domain, metadata, tunnel, record type?
Where is the DNS transaction trying to go? Where should it go instead?
Is the inspected request-side transaction allowed, considering the above?
What is the category of the inspected content? Allow/block/log the request and act.
Does the request need to be translated to another DNS protocol (UDP to DoH, etc.)?
What is the inspected response back to the matching allowed request?
Is the response expected and allowed for the user? Domain, content, metadata, tunnel, record type, etc.?
Allow/block/log the response and act.
If the response is allowed, does it need to be translated back to the original DNS protocol (DoH to UDP, etc.)?
Complete the allowed DNS transaction.
Complete and enrich log data for the transaction.

Global scale of the Zscaler Trusted Resolver

Another unique capability of the Zscaler service is the optional Zscaler Trusted Resolver (ZTR). ZTR comprises clusters of DNS resolvers in almost all of our 150 global data centers that Zscaler customers can use for public recursive queries. DNS requests to any provider can be intercepted as they transit the Zero Trust Exchange and resolved at the ZTR instance in the data center nearest the requestor. Optionally, ZTR can be addressed explicitly, removing any need for a public resolver or third-party service. DNS resolution is not currently part of either the SASE or SSE definitions, so most of these vendors do not offer a DNS resolution capability; without Zscaler, DNS resolution requires a separate vendor. Zero trust DNS security is provided for all DNS transactions whether or not the Zscaler Trusted Resolver is used. The benefits of ZTR center on a fast, highly available DNS service that is globally distributed and returns geographically localized resolutions. Since ZTR supports DNSSEC, there is the added advantage of high-integrity resolutions in addition to zero trust DNS security.
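The request-side decision steps listed above can be sketched as a single policy function. The identities, categories, and rules below are hypothetical placeholders, not Zscaler's actual policy engine:

```python
# Sketch of a request-side DNS policy decision following the steps listed above.
# The identities, categories, and rules are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class DnsRequest:
    identity: str     # user, endpoint, workload, or server
    protocol: str     # "DoH", "UDP:53", or "TCP:53"
    qname: str        # requested domain
    record_type: str  # "A", "TXT", ...

BLOCKED_CATEGORIES = {"malware", "c2", "newly-registered"}    # illustrative
CATEGORY_DB = {"evil.example": "c2", "news.example": "news"}  # illustrative

def decide(req: DnsRequest) -> str:
    """Return 'allow', 'block', or 'redirect'; a real engine would also log each step."""
    category = CATEGORY_DB.get(req.qname, "uncategorized")
    if category in BLOCKED_CATEGORIES:
        return "block"     # neutralize before any connection is established
    if req.protocol == "UDP:53":
        return "redirect"  # e.g., translate plaintext UDP to DoH or a trusted resolver
    return "allow"
```

The key design point is that the verdict is computed per transaction from identity, protocol, and content, rather than from a one-time network-level trust decision.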
Capability | Zscaler | Other SSE Solutions | Legacy-Gen Firewalls | DNS Providers
Standard DNS content inspection | ✅ | ✅ | ✅ | Can be bypassed
Basic DoH inspection | ✅ | If DoH then Block/Allow | If DoH then Block/Allow | Typically bypassed
DoH content inspection | ✅ | X | Limited by TLS decryption capacity | Typically bypassed
Global DNS resolution service | ✅ | X | X | ✅

Complete DNS Control — better security and performance

The Zscaler Zero Trust Exchange is the only cloud-native solution that offers complete DNS security along with better DNS performance through our Trusted Resolver. DNS Control is a single, consolidated function embedded within the wider Zscaler service that directly delivers the highest DNS security efficacy with the best possible user experience, all while reducing vendor sprawl, complexity, and cost. Organizations needing to improve DNS security and privacy, or facing PDNS mandates, can ensure they have protection against emerging DNS-based threats. All customers with ZIA and Cloud Firewall enabled can configure DNS Control rules, including full inspection of DoH traffic. Fri, 15 Dec 2023 08:00:01 -0800 Stefan Sebastian https://www.zscaler.com/blogs/product-insights/securing-dns-over-https-doh

How Zscaler Helps Healthcare IT Infrastructure Teams Sleep at Night https://www.zscaler.com/blogs/product-insights/how-zscaler-helps-healthcare-it-infrastructure-teams-sleep-night

You are part of the infrastructure networking team at a large healthcare provider. It's 6 a.m., and you are lying down to attempt to get some sleep after you and your team just completed a major infrastructure upgrade that started at 11 p.m. and ran until 5 a.m. (because that's when change windows happen in healthcare).
The change involved a complex migration of the core network infrastructure connecting the entire hospital IT environment - medcarts, Wi-Fi, glucometers, VoIP, Vocera, Imprivata workstations - in short, every clinical operation with an IT dependency, from check-in to discharge, relies on this critical part of the infrastructure you just performed open-heart surgery on. The IT application team members join the testing bridge at 5 a.m., nearly as bleary-eyed as you, but without an in-depth understanding of what just happened over the last six hours. All the application testers finish their testing as planned (a point-in-time spot check) to the best of their ability (since they don't REALLY know what changed or what they are testing for, and just want to get ready to do their own jobs in three hours), and all tests come out GREEN, or validated. The call wraps and you can try to rest your weary eyes. Until… your phone rings. It's your manager. Application X, which tested fine (or wasn't tested at all since it wasn't expected to have issues), is having problems. The clinical night shift on the nursing floor didn't report the issue to IT until the end of their shift because they were (rightly) doing their jobs: taking care of patients. The next eight hours bring patient and employee dissatisfaction over the interrupted workflow, multiple tech support calls, and escalations, until the issue is finally resolved and everyone can get back to work as usual. At least until the next time IT decides to do an "upgrade." We've all been patients, and it's likely we have been impacted by said "IT problem." What if, prior to the next major scheduled upgrade, all the workstations throughout the entire hospital were periodically checking the performance of their most critical applications (an on-premises application, an internet application, and an application hosted in a private cloud) both before and after the change?
Hundreds or thousands of unique views of application performance from clinic, hospital, and home users near and far are graphed and centralized into a dashboard, giving IT teams confidence that the change they just completed is ACTUALLY working as expected. Not a point-in-time check in the wee hours of the morning. Enter Zscaler Digital Experience (ZDX). With ZDX, powered by the Zscaler Zero Trust Exchange, unexpected IT issues after a planned infrastructure change could become a thing of the past, improving IT service and helping leading healthcare providers deliver the safe, quality care their patients expect and deserve. Using a multitude of metrics from the end user's computer (CPU, memory, disk) through the network (DNS and HTTP response times, packet loss, jitter, etc.) toward the defined application, ZDX computes a score equating to "poor," "okay," or "good."

A snapshot of a user's application performance over time, with the option to utilize the AI/ML feature "Analyze Score"

And since a single computer doing a point-in-time check is not sufficient to confidently identify issues, Zscaler Digital Experience provides a holistic vantage point from all locations. ZDX is a user experience monitoring and improvement platform, and proactive problem resolution is a key benefit, since we know administrators and service desk teams are not staring into a system waiting for a problem to occur. With Zscaler's alerting capabilities and integration with third-party service management tools like ServiceNow, team members can create alerting criteria relevant to an impending change and be proactively alerted if the criteria are met.
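To make the scoring idea concrete, here is a toy model of how device and network metrics might roll up into a "poor"/"okay"/"good" rating. The weights and thresholds are invented for illustration; ZDX's actual scoring model differs:

```python
# Toy model: rolling device and network metrics into a single experience rating.
# Weights and thresholds are invented for illustration; ZDX's real model differs.
def experience_score(cpu_pct: float, mem_pct: float, dns_ms: float,
                     http_ms: float, packet_loss_pct: float) -> str:
    """Map raw metrics to a 0-100 score, then to 'poor' / 'okay' / 'good'."""
    score = 100.0
    score -= max(0.0, cpu_pct - 80) * 1.0    # penalize a pegged CPU
    score -= max(0.0, mem_pct - 90) * 1.0    # penalize memory exhaustion
    score -= max(0.0, dns_ms - 50) * 0.2     # slow name resolution
    score -= max(0.0, http_ms - 300) * 0.05  # slow page response
    score -= packet_loss_pct * 5.0           # loss hurts user experience most
    score = max(0.0, score)
    if score >= 66:
        return "good"
    if score >= 33:
        return "okay"
    return "poor"
```

The value of a composite like this is that one number per user per application can be trended before and after a change window, instead of eyeballing many raw metrics.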
Here are a few examples: Finally, with the most recent Zscaler + Imprivata integration, a holistic vision of application performance pre- and post-change is a reality for all devices running Zscaler Client Connector, including Imprivata workstations (which, for many healthcare providers, represent a significant percentage of workstations critical for patient care). In conclusion, by leveraging ZDX, healthcare providers can proactively address potential issues and ensure a seamless transition during infrastructure upgrades. The comprehensive monitoring and alerting capabilities of ZDX, combined with its integration with Imprivata and other service management tools, enable healthcare organizations to deliver the safe and quality care that patients expect and deserve. With ZDX, unexpected IT problems can become a thing of the past, allowing healthcare providers to focus on their primary goal of providing excellent patient care. Visit our healthcare page for more information about how Zscaler is working with healthcare organizations to secure, simplify and transform with zero trust. We'd also love to show you the benefits of ZDX firsthand! Schedule a custom demo here. Tue, 12 Dec 2023 06:43:39 -0800 Paul Sullivan https://www.zscaler.com/blogs/product-insights/how-zscaler-helps-healthcare-it-infrastructure-teams-sleep-night What is Next with Zscaler Risk360™ https://www.zscaler.com/blogs/product-insights/what-next-zscaler-risk360-tm In recent months, we’ve spoken to dozens of organizations about better cyber risk management. Be it in Europe, Asia, or the Americas, the need for an accurate, repeatable method of managing and mitigating cyber risk is acute. There are many reasons for this. The first is the need to quantify and mitigate cyber risk to proactively improve security postures, reducing the chances of a breach. 
To complicate matters, security leaders are increasingly asked to report on cyber risk—both internally to executives and board members, and externally for compliance reasons, such as the new SEC cybersecurity reporting requirements. To meet both of these requirements, security teams often rely on manual processes: pulling data from disjointed tools, attempting to normalize the data, then spending time building reports. This is time-consuming and distracts security team members from proactively protecting the enterprise. Sometimes, these leaders resort to third-party, outside-in cyber risk point products, which are a great expense to purchase and a hassle to set up, only to receive an incomplete risk picture. For these reasons, Zscaler brought Risk360 to market early this summer. Built on the Zscaler Cloud, Risk360 helps security leaders overcome these challenges—and we're already sharing a major product update with compelling enhancements.

How Risk360 helps

Zscaler Risk360 is our powerful risk quantification and visualization framework for remediating cyber risk. Risk360 ingests data from external sources, as well as a company's own Zscaler environment, to create a detailed view of enterprise cyber risk posture across all four stages of a potential cyberattack. It leverages over 110 unique risk factors across the attack chain. These factors are the risks, threats, and potentially dangerous user actions that create organizational cyber risk. Risk360 quantifies these factors to deliver a full picture of cyber risk and track it over time, while also providing clear mitigation detail to kick off security workflows. Additionally, it estimates potential financial exposure and generates CISO board slides in a single click that report an organization's current cyber risk in an executive format.
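As an illustration of how quantified risk factors might roll up into a single score across the four attack stages, here is a toy aggregation sketch. The stage weights and factor values are invented; this is not Zscaler's actual model:

```python
# Toy sketch: aggregating quantified risk factors into one score across the four
# attack stages. The weights and factor values here are invented for illustration.
STAGE_WEIGHTS = {
    "external_attack_surface": 0.3,
    "compromise": 0.3,
    "lateral_propagation": 0.2,
    "data_loss": 0.2,
}

def overall_risk(stage_factors: dict) -> float:
    """Each stage maps to factor scores in [0, 100]; return the weighted overall score."""
    total = 0.0
    for stage, weight in STAGE_WEIGHTS.items():
        factors = stage_factors.get(stage, [])
        stage_score = sum(factors) / len(factors) if factors else 0.0
        total += weight * stage_score
    return round(total, 1)
```

Tracking a weighted roll-up like this over time is what lets a single trend line stand in for hundreds of individual factor measurements in an executive report.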
Risk360 is unique in that it gives CISOs the ability to evaluate the efficacy of their cybersecurity controls across the four stages of attack: external attack surface, compromise, lateral propagation, and data loss. What's more, Risk360 leverages Zscaler's architecture, sitting inline to traffic. We leverage the data and signals that flow through our architecture to populate Risk360, meaning organizations can manage risk with their current Zscaler deployment without having to deploy any additional agents.

What's New in Risk360

Zscaler released Risk360 a few months ago, but we're already delivering a major product update. Let's take a closer look at what's new today:

New Integrations with CrowdStrike: Risk360 now integrates with CrowdStrike, allowing organizations to pull risk signals from CrowdStrike's threat intelligence platform. Incorporating this additional data source enhances the ability of Risk360 to identify potential compromise risks.

Highlighting UEBA Risks: User and entity behavior analytics (UEBA) is a critical component of modern cybersecurity, helping organizations detect and mitigate potential threats posed by insiders or compromised user accounts. Risk360 now includes new factors that specifically highlight UEBA risks to analyze user behavior and identify anomalous activities.

AI-Driven Cybersecurity Maturity Assessments: These new assessments are powered by Zscaler's generative AI, which includes custom in-house developed large language models (LLMs). These reports can replace expensive third-party consulting initiatives and give companies a better idea of how far along they are on their zero trust journeys.

Expanded Financial Modeling: Risk360 now offers expanded financial modeling capabilities with Monte Carlo simulations.
This advanced modeling technique allows organizations to simulate various scenarios, factoring in residual risk, inherent risk, and risk tolerance, building on the financial risk exposure estimates already present in Risk360. By providing a more accurate estimate of financial loss, Risk360 enables organizations to prioritize their mitigation efforts through a financial lens.

Security Risk Framework Mapping: To align with industry best practices, Risk360 now maps to popular security risk frameworks such as MITRE ATT&CK and NIST CSF. This allows organizations to map their cybersecurity controls and risk posture against recognized standards and frameworks, greatly assisting in their efforts to reduce risk as well as achieve and maintain compliance.

SEC Compliance Support: Risk360 now offers enhanced reporting capabilities, including SEC disclosure samples to streamline compliance with S-K Item 106(b) in describing cybersecurity processes.

With these new updates, Risk360 will continue empowering organizations by giving them a comprehensive, data-driven approach to cybersecurity risk management. To learn more, register for our webinar discussing Zscaler Risk360 and the launch of Zscaler Business Insights, or request a demo from your Zscaler team. Tue, 12 Dec 2023 04:00:01 -0800 Raj Krishna https://www.zscaler.com/blogs/product-insights/what-next-zscaler-risk360-tm

Zscaler Business Insights: Optimizing office utilization and delivering SaaS savings https://www.zscaler.com/blogs/product-insights/zscaler-business-insights-optimizing-office-utilization-and-delivering-saas

Work is always evolving. As we know, the workforce has become increasingly distributed over the past few years. During that time, to make up for teams being in different places, enterprises often relied on SaaS applications to spark collaboration and productivity.
Now, the shift we see happening is the adoption of hybrid work models where employees are remote some days and in the office other days, balancing in-office collaboration with work-from-home flexibility. Corporate leaders entrusted to oversee this shift are tasked with keeping associated costs in check to maximize enterprise budgets—both SaaS app spend and the considerable financial resources devoted to an organization's real estate footprint. This makes plain the need for visibility into cost drivers like office space usage and SaaS adoption and spend. However, the status quo for gaining insights to make informed business decisions is challenging. Companies frequently rely on inaccurate, manual processes like best-guess efforts to track SaaS applications in spreadsheets. Many real estate and facilities teams use the same process to analyze badge reader data to understand when employees come into the office, to help with return-to-work strategies. In an effort to move away from manual SaaS tracking, some companies ironically deploy expensive SaaS management apps to track SaaS usage, which frequently fail to deliver a truly accurate picture of a company's SaaS estate. To address these challenges, Zscaler is launching Business Insights, the newest addition to the Zscaler Business Analytics portfolio. This powerful product empowers organizations to right-size their SaaS application usage and spend while optimizing office utilization. With Business Insights, organizations will make data-driven decisions for more efficient and cost-effective hybrid work models.

Optimizing SaaS Usage and Spend

One of the key features of Zscaler Business Insights is its ability to provide organizations with comprehensive visibility into their SaaS usage and spend. Many businesses struggle with SaaS sprawl, where redundant applications and unused licenses lead to unnecessary costs and operational overhead.
With Business Insights, IT leaders can:
• Gain full visibility into SaaS applications to address SaaS sprawl
• Track employee engagement and usage of SaaS apps
• Rationalize SaaS app use, identifying redundant apps
• Visualize the SaaS cost savings from eliminating unused apps and licenses

Optimizing Return-to-Office Journeys

As organizations navigate the transition back to the office, it is crucial to have insights into employee office usage to optimize workplace utilization. Business Insights provides key data to support the return-to-office strategy, helping organizations make the best use of office space and identify areas where office capacity can be reduced. With Business Insights, IT, procurement, and real estate team leaders can:
• See which days employees come into the office vs. working hybrid or remote
• Know which departments come into the office most and least
• View hour-by-hour office footfall visualizations to optimize offices for conference rooms, meals, and other workplace amenities
• Track weekly, monthly, and quarterly office usage trends
• Shift to closed-loop location management with data integrations to show how offices are being utilized versus capacity (coming soon)

Zscaler Business Insights offers organizations the power to use their existing Zscaler architecture, with nothing new to deploy, to optimize their SaaS usage and office utilization, enabling a more efficient and cost-effective digital transformation. Learn more and request a demo here. Tue, 12 Dec 2023 04:00:01 -0800 Aditya Jayan https://www.zscaler.com/blogs/product-insights/zscaler-business-insights-optimizing-office-utilization-and-delivering-saas

Why Rethinking Legacy Network Architectures Is Key For Enterprises https://www.zscaler.com/blogs/product-insights/why-rethinking-legacy-network-architectures-key-enterprises

Ransomware attacks increased by 37% in 2023, with the average enterprise paying ransoms exceeding $100,000.
The latest Zscaler ThreatLabz report discusses this in detail. This is just one example of the cyberattacks plaguing institutions. These attacks follow a familiar pattern: exploiting a large attack surface, compromising systems, moving laterally through the organization, and then exfiltrating data.

Opportunities for Cybercriminals

It's not that organizations aren't looking to prevent attacks from happening; it's that organizations need to rethink their network architecture, especially given the refactoring that has happened over the last couple of years. IT teams had to pivot as organizations went 100% remote, and then shifted again as many organizations became hybrid, with employees coming into the office a couple of days a week. These changes impact employees and how they work. Now, they expect great experiences from both their home environment and their in-office environment, but often fail to consider the security implications. Protecting the company's assets shouldn't introduce complexity or impact end users. As organizations have taken these changes in stride, they have also continued investing in cloud resources to keep up with business demand. Workloads must also have seamless, secure connectivity across clouds, VPCs, and data centers. Keeping these workloads safe is essential, as they are extensions of your data centers and applications. The simple yet complex question is: does it make sense to continue purchasing VPNs and firewalls, expanding unsecured WAN connectivity, and connecting cloud workloads without rethinking the current network architecture for the future? When dissecting this question, thinking through all the challenges and potential approaches can be difficult. Here are some sample questions to consider:

Are VPNs and firewalls required in all circumstances? Do they open up more of an attack surface, making it easier for threat actors?
My WAN infrastructure connects users and devices to all applications, which works fine, but is it secure? Can I reduce costs (e.g., MPLS) since many people work from home?

How can I ensure that my end users' performance is optimal with a hybrid workforce? Are they being backhauled over a VPN to a data center, and is that the most efficient path? Do we have the necessary monitoring tools to identify last-mile ISP and Wi-Fi issues?

We are moving mission-critical workloads to the cloud, but securing them is challenging. The cloud has some security capabilities, but what are the best methods to educate our staff and move away from point solutions?

Looking back at the last few years, it's impressive how IT teams have kept the lights on with minimal staff. They've endured a lot, from their expanded roles to ensuring the business thrives. We can't anticipate what the future holds. However, an architecture that solves many of these challenges through simplicity and lower costs is worth considering. Take a few moments to download our ebook and dive deeper into future-proofing your network architecture. Tue, 12 Dec 2023 08:58:18 -0800 Rohit Goyal https://www.zscaler.com/blogs/product-insights/why-rethinking-legacy-network-architectures-key-enterprises

Empowering Distributed Organizations with Zscaler Business Analytics https://www.zscaler.com/blogs/product-insights/empowering-distributed-organizations-zscaler-business-analytics

Organizations are doing their best every day to navigate the challenges of being a distributed enterprise—which can be many. Employees today work from everywhere: from home, on the road, and increasingly in hybrid return-to-office models. Wherever they are, organizations want their employees to stay productive, empowered by the right SaaS apps and never derailed by IT issues. Meanwhile, they need to address the increased attack surface and enterprise cyber risk that arise from usage of new SaaS apps and users who come and go worldwide.
Struggling to find insights

These trends, part and parcel of digital transformation efforts, underscore a challenge for IT leaders. How can they control the costs and risks (both cyber and productivity) of managing a modern enterprise distributed across SaaS apps, homes, and offices around the world? IT leaders seek the right analytics and insights to address these concerns. In conversations with our customers, they often tell us they lack sufficient insights into cyber risk, digital experiences for remote workers, and cost drivers like SaaS or poorly optimized office footprints. They also tell us that, unfortunately, the status quo doesn't always help. To gain insights, organizations often either buy expensive point tools or burden their teams with manual data analysis and reporting. Point tools—like network monitoring platforms, SaaS app management, or third-party risk tools—always add overhead but seldom provide the accuracy or complete risk pictures organizations need. And of course, manually analyzing raw data from cyber tools or badge readers, and tracking it all in spreadsheets, proves ineffective and time-consuming.

The Zscaler Business Analytics portfolio

With today's announcement, organizations can turn to the Zscaler Business Analytics portfolio to control the costs and risks that confront distributed enterprises. Zscaler will deliver the right first-party data to guide secure and productive digital transformation journeys. Zscaler is the only security vendor to deliver a full Business Analytics portfolio. Zscaler Business Analytics leverages the latest real-time data from across an organization and shapes it into actionable insights using the organization's current Zscaler deployment—with no new tools or vendors needed. The portfolio consists of three key products:

Zscaler Digital Experience keeps users productive by rapidly detecting and resolving app, network, and device issues.
Zscaler Risk360 manages and mitigates cyber risk across all four stages of an attack.
Zscaler Business Insights optimizes SaaS spend and office utilization.

The Zscaler architectural difference

Before we share more detail on today's news, let's look at why Zscaler is the only security vendor to offer a Business Analytics portfolio. In a word: architecture. Our proxy architecture sits inline with traffic, so all customer traffic flows through Zscaler. The Zscaler Zero Trust Exchange handles some 320 billion transactions and 500 trillion signals per day. This is more data than other security vendors see, and it includes not just security signals, but also the user and device activity from which we deliver our Business Analytics portfolio. Competing architectures are firewall-oriented, focused on creating a castle-and-moat network, and since they don't sit inline, they're incapable of gathering or delivering the data and insights organizations need to guide digital transformation and distributed business.

Unveiling Business Insights

Today, we're unveiling the newest part of the Zscaler Business Analytics portfolio, Business Insights. Business Insights delivers the details needed to guide two critical business initiatives: right-sizing SaaS use across the organization and providing key insights to guide back-to-office journeys. Thanks to Business Insights, IT, procurement, and real estate teams will be able to:

Index their entire SaaS portfolio to identify cost savings by highlighting unused licenses.
Automatically identify areas of wastage and app overlap (e.g., multiple comparable UCaaS solutions) through a generative AI-powered app catalog.
Showcase metrics on an organization's hybrid work strategy, visualizing global employee footprint across regions and offices to inform decisions on a return-to-office strategy.
Help optimize office planning for staff needs such as meals, space, and facilities by displaying footfall analytics.
We encourage you to read more in our launch blog post for Business Insights.

Enhancing cyber risk management with Risk360

Earlier this year, we launched another essential component of the Zscaler Business Analytics portfolio. Zscaler Risk360 empowers organizations to proactively quantify and mitigate cyber risks across the entire attack chain. Leveraging real-time data from an organization's Zscaler environment, it provides intuitive visualizations, financial exposure details, and actionable insights to guide effective risk mitigation strategies. Risk360 helps IT leaders have more productive discussions with boards and executives, enabling them to make informed decisions to protect their organization's digital assets. Since unveiling Risk360 over the summer, we've been hard at work adding new capabilities to help organizations move to a more effective security posture:

AI-driven cyber maturity assessments use advanced LLMs to generate detailed cyber posture reports showing where organizations are on their zero trust journey.
New risk factors, including CrowdStrike and user-oriented risk factors (UEBA).
Expanded financial risk modeling, now including Monte Carlo scenarios.
Risk framework mappings for MITRE ATT&CK and NIST CSF.
SEC sample disclosures for S-K Item 106.

See our blog for more details on the enhancements coming to Risk360.

Optimizing digital experiences with ZDX

Organizations need help gaining device, network, and application monitoring insights, as their point tools lack the holistic intelligence required by service desk and network operations teams. To ensure flawless end-user experiences anywhere, these IT teams are under immense pressure to resolve issues and get users back to work faster. Good news! We have recently enhanced Zscaler Digital Experience (ZDX) to provide IT teams with AI-powered insights to reduce ticket volumes, expedite triage, and simplify collaboration between service desk, network operations, and application teams.
Zscaler looks across all devices, networks, and applications and can provide root cause analysis in seconds, boosting productivity and end-user satisfaction.

Zscaler Digital Experience Incident Dashboard

In today's distributed business landscape, gaining the right insights to guide a secure, productive, and informed digital transformation is paramount. Zscaler Business Analytics offers a comprehensive portfolio of solutions that empower organizations to optimize SaaS app usage and costs, enhance cyber risk management, and deliver exceptional digital experiences. With Business Insights, Risk360, and ZDX, your organization can leverage actionable insights to drive efficiency, reduce costs, and ensure a seamless user experience. By partnering with Zscaler, you can navigate the complexities of the modern business landscape with confidence. Tue, 12 Dec 2023 04:00:01 -0800 Raj Krishna https://www.zscaler.com/blogs/product-insights/empowering-distributed-organizations-zscaler-business-analytics What Did Plato Have to Say About Zero Trust Security? https://www.zscaler.com/blogs/product-insights/what-did-plato-have-say-about-zero-trust-security Plato was a philosopher of the fifth and fourth centuries B.C. whose work guided human thought for centuries. Nearly 2,500 years later, his influence still echoes everywhere. This is true even in cybersecurity when it comes to zero trust. How so? To answer that question, let's take a look at one of Plato's famous teachings.

Plato's allegory of the cave

This allegory might sound somewhat strange to our modern ears, but let's dive in. Imagine a deep, dark cave. Within the cave, several individuals have been chained up for the entirety of their lives, and the only thing they have ever been able to see is the cave wall in front of them. Behind them is another group of people.
This latter group is using the light from a fire, along with shapes and replicas of things that exist outside the cave, to cast shadows onto the aforementioned cave wall (take a look at the image below if you’d like some help visualizing things). When the prisoners see these shadows, they are left to believe that the shadows themselves are the “real things,” and that the shadows do not correspond to anything else. For example, if they see shadows of bird shapes, they assume that the shadows are what birds truly are; they do not know that what they see are just shadows cast by replicas that are designed to look like real things (real birds) that exist outside the cave. To see the true forms behind the shadows and the imitations casting them, one would need to leave the cave and behold reality in the light of the sun—where things are quite different. (If you would like to read more about this allegory, you can find a highly scholarly source here). What does this have to do with cybersecurity? Now, we can’t be sure that Plato was thinking about cybersecurity when he came up with the above allegory (although he almost certainly was). Either way, the allegory of the cave has clear applicability when it comes to our present topic. For decades, organizations have been shown the shadows of faulty replicas of what cybersecurity actually is. They have been led to believe that what they are seeing is the “true form” of how security is supposed to look. Specifically, they have been presented continually with hub-and-spoke networks guarded by castle-and-moat security models. But this kind of architecture is a poor fit in the modern world with its remote workers, cloud applications, and increasingly sophisticated cyberthreats that know how to take advantage of the security status quo. 
Today, these perimeter-based architectures have multiple critical weaknesses:

They endlessly extend the network to more users, locations, devices, and clouds, expanding the attack surface available to cybercriminals.
They enable cyberthreat infections and data loss because the appliances on which they are built lack the scalability to inspect traffic (particularly encrypted traffic at scale) and enforce real-time security policies.
They permit lateral threat movement by placing users and entities onto the network, where they can move from resource to resource and cause extensive damage.
They also suffer from a variety of other challenges related to cost, complexity, operational inefficiency, poor user experiences, organizational rigidity, and more.

Zero trust for true security

Zero trust is the security reality that makes the perimeter-based shadows and shapes pale in comparison. In fact, the truth is even harsher: perimeter-based architectures are not even shadows or shapes of zero trust, the true form of security. That's because zero trust is a fundamentally different architecture that separates security and connectivity from network access, and delivers comprehensive security as a service from the cloud. As a result, it does not suffer from the aforementioned challenges of perimeter-based architectures.
With a zero trust architecture powered by the Zscaler Zero Trust Exchange, organizations can:

Minimize the attack surface by hiding devices and apps behind the Zero Trust Exchange, eliminating exploitable tools (e.g., firewalls and VPNs), and stopping endless network expansion.
Stop compromise and data loss through high-performance, cloud-powered inspection of all traffic, including encrypted traffic at scale, to block threats and data loss in real time.
Prevent lateral threat movement by connecting users, devices, and workloads directly, one to one, instead of connecting them to the network as a whole.
Solve other critical challenges by decreasing complexity, increasing operational efficiency, enhancing user experiences, and improving organizational agility, all contributing to greater economic value.

It's time to cast the shackles aside, depart from Plato's cave, and behold the true form of security in the light of day. It's time to embrace zero trust and never look back. Register for our upcoming webinar, "How to Reduce Cyber Risk While Embracing Digital Transformation," to learn more about zero trust architecture and how it is helping modern organizations solve their networking and security problems. You will also hear firsthand from a Zscaler customer as they discuss their benefits and learnings from embracing the Zero Trust Exchange. Wed, 10 Jan 2024 08:00:02 -0800 Jacob Serpa https://www.zscaler.com/blogs/product-insights/what-did-plato-have-say-about-zero-trust-security Secure Private Access – ZPA Private Service Edge on Equinix Network Edge https://www.zscaler.com/blogs/product-insights/secure-private-access-zpa-private-service-edge-equinix-network-edge In 2023, ransomware attacks increased by more than 37%. The average ransom payment for enterprises has surpassed $100,000, with an average demand of $5.3 million1. Even the White House laid down a mandate to curb such attacks, calling for organizations to bolster their security with zero trust.
A zero trust architecture establishes a connection only to the specified application, never to the entire corporate network. In the past, enterprises used remote access VPN technologies to connect remote workers to corporate applications. This approach expands the attack surface and enables lateral movement of threats across a company's internal systems. A zero trust architecture, however, curtails such movement and minimizes the attack surface. Zscaler Private Access (ZPA) is the Zero Trust Network Access (ZTNA) platform that applies the principle of least privilege to give users secure, direct connectivity to private applications running on-premises or in the public cloud while eliminating unauthorized access and lateral movement. As a cloud native service built on a holistic security service edge (SSE) framework, ZPA can be deployed in a matter of hours to replace legacy VPNs and remote access tools. In exploring secure private access, many organizations have adopted ZPA Private Service Edge, in which a localized version of Zscaler Private Access (ZPA) is deployed within the customer's data center. This has enabled Zscaler customers to access private applications with reduced latency and secure access, regardless of the location of the user and the app. Now, Zscaler and Equinix together bring the ZPA service to Equinix Network Edge.

ZPA Private Service Edge on Equinix Network Edge

ZPA Private Service Edge (PSE) is a service that supports localized brokering in the same customer environment where private applications are hosted, such as a colocation facility. The ZPA on-premises service enforces policies and stitches together the connection between an authorized user and a specific private application. When branch or home office users access an application running in a private cloud, the connection between the user and the application is made by ZPA Private Service Edge, which provides the shortest path to connectivity.
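The least-privilege, per-application brokering described above can be reduced to a simple decision rule. The sketch below is a toy model with hypothetical users, apps, and policy structure, not ZPA's actual policy engine:

```python
# Hypothetical access policy: identity -> set of applications it may reach.
# Under zero trust, a connection is brokered per application; there is no
# network-level access that would permit lateral movement.
POLICY = {
    "alice@example.com": {"crm", "payroll"},
    "bob@example.com": {"crm"},
}

def broker_connection(user, app):
    """Allow only an authenticated user explicitly entitled to this one
    application; everything else is denied by default."""
    return app in POLICY.get(user, set())

assert broker_connection("alice@example.com", "payroll") is True
assert broker_connection("bob@example.com", "payroll") is False   # least privilege
assert broker_connection("mallory@evil.com", "crm") is False      # default deny
```

The key contrast with a VPN is in what a denied request reveals: the application is never reachable at the network layer in the first place, so there is no address to probe.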
ZPA PSE is now available on Equinix Network Edge. This integration enables customers to host the ZPA service locally in the same environment where their private applications are hosted. The joint solution improves application performance by reducing latency, eliminating unnecessary hops that traffic would need to traverse if the ZPA service were hosted in the public cloud. The ZPA PSE service on Equinix Network Edge offers many benefits to customers, including:

Delivering a superior user experience: Connecting users directly to private apps eliminates slow, costly backhauling over legacy VPNs while continuously monitoring and proactively resolving user experience issues.
Minimizing lateral movement: Applications are made invisible to the internet and unauthorized users, and IPs are never exposed, thanks to inside-out connections.
Enforcing least-privileged access: Application access is determined by identity and context, not an IP address, and users are never put on the network for access.
Stopping attacks with complete inspection: Private app traffic is inspected inline to prevent the most prevalent web attack techniques.
Agility: Easily scale resources up or down, depending on usage.
Cloud cost optimization: Run enterprise applications and ZPA services while optimizing overall cloud costs.
Performance: Minimize the impact on application performance by eliminating additional hops.
Resilience: Ensure uninterrupted business continuity during blackouts, brownouts, and black swan events.

Figure 1: Zscaler PSE on Equinix Network Edge

ZPA Private Service Edge manages the connections between Zscaler Client Connector for remote or branch users, Zscaler Branch Connector for IoT/OT devices or servers, and the App Connector. ZPA Private Service Edge deploys as a lightweight virtual machine that customers install within their own network environments. Once set up, ZPA Private Service Edge works the same way as the ZPA cloud service.
Notable use cases of the joint solution include:

Connectivity Optimization: Fastest path of access for users.
Disaster Recovery: Continued access to critical apps during brownouts, blackouts, and black swan events.
Regulatory Compliance: Secure private access with zero trust architecture for regulatory purposes.
Global Reachability: Extends Zscaler capabilities to more locations across the world.

Zscaler and Equinix Collaboration

Zscaler is a leader in cloud security, with more than 40% of the Fortune 500 as customers and 12+ years running a cloud service that sits in the data path, at a proven scale of more than 320 billion transactions. Globally, Zscaler has more than 5,600 customers and global revenue exceeding $1.5 billion in 2022. We're combining these capabilities with Equinix, the world's digital infrastructure company®, which has the most dynamic global ecosystem of 10,000+ companies, including 55%+ of the Fortune 500, and 460,000+ physical and virtual interconnections. Equinix is the world's most expansive, secure, and sustainable data center platform, with $7.2B+ of global revenue in 2022. Zscaler and Equinix have been collaborating for 12+ years to accelerate cloud transformation for customers. Through this partnership, customers get global coverage with data centers in 32 countries across six continents. Together, Zscaler and Equinix enable customers to have an optimized connectivity experience, so users can focus on enabling the business. ZPA Private Service Edge on Equinix Network Edge is generally available today. Please reach out to the Zscaler account team to request a demo.
For more details on the solution, please visit: https://www.zscaler.com/partners/equinix

References:
1. Zscaler ThreatLabz 2023 Ransomware Report

Mon, 11 Dec 2023 08:00:01 -0800 Karan Dagar https://www.zscaler.com/blogs/product-insights/secure-private-access-zpa-private-service-edge-equinix-network-edge Defend Against Ransomware & Identity-Based Attacks: Boost Your Cyber Defense with Zscaler ITDR™ https://www.zscaler.com/blogs/product-insights/defend-against-ransomware-identity-based-attacks-boost-your-cyber-defense Modern cyberattacks are diverse, use different tools and techniques, and target multiple points of entry. Ransomware is still one of the top threats organizations face today, and it's only getting worse as threat actors adopt new techniques such as identity-based attacks. Identity-based attacks are a driving force behind ransomware, since a single compromised identity can now provide attackers with a potentially life-changing opportunity. Cyberattackers are now after your identities.

Compromising identities

Threats such as ransomware often use identity-based attack techniques. Identity attack techniques (such as lateral movement and compromising a valid credential) are typically used by the attacker to move quickly to a more lucrative target in the organization and evade prompt detection. Threat actors are targeting enterprise Active Directory (and Azure AD) accounts to gain a foothold in a target's environment. Cybercriminals have a variety of methods to gain access to identities. A leaked or stolen password can often be used to break into databases holding many more credentials. In fact, passwords are still involved in 80% of all cyberattacks and are a growing concern among security professionals. Hackers often use automated scripts to try different stolen username and password combinations to take control of people's accounts.
When a user's account is compromised, they can fall victim to fraud, identity theft, unauthorized financial transactions, and other criminal activities. For instance, Kerberoasting is an identity attack technique used by cybercriminals to obtain valid Active Directory (AD) credentials. Kerberoasting attacks target AD service accounts because they often carry higher privileges and enable attackers to hide for extended periods of time. Kerberoasting attacks are also notoriously hard to detect amid daily telemetry, making them even more attractive to cybercriminals. Attackers use exposed passwords to compromise databases and execute data exfiltration attacks on endpoints. Traditional identity tools don't detect these incidents, leaving security teams with no way to learn about a compromised credential or password exposure.

Lateral Movement Fuels Cyberattacks

Once an attacker gets their hands on a user identity, all they have to do is present the stolen credentials to the identity provider responsible for user authentication, and the lateral movement begins. Lateral movement poses such a significant identity threat because attackers hold stolen user credentials and can pull additional credentials out of compromised machines, allowing them to log in to multiple machines in the same environment, distribute a ransomware payload, or encrypt multiple machines at once. Security teams lack visibility, and few tools in their stack can discover or alert on all of these incidents, which is alarming because the attacker is authenticating with legitimately issued, albeit compromised, AD credentials.
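To make the detection problem concrete, here is a toy illustration of the kind of anomaly that identity threat detection tooling looks for. The events, thresholds, and field names below are invented for illustration (loosely modeled on Windows security log concepts) and are not Zscaler ITDR's actual logic:

```python
from collections import defaultdict

# Mock authentication events (user, event_type, target). "tgs_request" is
# a Kerberos service-ticket request; "logon" is a host logon. All names
# here are hypothetical.
events = [
    ("alice", "tgs_request", "HTTP/web01"),
    ("alice", "logon", "ws-alice"),
    ("eve", "tgs_request", "MSSQL/db01"),
    ("eve", "tgs_request", "HTTP/web01"),
    ("eve", "tgs_request", "CIFS/fs01"),
    ("eve", "tgs_request", "LDAP/dc01"),
    ("eve", "logon", "ws-12"),
    ("eve", "logon", "ws-13"),
    ("eve", "logon", "fs01"),
]

def suspicious_users(evts, ticket_threshold=3, host_threshold=2):
    """Flag users requesting tickets for unusually many distinct services
    (a Kerberoasting indicator) or logging on to unusually many distinct
    hosts (a lateral movement indicator)."""
    tickets, hosts = defaultdict(set), defaultdict(set)
    for user, kind, target in evts:
        bucket = tickets if kind == "tgs_request" else hosts
        bucket[user].add(target)
    return {user for user in tickets.keys() | hosts.keys()
            if len(tickets[user]) > ticket_threshold
            or len(hosts[user]) > host_threshold}

print(suspicious_users(events))
```

The hard part in production is exactly what the post describes: every individual event is a legitimate authentication, so detection depends on baselining aggregate behavior per identity rather than inspecting events one at a time.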
Attackers relentlessly seek to compromise service accounts, which often have high privileges, so that they can conduct lateral movement virtually undetected and easily access multiple machines and systems.

Sealing the Identity Gaps

Identity compromise is the most common starting point for a breach, so identity threat detection is often the first alarm that goes off. Now, these crucial early indicators are made possible with Zscaler ITDR™. Zscaler ITDR provides security teams with the visibility and protection they need for their identity management systems. You can detect identity-based attacks and identify anomalous credential abuse, attempts at privilege escalation, and lateral movement.

Reducing Risk with Actionable Insights for Better Response

Zscaler ITDR automatically surfaces hidden risks that might otherwise slip through the cracks, such as unmanaged identities, misconfigured settings, and credential misuse. The solution offers organizations visibility and autonomous response capabilities, continuously assessing AD misconfigurations, vulnerabilities, and active threats in real time and giving prescriptive guidance to close exposures and gaps in customer AD environments. Restrict or terminate the identities causing trouble and shut down threats before they have a chance to wreak havoc. You can also respond with capabilities such as misdirection and deception. For example, when the solution detects an identity-based attack, it can serve fake data that redirects and lures the attacker to a decoy using Zscaler Deception. Zscaler provides a deception environment of decoy systems and data mimicking production assets to misdirect attacks, engage attackers, and collect information on adversary tactics, techniques, and procedures (TTPs).
Zscaler automatically isolates the compromised system conducting the identity-based attack from the rest of the environment, limiting its interaction to the decoy environment. In addition, Zscaler ITDR is integrated into the Zscaler Zero Trust Exchange, which dynamically applies access policy controls to block compromised users when an identity attack is detected. This prevents the attacker from moving laterally across systems and further limits the spread of ransomware.

Conclusion

While breaches are inevitable and preventative security measures alone are not enough, Zscaler boosts your cyber defense stack against identity attacks. Zscaler ITDR delivers complete visibility in a single pane of glass and helps your security teams detect and respond, in real time, to emerging identity threats in your environment, including ransomware and sophisticated identity attacks. Read more about our ITDR technology here. Tue, 05 Dec 2023 08:49:16 -0800 Nagesh Swamy https://www.zscaler.com/blogs/product-insights/defend-against-ransomware-identity-based-attacks-boost-your-cyber-defense Demystifying Workload Security in Google Cloud Platform https://www.zscaler.com/blogs/product-insights/demystifying-workload-security-google-cloud-platform Deploying and configuring cloud workload security shouldn't have to be so difficult. If you're still working with the complex traditional way of deploying and managing legacy firewalls or VPNs in the cloud, it's high time to move on and look at Zscaler Workload Communications. Zscaler Workload Communications has now expanded its support to Google Cloud, one of the most widely adopted clouds, alongside AWS and Microsoft Azure.

How it works

Before we jump into design options for Workload Communications on Google Cloud, if you need a quick refresher on Zscaler Cloud Connector (VMs that facilitate secure egress traffic for cloud workloads and enable Workload Communications), you can read about it here.
Workload Communications on Google Cloud Platform

Let's take a closer look at different Google Cloud networking design options as well as the pros and cons of each design. Google Cloud has an interesting feature called Shared VPC Architecture, or Shared Project, which provides great flexibility for the networking team to centralize cloud security management and control. Using Shared VPC Architecture, developers can focus on development while the networking team completely manages and controls networking. Using Shared VPC Architecture in Google Cloud is a recommended best practice. For more information, check out Shared VPC | Google Cloud.

Google Cloud Provisioning Responsibilities

Shared Project (Host Project): Owned by the networking team; includes complete network constructs like the Shared VPC, subnets, routing, and more. Cloud Connector instances are part of this project. Network resources in the Shared Project are shared with Service Projects; for example, subnets are shared with different Service Projects.
App Project (Service Project): Owned by the development team. Owners use whatever network resources the Shared Project shares with them to deploy instances in App Projects.

Single Shared VPC Regional Cloud Connector Design

This is based on a Single Shared VPC where:
The workloads and Cloud Connectors are part of the same VPC but different Projects.
Cloud Connectors are part of a Shared Project in complete control of the Networking team.
Subnets from this Shared VPC are shared with Service Projects for developers to deploy app VMs or serverless apps.

A VPC in GCP is a global construct that can span all supported regions. In most cases, if you want to avoid VPC peering and use a plain Single Shared VPC for each environment (Prod, UAT, Dev, Pre-Prod, etc.), you can proceed with this design. By default, Google Cloud doesn't allow subnet-to-subnet communication inside the same VPC.
Therefore, even though the workloads and Cloud Connectors are part of the same VPC, you still have access control at the subnet level using Google Cloud firewall rules, and you can span multiple regions with a single VPC since it's a global construct in GCP.

Pros and cons of this design:

Pros:
Zscaler Cloud Connectors are deployed regionally; workloads can access the internet using regional Cloud Connectors along with regional load balancers.
Provides a low-latency solution.
Avoids cross-region traffic flows, optimizing customer costs.
Plain vanilla design with a Single Shared VPC per environment.
Decentralized design improves fault tolerance.
Enables grouping and sharing of Cloud Connector instances at the region level.
Minimal VPCs or VPC peerings, as workloads and Zscaler Cloud Connectors are part of the same VPC.

Cons:
Requires network tags for workloads to forward traffic to regional Cloud Connectors. Automation pipelines should be in place for tagging workloads.
Requires strong IAM controls, as Project-level network tags can be changed at any time by the Project owner or editor. Tag edits could impact the traffic flow for the specific instance.

Single Shared VPC Centralized Cloud Connector Design

This design is similar to the first, except Cloud Connectors are hosted in a centralized location, while workloads can be part of different regions. As a Single Shared VPC design with cross-regional access, it is mostly used where workloads span multiple regions and you want to group geographically closer regions to send traffic through a centralized location. This avoids deploying and managing Cloud Connectors in every region for geographically close workloads.

Pros and cons of this design:

Pros:
Easy to deploy, with no need for network tags on workloads.
Plain vanilla design with a Single Shared VPC per environment.
Simple routing changes with two default routes: one for workloads without network tags and another for Cloud Connectors with tags pointing to the internet gateway.
Cons:
Cross-region traffic flow design; no low latency and reduced fault tolerance.
Cross-regional traffic flow costs will need to be accounted for in this design.
Cloud Connectors are deployed centrally in a single region.

Multi-VPC Shared VPC Cloud Connector Design

This is mainly for cases where you want VPC-level isolation for each Project in your organization. Because Google doesn't support transitive VPC architecture yet, this design requires you to configure hub-and-spoke VPC peering as well as peering between Workload VPCs. Once again, the VPCs are completely managed by the Networking team as part of the Shared Project ownership, and these VPCs are shared with Spoke Projects along with the peering and routing. As part of routing, you just need to make sure to export/import the default route from the Hub VPC to the Spoke VPCs.

Pros and cons of this design:

Pros:
Easy to deploy, with no need for network tags on workloads.
Simple routing changes with two default routes: one for workloads without network tags and another for Cloud Connectors with tags pointing to the internet gateway.
VPC-level isolation for each Project.

Cons:
VPC peering is required between Workload VPCs and Cloud Connector VPCs; Google limits the number of VPC peerings and doesn't support transitive traffic, so any traffic flow requires VPC peering.
Complex routing changes depending on the traffic flow requirements.

Conclusion

Every design has pros and cons depending on your organization's requirements. Whichever design you choose, Zscaler Workload Communications provides the flexibility to secure it seamlessly, with complete automation support using Terraform. There's no need for trust/untrust VPCs; Zscaler Cloud Connectors can be deployed as part of a Single Shared VPC shared across workloads or as part of an isolated VPC, as described in the above designs.
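The tag-based traffic steering that the regional design relies on can be sketched in miniature. The routes, tags, and next-hop names below are hypothetical stand-ins, not actual GCP API objects; the sketch only models the rule that custom routes apply to instances carrying a matching network tag, while untagged instances fall through to the default route:

```python
# Toy model of GCP-style routing with network tags. Lower priority
# number wins; a route with a required tag applies only to instances
# carrying that tag. All names here are hypothetical.
routes = [
    (100, "us-west1-workload", "cc-ilb-us-west1"),        # regional Cloud Connector ILB
    (100, "europe-west1-workload", "cc-ilb-europe-west1"),
    (1000, None, "default-internet-gateway"),             # default route
]

def next_hop(instance_tags):
    """Pick the matching route with the lowest priority number."""
    candidates = [(prio, hop) for prio, tag, hop in routes
                  if tag is None or tag in instance_tags]
    return min(candidates)[1]

print(next_hop({"us-west1-workload"}))  # steered via the regional Cloud Connector
print(next_hop(set()))                  # untagged: falls through to the default route
```

This is why the regional design's cons center on tagging discipline: an instance whose tag is edited silently falls back to the default route and bypasses the Cloud Connector.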
If your organization is looking for seamless multicloud security with unlimited scale for firewall, proxy, TLS decryption, DLP, and more, look no further than Zscaler Workload Communications. To learn more, visit our product page. You can also sign up for our self-guided hands-on lab. Fri, 01 Dec 2023 08:01:01 -0800 Siripuram Pavan Kumar https://www.zscaler.com/blogs/product-insights/demystifying-workload-security-google-cloud-platform