Blog Category Feed Zscaler Blog — News and views from the leading voice in cloud security.

Threading the needle on innovation and security with ChatGPT

By now, you are familiar with ChatGPT given how quickly the tool has grown. In fact, just two months after its release, it became one of the fastest-growing consumer applications the world had seen. Due to the attention it has drawn, every organization has come to a different conclusion about the use of large language models in advanced chatbots, covering both the quotidian use of the technology at work and the larger societal questions it raises. In a recent post, we explored how to both limit the use of AI tools and safely adopt them: “Make generative AI tools like ChatGPT safe and secure with Zscaler.” For many organizations, these tools have generated considerable interest as they look to harness them to spark productivity. They have also drawn the attention of the authorities, given how they collect and produce data. Apart from the data compliance questions that are being actively debated, something that is not up for debate is the need to protect corporate intellectual property and sensitive customer data from being used in ChatGPT or other AI projects.

As we mentioned in our previous post, we have identified hundreds of AI tools and sites, including OpenAI ChatGPT, and we created a URL category called ‘AI and ML Applications.’ Organizations can use this URL category to take a few different actions: block access to these sites outright, extend caution-based access, in which users are coached on the use of these tools, or allow use, with guardrails, through browser isolation in order to protect data. We are going to look further into how isolation can thread the needle on sparking productivity while maintaining data security.

First, it is important to understand why data security matters so much. The Economist Korea has detailed a ChatGPT data security cautionary tale.
In April, Samsung witnessed three separate leaks of sensitive company data via ChatGPT in its semiconductor division. In one instance, an employee used the tool to check confidential source code; in another, an employee requested “code optimization”; and in a third, an employee uploaded a meeting recording to generate a transcript. That information is now part of ChatGPT. Samsung, for its part, has limited the use of the tool and may ban it or even build an internal version.

“Can I allow my users to play and experiment with ChatGPT without risking sensitive data loss?” At Zscaler, we hear from organizations on both sides of the ChatGPT coin, which is why we offer various actions with respect to accessing these sites. Isolation can help in the following ways.

Balancing data security with innovation

Leveraging the Zscaler Zero Trust Exchange, the large language model applications and AI internet destinations in the ‘AI and ML Applications’ URL category can be rendered in Zscaler Browser Isolation. The benefit? Powerful controls over how sensitive data is handled, both files and specific pieces of sensitive information. To begin with, you could block all uploads and downloads (on the AI sites that allow them) and restrict clipboard access outright while, given the desire to allow some use, still permitting text input. This way, users can still type in queries and play around with any generated insights.

What about the risks associated with users sharing sensitive data in their chat queries? With Browser Isolation, any sensitive data typed in can be detected by the Zscaler DLP engine and protected from being sent. What is more, for full visibility into how employees are using these sites, we can detect and log the queries employees are posing to a site like ChatGPT, even if DLP rules are not activated.

Figure 1. DLP workflow to protect data sharing in browser isolation

Let’s say, for instance, a user attempts to upload a lot of text, including sensitive data, in a prompt.
Within the confines of Browser Isolation, which is fired up as an ephemeral container, users can input data, but no sensitive data will be allowed to leave the company; it will be blocked in the browser isolation session.

What about avoiding intellectual property issues by ensuring no data can be downloaded onto a user’s computer? Downloads can be blocked in isolation, or users can work in temporary “protected storage,” without any such data reaching the actual endpoint or violating any endpoint upload/download policy. Similarly, any output productivity files, such as .docx, .xlsx, etc., can be viewed as read-only PDFs within that protected storage without being downloaded to the endpoint itself. Moreover, even if uploads are allowed, to AI tools or any other site, the DLP engine can detect sensitive content in the files and even leverage Optical Character Recognition to find sensitive data in image files.

Ultimately, organizations will also want full tracking of how employees are using these sites in Isolation. This is where the DLP Incident Receiver comes into play, capturing and logging any requests to AI applications for review by security or audit teams.

But what about organizations that are not yet ready to allow use of AI applications? As we outlined in our previous blog, companies can simply block access to the “AI and ML Applications” URL category. So, the Zscaler Zero Trust Exchange provides complete protection while enabling productivity for your users and preventing shadow IT usage.

See our protections in action in this short demo: To learn more about how to empower teams to safely use ChatGPT, contact your account team, sign up for a demo of Zscaler for Users, and read our other blog on the topic.

Fri, 26 May 2023 09:01:56 -0700 Dan Gould

IDC study finds Zscaler Data Protection can save $2.1 million annually

With today’s economic backdrop, saving money is always an interesting topic of discussion.
If you’re looking to be the hero and help your company save money, this blog is for you. A study led by IDC highlights the impact a cloud-delivered approach to data protection can have on the bottom line. Based on existing customers, Zscaler and IDC worked together to answer a simple question: “After deploying and using Zscaler Data Protection, how much of an impact has it had on your organization?” The results were eye-opening and underscored the power of the Zscaler platform. Let’s explore the findings.

Customer savings using Zscaler

Survey participants are global customers comprising tens of thousands of users and sizable, mature IT organizations, as seen in the following table. What were the results across savings, both direct and indirect? When calculated across revenue gains, increased productivity for users, improved IT efficiencies, and security savings, the average Zscaler customer saved $2.1 million annually, as seen in the following table. That’s an impressive number, and something that can have a material impact on an organization. Let’s explore how Zscaler Data Protection helps drive increased value across three broad topics: risk, productivity, and efficiency.

How Zscaler Data Protection reduces risk

Traditional approaches to data protection struggle to secure cloud and mobility. Users are off-network and accessing their cloud apps over the internet. Centralized data center technologies just can’t follow these connections, and often struggle to scale SSL inspection, where most data loss hides. With Zscaler’s cloud-delivered approach, all users, cloud apps, and connections are always protected from data loss. Organizations can deliver an always-on, consistent DLP policy and easily scale protection across data in motion, at rest, and in use. This drives down the costs associated with data loss across multiple vectors. Organizations can easily scale inspection across more transactions and data, and more data loss incidents can be detected.
These risk reductions can be very impactful to organizations. As told by organizations participating in the study:

“There’s a huge risk with data loss through use of unsanctioned applications, and Zscaler Data Protection has given us the ability to block applications and repositories to which people submit data.”

“It’s easier for us to meet SLAs (service-level agreements) in terms of how quickly we can detect issues with Zscaler Data Protection. Especially for PII and GDPR, we now meet 100% of our requirements.”

How Zscaler Data Protection improves productivity

Another huge benefit of Zscaler Data Protection is the productivity it enables within an organization. Traditional approaches often suffer from point-product complexity. Companies organically build themselves into complex situations as they move from problem to problem purchasing individual products, which drives up operational cost and complexity while taking a toll on their users. A hidden and often overlooked cost is the time wasted by users who struggle with overly strict security policies or spend time recovering from incidents that distract them from their jobs.

With Zscaler, organizations can streamline security and empower users to connect from anywhere, seamlessly. Cloud apps become fast and responsive, while data loss incidents decrease. Additionally, IT teams can easily scale data protection to new cloud apps and areas of the business in nearly half the time of previous solutions, as seen in the following graphic. What do organizations surveyed have to say about these productivity improvements?

“We need 5–10 hours per month to maintain Zscaler Data Protection, and we would have to double that and pay about 10% more for other tools.”

“We have two to three data loss incidents a month that are now no longer going to inappropriate locations because of Zscaler Data Protection, so our productivity loss due to data loss is basically zero.
Before, if someone lost data, we’d have to figure out where it went and take steps to pull back the files.”

How Zscaler Data Protection improves IT efficiencies

Many data protection programs struggle because operations become far too complex. Multiple teams must interact together, and the daily grind of managing alerts can lead to fatigue. Add to that reporting and visibility requirements that are difficult across siloed products, and you have a perfect storm. In this case, the power of a unified platform can’t be overstated. Zscaler Data Protection combines all data protection requirements and visibility into one purpose-built, SSE-delivered approach. When delivered in a unified fashion, operations become more time-efficient. All teams benefit, as seen in the following table. When it comes to efficiencies, what did surveyed organizations have to say?

“We can monitor what paths and what data people are putting in those applications with Zscaler Data Protection, so we’ve been able to grow without worrying about more user impact.”

“Zscaler Data Protection is scalable because we don’t have to create new capacity. It really wasn’t even applicable before because we didn’t try to expand because it was too much effort and would have taken too long.”

Is Zscaler Data Protection right for your organization?

Many companies are finding that unifying data protection across a security service edge can drastically simplify how they find and secure sensitive data. Zscaler is the world’s largest security service edge, with over 300 billion daily transactions, and has helped thousands of organizations secure their most critical data and assets. With savings like we’ve seen in this blog, it’s no wonder that more than 40% of the Fortune 500 and 30% of the Forbes Global 2000 use Zscaler. If you’d like to learn more, you can read the complete IDC whitepaper, visit our Data Protection page, or contact us for a demo.
Thu, 25 May 2023 08:00:01 -0700 Steve Grossenbacher

What is an MSSP and how does it help SMBs?

Cybersecurity is an essential part of modern business operations. It doesn’t matter what industry you’re in, how many customers you serve, or what products or services you sell. Everyone needs to be protected. But not every business can afford the software, manpower, and expertise required to adequately shield an organization and its customers from cyberthreats. It requires a significant investment that small and medium-sized businesses have difficulty shouldering on their own. That doesn’t mean these companies are without options. There are ways for businesses to employ effective cybersecurity protection relative to what they can afford. One of the most practical and popular options is leveraging a Managed Security Services Provider (MSSP). With an MSSP, a small business can improve its cybersecurity posture and protect its data against cyberthreats within its budget.

What is an MSSP?

An MSSP is a partner that provides a range of managed security services to organizations to help protect digital assets from cyberattacks, data breaches, and other security threats. MSSPs play a crucial role in helping organizations ensure the safety of their business operations in today’s constantly evolving threat landscape.

Why should a business use an MSSP?

MSSPs address this security gap. They have specialized expertise in cybersecurity and access to the latest tools and technologies to protect against potential bad actors and mitigate online security risks. An MSSP can analyze an organization’s infrastructure for potential vulnerabilities and reduce them through the use of software, policies, and employee awareness training. The MSSP essentially becomes an extension of the customer’s IT department. It is a trusted advisor that recommends, manages, and supports efforts to protect facilities, equipment, and data from digital threats.
Because of this, MSSPs are becoming increasingly popular among small- and medium-sized businesses (SMBs), although large enterprises employ them, too. According to an Organisation for Economic Co-operation and Development (OECD) report, “SMEs tend to delegate responsibility for their digital security either explicitly or implicitly to external third parties.” This reduces the burden on in-house IT teams, improves the effectiveness of security measures, and provides a more cost-effective solution for managing security.

What are the benefits of using an MSSP?

Businesses of any size can benefit from leveraging an MSSP, but some examples of how SMBs in particular can take full advantage of an MSSP are:

Access to expertise
Smaller or newer businesses often lack the in-house expertise necessary to manage and maintain comprehensive cyberthreat protection, data protection, and more. An MSSP provides access to a team of cybersecurity experts who can offer guidance and implement best practices to protect sensitive information and defend against hacking attempts.

Cost-effective
A well-equipped and properly staffed IT security team can be expensive and out of reach for an SMB. SMBs are not typically equipped to hire the staff necessary to implement the software, monitor for active threats, and respond to incidents. On the other hand, hiring an MSSP gives the business access to enterprise-level security services and experienced Security Operations Center (SOC) capabilities at a fraction of the cost of hiring and equipping a full-time cybersecurity team.

Scalable
As SMBs grow, their security needs change. Limited IT staff need to research which software and hardware tools to purchase, which can be difficult if you don’t know what to look for. An MSSP can provide best practices and scalable security solutions to meet their evolving needs.

Compliance management
Many SMBs are subject to industry-specific regulations and standards, such as HIPAA or PCI-DSS.
An MSSP can help ensure that the business is compliant with these regulations by providing regular audits, risk assessments, and reporting.

What kind of services do MSSPs provide?

MSSPs work with organizations to assess their security requirements and develop customized solutions to meet those needs. Typically, MSSPs use a combination of technology, processes, and human expertise to serve customers. MSSPs offer a range of services, which can include vulnerability assessments, risk management, and management and support for different zero trust or network security solutions. According to OECD, “SMEs that can demonstrate that they implement best practices to manage digital security risk can raise their business profile by increasing security within their supply chains.” MSSPs are available to support SMB customers across the globe, and some of the most common MSSP services include:

Security Monitoring and Threat Detection
MSSPs offer 24/7 security monitoring, which helps businesses proactively identify security threats and risks. They use various tools and technologies to monitor network traffic, log files, and other security data.

Incident Response
When an MSSP provides incident response services, they are helping businesses respond to security incidents such as data breaches or cyberattacks. This involves developing incident response plans, conducting investigations, and providing guidance on remediation and recovery.

Vendor-Managed Services
If a business is using vendor solutions such as endpoint protection, cloud security, network security, zero trust, or vulnerability management, then an MSSP can help deploy, manage, and support these solutions. If the business doesn’t currently leverage any of these services, the MSSP can assist in evaluating and sourcing them. The MSSP provides full lifecycle support, which helps provide increased value for both customers and their vendor partners.

What value does Zscaler provide?
Zscaler partners with MSSPs to offer managed security solutions to our joint customers. Our vendors and customers benefit especially from our:

Cloud-native architecture
Zscaler offers the world’s largest cloud-native security architecture, which provides fast and reliable security services. The cloud-based architecture means it can scale up or down to meet the needs of its partner MSSPs and joint customers. This makes it a good fit for both small and large organizations that require flexible and scalable security solutions. It also eliminates the need for multiple on-premises security appliances, which can be costly and time-consuming to manage.

Comprehensive user security
Zscaler offers a comprehensive suite of security services for users, including access control, cyberthreat protection, data protection, digital experience monitoring, and zero trust. Zscaler for Users equips the modern distributed workforce to be productive and secure from anywhere. Zscaler's Zero Trust Network Access (ZTNA) solution provides secure access to applications and services without exposing them to the internet. This reduces the risk of cyberthreats and provides better visibility and control over user access, while enabling employees to work effectively from anywhere in the world. For more information on Managed Security Services Providers and how Zscaler partners with them, reach out to us at

Thu, 18 May 2023 17:04:08 -0700 Bill Oehlrich

Make generative AI tools like ChatGPT safe and secure with Zscaler

In the last few weeks, I’ve interacted with over 50 CISOs and security practitioners during the RSA conference and elsewhere on the US East Coast. ChatGPT and other generative AI tools were top of mind for many executives across financials, manufacturing, IT services, and other verticals. Here are the top questions I received:

What controls does Zscaler provide to block ChatGPT?
How can I safely enable ChatGPT for my employees instead of outright blocking it?
What are other organizations doing to control how their employees use ChatGPT and generative AI?
How do I prevent data leaks on ChatGPT and avoid the situation Samsung recently found itself in?
What is Zscaler doing to harness the positive use cases of AI/ML tools such as ChatGPT?

ChatGPT usage has gained momentum in the consumer and enterprise space during the last six months, but other AI tools, like Copymatic, AI21, etc., are also on the uptick. Many of our larger enterprise customers, especially in the European Union and in the financial industry worldwide, have been working on formulating a policy framework to adopt such tools safely.

Strike a balance with intelligent access controls

Zscaler has identified hundreds of such tools and sites, including OpenAI ChatGPT, and we have created a URL category called ‘AI and ML Applications’ through which our customers can take the following actions on a wide variety of generative AI and ML tools:

Block access (a popular control in financial services and regulated industries)
Caution-based access (coach users on the risks of using such generative tools)
Isolate access (access is granted through browser isolation only, but any output from such tools cannot be downloaded, to prevent IP rights/copyright issues)

Fine-grained DLP controls on ChatGPT

Since ChatGPT is the most popular AI application, we created a pre-defined ChatGPT Cloud Application to provide more preventive controls around it. Organizations that do not want to block access to ChatGPT, but are worried about data leakage or about giving up IP rights to content uploaded to ChatGPT, can enable fine-grained DLP controls. Customers can also set up stringent data protection policies using Zscaler DLP policies. Since OpenAI is available over HTTPS, Zscaler’s inline SSL decryption provides complete visibility into the content/queries that users post to the ChatGPT site and the downloaded content.
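To illustrate the general kind of inline check a DLP engine applies to a credit card pattern (a minimal sketch of the technique, not Zscaler's actual implementation), candidate digit runs can be extracted from outbound text and validated with the Luhn checksum before a verdict is raised. The card number below is a standard dummy test value:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Flag 13-19 digit runs (spaces/dashes allowed) that pass Luhn."""
    hits = []
    for m in re.finditer(r"\b(?:\d[ -]?){13,19}\b", text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(digits)
    return hits

prompt = "Please optimize this: card 4111 1111 1111 1111 expires 12/25"
print(find_card_numbers(prompt))
```

Checksum validation is what separates a real card-number hit from an arbitrary digit run, which keeps false positives down on content such as order IDs or timestamps.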
This short demo video demonstrates how Zscaler DLP blocks users from uploading credit card numbers to the ChatGPT site. Zscaler DLP simply applies policy inline to prevent data loss, and the same function would apply to source code via Zscaler Source Code dictionaries.

Insight into ChatGPT usage and queries

Additionally, many customers asked if they could view what queries their employees are posting to ChatGPT, whether or not DLP triggers on them. We absolutely allow for this with the Incident Receiver integrated with Zscaler’s DLP policies. This short demo video illustrates how to configure a “DLP rule without content inspection” in the data protection policy. That means you can capture any HTTP POST request made to the ChatGPT application greater than 0 bytes and send it to Zscaler Incident Receiver, which, in turn, can send any captured POST to ChatGPT on to the enterprise incident management tool or an auditor’s mailbox. This capability can give you peace of mind and allows your organization to safely enable ChatGPT within its environment for employees.

Cybersecurity concerns with ChatGPT

Though ChatGPT and many tools like it have been built to prevent misuse and the creation of malicious code on their platforms, they could be tricked into generating code for ethical hacking or penetration testing, and that code can then be tweaked to create malware. Publicly known examples of such malware might not exist yet, but there is enough chatter on the dark web to keep an eye on it. Additionally, many email security products use natural language processing (NLP) techniques to detect phishing or social engineering attacks. ChatGPT can help attackers write perfect English emails, free of the spelling mistakes and grammatical errors such controls look for, making those controls far less effective.
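Conceptually, the “DLP rule without content inspection” described above reduces to a simple predicate over forwarded traffic: any HTTP POST to the ChatGPT application larger than 0 bytes is mirrored to the incident receiver. The sketch below shows only that matching logic; the field names and host value are hypothetical illustrations, not Zscaler's configuration schema:

```python
from dataclasses import dataclass

@dataclass
class HttpRequest:
    method: str
    host: str
    body: bytes

# Hypothetical rule mirroring "capture any HTTP POST > 0 bytes to ChatGPT"
RULE = {"app_hosts": {"chat.openai.com"}, "method": "POST", "min_bytes": 1}

def should_capture(req: HttpRequest, rule=RULE) -> bool:
    """Decide whether a request is mirrored to the incident receiver."""
    return (req.method == rule["method"]
            and req.host in rule["app_hosts"]
            and len(req.body) >= rule["min_bytes"])

captured = [r for r in [
    HttpRequest("POST", "chat.openai.com", b'{"prompt": "summarize Q3 plan"}'),
    HttpRequest("GET",  "chat.openai.com", b""),
    HttpRequest("POST", "example.com",     b"hello"),
] if should_capture(r)]
print(len(captured))  # only the ChatGPT POST matches
```

Because the rule inspects metadata (method, destination, size) rather than content, every prompt is logged for audit even when no DLP dictionary fires.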
Next steps

The adoption of ChatGPT and generative AI is heading into the mainstream, and there is a likelihood that an ‘enterprise version’ will soon emerge that will allow organizations to extend existing cyber and data security controls such as CASB, data-at-rest scanning, SSPM, etc. We at Zscaler have been harnessing the power of AI/ML across the platform to solve hard problems, ranging from cybersecurity to AIOps, for the last few years. Our machine learning capabilities classify newly discovered websites, detect phishing and botnets, measure entropy in DNS traffic to detect malicious DNS activity, and even help automate root cause analysis to troubleshoot user experience issues more proactively. We recently showcased a prototype of ZChat, a digital assistant for Zscaler services. Now we are looking at extending this technology to cybersecurity and data protection use cases and will announce a few fascinating generative AI innovations at our upcoming Zenith Live conference. Stay tuned, and I look forward to seeing you at the event.

Wed, 17 May 2023 08:13:01 -0700 Dhawal Sharma

Zscaler Recognized as Leader in 2023 GigaOm Radar for Deception Technology

Attacker tactics are ever-evolving, and unless detection methods evolve with them, your organization could be at risk. This conundrum is addressed head-on by deception technology (DT), which enables defenders to construct traps for attackers and collect useful data for better decision-making. Zscaler has been recognized as a leader among all vendors evaluated in the 2023 GigaOm Radar for Deception Technology Solutions. The comprehensive industry report asserts that deception technology vendors are developing advanced platforms that can detect, analyze, and defend against zero-day and advanced attacks early, with low rates of false positives.
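The DNS-entropy signal mentioned above is commonly approximated with Shannon entropy over the characters of the queried label, since algorithmically generated domains tend to score much higher than human-chosen names. A minimal sketch of that general technique, with an illustrative threshold rather than any Zscaler default:

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits per character of a domain label."""
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    # Score only the first label, ignoring the TLD
    label = domain.split(".")[0]
    return shannon_entropy(label) >= threshold

print(looks_generated("zscaler.com"))                    # human-chosen label
print(looks_generated("xk7q9w2rmt4pz8vbn1c6ys3ed.net"))  # DGA-like label
```

In practice such a heuristic is one weak signal among several; production classifiers combine it with label length, character classes, and query patterns.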
In the report, you will learn that when you begin to examine the deception technology vendor landscape and the solutions in this market, there are specifics to each company's application of the technology. These distinctions give this market the breadth necessary for practically any size of business to choose a solution that meets its requirements. There are various methods for integrating and utilizing DT in a contemporary organization. Some businesses may decide to work with a vendor to manage the solution because they value simplicity and effectiveness equally, while others may opt for autonomy in their deception solution to boost the productivity of their current team.

The vendor solutions plotted by the GigaOm Radar are distributed among a set of concentric rings, with those closer to the center having a higher overall value. According to the placement of Zscaler in Figure 1, it’s a feature-rich, well-designed product with a good mix of platform play, innovation, and maturity. The research claims that Zscaler Deception successfully blends state-of-the-art deception technology and implements all defined key criteria to offer a comprehensive range of security services, resulting in a powerful and harmonious solution. The solution offers a wide array of features combined with a broader platform, features that aren’t typically available from a single vendor.

Figure 1

Zscaler Deception earned this leading position by providing crucial capabilities, including actionable insights that turn data into useful information for security and IT teams. Zscaler creates a risk-based score metric for each event. When multiple events are correlated, the risk score grows exponentially. Its seamless integration with other Zscaler products makes it an integral part of the risk assessment process. GigaOm notes that “The platform’s threat intelligence sources stand out.
They comprise internet-based sensors that are capable of detecting targeted attacks aimed specifically at your organization and the Zscaler global cloud furnishes data on an impressive 280 billion transactions daily.” The GigaOm report states that “Zscaler blends state-of-art deception technology with a comprehensive range of services resulting in a powerful and harmonious solution. It is a meticulously crafted solution with a wide range of features when combined with the broader platform, features that are not typically available from a single vendor.”

Zscaler Deception is used to detect compromised users and identity-based attacks, stop lateral movement, and defend against human-operated ransomware, hands-on-keyboard threats, supply chain attacks, and malicious insiders. The solution covers the environment with decoys and fake attack paths to derail active threats without adding operational overhead or false positives. Zscaler Deception reduces detection and response time, generates private threat intelligence, and boosts SOC efficiency with high-fidelity alerts. Read more about how Zscaler Deception can be used to respond to security events faster and in a more coordinated manner, while reducing reliance on overburdened security and SOC response teams. To better understand how deception technology can improve threat detection, analysis, and response, and with it your security posture, download the 2023 GigaOm Radar for Deception Technology report.

Fri, 12 May 2023 16:12:47 -0700 Nagesh Swamy

Divide and Conquer the Pyramid of Pain for Multi-Cloud SecDataOps with Zscaler Posture Control

One of the biggest drivers for cloud migration has been how easily high volumes of data can be produced, analyzed, and stored in cloud environments. From large data warehouse platforms to relational databases and object stores, the cloud offers a huge range of options when it comes to building a robust data operations platform for enterprise applications.
As of 2022, over 60% of corporate data sits in one or more cloud services, so protecting these business-critical assets is becoming increasingly vital. From 2022 to 2023, the primary causes of data loss in the public cloud were misconfigured cloud storage, insider threats and overprivileged IAM access, loss of stale data resources, and supply chain vulnerabilities. In most cases, once data exfiltration occurs from a cloud environment, it is too late; for most threat actors, exfiltration is the destination of a multi-stage attack that includes several steps before the exfiltration itself. This is easily understood by observing the TTPs related to data exfiltration in the MITRE ATT&CK framework. A targeted data theft incident usually consists of five to six key steps before the final exfiltration occurs, spanning from exploiting vulnerabilities to obtain initial access to leveraging misconfigurations to expand access and exfiltrate data. We need to disrupt this kill chain to build an effective data security strategy.

To make matters more interesting, not all data is created equal; identifying which data represents the organizational crown jewels and evaluating critical access paths is a crucial part of establishing an effective data security strategy. However, that is far easier said than done. The risk to data in multi-cloud environments is extremely diverse: with data potentially sitting across more than 750 cloud services spanning multiple CSPs and API interfaces, and with potentially thousands of cloud identities and IAM entitlements, organizations face an uphill battle.

So, how should organizations approach such a complex topic? To break this down, let’s introduce the concept of “SecDataOps,” in which we take the pyramid of pain commonly used in security operations and apply it to establish the effort model for operationalizing data security in the public cloud.
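The kill chain described above can be sketched as a simple model: exfiltration succeeds only if every preceding stage succeeds, so a guardrail at any single stage disrupts the whole chain. The stage names below are illustrative, loosely following MITRE ATT&CK tactics, not a definitive mapping:

```python
# Illustrative multi-stage path to data exfiltration; blocking any one
# stage disrupts the whole kill chain.
ATTACK_PATH = [
    "initial_access",        # e.g., exploit an exposed vulnerability
    "privilege_escalation",  # e.g., abuse a misconfigured IAM role
    "discovery",             # enumerate data stores
    "lateral_movement",      # reach the asset holding the crown jewels
    "collection",            # stage sensitive records
    "exfiltration",          # move data out of the cloud account
]

def exfiltration_possible(blocked_stages: set[str]) -> bool:
    """The attack succeeds only if no stage on the path is blocked."""
    return not any(stage in blocked_stages for stage in ATTACK_PATH)

print(exfiltration_possible(set()))                     # no guardrails in place
print(exfiltration_possible({"privilege_escalation"}))  # one guardrail breaks the chain
```

This is why disrupting the chain is cheaper than detecting the final stage: a control anywhere on the path prevents the outcome, whereas detection at the exfiltration step alone leaves no margin for error.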
In this model, while building an inventory of cloud assets may be trivial for a security solution, converting that inventory into an effective threat model with actionable insights and behavioral detections takes significant effort in correlation and risk prioritization, and remains a significant challenge in the current fragmented tool landscape.

Figure 2: Multi-Cloud SecDataOps Pyramid of Pain

1. Inventory of Cloud Assets - An effective data security strategy must start with an inventory of all data-holding assets; in many cases, these locations are not obvious, e.g., disk/database snapshots or code templates. On the effort scale, this is a reasonable table-stakes capability for most cloud-native tools and CSPM solutions. Zscaler’s Posture Control solution provides an inventory of data-holding resources and assesses them for risk driven by misconfigurations or accidental exposure, such as unwanted sharing of database/disk snapshots or stale data.

Figure 3: Multi-Cloud Storage Inventory

2. Data Discovery & Classification - Sensitive data is dispersed across the entire cloud software development lifecycle, from code templates and long-forgotten environment variables to cloud object stores, disks, and database snapshots. There is also significant diversity in the data itself, from plain text and embedded data to advanced metadata where sensitive data could be hidden; the capability to identify both structured and unstructured data and auto-classify it based on standard fingerprints and organization-specific exact data matches is critical for completeness. The solution must be able to identify data at rest and in motion within a cloud environment. Effective tagging and establishing ownership of resources are also critical parts of this process.

3.
Data Entitlements & Exposure - Who or what is entitled to data in your cloud resources? The answer to this fundamental question can make or break your data security strategy in the cloud, and unfortunately it is the least understood topic in cloud security. IAM entitlement management is often caught up in operational silos between security teams and the developers building new application resources in the cloud. Repeated recent data breaches highlight the potentially large blast radius around developer access and cloud automation accounts. So why exactly are organizations struggling? To put this into perspective, our data science team looked at the possible entitlement paths that exist across human and machine (non-human) identities and the resources they can access across many AWS accounts, and the results give us a glimpse into the level of complexity. Figure 4: Distribution of Human Identities and possible actions in AWS In the above graph, we observe that over 30% of human identities have 100k+ permissions in AWS, with at least 10% of identities having more than 10M permissions. The graph's skew to the right indicates there are at least 10-15% overprovisioned or misconfigured human identities in every AWS tenant we analyzed; that is, on average, 150+ over-provisioned users per 1,000 users with access to sensitive data. A similar pattern is observed across non-human/machine identities. Figure 5: Distribution of Non-Human Identities and possible actions in AWS In the case of machine identities, 20-40% of identities have 10-100k entitlements, whereas 3-5% contained 1-10 million entitlements. In conclusion, human identities continue to have more privileges and represent a bigger risk to data in cloud accounts; however, the impact of a misconfigured non-human identity is higher due to the lack of compensating IAM controls, such as MFA, on machine identities. 
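The distribution analysis above boils down to counting effective permissions per identity and flagging the right-hand tail. A sketch of that step, with a hypothetical threshold and identity names (the real counts would come from expanding each identity's IAM policies):

```python
# Sketch: flag over-provisioned identities from effective permission
# counts. The threshold and identity data are illustrative assumptions.

OVERPROVISIONED_THRESHOLD = 100_000  # permissions; assumption for illustration

def overprovisioned(identities):
    """identities: dict of identity name -> effective permission count."""
    return sorted(
        name for name, count in identities.items()
        if count >= OVERPROVISIONED_THRESHOLD
    )

humans = {
    "alice": 420,                   # scoped, least-privileged
    "bob": 180_000,                 # over-provisioned
    "ci-admin-clone": 12_000_000,   # extreme blast radius
}
flagged = overprovisioned(humans)
print(flagged)  # ['bob', 'ci-admin-clone']
```

The hard part in practice is producing the counts: permissions must be expanded across policies, groups, and role-assumption paths before a simple threshold check like this becomes meaningful.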
The sheer volume of data required, and the lack of native CSP capabilities to make sense of entitlements, mean that most organizations struggle with this critical part of data security in the cloud. Zscaler Posture Control abstracts the complexity associated with multi-cloud entitlement management by automatically aggregating cloud identity-related risk and its impact on cloud data. Posture Control categorizes human and non-human identities into power categories that specifically indicate the potential threat model for each area of cloud data risk. Figure 6: Automated categorization of identities based on risk and impact to various cloud data services. The second exposure factor in the cloud is exposed, exploitable vulnerabilities that can be leveraged to gain initial access to workloads. The combination of network exposure, an exploitable vulnerability, and over-provisioned identities represents the ideal attack path for most cloud attacks. 4. Anomalous resource behavior - The risk to data in the cloud can be further enriched by combining weak signals, such as cloud posture and identity-related misconfigurations, with malicious API calls, DNS queries, or IP communications. This can significantly drive SecOps efficiency by improving the fidelity of alerts. However, building such a capability with point solutions is extremely complex and operationally inefficient. 5. Risk Prioritization & Holistic Reduction in Data Risk - "Know thy enemy and know yourself; in a hundred battles, you will never be defeated" (Sun Tzu). Knowing how an attacker could potentially exploit weaknesses to exfiltrate data, and putting effective guardrails around those weaknesses, represents the epitome of cyber defense efficacy. Frameworks like MITRE ATT&CK and D3FEND have become de facto standards for mapping adversarial behavior and evaluating the efficacy of organizational cyber-defense capabilities. 
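Combining weak signals into a prioritized attack path, as described above, can be sketched as a simple weighted score per asset. The signal names and weights below are illustrative assumptions, not a documented Posture Control scoring model:

```python
# Sketch: combine weak signals into a prioritized attack-path score.
# Weights are illustrative; a real engine would correlate far more context.

WEIGHTS = {
    "network_exposed": 3,
    "exploitable_vuln": 3,
    "overprovisioned_identity": 2,
    "anomalous_api_activity": 2,
    "holds_sensitive_data": 4,
}

def risk_score(signals):
    """signals: set of signal names observed on one asset."""
    return sum(WEIGHTS[s] for s in signals if s in WEIGHTS)

def prioritize(assets):
    """assets: dict of asset name -> set of signals; highest risk first."""
    return sorted(assets, key=lambda a: risk_score(assets[a]), reverse=True)

assets = {
    "db-snapshot": {"holds_sensitive_data", "network_exposed"},      # score 7
    "web-vm": {"network_exposed", "exploitable_vuln",
               "overprovisioned_identity"},                          # score 8
    "dev-bucket": {"anomalous_api_activity"},                        # score 2
}
print(prioritize(assets))  # ['web-vm', 'db-snapshot', 'dev-bucket']
```

Note how the exposed, vulnerable, over-entitled VM outranks everything else: that is exactly the "ideal attack path" combination called out above.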
However, to truly operationalize a context-driven security architecture such as ATT&CK, an enterprise cyber-defense capability needs to define key attack targets (crown jewels), key attack paths, attacker methodology, and indicators of activity, and this is achieved by following the pyramid of pain proposed here. Zscaler Posture Control and Zscaler's Zero Trust platform combine signals across asset posture, identity entitlement, vulnerability, sensitive data, network exposure, and activity to help customers conquer SecDataOps challenges in a multi-cloud environment. Figure 7: Zscaler 360 SecDataOps capability to secure multi-cloud Tue, 09 May 2023 10:00:01 -0700 Arnab Roy Drive Faster MTTR with the Latest Innovations in Zscaler Digital Experience As we discussed in our announcement blog, Zscaler is continuing its efforts to empower IT operations and service desk teams to deliver flawless digital experiences. Many organizations are still struggling to support a distributed workforce, as users and data continue to put a strain on IT departments with already limited time and resources. To drive better outcomes, IT must have intelligent tools at its disposal to efficiently troubleshoot internet connectivity, accelerate mean time to resolution (MTTR) through precise AI-powered issue detection and root cause analysis, and expand and support the digital workforce quickly and confidently. Maximize digital dexterity with global insights The internet is now the new corporate backbone, and it is vital to quickly gain insights to improve end user productivity from anywhere. Network operations and service desk teams need simple, intuitive solutions across the enterprise to monitor ISPs, the internet, transport, unified communication solutions, and more. However, it's not only important to monitor and triage issues but also to understand repetitive issues across the enterprise so they can be addressed. 1. 
Monitor quality of Webex meetings: Webex call quality is a new addition to our existing UCaaS monitoring capabilities for Zoom and Teams call quality. This new capability enables you to monitor your users' Webex meetings among two or more participants. The integration provides real-time visibility into their meeting experience. The ZDX score for call quality can reflect either a mean opinion score (MOS) average or metric thresholds for latency, jitter, and packet loss. Call quality works in parallel with cloud path probes and device metrics and events to help you identify issues that are unique to a device or the network. Call quality data is retrieved using the Webex API and is captured in real time while a meeting is in progress. This enables your support teams to quickly get to the root cause of an issue in an ongoing meeting or a past meeting where the user experienced problems. ZDX with integrated Webex insights 2. Quarterly insights for productivity reviews (QBR): The ZDX quarterly business report (QBR) provides executive and operations teams with a comprehensive analysis of the organization's end user, application, and network experience, along with key business insights they can act on. The QBR report provides trend analysis of users' ZDX scores, application experience, ISP information, DNS performance, network latency, alerts, and much more. It also provides insights into user Wi-Fi connectivity and detailed outlines of configured application scores across various geographical regions. ZDX Quarterly Business Reports showing enterprise insights Achieve faster IT resolutions using AI To accelerate MTTR, ZDX leverages AI and ML to automate root cause analysis, eliminating fragmented data, finger-pointing across IT teams, and alert fatigue. 
Furthermore, with AI-powered analysis and dynamic alerts, IT teams can quickly compare optimal versus degraded user experiences and set intelligent alerts based on deviations in observed metrics. Read more in this AI ebook. 1. Automated root cause analysis (ARCA): ZDX can quickly identify the root cause of user experience issues with its AI-powered root cause analysis capability. It spares IT teams the labor of sifting through fragmented data while troubleshooting, thereby accelerating resolution and keeping employees productive. Watch this video to see it in action! ZDX one-click automated root cause analysis 2. Perform AI-powered analysis: ZDX ARCA has three modes that can be leveraged to dig deeper into an issue: single point in time mode, time range mode, and comparison mode. Single point in time mode: Allows you to select a specific point during a low ZDX score and run an analysis of what could have caused the score to dip at that time, providing insight into what's going on with an end user's experience in a matter of seconds. ZDX single point mode analysis Time range mode: Allows service desk teams to select a time range and get the two most common problems across that range. ZDX range mode analysis Comparison across time mode: Compares two points in time to understand the differences between them, for example a good versus a poor user experience. It visually highlights the differences between application, network, and device metrics. ZDX compare two points 3. Dynamic alerting to reduce alert fatigue: Customers can set up alert criteria using the ZDX score as a metric for a particular application and probe. If there is a significant latency increase in reaching an application, it is reflected in the score drop. The new ZDX score drop alert type enables the user to set the threshold sensitivity (low, medium, high). 
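Deviation-based alerting of this kind can be sketched as follows. The baseline window, sensitivity thresholds, and score values below are illustrative assumptions, not ZDX internals:

```python
# Sketch: alert only on significant drops from a rolling score baseline,
# rather than on every fluctuation. Thresholds are illustrative.
from statistics import mean

SENSITIVITY = {"low": 30, "medium": 20, "high": 10}  # min drop to alert

def should_alert(history, current_score, sensitivity="medium"):
    """history: recent ZDX-style scores (0-100) forming the baseline."""
    baseline = mean(history)
    drop = baseline - current_score
    return drop >= SENSITIVITY[sensitivity]

history = [82, 85, 80, 84, 81]  # baseline is about 82.4
print(should_alert(history, 78))          # drop ~4.4: no alert -> False
print(should_alert(history, 55))          # drop ~27.4: alert -> True
print(should_alert(history, 70, "high"))  # smaller drop still alerts -> True
```

The point of the sensitivity knob is visible in the last call: a "high" sensitivity fires on smaller deviations, while "low" reserves alerts for major baseline departures, which is how alert fatigue is reduced.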
This enables customers to reduce the number of alerts and focus on major deviations from their ZDX score baselines. ZDX dynamic alerts Effortlessly scale your global enterprise Desktop support teams often struggle to resolve device issues for remote workers across regions globally. ZDX offers a range of key metrics critical to troubleshooting device issues, including device health, active processes, and OS metrics for ChromeOS and Android (Windows and macOS are already supported). ZDX also enhances networking capabilities with remote packet capture, ZPA web caching, and third-party proxy support to drive better visibility across the global enterprise. 1. Get endpoint performance insights with detailed process-level monitoring: With ZDX Advanced Plus, you can now get time-series data about the top five processes consuming device CPU, memory, disk I/O, and network I/O. This powerful enhancement enables ITOps teams to quickly get to the offending process that could be negatively impacting the end user's experience. Watch this video to see it in action! ZDX process level monitoring 2. Get deeper visibility into overall endpoint health: With our new Microsoft Endpoint Analytics integration, IT and service desk teams can identify issues with user software or devices that might be impacting performance and reliability. Metrics are gathered from the Microsoft Intune API and mapped to individual ZDX users and devices to provide Endpoint Analytics scores and data. Watch this video to see it in action! The user details page provides the following endpoint analytics information for a specified user device: Endpoint analytics score: Reflects the weighted average of the Startup Performance score and Software Reliability score. Health status: Shows the most recent status of the device, designated as unknown, insufficient data, needs attention, or meeting goals. 
Last updated: Indicates the date and time when metrics were last collected from the Microsoft Intune API. Although data is collected from the API every three hours, metrics might not repopulate every three hours within the UI. Some Startup Performance events might take up to 24 hours to be displayed, and some Software Reliability events might take up to three days. View endpoint analytics: Accesses scores and metrics for Startup Performance and Software Reliability. ZDX integrated with Microsoft Intune analytics 3. Capture packets for troubleshooting user issues on demand: We introduced a new option in Deep Tracing that enables Ops teams to launch on-demand packet captures without any end user intervention. Operations teams can now effortlessly launch a packet capture on a user's device without manually installing packet-sniffing tools like Wireshark. Once the packet capture is complete, the location of the .pcap file is listed in the Deep Tracing session results. ZDX remote packet capture 4. Monitor private apps without causing impact: ZDX CloudPath now supports caching probe data, so that private applications can be monitored continuously without negatively impacting their performance. ZDX CloudPath with App Connector 5. Get end-to-end visibility when using third-party proxies: ZDX CloudPath now detects traffic paths when traffic traverses your on-prem or cloud-based third-party proxy. This CloudPath capability will continue to evolve to support more complex customer networks. ZDX CloudPath with external proxy visibility 6. Support for ChromeOS devices: We now support ZDX on ChromeOS, bringing end user experience visibility to Chromebook users. Large school districts and customers who use ChromeOS devices extensively can get full visibility into end user experience. 
ZDX ChromeOS support Solve issues faster with ZDX With the groundbreaking innovations above, your business can now tackle today’s and tomorrow's user experience challenges with ease. With these enhancements, you can gain insights from your existing environment, achieve faster IT resolutions using AI, and scale your global enterprise to further enhance end user experience, no matter where they are located. To learn more about these innovations watch our webinar, sign up for a demo and review the latest features today! Tue, 09 May 2023 04:00:02 -0700 Rohit Goyal Mastering IT Operational Excellence: Strategies for Improving Productivity and Reducing Ticket Volumes Many organizations today are in the midst of improving end user productivity and reducing costs, but the approach has been to look at IT solutions in silos. As budgets tighten and IT operations struggle to maintain low ticket volumes, IT operations teams are faced with whether to renew expensive siloed monitoring tools or adopt new ones that provide full end-to-end visibility. These are just table stakes for most organizations. Next-generation digital experience monitoring solutions will have to provide industry-leading capabilities to enhance end user experience. The first step is to maximize digital dexterity with global insights. The internet is now the corporate network, and enabling IT teams with internet health will assist in keeping end users productive from anywhere. The second step is to achieve faster IT resolutions using AI. Ticket volumes have inevitably increased since the start of the pandemic and the adoption of remote and hybrid work, with operations troubleshooting local Wi-Fi, ISPs, devices, and more. To wrangle tickets, IT operations need AI to help pinpoint where issues could reside, which would accelerate mean time to resolve (MTTR) and get users back to work faster. 
The impact would also reduce the number of tickets being escalated from L1 to L2 to L3, freeing those teams to focus on business-driving projects. The third step is scaling your global enterprise. Macroeconomic pressures have pushed organizations to show profits faster, which leads to mergers and acquisitions. As organizations grow, being able to handle hundreds or thousands of new employees overnight places additional challenges on an already burdened IT staff. Digital experience monitoring solutions that provide device health (CPU, memory, disk I/O, Wi-Fi) and integrate into existing ITSM workflows, no matter where the user is located, are vital. If gaining insights from your existing environment, achieving faster IT resolutions using AI, and scaling your global enterprise are key to your organization's success, check out our Amplifying IT Operational Excellence with Next-Generation Digital Experience Monitoring Solution event, where we'll unveil groundbreaking innovations that will help your business tackle modern monitoring challenges. Fri, 28 Apr 2023 12:24:31 -0700 Rohit Goyal The Impact of Public Cloud Across Your Organization (Part Six) We've been writing this series using the common metaphor of a "journey." This is certainly not a groundbreaking metaphor in the IT space; it seems enterprises are always on a journey somewhere. Since we have been talking about the journey of digital transformation, it occurred to me that we never really defined it. There are literally hundreds of definitions out there. I think Clint Boulton of CIO magazine captured a solid definition of digital transformation in a 2021 article: "Digital transformation marks a rethinking of how an organization uses technology, people, and processes in pursuit of new business models and new revenue streams, driven by changes in customer expectations around products and services." What is interesting to me about this view is that it removes one key aspect of the journey… the end. 
Digital transformation for organizations is not a trip that has an end. Customer expectations will always evolve. New ways of interacting with your base, competitors, and the market at large will continue to emerge. The strategies of most enterprises are not focused on technology for technology's sake. Enterprises are driven strategically by the need to grow market share, drive cost efficiencies, attract talent, increase revenues, dominate the competition, and expand into new markets. The only way many enterprises can do this is to rethink their teams, organizations, tooling, and processes to embrace digital transformation as a model. As has been discussed, consumption of the public cloud is one of the major tactics employed by organizations to realize their strategic vision of digital transformation. However, there is often a singular focus on what a new capability or service can deliver for a project. These services are not always understood through the lens of service configurations and their relationship to increased risk to the enterprise. New services need to be tracked against relevant compliance frameworks. Security operations centers need correlated and contextually prioritized signals for incident investigation and response. The fact is that there are several centers of gravity within the enterprise (e.g., operations, platform engineering, compliance) that all have equally critical roles and responsibilities in the enterprise's strategic initiatives. The industry needs to move beyond point solutions to platform approaches. Investment in platforms, as opposed to tools that address these different groups separately, not only reduces investment costs but also provides opportunities for new synergies between these groups. Tangible Benefit: Continuous Compliance For example, take a simple update to a compliance framework for an enterprise leveraging multiple cloud service providers. 
This one change will require investigations across different clouds, accounts, and projects just to determine whether it is applicable. Second, given that the implementation of services across clouds leverages different architectures and configurations, each CSP offering will have different instructions for achieving compliance. These changes would then need to be communicated to platform engineering and automation teams; manifests would need to be updated, deployments updated, and so on. A platform that updates compliance frameworks automatically upon release, or allows custom policies to be added on demand for all clouds, would be the first step. This would allow the platform to immediately evaluate the entire cloud estate for relevance and surface any signals indicating a need to make a change. Remediation guidance should be part of the updated policy and provided by the platform to cloud operations. The platform should insert and evaluate these new policies for both run-time (already deployed) resources and any new manifests, version control commits, and pipeline builds, providing the details automatically to the platform engineering teams. Essentially, the true platform approach provides the following: Allows the compliance team to decide the control and its relevance The operations team (or the platform vendor) to specify configuration and remediation guidance specific to each cloud service provider (CSP) The SOC/NOC to get immediate feedback on current deployments in light of the updated control(s) The platform automation team simply continues their work, with the new policy automatically inserted into the process at the IDE, version control, and pipeline inflection points A streamlined approach to, and implementation of, continuous compliance. 
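The continuous-compliance flow described above can be sketched as a small policy engine that evaluates resource configurations and attaches per-CSP remediation guidance. The policy shape and resource fields are illustrative assumptions, not a real Posture Control schema:

```python
# Sketch: evaluate cloud resource configs against centrally managed
# compliance policies, with per-CSP remediation guidance. Field names
# are hypothetical.

POLICIES = [
    {
        "id": "storage-encryption",
        "applies_to": "object_store",
        "check": lambda r: r.get("encrypted", False),
        "remediation": {
            "aws": "Enable default SSE on the S3 bucket.",
            "azure": "Enable Storage Service Encryption on the account.",
        },
    },
]

def evaluate(resources):
    findings = []
    for r in resources:
        for p in POLICIES:
            if r["type"] == p["applies_to"] and not p["check"](r):
                findings.append({
                    "resource": r["name"],
                    "policy": p["id"],
                    "fix": p["remediation"].get(r["csp"], "See policy docs."),
                })
    return findings

resources = [
    {"name": "logs-bucket", "type": "object_store", "csp": "aws", "encrypted": True},
    {"name": "exports", "type": "object_store", "csp": "azure", "encrypted": False},
]
print(evaluate(resources))  # one finding, with Azure-specific remediation
```

Adding a new control then means appending one policy entry; the same evaluation runs unchanged against run-time resources, manifests, and pipeline builds, which is the platform advantage argued above.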
Driving the Platform Approach: Integrating Posture with Dynamic Policy Enforcement Gathering information on service misconfigurations, compliance violations, exposed assets with critical vulnerabilities, and even over-entitled permission sets is fundamental to secure digital transformation. It is, however, only one part of the problem. Digital transformation requires the security platform to potentially take action based on those signals. For example, understanding the scope of exposed S3 buckets with no versioning or MFA delete is critical to understanding a ransomware threat vector. Other dimensions should also be explored. How do we integrate a data loss prevention engine to surface which of those buckets hold PII or sensitive financial information? Can the platform take flow logs and run them through an attestation service to determine whether source and destination traffic in and out of the VPCs originates from "sketchy" locations? Furthermore, if that is determined to be the case, can the platform dynamically block or quarantine assets through the existing security policy enforcement points (PEPs)? Posture control and cloud native application protection platforms (CNAPP) have traditionally focused on identifying threat vectors. I submit that to secure digital transformation efforts, richer integration with these PEP engines will help address security in the public cloud space. The more organic and seamless those integrations are, the more organizations can reduce tooling, simplify workflows, and focus on their policy rather than the operation of disparate systems. Conclusion At the end of all this, there is no one magic bullet for secure digital transformation. Security is, and always has been, a journey. It is a constant back and forth between adversaries and the protections. 
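The S3 ransomware-exposure check mentioned above can be sketched against the shape of the S3 GetBucketVersioning response. The bucket names are hypothetical; in practice the responses would come from an SDK such as boto3:

```python
# Sketch: flag buckets where ransomware could destroy data irrecoverably,
# i.e., versioning or MFA delete is not enabled. Input mirrors the
# S3 GetBucketVersioning API response shape.

def ransomware_exposed(versioning_response):
    versioning_on = versioning_response.get("Status") == "Enabled"
    mfa_delete_on = versioning_response.get("MFADelete") == "Enabled"
    return not (versioning_on and mfa_delete_on)

buckets = {
    "finance-backups": {"Status": "Enabled", "MFADelete": "Enabled"},
    "team-scratch": {},                   # versioning never configured
    "exports": {"Status": "Enabled"},     # versioned, but no MFA delete
}
exposed = sorted(b for b, v in buckets.items() if ransomware_exposed(v))
print(exposed)  # ['exports', 'team-scratch']
```

In the platform model argued here, a finding like this would not stop at a report: it would feed a PEP that can quarantine the bucket or tighten its policy dynamically.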
As more enterprises leverage the public cloud to fuel their ultimate goal of digital transformation, richly integrating cybersecurity into the adoption of these amazing new capabilities is paramount to success. That success is not going to be measured by the number of security controls or corner-case features that exist in any CNAPP platform. It is going to be measured by the ability of organizations to achieve their goals in whatever terms they define them: delivering new customer experiences, driving costs out of the business, and entering new markets. Here at Zscaler, our goal is to help our customers realize those terms in the most secure way possible. Learn more: The Impact of Public Cloud Across Your Organization series Posture Control Free Cloud Security Assessment Tue, 02 May 2023 09:00:01 -0700 Scottie Ray The Impact of Public Cloud Across Your Organization (Part Five) The history of the term "DevOps" is a bit murky at best. In truth, there really isn't even a single agreed-upon definition. However, most agree that somewhere between late 2006 and 2009, the principle of combining developer best practices with classic operational methods as a single discipline began to take center stage in IT circles. Many organizations realized that the ability to lead in their marketplace was tied to the ability to develop, release, patch, iterate, and operate their applications (customer-facing or internal) in a continuous delivery fashion. The goal of this approach was not only to increase velocity but to do so while maintaining quality and reliability. In the early stages of this shift, security was not always an integral part of the process. Often, it was relegated to the evaluation of deployed artifacts in their "run-time" state. If issues were discovered, it was incumbent on the deployment team to "roll back" to the previous state and address the issue. 
While automation frameworks made this process easier, they still cost the business in delays, lost efficiency gains, and even customer satisfaction. What was needed was the integration of security into this DevOps paradigm; hence, terms like DevSecOps emerged to call attention to this need. As public cloud service providers (CSPs) became more widely consumed, so did the various automation frameworks used to deploy cloud infrastructure and services. Enterprises deployed their IaaS substrate using native tools like CloudFormation and Azure Resource Manager (ARM) templates, as well as industry multi-cloud tools such as HashiCorp's Terraform. Organizations deploying modern application frameworks on container orchestration engines relied more on declarative modes of deployment, leveraging a combination of version control and CI/CD systems. In general, however, the parties responsible for authoring, maintaining, and using these tool sets did not come from the security centers within the organization. Securing the "Build Phase" When evaluating the posture of the enterprise cloud estate, it is not enough to evaluate the run-time state of workloads and services. Policy evaluation must also happen at the design and build stages of digital transformation efforts. This is especially important for organizations leveraging automation on an increasing scale. Cloud native application protection platforms (CNAPP) must take this growing requirement into account if they are to provide value to the "DevOps" or "platform automation" centers of excellence within customer enterprises. They should provide mechanisms for evaluating security policies at multiple points along the path. However, it is critical that these solutions maintain three important principles. First, security and compliance policies must be authored, controlled, and updated by the security teams within the enterprise. 
We cannot ask platform automation teams to instantly become experts in security and compliance. Nor should they divide their focus away from the enterprise's digital transformation efforts, for which they are a critical component. It is not enough to highlight a potential code block that will result in a security violation or risky configuration; the solution should be flexible enough to allow for exceptions as well as provide guidance directly to the code author for a compliant alternative. Second, it is not feasible to expect that security teams will have the access, or even the expertise, to interact deeply with the IDEs, version control, or pipelines leveraged by those teams. These tools are firmly in the domain of the DevOps or automation teams. The process by which their code and builds are evaluated against the security and compliance policies of the organization needs to be as transparent as possible. Third, the ability to "shift left" (as it has been termed) security into the design and build process needs to be flexible enough to accommodate operational differences from one organization to the next. Every organization is slightly different in how it handles automation. An effective solution must be able to inject policy evaluation in the following areas. Integrated development environment (IDE): When the authors of a given infrastructure as code (IaC) manifest are coding, the organization's security policies must be assessed directly within the IDE, notifying authors as to whether or not the file elements are compliant. Code repository: When a pull request is submitted to a repository owner, any applicable security configuration policies should be evaluated and reported in an automated fashion. This can be handled either by the originator, who takes responsibility for correcting the code settings, or by the repository owner, who can address it prior to merging into a branch. 
Image registry: The resultant images that are built and stored in registries are the next point where policies and vulnerabilities can be investigated. New weaknesses, unknown to the builders, may be discovered after initial builds are stored in a registry. Therefore, performing consistent and regular scans of these images provides a foundational layer of protection. CI/CD pipeline: Much like the IaC manifests described above, application deployments should also be evaluated against the organization's security policy framework during the CD phase. CI/CD tools such as Jenkins and Azure Pipelines should be integrated with the security platform to ensure that the application manifest is policy compliant prior to deployment. Post-deployment: Continuous scanning extends into the running environment after the previous steps take place. Conclusion: Securing Your Foundation The public cloud increasingly serves as a foundation for the deployment and operation of new services. Along with the obvious scale, resiliency, and flexibility provided by this model, it also lends itself to an ever-increasing level of automation. Any enterprise looking at a CNAPP solution needs to ensure that policy and compliance controls can be linked to specific build and deployment tools. These platforms provide broad risk visibility and mitigation coverage across public cloud infrastructure and services, workloads, and data. Effective CNAPP protections are not only about the posture configuration of the cloud environment; they also cover the workloads running in those environments, as well as the deployment processes of those functions. Whether organizations are using traditional instance-based workloads or container-based applications, it is critical to inject security policy assessment that controls and prevents compromise in the pursuit of high-velocity digital transformation. 
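As an illustration of policy evaluation at these integration points, a minimal IaC check might flag a world-open security group before a manifest is merged or deployed. The rule and manifest shape below are a simplified, hypothetical stand-in for parsed Terraform or CloudFormation; real scanners cover far more:

```python
# Sketch: evaluate a parsed IaC manifest against a simple security rule
# at the IDE / pull-request / pipeline stage. Structure is hypothetical.

def check_open_ingress(resource):
    """Return security-group rules open to the whole internet."""
    return [
        rule for rule in resource.get("ingress", [])
        if rule.get("cidr") == "0.0.0.0/0"
    ]

def scan_manifest(manifest):
    violations = []
    for name, res in manifest.items():
        if res.get("type") == "security_group":
            for rule in check_open_ingress(res):
                violations.append(
                    f"{name}: port {rule['port']} open to 0.0.0.0/0"
                )
    return violations

manifest = {
    "web_sg": {"type": "security_group",
               "ingress": [{"port": 443, "cidr": "0.0.0.0/0"},
                           {"port": 22, "cidr": "10.0.0.0/8"}]},
}
print(scan_manifest(manifest))  # ['web_sg: port 443 open to 0.0.0.0/0']
```

The same rule set can run in the IDE, on pull requests, in the pipeline, and against the deployed estate, which is what keeps the process transparent to the automation teams while the security team owns the policies.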
The Zscaler Posture Control (ZCP) platform recognizes the challenges that security and platform teams face in reconciling the needs of security and compliance with the demands of digital transformation. To that end, ZCP allows organizations to evaluate policies at all phases of deployment; from design and build through deployment and run-time. Learn more: The Impact of Public Cloud Across Your Organization series Posture Control Free Cloud Security Assessment Tue, 25 Apr 2023 09:00:01 -0700 Scottie Ray CXOs and Zero Trust: Seven Considerations for Successful Digital Transformation “The most dangerous phrase in the language is: ‘we’ve always done it this way.’” Grace Hopper, Computer Scientist Business has changed dramatically over the last several years. To remain agile and competitive, organizations must embrace digital transformation. But, doing so securely means stepping outside of the old ways of establishing a network perimeter, protecting it, and trusting everything inside. Doing things the way they have always been done doesn’t work in the hybrid workplace where the perimeter is everywhere. Business leaders must ensure they have the flexibility and capability to support evolving business needs today and for the foreseeable future. Ensuring employees can continue to work from anywhere while the business remains agile and secure requires a fundamental shift in networking and security to an architecture based on zero trust. Executive leadership relies upon their networking, security, and infrastructure architects and IT leaders to understand and lead this transformation journey from a technical standpoint. Yet, digital transformation requires more than technical expertise. Transformation touches all aspects of an organization and requires a shift in culture and mindset that can only be driven from the top down. 
To nurture the organization on its transformation journey, it is essential that the executive team—from the CEO and CFO to CTOs, CIOs, and CISOs—seek to understand zero trust. Without asking questions and clarifying any confusion that may exist, the journey will be arduous and fraught with challenges. "Nothing in life is to be feared; it is only to be understood." Marie Curie, Physicist and Chemist Zero trust has moved from a nebulous idea to a transformation enabler for organizations over the last several years. Yet, its growing popularity has created much confusion around zero trust, what it is (or isn’t), how it works (or doesn’t), and why it's important. Sanjit Ganguli, Nathan Howe, and Daniel Ballmer have sought to help CXOs clarify the confusion around zero trust in their new book, “Seven Questions Every CXO Must Ask About Zero Trust.” Let’s take a peek into what you’ll find in their executive’s guide to secure digital transformation. What is zero trust and why is it critical for secure digital transformation? Organizations are turning to zero trust to secure themselves and enable the hybrid workplace. Yet, if you listen to all the marketing hype, you’re likely still confused about exactly what zero trust means and why you need it in the first place. Zero trust is a strategy—a foundation for your security ecosystem—based upon the principle of least-privileged access combined with the idea that nothing should be inherently trusted. But how does that serve as an enabler of digital transformation? And why do we need to transform in the first place? You’ll explore how a zero trust architecture differs from the old model of a flat network with defined segments, where everything inside the perimeter is trusted. They will show how the unique zero trust architecture provides the ability to securely connect entities–whether they are users, apps, machines, or IoT devices–to resources in a fast, seamless, and secure manner. What are the main use cases for zero trust? 
What are the drivers for zero trust adoption? In most organizations, it is one of three main use cases, or a combination of them. The book explores the three main zero trust use cases in detail: Secure work from anywhere - Ensuring your employees can be productive—whether they work from corporate headquarters, a home office, in a coffee shop, or on the road—requires providing fast and secure access to applications on any device, from any location. Instead of leveraging VPNs to connect users to a corporate network, a true zero trust architecture uses policy to determine what they can access and how they can access it, and then securely provides fast, seamless, direct connectivity to those resources. WAN transformation - The old ways of extending the flat, routable, hub-and-spoke network to every branch, home office, and user on the road leave organizations exposed and vulnerable to attack. A zero trust architecture enables organizations to transform the network from a hub-and-spoke architecture to a direct-to-cloud approach that reduces MPLS costs and backhauling of traffic to the data center and improves user experience. Secure cloud migration - Zero trust not only applies to users and devices; it also ensures that workloads can securely communicate with the internet and other workloads. Implementing a true zero trust architecture also provides secure workload configuration and strong posture control capabilities. What are the business benefits of moving to zero trust? Top of any CXO’s mind is understanding the justification of a zero trust transformation and the business benefits it delivers. The book dives into this topic in detail and explores several key business benefits of a zero trust architecture. Optimizing technology costs - A zero trust architecture securely connects users, devices, workloads, and applications, without connecting to the corporate network. 
It delivers fast, secure, direct-to-app connectivity that eliminates the need to backhaul traffic and minimizes spending on MPLS. The authors discuss how a cloud-delivered zero trust platform can consolidate point-product hardware and eliminate the need for CapEx investments in firewalls, VPNs, VDI, and more. This ultimately drives network and security product savings and increases ROI. Operational savings - Organizations not only reduce technology costs through zero trust, they also reduce the time, cost, and complexity of managing a hub-and-spoke network and broad portfolio of point product solutions required to secure it. A true, cloud-delivered zero trust architecture also centralizes security policy management, handles change implementation like patches and updates in its cloud, and automates repeatable tasks. Simplifying operations frees up time for admins to focus on more strategic projects that drive value to the organization. Risk reduction - Shifting from a perimeter-based model to direct-to-cloud, a zero trust architecture provides greater protection for users, data, and applications. When implemented correctly, a zero trust approach eliminates the attack surface and reduces risk by connecting users directly to apps rather than the corporate network. Sensitive data is protected by preventing passthrough connections and inspecting all traffic. Using the principles of zero trust, organizations can securely connect any entity to any application or service from any location. Improved agility and productivity - Zero trust serves as an invisible enabler for your business to enhance collaboration, improve agility and productivity, and deliver a great user experience. Zero trust allows employees to securely work from anywhere on any device. And because users are connected directly to applications (based on identity, context, and business policy), latency is reduced, frustrations are lessened, and users can be more productive. 
And it’s not just end users who benefit. M&A integrations can be streamlined. Through a zero trust approach, organizations can simplify operational complexities, reduce risk, and reduce one-time and recurring costs to ultimately accelerate time-to-value. How does zero trust drive success for an organization? It may be tempting to adopt the mantra, “If it ain’t broke, don’t fix it,” or the phrase that opened this blog, “We’ve always done it this way.” But supporting the status quo is often the position taken by individuals and teams who either have a vested interest in protecting the current infrastructure, or perhaps more likely, are simply unsure of how to proceed without having the whole system come crashing down. Examining the pain points faced by organizations like yours on their zero trust journey, how they overcame them, and the benefits they attained makes it possible to illuminate a path for navigating your own journey as well. How is zero trust deployed and adopted? What are some common obstacles? You’re onboard with the idea of implementing a zero trust architecture, but how can your organization actually deliver upon that commitment? Like any significant journey, it is helpful to break your endeavor into smaller, more tangible pieces. The authors divide zero trust transformation into four phases and provide guidance to help the entire journey go more smoothly: Empowering the secure workforce - Many organizations find that starting here can deliver immediate benefits to the organization and serve as a catalyst to fuel the remainder of the journey. By replacing legacy networking and security technology with a cloud native zero trust architecture, organizations can enable employees to seamlessly and securely access the internet, SaaS, and private applications from anywhere–without connecting to the corporate network. This prevents lateral movement and protects against advanced threats and data loss. 
By monitoring end user digital experiences, IT teams can optimize performance and enhance productivity for the organization. Protect data in the cloud - Given the volume of data residing in SaaS applications, like M365, Salesforce, or ServiceNow, and private applications, it makes sense that the next phase would be ensuring data in the cloud is protected. This includes securing internet and SaaS access for workloads, ensuring secure workload-to-workload communications in the cloud, plus enabling posture control for home-grown and cloud native workloads running in any cloud–ultimately simplifying cloud workload security and making it easier to manage. Enable customers and suppliers - Much like your employees, third-party partners and contractors need seamless and secure access to authorized enterprise applications. Decoupling application access from the network and applying zero trust principles enables organizations to tightly control partner access–connecting users to private applications from any device, any location, and at any time–without ever providing access to the network. Partners no longer need to jump through hoops to connect to applications, and the organization increases security posture and reduces the risks posed by VPNs and other traditional third-party access approaches. Modernize IoT and OT security - The last phase is to provide zero trust connectivity for IoT devices and secure remote access to OT systems. Providing fast, secure, and seamless access to equipment–without a VPN–enables quick and secure maintenance operations. And because OT networks and systems are no longer visible to the internet, attackers can no longer leverage cyberattacks to disrupt production. The result is increased uptime and improved safety for employees and plant operations. What are the non-technology considerations for successful adoption of zero trust? Zero trust is not simply a technology swap-out handled by IT. 
Zero trust transformation is a foundational shift that touches all aspects of the business. The authors examine how successful implementation requires reshaping organizational culture and mindset. It requires communication and collaboration across teams, developing new skills, simplifying and realigning processes, and adjusting organizational structure to support implementation and operation. To produce the desired outcomes, zero trust transformation must be led from the top down and must include everyone from the most senior leaders, to IT practitioners, to your internal end users and beyond. What do I look for (and not look for) in a zero trust solution? Most business leaders will tell you that digital transformation is a journey, not a destination. A single project or product cannot get you there. But knowing what to look for in a solution makes a world of difference and helps organizations avoid potential pitfalls. The authors discuss seven key areas solutions should address for digital transformation success. Proven track record and fully addresses the specific needs of your enterprise Built on core zero trust tenets Cloud-native infrastructure that inspects all traffic, including SSL/TLS, at scale Flexible, diverse, and scalable to every user, app, and resource, regardless of location Delivers an optimal end-user experience Strong ecosystem integrations Easy to pilot and deploy, giving you confidence in its ability to deliver in production "If you attack the problem right, you'll get the answer." Katherine Johnson, Mathematician Digital transformation makes business more agile and productive. But to succeed in your transformation, you must begin by establishing a solid foundation based on zero trust. The brief introduction provided above barely scratches the surface of the essentials every CXO must understand to successfully guide their organization’s zero trust journey. 
Tackle transformation challenges head-on with Sanjit, Nathan, and Daniel by examining zero trust, its benefits, implementation, and obstacles. Gain insight into best practices learned from CXOs on their zero trust journeys. Download the complimentary ebook, “Seven Questions Every CXO Must Ask About Zero Trust” today. To discover how Zscaler can help your business along your zero trust transformation, read our white paper, “Accelerate Secure Digital Transformation with Zero Trust Exchange: The One True Zero Trust Platform.” Thu, 20 Apr 2023 08:00:01 -0700 Jen Toscano Beyond the Perimeter 2023: Context-driven Security for Comprehensive Zero Trust Protection Last month, some of the brightest minds at Zscaler and CrowdStrike teamed up to host Beyond the Perimeter 2023, the third edition of our annual security transformation series. The two-hour event showcased how Zscaler and CrowdStrike work together to secure the complex, modern work-from-anywhere world, and how context-aware security provides a real-time exchange of information so businesses can make smarter access decisions at scale. In this post, we recap the event’s keynote presentation and the “fireside chat” with our joint customers; in a follow-up post, we’ll recap the technical and business breakout sessions. Takeaway: Hybrid work is the future of your business This year’s event kicked off with an inspiring keynote presentation delivered by Steve House, SVP, Product at Zscaler and Amol Kulkarni, Chief Product Officer at CrowdStrike. The speakers highlighted some intriguing statistics regarding the future of remote work: 77% of organizations see hybrid work as the future, and 85% of all organizations will be cloud-first by 2025. There’s no question that the modern workforce and workloads are quickly migrating to the cloud, but cybercrime is moving full steam ahead too. 
The presenters named four useful key metrics for analyzing the modern threat landscape: the time it takes to move laterally from one host to another, the shift away from traditional malware, the surge in interactive intrusions, and a dramatic increase in access brokers. Cybercriminals are getting faster, moving beyond malware, and putting tremendous pressure on security teams. “Every organization has unique challenges. Zero Trust, if implemented properly, can help meet specific needs, and still ensure a fantastic ROI on your security strategy.” —Amol Kulkarni, Chief Product Officer for CrowdStrike With budget constraints, operational efficiencies, and vendor consolidation already impacting security decisions for many customers, the speakers outlined a compelling case for moving away from existing siloed point products that can’t communicate with one another and toward a comprehensive, consolidated platform, rooted in Zero Trust and leveraging context-driven security. Takeaway: Leveraging context is how you get to Zero Trust. CrowdStrike’s 2023 Global Threat Report shows that adversaries are doubling down on stolen credentials: The 2022 report revealed more than 80% of attacks in 2022 involved misused credentials, underscoring that context is critical when evaluating and securing remote users at scale. The speakers showed attendees how the CrowdStrike Falcon and Zscaler Zero Trust Exchange platforms provide deep visibility into users, devices, and locations, delivering superior cyberthreat protection by connecting users directly to their apps and identifying and resolving performance issues. The speakers named the five points of context Zscaler and CrowdStrike leverage to accelerate Zero Trust protection: users, endpoints, networks, the apps users are trying to access, and where those apps live. 
This context, provided by the joint solution, gives organizations a wealth of behavioral data so they can apply least-privileged access controls, security automation, and continuous verification—the principles of Zero Trust—with confidence. “We are proud to be part of the strongest partnership in security.” —Steve House, SVP, Product at Zscaler This approach minimizes the attack surface, prevents lateral movement, and arms companies with detection and response capabilities to threats in real time so they can safely automate workflows, apply adaptive access to applications, and share threat intelligence. The Zscaler and CrowdStrike integration has helped more than 900 joint customers secure their digital transformation, simplify operations, and improve user experience, including industry giants like Carrier, United Airlines, Mars, and Cushman and Wakefield. “Carrier rapidly scaled global remote access for 26,000 users in just 9 days.” —Steve House, SVP Product, Zscaler Takeaway: This is radically transforming customer ecosystems today Next up was a fireside chat with Anup Purohit, Global CIO of Wipro, and Wayne Fajerski, Deputy CISO at Edward Jones, guest speakers who dove into the specific challenges their organizations faced that led them to Zscaler and CrowdStrike to secure their Zero Trust journeys. Edward Jones’ story is familiar: The pandemic expedited a shift to remote work, making fast access a priority for users anywhere, any time, and on any device. They needed to quickly get Zero Trust access for a sprawling network of more than 17,000 locations, 20,000 advisors, and 55,000 associates. Protecting these mostly mobile clients and safely providing third-party access required Edward Jones to quickly modernize its tech stack and improve its security posture. Wipro needed to move away from a legacy VPN solution and modernize app support and connectivity, increasing visibility and access control for remote users and implementing SSL inspection for web traffic. 
With 95% of Wipro’s applications living in the cloud, managing multiple appliances on-premises had become costly and complex. Wipro needed to consolidate its tools, build a new cloud-native security stack, and secure its roaming users. A steady increase in the adoption of cloud-based applications, assets, and workflows has left businesses struggling to provide real-time, frictionless login while protecting hybrid workers and enterprise assets. Zscaler and CrowdStrike enable organizations like Edward Jones and Wipro to confidently extend a true Zero Trust security posture across today’s sprawling user-access landscape, keeping them safe and productive. Stay tuned for Part 2, where we will recap and share highlights from the two tracks - Business Track: Secure your digital transformation and Technical Track: Best practices for architecting Zero Trust strategies. For more details or to watch a recording of Beyond the Perimeter 2023, click here. Tue, 02 May 2023 08:00:01 -0700 Leena Kamath Beyond the Perimeter 2023: Context-driven Security for Comprehensive Zero Trust Protection On Tuesday, April 11, Zscaler and CrowdStrike hosted Beyond the Perimeter 2023, the third edition of our security transformation series. The event showcased how joint customers of Zscaler and CrowdStrike are able to leverage the incredible security partnership to deliver true Zero Trust, thanks to the transformative power of context-aware security. In this post, we recap the event’s technical and business breakout sessions, which detailed how this works at a granular level. Takeaway: Shared intelligence gives security teams new superpowers Beyond the Perimeter participants who chose the technical track experienced a deep dive into how the CrowdStrike and Zscaler integration improves security and operations across users, devices, networks, and applications. 
Eddie Parra, Zscaler Global Lead & Senior Director, Solution Architecture and Rohan Upalekar, Zscaler Solutions Architect, joined Chris Kachigian, CrowdStrike Vice President, Global Solution Architecture, for the session. Trust, connectivity, and security are still evolving, and minimizing the attack surface remains a top priority. An engaging demonstration showed how Zscaler Private Access and Zscaler Internet Access work with CrowdStrike’s Zero Trust Assessment to bring risk-based context to device posture, so enterprises can confidently provide access—secure access to internet apps, SaaS, and private apps. Additionally, Zscaler recently built an integration between the two solutions to help prevent lateral movement by threat actors. Zscaler Deception technology deploys decoys to lure attackers and identify threats, sharing intelligence with the CrowdStrike Falcon® platform to provide organizations with advanced zero-day threat detection and faster threat remediation to prevent lateral movement. Finally, participants learned how CrowdStrike Falcon Insight XDR leverages high-fidelity telemetry from Zscaler to stop cyberthreats faster and more effectively, including cross-domain detections and automatic cross-platform workflow responses. Extended endpoint and network visibility across their ecosystems helps joint customers thwart even the most sophisticated attacks with rapid threat detection and response. Takeaway: Strong, cloud-first security improves business outcomes Attendees who chose the business track were rewarded with a deep dive into the operational benefits of the Zscaler-CrowdStrike integration, courtesy of Tina Thorstenson, CrowdStrike VP, Industry Business Unit, Mike Murphy, Zscaler VP, Value Consulting & Sales Enablement, and Edmée Ernoult, Zscaler Value Creation Advisor. Steadily increasing activity from threat actors, and a massive uptick in China-nexus espionage, are making business challenging for organizations. 
The CrowdStrike® Falcon OverWatch™ team measures breakout time, and the CrowdStrike 2023 Global Threat Report noted the average breakout time for interactive eCrime intrusion activity declined from 98 minutes in 2021 to 84 minutes in 2022. CrowdStrike focuses on making every second count when it comes to empowering customers to minimize costs and other damages caused by attackers. If defenders (enterprises) hope to defeat their adversaries (cybercriminals), they need to drastically cut their threat response time, as expressed by the emerging “1/10/60” security rule. The 1/10/60 rule is a best-practices security goal that says you should be able to detect an attack in one minute, investigate it in ten minutes, and respond in 60 minutes. To get there, organizations need a security platform that continuously prioritizes business risk and reliably stops breaches—a priority Zscaler and CrowdStrike work together to provide. Reducing cost and complexity is another business priority in the current economic environment, and joint customers who’ve deployed the Zscaler and CrowdStrike integrations gain business efficiencies including cost and operational savings, risk reduction, and faster time to value. Specifically, IT investments and personnel costs are a major cost consideration, including the ongoing costs of patching outdated infrastructure. But modernizing that infrastructure instead, with a cloud-first solution, can make SOCs more efficient, reduce maintenance tasks, and free IT teams to refocus on strategic projects. Zscaler and CrowdStrike’s deep integrations increase operational efficiency for any organization, letting enterprises do more with less, securing the business while enabling true digital transformation. 
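As a purely illustrative sketch, the 1/10/60 thresholds can be expressed as a simple check against an incident timeline. The type and function names below are invented for the example; they are not part of any vendor product.

```python
from dataclasses import dataclass

# Thresholds from the 1/10/60 rule, in minutes:
# detect in 1, investigate in 10, respond in 60.
DETECT_SLA, INVESTIGATE_SLA, RESPOND_SLA = 1, 10, 60

@dataclass
class IncidentTimeline:
    detect_min: float       # time to detect the intrusion
    investigate_min: float  # time to understand scope and intent
    respond_min: float      # time to contain and remediate

def meets_1_10_60(t: IncidentTimeline) -> bool:
    """True only if every phase finishes within its 1/10/60 budget."""
    return (t.detect_min <= DETECT_SLA
            and t.investigate_min <= INVESTIGATE_SLA
            and t.respond_min <= RESPOND_SLA)

print(meets_1_10_60(IncidentTimeline(0.5, 8, 45)))   # within budget
print(meets_1_10_60(IncidentTimeline(5, 30, 120)))   # too slow
```

With an 84-minute average breakout time, a team that only meets the 60-minute response deadline at the wire has already conceded most of the attacker's window, which is why each phase gets its own budget.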
For more details or to watch a recording of Beyond the Perimeter 2023, click here. Thu, 04 May 2023 08:00:01 -0700 Leena Kamath Exploring the New Department of Defense (DoD) Zero Trust Strategy In October 2022, the Department of Defense (DoD) released a DoD Zero Trust Strategy document in which four key goals were identified: Zero Trust Cultural Adoption; DoD Information Systems Secured and Defended; Technology Acceleration; and Zero Trust Enablement. With so much US Government guidance around zero trust, the strategy guide focuses on DoD’s unique operational requirements. This was the subject of our conversation on a Government Technology Insider podcast. DoD Zero Trust Strategy - What is it and why is it really needed? The DoD Zero Trust Strategy document is designed to provide guidance to the MILDEPs and DoD agencies for deploying and measuring the success of their zero trust designs. The document states: “This Zero Trust Strategy defines an adaptive approach for how DoD must champion and accelerate the shift to a Zero Trust architecture and framework…The intent of the strategy is to establish the parameters and target levels necessary to achieve Zero Trust (ZT) adoption across systems and networks.” Within one of the defined goals, it lays out high-level capability roadmaps for the DoD, including seven DoD Zero Trust pillars, 45 different core capabilities that map to those pillars, and three different target levels. Each defense agency has its unique scenarios that it needs to consider. Therefore, it's crucial to have a cross-functional team to help map out the zero trust capability model. This model can serve to get everyone aligned, to understand the current mode of operation, and to map out the future mode of operation. 
Barriers for Agencies to Implement a Zero Trust Strategy While the DoD strategy guide is partially designed to overcome barriers like differing definitions of zero trust and how to measure it, there are still a few barriers DoD organizations must consider. One potential barrier to implementing zero trust security is the existing security infrastructure and access controls that are already in place. The military, for example, has its own security infrastructure and access controls and, moreover, a process of evaluating its compliance under the Risk Management Framework (RMF) that has been built over many years. These systems are deeply ingrained in the organization's culture, and it is not easy to simply rip and replace them with a new approach like zero trust. To overcome this barrier, the DoD and other organizations will need to find ways to integrate zero trust into their existing security infrastructure and access controls gradually. This will require careful planning and a deep understanding of the existing systems and processes. An additional barrier to implementing zero trust security is the need to stay flexible and adaptable as technology evolves. Specific capabilities or attributes that may be relevant today may become irrelevant in a few years due to changes in technology. To overcome this barrier, the DoD and other organizations will need to build a culture of continuous improvement and innovation, with a focus on staying flexible and adaptable to change. Zero Trust Roadmap Agencies can use the DoD Zero Trust strategy document to create an executable guide, or roadmap with timeline gates, for their particular agency to implement a zero trust architecture. A simple three-phased approach can help an organization align resources to both their zero trust strategy and modernization. 1. Inventory Assets. The first step is to get an accurate inventory of assets. 
This is critical because in a zero trust environment, each device and end user will have a risk score, and there will be conditional access control to those applications and devices. 2. Identify assets to divest. The second step is to create an inventory of assets that can be displaced when moving to a zero trust architecture. For example, with the adoption of cloud computing, agencies will start to see much more implementation of solutions like Secure Access Service Edge. This will result in a massive exodus of hardware leaving the shelves of facilities. 3. Operational value assessment of assets. Finally, the third step is to work with the operations and acquisition teams to understand the value of the assets that will be displaced. This is important because agencies will need to build a business case to explain why moving to a zero trust architecture makes sense. While the above steps are a good starting point, there are undoubtedly many other things that agencies can do to prepare for the implementation of a zero trust architecture. The important thing is to start now, even if it's just taking small steps, and not to treat each step as one needing perfection, but instead as one to be built upon along the way. Waiting until fiscal year 2024, when planning and preparation are scheduled to begin, could be too late. Zero Trust’s Cultural Shift Zero trust requires a cultural shift and operational alignment around what IT security really means. People are starting to understand that IT security is not just a supporting function but a domain in itself, and zero trust is a critical strategy in this domain. The future of IT security is about maneuvering within the domain of cyber warfare. This realization is changing the place of IT security at the table of senior leadership, and the adoption of the term zero trust is driving this cultural shift. 
The future of zero trust is not just about the capabilities it provides, but also about aligning talent and culture. Only one of the strategy guide's four main goals centers on technical capabilities; the others concern the alignment of talent and culture. Zero trust is not a one-time process or a checklist item; it requires a daily mindset shift, adaptive security, and constant learning. Zero trust is no longer just a security model; it is a strategic initiative in what is becoming a domain of warfare. It is essential to identify where an organization stands within the three target levels of the zero trust capability model and make incremental progress every day toward a measurable, successful implementation. With modern solutions aligning with zero trust, organizations will see significant changes in their security posture in 2024. Zero trust is driving a cultural shift toward a daily mindset shift, adaptive security, and constant learning, making it a strategic initiative in the DoD and across the entire US government. The entire podcast can be heard here. Tue, 18 Apr 2023 08:21:38 -0700 Patrick Perry Zscaler Launches New Innovations to Improve Best-In-Class DNS Security To learn more about the top DNS threats facing organizations today (and what to do about them), check out the paper “Decoding Modern DNS Threats.” At Zscaler, our mission is to enable seamless, secure access to the internet and applications in every possible circumstance. That means any user, device, or workload, connecting to any resource, on any port or protocol, from anywhere in the world, at any time. The DNS security features built into the Zscaler Zero Trust Exchange are critical to our fulfillment of that mission. As a cloud native proxy, the Zscaler Zero Trust Exchange delivers scalable inspection, advanced threat protection, and DNS resolution at more than 150 edge locations for optimal performance and security around the globe. 
Today, Zscaler is excited to announce new innovations that further this promise. We’ve made enhancements to the security, availability, flexibility, and performance of our DNS security module, including: DNS encryption of plaintext traffic into DNS-over-HTTPS (DoH) for better privacy and security Availability improvements with enhanced failover capabilities that automatically redirect traffic to a secondary resolver if the primary fails DNS security enhancements including improved DNS tunnel protection to prevent data exfiltration, and enhanced DGA detection to block any command and control malware activities Protective DNS enablement, encrypting and sending all government agency traffic to protective DNS (PDNS) resolvers in alignment with mandates from the NSA, CISA, and the National Cyber Security Centre Better user experience with configurable DNS ECS to provide the best localized resolution based on the country, and to ensure users experience webpages with their local language, content, and currency Enhanced error handling and reporting to provide more control and visibility Figure 1: Zscaler DNS Security Overview 88% of companies suffer from DNS attacks DNS is often referred to as the phone book of the internet. DNS’ job is to translate web addresses, which people use, into IP addresses, which machines use. But, DNS was not designed with security in mind. And even though companies have invested incredible amounts of money into their security stack, their DNS traffic often goes unmonitored. This has only gotten worse with the adoption of encrypted DNS, known as DNS-over-HTTPS (DoH), which has grown annually since its introduction in late 2018. Organizations end up with little to no visibility over what’s happening with these DNS queries, and attackers can exploit that in various ways. 
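The failover behavior listed above — automatically redirecting traffic to a secondary resolver if the primary fails — can be sketched in a few lines. This is a generic illustration of the pattern, not Zscaler's DNS Gateway code; the resolver functions and the documentation-range IP address are stand-ins, and no real DNS traffic is sent.

```python
from typing import Callable, Sequence

# A resolver is anything that maps a hostname to an IP address string.
Resolver = Callable[[str], str]

def resolve_with_failover(hostname: str, resolvers: Sequence[Resolver]) -> str:
    """Try each resolver in order, falling back to the next on failure."""
    last_error = None
    for resolver in resolvers:
        try:
            return resolver(hostname)
        except Exception as exc:
            last_error = exc  # this resolver failed; try the next one
    raise RuntimeError(f"all resolvers failed for {hostname}") from last_error

# Stand-in resolvers for illustration:
def primary(hostname: str) -> str:
    raise TimeoutError("primary resolver unreachable")

def secondary(hostname: str) -> str:
    return "203.0.113.10"  # address from the documentation range (RFC 5737)

print(resolve_with_failover("app.example.com", [primary, secondary]))
```

The point of the pattern is that the caller never sees the primary's outage; it only sees an error if every configured resolver fails.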
IDC's 2022 Global DNS Threat Report revealed that 88% of organizations interviewed had suffered DNS-related attacks over the previous year, primarily phishing, malware, and DDoS attacks. 70% had experienced application downtime as a result. Organizations need better DNS security. They need it with a cloud native solution that can scale to inspect both encrypted and unencrypted traffic, and they need it delivered with speed and performance regardless of where their users, devices, and applications are located around the world.
The latest in DNS protection and resolution
For years, Zscaler has been proud to partner with our customers to tackle this problem with a differentiated approach to DNS resolution and security. Zscaler is the only security vendor that combines optimal DNS resolution closest to the user with best-of-breed DNS filtering, security, horizontally scalable DoH inspection, and data exfiltration protection. Below is a deep dive into the new features that further enhance all of these benefits:
DNS Gateway for better availability
Organizations handle DNS resolution in one of two ways: they either build their own DNS resolvers into their data centers, or they work with third-party providers. In 2016, cybercriminals used the Mirai botnet to wage a DDoS attack against a leading DNS provider, causing massive website and application outages. This made it clear that no one is immune to attacks, and highlighted the risk of relying exclusively on one DNS provider. Zscaler’s DNS Gateway reduces this risk by providing automatic failover to secondary resolvers if the primary resolver fails.
Figure 2: DNS Gateway features
Increased security (and PDNS compliance) with DoH translation
The DNS gateway also has the ability to translate plaintext DNS into DNS-over-HTTPS (DoH). This is a differentiated capability that ensures any traffic, regardless of source and destination, gets the privacy and security benefits of encryption.
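To make the translation concrete, here is a minimal sketch (Python standard library only) of how a plaintext DNS question is packaged as an RFC 8484 DoH GET request, where the query travels base64url-encoded in the `dns` parameter. The resolver URL below is a placeholder, and this illustrates the wire format generally, not Zscaler's implementation.

```python
import base64
import struct

def build_doh_url(resolver_url: str, qname: str) -> str:
    """Encode a plaintext DNS A-record question as an RFC 8484
    DoH GET request URL (?dns= carries the base64url wire message)."""
    # DNS header: ID 0 (recommended for DoH cache-friendliness),
    # RD flag set (0x0100), one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1).
    qname_wire = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00"
    question = qname_wire + struct.pack("!HH", 1, 1)
    # base64url without padding, as RFC 8484 specifies for GET requests.
    dns_param = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{resolver_url}?dns={dns_param}"
```

For example, `build_doh_url("https://dns.example/dns-query", "example.com")` yields a URL that any DoH resolver would answer over HTTPS, which is what lets a gateway carry formerly plaintext queries inside an encrypted session.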
For federal agencies, this feature may be of particular importance: DoH translation means that Zscaler can send all traffic to protective DNS (PDNS) resolvers. PDNS resolvers are government-sanctioned, security-focused DNS resolvers that are available to (and in some cases mandated for) any US organization with access to United States Department of Defense (DoD) information, as well as to a range of public services in the UK.
DNS tunneling and DGA detection
Some DNS-based attacks can be particularly difficult to stop. Luckily, Zscaler has the world’s largest inline security cloud, which processes 300 billion requests per day and stops over 8 billion threats. This tremendous amount of data not only means that a threat stopped for one customer immediately improves protections for customers everywhere, but it also feeds into our AI and machine learning models to detect new and evasive threats.
DNS tunneling
One of the most popular DNS threats is DNS tunneling, in which threat actors take advantage of the flexible nature of DNS queries to hide communications to command-and-control servers, download malware, or exfiltrate data. This is challenging to detect both due to the broad nature of DNS queries (a website can be called pretty much anything, so a DNS query can be pretty much anything) and due to IT visibility gaps, particularly when it comes to encrypted traffic. Zscaler’s machine learning algorithms uncover DNS tunneling by examining key data such as variations in the amount of data, the volume and distribution of domain and subdomain requests, and the reputation and co-occurrence of various domains and subdomains. If we discover malicious activity attempted anywhere, we immediately update our protections for all customers.
DGA detection
Attackers frequently use domain generation algorithms (DGA), programs that can rapidly generate thousands of new domain names, to bypass DNS block lists.
Zscaler analyzes the domains and the traffic itself to detect and block attacks using DGA. These domains can also be blocked, isolated, or cautioned at the policy level using URL and/or DNS filtering categories for miscellaneous/unknown and newly registered domains. Finally, all files traversing the Zscaler Zero Trust Exchange undergo analysis by our advanced threat protection engine and inline sandbox to ensure that no malware makes it to the client.
Improved user experience with DNS ECS
Third-party DNS resolvers typically lack user context when a request comes via an intermediary. If a user in Argentina is connecting to a resolver in Brazil, the Brazilian resolver won’t know where the request is coming from, and may route the request to a content delivery network (CDN) that is far from the user, causing latency. Even worse, the user may receive web content in the wrong language and currency, i.e., Portuguese and Brazilian reals rather than Spanish and pesos. With DNS ECS, third-party resolvers get the context they need to improve the user experience. The resolvers deliver content using the closest CDN, and with the correct language and currency. This is fully optional and configurable, subject to privacy requirements.
Error handling and reporting improvements
DNS Security gives admins superior visibility and control over their DNS traffic, irrespective of the protocol or the type of encryption used. This includes forensically complete logging, dashboarding, and the ability to define rules to control requests and responses. With this launch, organizations have even more flexible and granular control over error handling, plus even better reporting on both an ad hoc and quarterly basis, with answers to questions such as: Who are the top DNS talkers? What DNS protocols are being used? What categories are my users hitting? Who are my top blocked users? What were the attempted DNS tunnel data exfiltrations?
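Returning to the tunneling and DGA detection described above: production models weigh many signals, but the underlying idea can be illustrated with simple domain-name statistics. The sketch below is a toy heuristic with made-up thresholds, not Zscaler's actual algorithm; machine-generated or data-carrying labels tend to be longer and higher-entropy than human-chosen names.

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label.
    DGA and tunneling labels tend to score higher than human-chosen names."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_suspicious(qname: str, entropy_threshold: float = 3.5,
                     length_threshold: int = 30) -> bool:
    """Toy heuristic: flag queries whose leftmost label is unusually
    long or high-entropy. Thresholds here are illustrative only."""
    first = qname.rstrip(".").split(".")[0]
    return len(first) > length_threshold or label_entropy(first) > entropy_threshold
```

On this toy scale, `www.example.com` passes cleanly, while a long base64-looking label (the hallmark of data smuggled through subdomains) trips the length check. Real detection layers in volume, distribution, reputation, and co-occurrence signals on top of per-name features like these.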
Figure 3: DNS Security reporting dashboard
Part of the world’s leading zero trust solution
DNS Security is just one of many capabilities of the Zscaler Zero Trust Exchange, which helps organizations reduce business risk while enabling and simplifying digital transformation. We consistently strive to provide better value, security, and performance across our platform, and are proud to offer these new benefits to our customers everywhere. To learn more about DNS Security, visit our website. Wed, 19 Apr 2023 05:00:01 -0700 Mark Brozek Stop Attacks Even Before They Happen: Unleash The Power of Zscaler Deception As technologies advance, cyberthreats advance with them. Cyberattackers are finding innovative and better ways to infiltrate your environment and carry out stealthy attacks that aren’t easy to detect with traditional defenses. Human-operated attacks represent a more challenging threat, as cyberattackers are skilled and adaptable, drawing on a range of tactics to determine what works best to get them inside an environment. According to Verizon, more than 60% of breaches involve credentials. It is now even more challenging to identify active threats, privilege escalation, and lateral movement because attackers are concentrating on bypassing MFA, hacking users, and attacking apps as the initial point of entry. They prowl around the perimeter of the company looking for a way inside, which they may find by tricking a user into clicking on a malicious link (phishing), opening a corrupted attachment, or providing login information through stolen passwords or credentials obtained from the dark web. They may also succeed by exploiting a zero-day or unpatched vulnerability. However, attackers are vulnerable too. Let's look at how we can turn the tables on them and render their efforts ineffective.
Trust your traps
Deception technology outsmarts attackers and dupes them into its traps.
This technology offers early insight into attacks by exposing an actor’s tactics, techniques, and procedures (TTPs) and alerting security teams to take immediate response actions to thwart them before the attacker can penetrate the environment. Deception diverts an attacker's attention away from important assets and onto fictitious ones, wasting the attacker's time, money, and effort. Zscaler Deception is part of an active defense strategy that enhances security posture, sustains networks under assault, and identifies threats early, before an attack takes hold. Zscaler Deception gives you visibility and insight into the attacker’s every move.
How does it work?
It’s worth noting that an attacker must "trust" the environment into which they introduce their malware via web apps and services. Zscaler Deception exploits that trust and confidence and lures the attacker toward pre-built traps. The solution populates your IT environment with false resources that look like production assets but that no legitimate user ever accesses. Once they are touched, an alarm is triggered.
Zscaler Deception built for a zero trust architecture
The solution then employs deception-based alerts to identify malicious activities, produce threat intelligence, stop lateral movement, and orchestrate automated threat response and containment. The alarm is a high-fidelity alert to the SOC team indicating the presence of an attacker, and responses are deployed swiftly to deter the attack, rendering the attempt futile. The solution leverages the Zscaler Zero Trust Exchange's active defenses to make your environment hostile to attackers and to track the full attack sequence.
Benefits of Zscaler Deception
Zscaler Deception helps you identify known or unknown security threats that can harm your organization.
- Discover and eliminate stealthy attacks: Proactively detect sophisticated threats and disrupt targeted attacks such as advanced persistent threats (APTs), zero-day threats, ransomware, supply chain attacks, and lateral movement in real time. Detect and alert on the most elusive cyberthreats in your organization by laying decoys and false user paths that lure attackers.
- Reduce noise due to false alarms: Zscaler Deception not only scales well but provides high-fidelity alerts that relieve security teams of the daunting task of sifting through huge volumes of false alarms. By weeding out false positives, it saves a SOC team's critical resources. SOC teams can devote their time to alerts that need attention, elevating their focus from simple detection to prevention and meaningful intelligence on threat actors.
- Detect compromised users: With zero trust reducing the attack surface, leave no room for adversaries to maneuver by detecting attacks that leverage credentials stolen through phishing or obtained from the dark web.
- Stop lateral movement: Identify and stop attackers who have gotten past conventional perimeter-based defenses and are attempting to move laterally through your environment. Zscaler employs countermeasures, like endpoint lures and application decoys, to intercept adversaries and hinder their ability to move laterally or discover targets.
- Protect against ransomware: Decoys serve as tripwires that allow ransomware to be found at every point in the kill chain. Simply having decoys in your system helps stop ransomware from spreading.
- Preventive threat detection: Zscaler enables you to detect sophisticated adversaries, such as organized ransomware operators or APT groups. Decoys placed around the perimeter catch stealthy pre-breach recon actions that frequently go undiscovered.
- Accelerate your incident response: Zscaler helps SOCs by providing them with high-fidelity, real-time alerts.
By automating the response, security teams drive efficiency and reduce complexity for a faster mean time to response (MTTR).
Zscaler Deception with the Zero Trust Exchange for better threat detection and response
There is no one-size-fits-all solution in cybersecurity for dealing with security incidents. The goal, however, is to reduce your attack surface and improve your capacity for incident response. The fusion of Zscaler zero trust security with deception technology is one of the most potent combinations. Zero trust, also known as "least-privileged access," restricts access to only the bare minimum of necessary resources while presuming that every access request is hostile until the user's identity and the context of the request are verified and authorized. In a zero trust environment, Zscaler Deception decoys serve as tripwires to identify malicious activities and lure attackers away from carrying out attacks. Zscaler Deception expands your threat detection and response to cover the most complex attacks, including identity-based threats and advanced persistent threats (APTs). With Zscaler Deception, your security teams can detect an attack quickly, understand the attacker's strategies, and create a playbook of automated countermeasures to outwit and dissuade the attacker. Discover why deception is critical to modern security systems. Read the white paper - Deception Technology: An integral part of the next generation SOC. Watch for our next article on Zscaler Deception, where we will continue discussing how to track and hunt the most elusive threats! Thu, 13 Apr 2023 16:18:12 -0700 Nagesh Swamy The Cal-Secure Cybersecurity Roadmap As part of our webinar on Cal-Secure, Dylan Pletcher, Chief Information Security Officer for the California Department of State Hospitals, had a Q&A session with Carlos Ramos, Principal Consultant at Maestro Public Sector and former State of California CIO.
Their conversation focused on the perspective of an IT leader who is in the line of fire dealing with cyber threats, and on how to implement the recommendations in the Cal-Secure roadmap and leverage some of the resources and capabilities that are offered. Carlos Ramos - You're on the leading edge - keeping your environment, your agency, and its mission secure from cyber threats. The Department of State Hospitals provides 24-hour care. You run hospitals up and down the state, leveraging a lot of technology on behalf of the vulnerable population that you serve. These technology systems support direct patient care, case management, pharmacy systems, and patient records. You have facilities management and control systems. You have to run warehouses and procurement and supply chain systems, along with all of the typical operational systems that agencies rely on, such as administrative systems and office automation. How do you keep up with cybersecurity threats with such a vast array of solutions to safeguard? Dylan Pletcher - The California Department of State Hospitals is like a small city. We have our own police force, fire department, cabinet shops, electricians, plumbers, food service, schools, laundry, postal services, and of course, a lot of in-house medical services. We have primary care, pharmacy, physical therapy, radiology, mental health services, and so on. As you can imagine, we have a whole host of electronic systems and devices ranging in security capabilities from medical IoT devices and blood testing gear the size of an Amazon Echo to large X-ray machines and complex automated pharmacy dispensing systems. Some have wired or Wi-Fi access; some, like portable ECG devices, connect over Bluetooth. But one thing they have in common is that they all want to talk to the internet. Trying to keep them secure and permit only necessary traffic is definitely a challenge.
Segmentation is important, and we're starting a project leveraging Zscaler Private Service Edge. For example, we have additional network access control to profile unmanaged devices into quarantine and keep unauthorized devices off the network. The people side of things matters just as much. One of my key responsibilities is maintaining an open dialogue with all of our business units. Engagement early and often on any initiative is critical to ensure a partnership that fosters security, in addition to the usability of the system. That communication can't be limited to just executive staff - stakeholders need to be involved as well. We have a staff of 13,000 employees, and every one of them has a direct line to me and my staff for any security concerns. We have to be more than just the enforcers of information security policy; we have to be advocates as well. Education is at the forefront of our program. We have annual training, we have phishing exercises, and we email out tips and tricks regularly. Occasionally I'll do presentations to larger groups such as our accounting staff or our legal staff. I've noticed a drastic change in attitudes since I shifted away from simply stating policy and instead turned to helping our employees stay safe in their personal lives. Some of my messaging now has the flavor of: do this to keep your personal account safe. And oh, by the way, do this at work, too. This has really helped us instill a culture of security that sticks with them 24 hours a day. Carlos - Are people becoming more aware and more focused on cybersecurity? Dylan - Absolutely. A good example: I was walking back from lunch one day, and one of our executives passed by and said, “Hey, Dylan, I just want to let you know, I caught that fish.” I'm really encouraged when, numerous times during the day, people see me and comment, “Hey, that tip about turning on multi-factor authentication, that was really important to me."
That really has been a focus - turning away from just saying, here's the policy, toward explaining why you should do it. If you can make it relatable to them, you really do engage with them so much better. Carlos - I'm sure that during your time in information security, you've seen a lot of change. How do you deal with the evolving nature of cyber threats? Dylan - We're moving away from a device-centric view, where this PC can communicate with that server, and moving more toward letting only permitted executables communicate with other applications. With our shift to remote work and cloud services, it became even more important to limit access when a device or application isn't acting the way it's expected to. A threat that's often overlooked is insider threat, whether from deliberate or careless action, and Cal-Secure does have insider threat as a Phase Five item. I did see that overlap with other capabilities, such as continuous monitoring, data loss prevention, and privileged access management. Depending on your environment, it may make sense to implement different things sooner. The threat from users who are susceptible to phishing is truly frightening. Multi-factor authentication is in Cal-Secure Phase One, and that's for good reason. Credentials are pitifully easy for an attacker to obtain. One of the easiest ways to make yourself a less appealing target is to implement MFA, and when you pair that with strong anti-phishing and anti-malware tools, you really have a good foundation to build upon. Carlos - The pandemic changed the workforce, changed the way that we operate, and really what we had to rely on in the way of systems to be able to do our jobs. Did it change the way that you approach cybersecurity or keeping your systems safe with the impact of work from home? Dylan - We have clinical staff and patients in hospitals, so there's no way to be entirely working from home.
Still, most of the employees on the administrative side have had to do extensive remote work. Some business units went home in March 2020 and are still nearly 100% remote to this day. Supply chain issues were very significant, and getting PPE was tough. Food that was served in group settings before had to shift to individual servings in separate environments. Getting our hands on electronic gear was tough due to shortages of computers and video conferencing tools. We knew even before the pandemic that we did not want users to have network-level access, so the traditional VPN was out before it even started. But we did have to buy a lot of licenses and expand our virtual environment to provide access to virtual applications. We also had some virtual desktops for applications that didn't work well in a virtualized application environment, so users could connect to a desktop that was on prem and run applications that way. We suffered the same shortages that everybody else had with laptops, headsets, and cameras. We were able to get laptops to most of the people that were already working remotely, or those that had a mobile mentality, even before the pandemic started. We did have to get a few Chromebooks for some of the folks, just so we had managed devices connecting to our environment. And we even had a couple of people who had to take desktops home. Our technology stack allowed us to keep a secure environment even as we moved remote; we had the plumbing already in place for remote work and virtualized applications. Zscaler’s ZPA (Zscaler Private Access) really was a game changer for us. It allowed us to eliminate a lot of the public-facing virtual applications and tie them down to the user, putting restrictions even on where they could connect from. Some people were taking advantage of working from home and traveling - they'd be in Hawaii or on the East Coast and still be able to work.
From our standpoint, it didn't make a difference where they were, but we still wanted to be able to control where they came from. Carlos - As a practitioner, what advice would you have for your fellow practitioners in terms of leveraging the Cal-Secure roadmap? Dylan - I've leveraged Cal-Secure in three different ways. The first is that it gives me leverage to say, “Yes, this tool is expensive, but it's critical.” And you don't just have to trust me. Cal-Secure is something that I can point to that has the weight of the State CISO and the Governor's endorsement, so that's a powerful tool for justifying spending. The second is that it helps us strategize and plan our next steps. If your department needs some guidance, it gives you guidance. Phase One through Phase Five, maybe those aren't necessarily the ways that you would want to implement it; maybe you want to shift things around a little bit. And that's fine, too. It depends on what your environment is. Third, I would say it helps to tell a story to those that are less IT savvy: that our strategy and our roadmap are in line with industry best practices. It's always good to have that validation that what we're trying to do internally with our department is valid.
Next Steps
Watch the full webinar on Cal-Secure. Learn more about our State and Local government solutions. Request a custom demo or whiteboard session with a Zscaler expert. Tue, 11 Apr 2023 20:04:36 -0700 Ian Milligan-Pate Join Leading Experts and Innovators at Zenith Live ’23 and Enhance Your Digital Experience With a hybrid workforce, making sure users can be productive from anywhere is key. However, when users aren’t always in the same location, troubleshooting can be difficult. Many organizations use multiple tools to try to solve hybrid workforce issues, but it takes a long time to correlate the information, and network operations and service desk teams often use different tools.
It’s time to take control of your monitoring silos and reduce overall costs. Want to learn all about reducing costs and improving end user experience using AI, APIs, and best practices in just a couple of days? Well, it’s that time of year: Zenith Live ’23 is right around the corner, and we have a packed agenda for you. Attend Zenith Live to:
- Learn from industry experts and thought leaders about the latest trends and best practices
- Network with your peers and build valuable relationships to advance your career
- Join hands-on labs to enhance your technical skills and knowledge
- Be inspired to think creatively about the industry as a whole
Zscaler Digital Experience Sessions at Zenith Live
Zenith Live ’23 features a full lineup of expert-led sessions to help you support user productivity with Zscaler Digital Experience (ZDX), including best practices, live training and discussion, and hands-on demos to empower you to take full advantage of the platform. Here are some ZDX sessions to look forward to:
Ensuring a Great User Experience for Your Business Apps
Digital transformation is shifting app, network, and security designs, and posing challenges for network operations and service desk teams. Join to understand traffic patterns, ensure a great user experience, and lower MTTI/MTTR with AI analysis.
ZDX unifies monitoring silos
Best Practice: Operationalize ZDX to Ensure Great User Experiences
Gain expert insights on the essential practices when deploying and operationalizing ZDX, including how to activate ZDX and configure policies and alerts to unlock its full potential, enabling you to derive maximum value.
Automating Root Cause Analysis for Poor User Experience Using Machine Learning
Supporting a hybrid workforce relying on public/private SaaS is challenging. Pinpointing app issues, latency, or poor user experience is time-consuming for IT ops and service desk teams. Join product experts to learn how to solve user experience issues in minutes.
ZDX Automated Root Cause Analysis
Advanced Mean Time to Detection (MTTD) and Recovery (MTTR) Acceleration with Deeper Network Path Analytics for Network Operations
For user productivity, reliance on SaaS or IT-hosted apps and services is crucial. Join this session to explore Wi-Fi, ISP, and internal/destination network performance for optimal digital experiences.
Integrating Digital Experience Insights into Your ITSM and Observability Processes Using APIs
Learn about ZDX public APIs, including outbound APIs to integrate ZDX telemetry with observability processes and platforms, and our deeper API integration with ServiceNow to automate ITSM processes.
ServiceNow integration with ZDX APIs
What's New: Innovations with Zscaler Digital Experience
Zscaler is revolutionizing user experience insights for customers. Join this session to gain a competitive edge by preemptively resolving potential issues with our latest advancements and upcoming roadmap features.
Sign up now! Learn directly from our experts how to improve digital experiences while keeping your environment secure. Don’t miss your opportunity—seats fill up fast! Register for Zenith Live ’23 and sign up for the Zscaler Digital Experience (ZDX) sessions so you can hear it first!
Zenith Live ’23 at the ARIA Resort & Casino | Las Vegas, Nevada | June 13-15 (dedicated training day on June 12) | Register now
Zenith Live ’23 at the InterContinental Berlin | Berlin, Germany | June 27-29 (dedicated training day on June 26) | Register now
Mon, 10 Apr 2023 15:45:01 -0700 Rohit Goyal Improve Network Path Analysis Beyond Traceroute With Zscaler Digital Experience With applications and end users as distributed as they’ve become, most organizations have adjusted their workforces to hybrid environments. With this adjustment, IT must ensure excellent end user experiences no matter where users or applications are located. This means understanding each user’s environment, even those not under IT’s control.
Without the proper tools in place, troubleshooting takes longer, and undue burden is placed on network operations teams lacking the ability to quickly pinpoint issues. In this blog, we’ll dive into a useful network troubleshooting tool: “traceroute”, or “tracert” on Microsoft Windows.
What is traceroute?
Traceroute is a tool built into many devices and operating systems, such as Microsoft Windows, macOS, and network routers and switches. Traceroute is a network diagnostic tool that shows the path packets take between a device and a destination. This destination can be anything from a website on the internet to a local device on the network (e.g., a printer). Traceroute indicates the number of hops between the two points on the network and provides the round trip time (RTT) for each of the hops.
Why use traceroute?
Traceroute is used for troubleshooting network problems related to connectivity, congestion, or performance. In doing so, it indicates where packets are dropped or delayed. Additionally, it can be used by network operations teams to analyze the route(s) packets take to ensure they’re on the optimal path. In some cases, it can help network engineers understand if hops along the path are oversubscribed, which would result in a poor end user experience.
Traceroute example
What happens when packets travel over the internet? They typically traverse many layer 3 devices (routers) before eventually reaching their destination. Tools such as traceroute can help identify an issue, but keep in mind that it is NOT scalable to manually run traceroute from each end user’s device.
Here is a typical output from a traceroute:
traceroute to (, 64 hops max, 52 byte packets
1 ( 1.252 ms 0.685 ms 0.676 ms
2 ( 4.858 ms 4.949 ms 4.951 ms
3 ( 5.300 ms 5.312 ms 5.312 ms
4 ( 22.306 ms 22.315 ms 22.305 ms
5 ( 22.203 ms 22.196 ms 22.186 ms
6 ( 23.907 ms 23.905 ms 23.905 ms
7 ( 23.883 ms ( 24.125 ms ( 24.113 ms
8 ( 23.872 ms 23.861 ms 23.851 ms
“(” is the destination, “64 hops max” is the maximum number of hops, and “52 byte packets” indicates the size of the packets. In the traceroute above, “1.252 ms”, “0.685 ms”, and “0.676 ms” are the response times, in milliseconds, of the three packets sent to the first hop. The first three hops in this output indicate a local network; eventually, the hops travel over different network providers. It takes 8 hops to reach the end destination. You can adjust traceroute settings based on the device you’re using. In this example, the “-m” flag sets the maximum number of hops traceroute will use before ending.
prompt> traceroute -m 255
traceroute to (, 255 hops max, 60 byte packets
1 router.local ( 0.745 ms 0.808 ms 0.916 ms
2 ( 2.726 ms 2.799 ms 2.923 ms
3 ( 14.330 ms 14.514 ms 14.622 ms
4 ( 15.021 ms 15.210 ms 15.291 ms
5 ( 15.228 ms 15.293 ms 15.369 ms
6 ( 16.478 ms 11.819 ms 13.952 ms
7 ( 11.560 ms 11.657 ms 11.772 ms
8 ( 11.666 ms 12.802 ms 11.698 ms
9 ( 11.693 ms 11.760 ms 11.856 ms
10 ( 11.828 ms 11.942 ms 11.973 ms
Traceroute leverages time to live (TTL), which is normally used to prevent routing loops and general overconsumption of network resources. The TTL is set to an initial value, and each router that forwards the packet decreases the TTL by 1; if the TTL reaches 0, the packet is dropped. Traceroute works by sending a packet with TTL=1, which makes it only as far as the first hop, where the router signals that it dropped the packet. Traceroute then increases the TTL to 2, and so on, until a packet reaches its destination. In some cases, networks block these types of packets. Additionally, packets don’t always take the same route.
Sometimes, the routes change based on a router's availability. For instance, if a router is unavailable, the packet has to take another route to reach its destination. Fortunately, there are typically many routes to a destination. However, not all routes are optimal for the best user experience. This makes troubleshooting even more difficult, as a user could have issues over one particular route.
Manual traceroutes lack full context
Imagine a user is having application response issues, but the network operations team doesn’t see any latency issues after analyzing the route from the user’s device to the destination. What happened? It appears that the user’s issues have been resolved, but the user calls the service desk again the next morning with a similar issue. Again, when the NetOps team manages to get access to the device and check the route, they don’t see any issues. In this case, getting access to the machine or getting the end user to run a traceroute requires coordination, and time is wasted. Depending on the issue, the operations team may catch it. For example, if it’s the Wi-Fi or internal network, it might be easier to identify. However, if it’s an ISP-related issue, they may or may not catch it, as those routes typically switch within milliseconds. Manual tools such as traceroute are good point-in-time solutions, but they don’t provide real-time insight across a user’s experience, which only serves to increase troubleshooting times. Traceroute also shows IP addresses, and sometimes DNS names, to help identify the routes. However, it doesn’t show geographical information. With easily accessible geographical data, it’s easier to visualize the different hops. For example, if a user is located in Florida, US, and they’re accessing a SaaS application whose traffic routes to Europe and then back to the US, network operations can quickly discern that there’s something wrong with the route.
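The point-in-time limitation above is easiest to see in code. The following Python sketch (standard library only, and not a Zscaler tool) parses typical Unix traceroute output into per-hop average RTTs, then flags hops whose latest sample deviates sharply from that hop's own history once samples are collected across repeated runs. All function names and thresholds here are illustrative.

```python
import re
from statistics import mean, pstdev

def hop_avg_rtts(output):
    """Parse typical Unix traceroute output ('N  host (ip)  t1 ms t2 ms t3 ms')
    into {hop_number: average_rtt_ms}. Hops with no RTTs ('* * *') are skipped."""
    rtts = {}
    for line in output.splitlines():
        m = re.match(r"\s*(\d+)\s", line)
        if not m:
            continue  # header line, e.g. 'traceroute to ...'
        times = [float(t) for t in re.findall(r"([\d.]+)\s*ms", line)]
        if times:
            rtts[int(m.group(1))] = sum(times) / len(times)
    return rtts

def flag_latency_spikes(history, z_threshold=3.0):
    """Given per-hop RTT samples gathered across repeated runs,
    {hop: [rtt_ms, ...]}, flag hops whose latest sample deviates
    sharply from that hop's own baseline."""
    flagged = []
    for hop, samples in history.items():
        baseline, latest = samples[:-1], samples[-1]
        if len(baseline) < 3:
            continue  # not enough history to judge
        mu, sigma = mean(baseline), pstdev(baseline)
        if sigma == 0:
            if latest != mu:
                flagged.append(hop)
        elif (latest - mu) / sigma > z_threshold:
            flagged.append(hop)
    return flagged
```

Running `hop_avg_rtts` on each scheduled traceroute and feeding the accumulated results into `flag_latency_spikes` is, in miniature, what continuous path monitoring automates at scale, and it is exactly what a one-off manual traceroute cannot do: an intermittent spike only stands out against a hop's own history.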
Ensure flawless end user experiences with Zscaler Digital Experience (ZDX)

ZDX helps IT teams monitor digital experiences from the end user perspective to optimize performance and rapidly fix application, network, and device issues. In particular, one of the key ZDX capabilities for network operations teams is rapidly visualizing which network paths cause performance issues. NetOps teams are given the power to track metrics across Wi-Fi, local ISPs, and corporate and vendor networks to easily spot latency or packet loss between hops. Let's examine how ZDX helps you quickly triage network performance issues. Below is an example of a path from an end user to a destination. ZDX captures network path data at higher frequencies, lets you choose the point in time when a user indicated they were having performance issues, and offers visibility into the Cloud Path to pinpoint the root cause.

ZDX dashboard showing the overall Cloud Path

Drill into the hops to see the actual latency within ZDX.

ZDX Cloud Path latency chart to identify issues

Cloud Path probes are used to collect the following metrics:

Hop Count: the number of hops between each hop point on the path
Packet Loss: the percentage of packet loss at each hop point on the path
Latency (Average, Minimum, Maximum, and Standard Deviation, aka Jitter): the round-trip path time, measured in milliseconds

ZDX Cloud Path command line view

Network hops block ICMP traffic, now what?

There are instances when network hops such as routers or firewalls block ICMP packets, limiting the response. ZDX uses Adaptive Mode to select the best protocol for the cloud path to reach its destination: it tries the TCP, UDP, and ICMP protocols on each run and picks the best available protocol for the probe.

Configuring Adaptive Mode within ZDX

Overall, traceroute is a valuable tool, but it presents clear challenges when it comes to troubleshooting an end user's experience.
Namely, it lacks historical information, reverse traceroutes are difficult, choosing the optimal protocol is time consuming, and it doesn't scale for global organizations. Network operations teams can benefit from tools such as ZDX to quickly analyze an end user's experience and identify the root cause. To learn more about ZDX, sign up for a demo!

Thu, 06 Apr 2023 11:14:31 -0700 Rohit Goyal

How to Cut IT Costs with Zscaler Part 6: Decreasing Carbon Footprint

Organizations everywhere are grappling with economic uncertainty. Day in and day out, they are forced to respond to increasing financial pressures that do not seem to be going away in the foreseeable future. While one may assume that security, networking, and IT teams are insulated from this pressure due to their responsibility for the ongoing safety and operational functioning of their companies, that is sadly not the case. They, too, face shrinking headcounts, tightening budgets, and the need to do more with fewer resources. How can these teams respond? This six-part blog series explains how Zscaler, the One True Zero Trust Platform, saves money for its customers. Be sure to explore our previous installments, each of which discusses a key way that Zscaler delivers these savings:

Part 1: Enhancing Security Posture
Part 2: Optimizing Technology Costs
Part 3: Increasing Operational Efficiency
Part 4: Improving User Productivity
Part 5: Streamlining M&A

This sixth and final installment focuses on the enterprise's carbon footprint.

Appliances: The root of all evil

Standard, perimeter-based architectures inevitably entail the use of hardware appliances—and not just a few, but dozens or even hundreds. Whether for security or networking purposes, appliances have fixed capacities to deliver a particular level of service. So, as organizations grow, the prevailing strategy is to purchase additional or more powerful hardware to achieve the capacity necessary for servicing the expanding enterprise.
As they do so, organizations regularly overprovision to avoid outages during traffic surges; in other words, they purchase powerful equipment with excess capacity that usually goes unused. Complicating this further, security appliances are often point products, and addressing new risks usually means deploying additional hardware solutions. With all of the above in mind, if a company opens a branch site, it is common for this stack of appliances (or at least a portion of it) to be duplicated at the new location, quickly multiplying the number of appliances in use. In addition to the heavy CapEx this approach incurs, it dramatically expands an organization's carbon footprint.

Bad for the environment and worse for your wallet

Deploying appliances increases the organization's energy utilization: the appliances themselves must be kept running, and high-powered cooling systems are needed to keep them from overheating. The associated electricity consumption not only has detrimental effects on the environment but also leaves organizations facing massive utility costs. Even for smaller companies with fewer appliances, these costs are still significant relative to their size.

Figure 1: More appliances mean more power consumption

Getting green with Zscaler

As the world's largest security cloud, the Zscaler Zero Trust Exchange delivers security as a service. This means that customers don't have to install, power, or cool additional appliances when they deploy the One True Zero Trust Platform. And by delivering comprehensive security consistent with Gartner's vision for security service edge (SSE), Zscaler eliminates up to 90% of customers' point product appliances, meaning they can significantly reduce their power requirements, carbon footprints, and energy bills.
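To put appliance electricity costs in perspective, here is a back-of-envelope estimate. The appliance count, wattage, cooling overhead, and electricity price are all illustrative assumptions, not Zscaler figures:

```python
def annual_appliance_cost(count, watts_each, cooling_overhead=1.5,
                          price_per_kwh=0.15):
    """Estimate yearly electricity cost of always-on appliances.

    cooling_overhead is a rough PUE-style multiplier for cooling;
    all default values are illustrative assumptions.
    """
    kwh_per_year = count * watts_each * 24 * 365 / 1000
    return kwh_per_year * cooling_overhead * price_per_kwh

# Hypothetical branch estate: 50 appliances drawing 400 W each
print(f"${annual_appliance_cost(50, 400):,.0f} per year")
```

Even this rough model shows how duplicating appliance stacks across branch sites compounds both the energy bill and the carbon footprint.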
To further decrease the environmental impact of security, the Zero Trust Exchange is built on an efficient, multi-tenant architecture that maximizes the use of resources. Additionally, Zscaler's 150 points of presence around the world are powered by 100% renewable energy. The company also proudly maintains carbon-neutral status and plans to achieve net zero emissions by 2025.

Figure 2: Key benefits of the Zero Trust Exchange

On average, Zscaler improves customer power usage effectiveness (PUE) by 50%, demonstrating the strength of the Zero Trust Exchange when it comes to reducing an organization's carbon footprint and power consumption. This is one of several reasons that the average Zscaler customer experiences a return on investment (ROI) of 139%, as detailed in this ESG Economic Validation study. To learn more about how Zscaler saves money for customers, download our white paper. Or, to see real-world success stories of customers who saved money by embracing the Zero Trust Exchange, download our ebook.

Wed, 05 Apr 2023 11:12:55 -0700 Jacob Serpa

Internet 2.0: A Quantum Leap for Secure Global Connectivity

The connectivity requirements of enterprises are rapidly changing due to cloud and hybrid work models. Digitization demands more agile scenarios that offer guaranteed connectivity for both people and machines. Even though the pandemic has accelerated digitization efforts, classic network infrastructure is only slowly catching up with modern requirements. At Mobile World Congress 2023, Deutsche Telekom, Teridion, Intel, and Zscaler presented an approach to WAN backbone redesign that amounts to a quantum leap: secure connectivity made available via a network over the internet, with guaranteed service levels worldwide.

What makes guaranteed connectivity difficult today

So far, organizations have designed their connection to the internet either via MPLS or, more recently, via a SASE framework.
The MPLS path uses the backbone of a single carrier; in the Secure Access Service Edge model, by contrast, contractual capacity is purchased from various line owners, such as AWS, Google, Vodafone, or Deutsche Telekom, and traffic is routed from point A to point B. If MPLS is used, the customer usually has a service level agreement that the respective carrier secures worldwide via subcontracts in order to provide the agreed bandwidths. However, the organization usually only gets the upper limit of the bandwidth guaranteed, and fluctuations are possible. In the event of unforeseen events affecting line capacity, such as unpredictable peaks in demand, severed submarine cables, or natural disasters, routing must be manually readjusted by the provider. With SASE, too, the customer depends on the ordered routing path being available via its providers, as spontaneous, fast redirection is technically not possible when required. In both models, the customer is dependent on their chosen providers for connectivity.

Internet 2.0: A network over the internet

Uninterrupted connectivity with guaranteed service levels has so far remained a dream for future-proof connections of modern applications. At Mobile World Congress 2023, Deutsche Telekom, Teridion, Intel, and Zscaler jointly ushered in a paradigm shift for connectivity, offering WAN network functionality as a service. The offer is based on Teridion's solution, which provides customers with the optimal data path through the internet; AI algorithms ensure that guaranteed service levels are met. The global infrastructure of the Teridion Liquid Network is the foundation of the global WAN-as-a-Service solution. It is based on an elastic network architecture that lays a network for connectivity over the internet, using the lines of more than 25 global telecommunications and cloud providers, networked like an intelligent brain with over 500 points of presence worldwide.
More than 2,000 Liquid Metal routers detect availability, throughput, latency, and jitter parameters and provide the best connection. As a result, traffic is directed through different routes depending on the time of day, and the customer benefits from the price advantage of optimized routing in addition to the guaranteed throughput speed. With the help of Zscaler, Teridion can bill the routing of data streams via the various carriers and cloud providers as required.

Last mile via Deutsche Telekom's uCPE

As the requirements of companies change, and today's applications at the edge already require agile connections, Deutsche Telekom provides an innovative uCPE box for last-mile connectivity that routes data traffic from the application to Teridion. This customer premises equipment, in the form of a magenta box, was developed with Intel and virtualizes a wide variety of network services in a single device that is easy to install and operate. The box is equipped with several Intel multicore processors, which can be added flexibly as needed thanks to Intel On Demand and virtual network functions (VNFs). To securely connect applications at the edge, the Zscaler Zero Trust Exchange comes into play as another piece of the puzzle. The global security cloud ensures the secure inside-out connection of the applications through the App Connector. Based on policies, only an authorized user can access the application. In this way, companies can eliminate their attack surface on the internet, prevent lateral movement by attackers, and prevent data loss. The Branch Connector also tunnels the data traffic into the Teridion network, so that end-to-end security of the entire data stream can be guaranteed.

A future-oriented range of applications

This solution approach, made available by Deutsche Telekom, creates new possibilities for secure connections to edge applications or remote access scenarios.
Since the box can be used anywhere in the network, it is particularly suitable for industrial application scenarios or retail environments, where access to, and protection of, production lines or IoT devices must be guaranteed. As operational technology (OT), previously managed by specialist departments, is digitized, the secure convergence of OT and IT becomes feasible, and monitoring can be conveniently handled via a single user interface. This makes the solution equally suitable for companies, managed service providers, or as a partner platform for system integrators – wherever high-performance, secure remote access to applications or machines is important. In addition, the approach suits any organization that operates an international business and needs reliable data transfer between its sites with guaranteed speed and bandwidth. At Mobile World Congress, the power of the partner concept was demonstrated in a live demo with India and China.

Agility for tomorrow, today

The solution presented at the Telekom booth enables organizations to redesign their WAN backbone into a more flexible and cost-effective model for future-oriented applications at the edge. This premium internet offering, combined with the Zero Trust Exchange, delivers performance levels equivalent to today's MPLS connections – and can be created and operated with much less complexity. Always-on, secure data streams protect companies and machines and form the foundation for uninterrupted business operations. At RSA, the solution approach will be demonstrated live at the Zscaler booth on Thursday, April 27, 2023. See you there!

Sun, 02 Apr 2023 07:29:20 -0700 Markus Breuer

How to Secure Sensitive Data in the Public Cloud with Integrated CNAPP and DLP

Protecting critical business data requires two things: An understanding of where sensitive data resides.
Comprehensive context for that data, so you know the possible paths that allow access to it. However, cloud environments present a unique challenge to security teams. In the cloud, data can reside in any of hundreds of cloud native data services, such as databases, object storage, and attached and detached disks (EBS volumes). In addition, access to critical data may be granted in trivial and non-trivial forms, each presenting its own detection and mitigation hurdle. For example, data can be found in S3 buckets, where a single configuration opens the bucket to the public. The data could also be in a cloud native DB service, such as AWS RDS, with access granted through IAM roles that may be over-permissive and over-privileged. As a result, data protection in the cloud requires the identification and classification of data, assessment of its exposure through access/attack path analysis, and continuous monitoring for access attempts by bad actors. To achieve this, threat intelligence is required, along with a deep integration of solutions including Data Loss Prevention (DLP) and a Cloud Native Application Protection Platform (CNAPP). These solutions must all speak the cloud language, be aware of all the cloud-specific data stores, and be able to prioritize exposure based on the true risk derived from the data. In the following section, we describe the journey of a security admin responding to a malicious actor scanning a cloud asset. We demonstrate how, through the combination of cloud-focused DLP, vulnerability scanning, and permission analysis, the security team is able to identify a real "show stopper" and mitigate what could have been a compromise of business-critical PII data.
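The correlation just described, combining data classification with exposure analysis, can be sketched as a toy risk scorer. The findings, weights, and asset names below are invented for illustration and do not represent any real product's scoring model:

```python
# Toy risk correlation: an asset's risk combines its exposure signals
# with the sensitivity of the data it can reach. Weights are made up.

WEIGHTS = {
    "internet_facing": 3,
    "vulnerable_library": 3,
    "over_permissive_role": 2,
    "contains_pii": 4,
}

def risk_score(findings):
    """Sum the weights of all findings along an attack path."""
    return sum(WEIGHTS[f] for f in findings)

assets = {
    "ec2-web": ["internet_facing", "vulnerable_library", "over_permissive_role"],
    "ebs-stale-disk": ["contains_pii"],
    "ec2-batch": ["over_permissive_role"],
}

# An attack path chains the exposed instance to the data it can reach
path = assets["ec2-web"] + assets["ebs-stale-disk"]
print(risk_score(path))  # 12: exposure plus reachable PII
```

The point of the sketch is the chaining: neither the exposed instance nor the stale disk looks critical in isolation, but the path connecting them does.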
PII data leakage through a vulnerable instance and detached disk

In the following use case, we describe how Zscaler Posture Control, our CNAPP solution—integrated with DLP and ThreatLabz threat intelligence—empowers the security admin to respond to and mitigate a critical attack that could have resulted in the theft of sensitive PII data residing on a stale disk inside an AWS account.

The trigger

As Posture Control continuously monitors the network traffic directed to our customers' workloads, enriched with ThreatLabz intelligence, the system raises an incident: one of the externally facing EC2 instances is seeing an increased volume of scanning by IPs classified as malicious. In the wild, externally facing instances are constantly scanned from various sources across the internet. From the security team's perspective, context on the source of the scanning—as well as the target's risk—is essential to prioritize and mitigate.

The source context

ThreatLabz provides the context regarding the source of the scanning. It indicates these IPs are currently involved in known attacks; hence, this event needs to be investigated. At this phase, the focus of the investigation is to assess the risk exposure of the scanned assets.

Zscaler Posture Control risk assessment - target context

Posture Control enriches the context of the event with the following data points:

The instance exposes a vulnerable application library.
It is assigned a ComputeAdmin role that allows the instance access to all disks in the account.
There is a stale disk in this AWS account that was once attached to a production instance and contains a DB copy. This copy contains business-critical data as well as PII data.

Given all that, from the attacker's perspective, the malicious actor could easily see that the instance runs an exploitable application.
Post-exploitation, the attacker would immediately obtain the ComputeAdmin role's credentials and begin moving laterally across the assets in the AWS account. Using AWS CLI native commands—such as describe-volumes and attach-volume—an attacker can attach any volume to the compromised instance and scan its content, discovering the DB copy and exfiltrating it unnoticed.

Containment and mitigation

This complete view of the intention to exploit, as well as the critical potential damage, enabled the security team to raise a show stopper alert. With that context, they can mitigate the attack at various points, as provided by Posture Control:

Limit network exposure of the instance
Patch the application
Reassign the instance a role that follows the principle of least privilege

Conclusions

Security teams are often buried under significant loads of events to investigate, exposures to seal, and vulnerabilities to mitigate. Without context, correlation, and true risk scoring, fatigued teams miss the most critical data exposure risks. In addition, in the more complex exposure cases, they would not know how to respond correctly without impacting the business. Posture Control's integration with DLP and ThreatLabz not only gathers all the important data points, but correlates them and prioritizes risk based on their combination. This integration provides the ability to understand the cloud, understand the data, and identify any malicious attempt via industry-leading threat intelligence. For more information, please check out our launch blog and on-demand launch webinar. If you're interested in learning more about Posture Control, we offer a free cloud security risk assessment here.
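The lateral-movement step in this use case, enumerating volumes to find detached disks, can be mimicked against the shape of a describe-volumes response. The volume IDs and data below are hypothetical; a real attacker would query EC2 itself (via the AWS CLI or an SDK) with the stolen role's credentials:

```python
# Filter a describe-volumes-style response for detached ("available")
# volumes: the stale disks an attacker with a ComputeAdmin role could
# attach to a compromised instance. All data here is made up.

def find_detached_volumes(volumes):
    """Return IDs of volumes not currently attached to any instance."""
    return [v["VolumeId"] for v in volumes if v["State"] == "available"]

response = {
    "Volumes": [
        {"VolumeId": "vol-app01", "State": "in-use"},
        {"VolumeId": "vol-db-copy", "State": "available"},  # stale DB copy
    ]
}
print(find_detached_volumes(response["Volumes"]))  # ['vol-db-copy']
```

"available" is the EC2 state for a volume that is not attached anywhere, which is exactly why stale disks with old DB copies are such attractive targets: nothing is using them, so nothing notices when they are attached elsewhere.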
Mon, 03 Apr 2023 08:00:01 -0700 Aharon Fridman

OWASP Top 10: Injection Attacks, Explained

Welcome to the second installment of our OWASP Top 10 blog series, where we'll discuss one of the most critical web application security risks: injection attacks (ranked #3 on the OWASP Top 10). Injection attacks refer to a range of tactics used by hackers to trick web applications into performing unintended actions, such as destroying databases, compromising backend systems, or hijacking other clients connected to the application. In technical terms, injection flaws permit attackers to send hostile code or input data to an application, tricking the code interpreter into executing unintended commands or accessing data without proper authorization. Injections aren't new, but they are among the most dangerous types of attacks for web applications: when successful, they can lead to data theft, denial of service, and even complete system compromise or destruction. Not only are injection vulnerabilities dangerous, they're also pervasive, especially in legacy applications, which is why injection attacks are still considered one of the top web application security risks by OWASP. Some of the most prevalent and easily exploitable injection flaws are SQL (Structured Query Language) injection and cross-site scripting (XSS). In 2019, millions of Fortnite users were impacted by a cyberattack that used SQLi and XSS tactics to gain unauthorized access to player accounts and compromise their data. The hackers manipulated the game's backend database using SQL injection and stole user session tokens via XSS to bypass login authentication. The Fortnite hack demonstrated the severity of injection attacks and underscored the need for stronger web application security measures. Now, let's take a closer look at SQL injection attacks and the potential consequences they can have on organizations.
SQL injection attacks

SQL injection (SQLi) attacks involve inserting malicious code into a SQL query through user input, which is then executed by a website's backend database server. Successful SQL injection exploits can result in massive data loss and disruption, as the attacker gains access to and manipulates the database, such as reading and modifying tables or executing privileged operations. These vulnerabilities remain prevalent because user input must be sanitized and validated before being passed into a query, but software developers often forget to do this or don't do it properly. What's more, databases themselves are attractive, high-value targets for attackers because they often contain critical or sensitive information. To protect against SQL injection attacks, it's important to take the following steps:

Use prepared statements, stored procedures, or an ORM
Validate and allowlist user input
Limit user privileges and use the principle of least privilege
Monitor logs for signs of SQL injection attacks
Keep software up to date with security patches and updates

Cross-site scripting attacks

Cross-site scripting (XSS) vulnerabilities pose a serious threat to web applications because they enable attackers to inject malicious scripts into legitimate websites and trick unsuspecting users into executing them. This can lead to the theft of sensitive data, such as login credentials or financial information, or even the complete takeover of a victim's web session. In some cases, these attacks can also pave the way for other types of attacks, such as malware distribution or further exploitation of vulnerabilities in the web application. These attacks are insidious, exploiting application vulnerabilities by manipulating user input without proper validation or encoding. Attackers start by sending seemingly harmless scripts to unsuspecting users, which browsers view as trustworthy.
Once the scripts are executed, they can gain access to sensitive information, like session tokens and cookies, and even manipulate website content. To mitigate the risk of XSS attacks, it's important to take the following steps:

Use a Content Security Policy (CSP) to specify allowed content
Use output encoding or HTML entity encoding to display user input safely
Follow secure coding practices and use secure frameworks
Verify user input on the server side to prevent malicious scripts
Install a web application firewall (WAF) for extra protection; if you are a Zscaler Private Access customer, make sure to use AppProtection
Avoid using certain functions with user input, like eval() and setTimeout()
Turn off unnecessary browser features or plugins
Educate developers and users about XSS attacks
Use HTTP-only cookies to secure session data
Regularly update and patch web applications and servers to address vulnerabilities

3 practical recommendations to defend your apps against injection attacks

Don't let injection attacks take down your web applications. Sure, writing secure code is important, but that's not within the control of most network and security teams. So what can you do? A zero trust-based architecture is recommended to:

1) Minimize the attack surface presented by the exposed IPs of firewalls and VPNs that protect your internal applications by making your infrastructure undiscoverable

2) Minimize lateral movement of users and threats across the network by enforcing least-privileged access and only connecting a specific, authorized user to a specific, authorized resource

3) Prevent compromise by threats hidden in encrypted traffic by blocking malicious content with inline inspection of user-to-application traffic

Protecting your applications from injection attacks requires a multi-layered approach. In addition to implementing a zero trust-based architecture, organizations can also enhance their application security with Zscaler Private Access (ZPA).
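Two of the developer-side defenses named above, prepared statements for SQLi and HTML entity encoding for XSS, can be demonstrated with Python's standard library alone. The table, rows, and payloads are hypothetical:

```python
import html
import sqlite3

# --- SQLi: prepared statements vs. string concatenation ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

payload = "nobody' OR '1'='1"  # classic SQLi input

# Vulnerable: concatenation lets the input rewrite the query, so the
# WHERE clause becomes always-true and every row leaks
leaked = conn.execute(
    "SELECT name FROM users WHERE name = '" + payload + "'").fetchall()

# Safe: a prepared statement treats the input as a literal value
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)).fetchall()

print(len(leaked), len(safe))  # 2 0

# --- XSS: HTML entity encoding neutralizes script injection ---
script = "<script>steal(document.cookie)</script>"
encoded = html.escape(script)
print(encoded)  # &lt;script&gt;steal(document.cookie)&lt;/script&gt;
```

In both cases the fix is the same idea: never let untrusted input cross an interpreter boundary (SQL engine, browser) as code rather than data.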
Strengthen your application security with Zscaler Private Access

When your applications are protected by Zscaler Private Access (ZPA), they are concealed from the internet and attackers. Only authenticated, authorized user traffic can access them, which reduces the attack surface and the blast radius for lateral movement. But that doesn't mean your risk drops to zero. If a user's identity is compromised, skilled attackers can exploit known (or unknown) software vulnerabilities. Inline defenses are needed to identify malicious traffic and hidden threats. That's why we integrated high-performance security inspection into ZPA. Similar to the robust inspection capabilities for public apps, we can inspect for threats against private apps. With this essential layer of visibility into private application traffic, you can identify and block the most common web attacks, including SQLi, XSS, remote code execution, and more, using ZPA's predefined OWASP Top 10 controls. Additionally, we provide the flexibility to create inspection profiles and write your own signatures. With these measures in place, you can have peace of mind knowing that your internal applications are safe from web attacks. ZPA AppProtection monitors all incoming and outgoing application traffic.

How to get started with ZPA AppProtection

Getting started with AppProtection has never been easier. Simply define your AppProtection Profile and select from a list of OWASP Predefined Controls to customize how your traffic is inspected and managed. Choose a Paranoia Level that matches your level of concern, set up your AppProtection policy with the desired override actions, and review your profile configurations before deploying them in an AppProtection Policy. With our user-friendly AppProtection Dashboard, monitoring and responding to any detected attacks is a seamless process!
Safeguard your apps from injection attacks and ensure business continuity

Let's face it: injection attacks are a ticking time bomb when it comes to application security. These sneaky attacks can impact both legacy and modern applications, and the consequences can be devastating - think massive data loss and total system compromise. Fortunately, there are practical steps that organizations can take to safeguard against injection attacks. By implementing zero trust principles, validating user input, and monitoring for attacks, organizations can significantly reduce their risk of falling victim to injection attacks. For those using ZPA, AppProtection has your back with high-performance security inspection capabilities to detect and prevent these types of attacks. It's essential for organizations to take web application security seriously and follow best practices to protect against injection attacks and other web-based threats. Ready to learn more? We've got you covered: Demo Video Data Sheet Case Study

Tue, 28 Mar 2023 09:15:12 -0700 Marc Mastrangelo

The Impact of Public Cloud Across Your Organization (Part Four)

I have a confession… I have never been able to successfully see an autostereogram clearly. An autostereogram is one of those images that looks like a blurry mess but, if you stare at it and change your focal perspective, reveals a crystal clear, often 3D image. For those of us who have been around long enough, these were the main feature of the very popular "Magic Eye" book series. For Security and Network Operations Center (SOC/NOC) personnel, the public cloud can have a similar effect. Their critical role in the organization can seem murky at best within an infrastructure that is consumed rather than operated. Add to that the differences across cloud service providers (CSPs), and the problem begins to look more and more like my inability to see the image above.
In this series, we've examined Cloud Native Application Protection Platforms (CNAPP) from the lens of different teams within the enterprise. This post explores roles in the SOC and NOC and how CNAPP can help demystify these roles in the public cloud. Let's start by outlining just a few of the critical functions provided by these teams:

Threat Hunting
Incident Investigation & Response
Hardening Assets, Applications and Services
Vulnerability Management

The public cloud environment presents opportunities for bad actors and challenges for application operators. The same global reach that benefits the enterprise also dissolves any traditional notion of a perimeter. The thousands of signals, metrics, telemetry sources, and events quickly become noise and contribute to alert fatigue. In the wake of those overwhelming signals, a SOC/NOC team needs three critical capabilities at their fingertips.

Critical SOC/NOC Capabilities in Public Cloud

First, the team needs to ensure that collection of the appropriate events, logs, and signals is happening at scale. Since the public cloud is a consumed set of resources, teams need to be able to pull data at the foundational level from the cloud service provider (CSP). Reliance upon OS-level agents to provide complete or derived insights into the underlying infrastructure cannot compete with direct access to the CSP API substrate. It is important to note that SOC/NOC teams need to extend the types of data points beyond the traditional Cloud Security Posture Management (CSPM) solutions of the past few years. Pulling data from the CSP on configuration, deployments, etc. is simply the starting point. These teams also need to ingest data concerning vulnerabilities, risky flows, and even data loss prevention signals to get truly unified insight into the current state of the cloud estate. Vulnerabilities, specifically, can be a source of blind spots in the public cloud.
Many vulnerability management systems were designed for traditional on-prem environments and depend on agents for visibility into the workload. Since deployment in a traditional data center was usually controlled by central IT, there were often process gates or ticketing flows to ensure those agents were installed. In the public cloud, deployments are often initiated by the line of business (LOB), fully automated, and done outside a centralized control framework. Assets are often temporary or ephemeral in nature, with lifespans of minutes or even seconds. Agent deployments representing tool sets not germane to the application owner/operator can be left out, creating blind spots in the public cloud environment. How are SOC teams supposed to see if their tool depends on these traditional agent-based requirements?

Leveraging CNAPP for SOC/NOC Operations

CNAPP platforms leverage agentless technologies to evaluate vulnerabilities, providing maximum coverage for cloud workloads. Not only does this reduce performance concerns for the workloads in question, it also reduces operational complexity around upgrades, health checks, and deployment blind spots. Second, vulnerability data represents one input into a correlation engine that consumes several different types of security-relevant signals. This engine should be able to take configuration, provisioning, identity, flow, and vulnerability information and provide quick, meaningful alerts that take all dimensions into account. Architecturally, stitching together separate domain-specific back ends with a brittle front-end UI will limit correlation and potentially miss interesting patterns that represent important threat vectors. Third, operators need effective and actionable classification to reduce time to response. Understanding insights from various threat categories (e.g.
Authentication Configurations, Over Exposed Assets with Power Identities, Data at risk from Ransomware) reduces identification times. Classification also allows SOC teams to quickly route specific threat types to the appropriate sub-teams, either natively or through existing IT Service Management (ITSM) solutions, preserving existing operational investments and processes. Finally, since enterprises cannot solely rely upon vendors to have pre-built every relevant threat condition, any engine requires customization. New threats, vulnerabilities, and identity-based attacks emerge every single day, with an increasing number targeted at specific enterprises. SOC/NOC teams require a CNAPP platform that can create custom investigations and queries of the environment in minutes. Since requiring SOC/NOC teams to learn cloud-specific query languages is not feasible or desirable, these operations must be abstracted across multiple CSPs. A single intuitive process to interrogate all cloud accounts for new and specific signals is a fundamental need in today’s public cloud environment. Making Threat Hunting Intuitive Similar to the example in Part 2 of this series, consider an example where we need to quickly look across the dimensions of configuration, identity, and vulnerability management to pinpoint the high risks that should be prioritized. At times, the standard canned policies that are resident within a platform do not meet the current need. Teams need to pivot their search based on new threats and vulnerabilities. The data is resident within the CSPs, but how to get it out can become an exercise in API consumption. Every CSP has a different set of calls, terms, etc., and for multi-cloud organizations, this heterogeneity leads to delay in identifying the threat. Consider the example in the figures below. The search is to locate compute instances that meet the following parameters: they are internet facing; they carry power identities (e.g. Administrator or Power User); they have a critical vulnerability of a particular severity for which the CVE actually has an available fix; and they have access to cloud storage. Performing these searches using native or complex API interactions requires deep knowledge of each CSP’s API. While a native search will ultimately work, by abstracting the details of the underlying CSP API into human-readable, drag-and-drop investigations, teams can quickly and effectively query multiple cloud environments with little to no required knowledge of the underlying CSP API structure. This directly equates to speed of search and remediation and reduces the knowledge set required to create and use custom queries. In addition, these investigations should be able to be saved or converted into durable policies that will provide the trigger for future alerts. Empowering SOC & NOC Teams Efficiently SOC/NOC teams require a myriad of toolsets to effectively protect the organization against the modern threats facing enterprises today. The public cloud environment represents a complex yet critical set of applications and services for today’s digital enterprise. The ability to provide some of those critical capabilities in a unified platform reduces costs and operational complexities while at the same time increasing effectiveness. CNAPPs, including Zscaler’s Posture Control platform, are designed from the ground up to ingest data across control domains. The collection, correlation, classification, and customization engine is what enables our customers’ SOC/NOC teams to protect their assets in the public cloud world. 
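To make the abstraction idea concrete, the multi-dimension search described above can be sketched as composable predicates over a normalized, CSP-agnostic asset inventory. This is an illustrative sketch only; the field names and the `investigate` helper are hypothetical and do not represent Posture Control's actual query interface.

```python
# Hypothetical sketch: expressing the search as composable predicates over a
# normalized inventory, so the same query runs against any CSP's assets.

def internet_facing(asset):
    return asset["internet_facing"]

def has_power_identity(asset):
    # "Power identities" such as Administrator or PowerUser roles
    return asset["identity_role"] in {"Administrator", "PowerUser"}

def critical_fixable_cve(asset):
    # Critical severity AND a fix is actually available
    return any(v["severity"] == "CRITICAL" and v["fix_available"]
               for v in asset["vulnerabilities"])

def reaches_cloud_storage(asset):
    return "cloud_storage" in asset["reachable_services"]

def investigate(inventory, *predicates):
    """Return assets matching every predicate, regardless of source CSP."""
    return [a for a in inventory if all(p(a) for p in predicates)]

inventory = [
    {"id": "vm-aws-1", "csp": "aws", "internet_facing": True,
     "identity_role": "Administrator",
     "vulnerabilities": [{"cve": "CVE-2023-0001", "severity": "CRITICAL",
                          "fix_available": True}],
     "reachable_services": ["cloud_storage"]},
    {"id": "vm-gcp-1", "csp": "gcp", "internet_facing": True,
     "identity_role": "Viewer",
     "vulnerabilities": [],
     "reachable_services": []},
]

hits = investigate(inventory, internet_facing, has_power_identity,
                   critical_fixable_cve, reaches_cloud_storage)
print([a["id"] for a in hits])  # -> ['vm-aws-1']
```

The point of the abstraction is that the predicates name concepts (internet facing, power identity) rather than CSP-specific API calls, which is what lets one saved investigation interrogate AWS, Azure, and GCP accounts alike.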
Learn more: The Impact of Public Cloud Across Your Organization series Posture Control Free Cloud Security Assessment Mon, 27 Mar 2023 09:00:02 -0700 Scottie Ray Beyond the Perimeter 2023: Context-driven Security for Enhanced Protection The recent explosion of hybrid work and the massive shift of business assets and activities to the cloud have helped organizations quickly adopt a flexible way of working. IT and security teams have been relentlessly working to secure corporate data and protect users. But these changes have also dramatically expanded enterprise attack surfaces, multiplying the risk of cyberattacks to the point where they have now become ubiquitous and expected. More than a third of polled executives reported that their accounting and financial data was targeted by cybercriminals in the past year, and nearly 50% of executives expect an increase in cyber events in the year ahead, according to the Deloitte Center for Controllership. One thing is abundantly clear: traditional security tools cannot handle the complexity of safeguarding enterprises in what has become a work-from-anywhere world. Additionally, the current economic situation has tightened budgets and added operational inefficiencies. In their rush to provide frictionless access to far-flung global teams, companies are unintentionally introducing security gaps. And cybercriminals are thriving by exploiting them, preying on the inherent weaknesses of traditional security solutions. Thwarting constant attacks at every possible entry point is overwhelming SecOps teams, and the friction of manual access management and troubleshooting for an ever-evolving workforce is wearing out IT teams. Something has to change. To succeed, enterprises need reliable end-to-end protection for all their users, apps, and assets. They need a modern security stack that can handle today’s complex, evolving threatscape without impeding user access and workflows in a competitive business environment. 
And IT and security teams need full visibility into who is attempting access at all times, so they can isolate risks before they become security situations, and ultimately secure the enterprise without adding friction. What enterprises need is context: the ability to look beyond a user’s credentials and supply critical details like user location, device posture, and network connection, providing more informed real-time assessments that teams can compare to existing threat intelligence to make smarter access decisions. Luckily, there is a solution for that. Join Zscaler and CrowdStrike for Beyond the Perimeter 2023, the third edition of our virtual event series, on April 11 (North America), April 12 (EMEA), and April 13 (Asia Pacific). Attendees will see firsthand how Zscaler and CrowdStrike join forces to give customers the critical context they need to make smarter access decisions without adding unnecessary friction. In this virtual two-hour event, we will show the deep integration between the Zscaler and CrowdStrike platforms that makes this possible. Join us for a special fireside chat curated by Punit Minocha, EVP, Corporate and Business Development, where you will hear first-hand customer stories from Edward Jones and Wipro on how they implemented a cloud-first cybersecurity strategy for reducing the attack surface and securing the endpoints. They will be joined by CrowdStrike and Zscaler experts onstage to share best practices for making zero trust a reality. Through lively discussions and engaging demos, Beyond the Perimeter 2023 will give participants a deeper understanding of the Zscaler-CrowdStrike integration. They will learn how these two best-of-breed platforms work together to provide context-aware security through: Partnership: Deep integrations of our best-in-breed platforms enable high-fidelity telemetry sharing and seamless real time information exchange. 
Prevention: Customers can leverage AI and machine learning to parse this data and stop known and unknown cyberthreats, in real time, at any scale. Protection: The seamless integration shrinks your attack surface, prevents lateral movement, expedites or automates response, and reduces risk. The rush to enable a work-from-anywhere landscape has left many organizations vulnerable; it can be dangerous beyond the perimeter. But CrowdStrike and Zscaler provide a next-generation security standard, focused on context-driven security that is bringing zero trust’s promise to life for customers like you. Join us at Beyond the Perimeter 2023 this April 11, 12, or 13 and see firsthand how CrowdStrike and Zscaler work together to provide the detailed real-time context that makes true zero trust security a reality without impeding productivity. If you’re ready to leverage the power of context to reduce risk, increase operational efficiency, and secure your enterprise against today’s cyberthreats, register now! We would love to see you there! Tue, 21 Mar 2023 16:39:35 -0700 Kanishka Pandit How to Cut IT Costs with Zscaler Part 5: Streamlining M&A This blog series discusses the various ways that organizations can save money by retiring their perimeter-based network and security architectures and embracing a zero trust architecture powered by Zscaler. Part one revolves around improving your security posture to avoid costly breaches. Part two focuses on optimizing technology costs. Part three covers the importance of increasing operational efficiency. Part four explains the benefits of enhancing user productivity. The topic for this blog post is: Mergers and acquisitions Mergers and acquisitions (M&A) are strategic priorities for many companies seeking growth, expansion, and synergized value. However, integrating IT systems is a critical step in the process that can make or break the success of an acquisition. 
The traditional IT integration approach, as shown in Figure 1, involves connecting the two organizations’ networks to ensure that the buyer’s users and apps can access the acquired company’s users and apps (and vice versa). But this network integration process is complex, time-consuming, and often leads to unexpected technical and security issues that can delay integration timelines and result in significant cost overruns. In fact, 70% of M&A deals fail to achieve their intended value due to these integration challenges. Figure 1: Traditional approach to M&A IT integration Enter Zscaler The good news is that Zscaler offers a modern approach to IT integration that eliminates the complexity, delays, and cost overruns associated with the traditional, network-centric approach. With the Zscaler Zero Trust Exchange, connectivity is no longer dependent on network integration—granting users access to apps does not require giving them access to the network. Instead, Zscaler acts as an intelligent switchboard to securely connect users and applications, one-to-one, no matter where they are located—as shown in Figure 2. This is done with a lightweight agent deployed to end-user devices, eliminating the need to buy, provision, or configure any circuits, network, or hardware. There are also no technical challenges like network IP deconfliction, making the process much more straightforward, efficient, and inexpensive. Figure 2: Zscaler zero trust approach to M&A IT integration By using this zero trust approach, IT teams can cut down on complexity by 60% on average and reduce integration timelines, allowing users to access systems and data rapidly. As a result, businesses can capture value and achieve synergies far more quickly, with 50% faster time-to-value on average. Shortening timelines also leads to a 40% reduction in M&A IT costs. Beyond these savings, Zscaler enhances overall security for the merging companies, and customers typically experience a return on investment (ROI) of 139%. 
In summary, Zscaler's modern approach to IT integration is an innovative solution that addresses the challenges associated with traditional M&A integration methods. By eliminating the complexity and delays of network integration, businesses can achieve synergies and value capture activities far more quickly, leading to significant cost savings and a higher ROI. If you're interested in learning more about the One True Zero Trust platform and how it can help accelerate your M&A time-to-value and reduce costs, download our white paper. Or, to see the success stories of customers who saved money with the Zero Trust Exchange, download our ebook. Tue, 21 Mar 2023 08:00:01 -0700 Ankit Gupta Positioning Zscaler Workload Communications for a Growing Business with AWS Gateway Load Balancer Zscaler Workload Communications helps businesses secure the connectivity of their cloud workloads. Whether connecting workloads to the internet, or to other cloud workloads, you can use Workload Communications to apply the appropriate ZIA and ZPA policies to defend against cyberthreats, eliminating lateral threat movement and data exfiltration. Coupled with AWS Gateway Load Balancer (GWLB), businesses can now also gain additional scalability and availability, ensuring they are positioned to support their expanding cloud footprint. What is AWS Gateway Load Balancer? AWS Gateway Load Balancer (GWLB) combines both gateway and load balancing capabilities. As a gateway, GWLB provides connectivity and helps steer traffic between a source and a destination. As a load balancer, it helps distribute network tasks across a set of resources. Generally, GWLB is used within AWS environments to distribute inbound or outbound traffic evenly across a fleet of virtual appliances. These appliances are then used to apply services against that traffic - such as Firewall or IDS/IPS inspection. 
The advantage of GWLB is that it is an AWS native service – it is highly available, highly scalable, and can be inserted nearly anywhere within the environment. This means that businesses can worry less about building these aspects into their own infrastructure and focus on more business-critical decisions such as security policy. How do Workload Communications and AWS Gateway Load Balancer work together? About Workload Communications As stated earlier, Zscaler Workload Communications enables businesses to secure the connectivity of their cloud workloads, whether to the internet or to other applications, wherever they reside. Under the hood, Workload Communications leverages lightweight virtual machines called Cloud Connectors to steer egress traffic from cloud workloads to Zscaler’s cloud platform, the Zero Trust Exchange. As traffic passes through the Zero Trust Exchange, all applicable business policies, such as those applying SSL inspection or Data Loss Prevention, are applied to help secure the business. Where does Gateway Load Balancer Fit in? A common architectural deployment model for Workload Communications is hub-and-spoke. In this model, traffic from VPCs housing workloads (spokes) is routed to a transit/egress/security VPC (hub) containing Cloud Connectors. With this model in mind, businesses must determine the best method to: Get workload traffic from spoke VPCs to the hub VPC containing the Cloud Connectors Distribute workload traffic across a fleet of Cloud Connector appliances This is where Gateway Load Balancer comes in. 
To use Gateway Load Balancer in combination with Workload Communications, businesses will generally need to: (1) place Gateway Load Balancer endpoints (GWLBe) around the cloud environment to accept traffic from workloads (this can be within workload VPCs, or even inside the transit/egress/security VPC); (2) deploy a GWLB and register Cloud Connectors as its target group; and (3) configure route tables for workload traffic to be directed to the GWLB. Figure: Hub-and-spoke with AWS Transit Gateway and GWLB Figure: Hub-and-spoke with distributed GWLB Though hub-and-spoke deployment models are, by far, the most common, keep in mind that Cloud Connectors can also be deployed adjacent to the cloud workloads they process traffic for, alongside GWLB. In fact, inserting appliances into the cloud environment behind GWLB is quite flexible, depending on the needs of the organization. Figure: Co-located with GWLB NOTE: Zscaler maintains an extensive library of AWS CloudFormation and Terraform scripts to do all of this for you! So, don’t fret if the above steps seem a bit foreign or complex. Let our scripts do the work for you so that you can be confident you’re following our best architectural practices. Once completed, whenever workload traffic egresses, it will be received by the GWLBe and sent to the GWLB before being distributed across the Cloud Connectors and sent to the Zero Trust Exchange. Why use Gateway Load Balancer with Workload Communications? Security is critical for cloud workloads. With heightened risks from new and existing threats, businesses need to ensure that their workloads and data are protected. Workload Communications is able to remove these risks and protect businesses by securing the connectivity of their cloud workloads. Just as important is the ability to ensure the scalability and availability of the security protecting those cloud applications. 
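For illustration only, the three GWLB wiring steps described above map roughly to the request payloads you would hand to the AWS elbv2/ec2 APIs (for example, via boto3). All names and IDs below are made up, and Zscaler's published CloudFormation and Terraform templates remain the recommended way to deploy this for real; the sketch is just to show where each piece fits.

```python
# Hypothetical sketch of the GWLB wiring, expressed as AWS API-style payloads.
# IDs are invented; use Zscaler's official templates for actual deployments.

# Step 2a: a Gateway Load Balancer in the hub (security) VPC.
gwlb = {
    "Name": "zs-cloud-connector-gwlb",
    "Type": "gateway",                      # distinguishes GWLB from ALB/NLB
    "Subnets": ["subnet-hub-az1", "subnet-hub-az2"],
}

# Step 2b: a target group of Cloud Connector instances. GWLB target groups
# always use the GENEVE protocol on port 6081.
target_group = {
    "Name": "zs-cloud-connectors",
    "Protocol": "GENEVE",
    "Port": 6081,
    "TargetType": "instance",
    "Targets": ["i-cloudconnector-1", "i-cloudconnector-2"],
}

# Steps 1 and 3: a GWLB endpoint (GWLBe) sits in the spoke VPC, and the
# spoke route table sends workload egress to it instead of an internet
# gateway, which is what steers traffic through the Cloud Connectors.
spoke_route = {
    "RouteTableId": "rtb-spoke-workloads",
    "DestinationCidrBlock": "0.0.0.0/0",
    "VpcEndpointId": "vpce-gwlbe-spoke",    # the GWLBe placed in the spoke
}

print(gwlb["Type"], target_group["Protocol"], target_group["Port"])
```

The key design point visible here is that insertion is purely a routing decision: moving a GWLBe and a route entry is all it takes to protect a new spoke VPC, with no change to the Cloud Connector fleet itself.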
This is the value that AWS Gateway Load Balancer provides and what many Zscaler customers are seeing when they combine it with Workload Communications to protect their cloud workloads. Where can I learn more? Workload Communications product page. AWS Gateway Load Balancer product page. Learn how to deploy Workload Communications on AWS here. Mon, 20 Mar 2023 08:00:02 -0700 Franklin Nguyen The Impact of Public Cloud Across Your Organization Compliance – com·pli·ance /kəmˈplīəns/ – “The state or fact of according with or meeting rules or standards.” In today’s environment, it is hard to escape compliance. Regulated industries like government, healthcare, and financial services have lived with it for years. It may also be oriented around a regional context like the EU’s General Data Protection Regulation (GDPR). There are other organizations that use these standards to provide a benchmark to measure their own posture and configuration. Whatever the drivers may be for an organization, their ability to measure and report against these frameworks can require significant investments in time, human capital, and expertise. In addition, adherence to these standards can allow or block the ability to conduct operations within market segments, thus having a direct revenue impact. Finally, for organizations that are already operating under a compliance requirement, the penalties for non-conformance can be substantial, in some cases exposing the enterprise to legal or other consequences. Compliance teams are sometimes situated at an intersection between legal, security, and IT departments. Often, they do not have direct access to the IT infrastructure they have to evaluate. They may need to open tickets with central IT, obtain reports, and then work through the various and varied controls within the compliance frameworks relevant to the enterprise. This process, which creates dependencies between teams, also creates constraints and potential prioritization conflicts. 
For example, an already small central IT team may have a large and urgent upgrade project that needs to be accomplished, yet they also have to balance the request from compliance to query and deliver the information needed for a mandatory quarterly audit. These resource choke points can create delays, organizational stress, and interdepartmental conflict. Enter the world of public cloud. In this series, we have discussed themes of the public cloud such as velocity, scale, and elasticity. We have seen how these benefits also create challenges for multiple groups within the organization. Among the teams impacted by public cloud during this transition are the risk management and compliance teams. The level of effort required to measure relatively static traditional IT environments against regulations can require hours or days. Factoring in the highly dynamic environment of public cloud, with its idiosyncrasies, ephemeral environments, complex IAM, and dynamic networking and storage constructs, compliance teams certainly have their work cut out for them. Cloud Native Application Protection Platforms (CNAPP) typically expose data sets that are automatically mapped to industry standards such as CIS, SOC, NIST, and others. Thus, the solution can be extended to the compliance and risk management team for the same investment, lowering costs and overhead for the enterprise as a whole. The key is to ensure ease of use and the ability to update and report on demand for compliance in even highly dynamic environments. In order to do this, the solution must support: Compliance-focused roles within the platform that allow members to create, manage, and run reports against public cloud infrastructure without the need for Central IT intervention. This helps to reduce organizational friction. The ability to map findings to native control(s) for each framework being tracked. Preferably, this mapping is pre-built and maintained by the provider of the tool. 
The ability to exclude portions of the cloud estate from specific compliance evaluation (e.g. evaluate production for NIST controls, but resources with “development” tags are ignored). Continuous reporting and alerting on new asset deployment(s) that violate compliance without having to manually pull a new query. The ability to customize and create organizational-specific benchmarks, and ignore or resolve findings that are not applicable, or are being ignored for a legitimate, agreed-upon reason. Provide specific remediation guidance to responsible service owners in formats native to their tools and workflows. Putting Information into a Compliance Context Zscaler Posture Control extends traditional configuration along with identity and entitlement data to the risk management and compliance teams. Putting this data in the context of compliance within a unified platform approach reduces the time required to audit and report on public cloud-based infrastructure while eliminating the need for separate compliance-focused toolsets. Reducing audit preparation time combined with automatic reporting on newly deployed assets allows risk management teams to operate at the speed of the cloud independently from traditional central IT or a cloud operations team. Please check out the other parts of this series as we examine the requirements of other teams within a public cloud enterprise. We will continue to examine how Zscaler is designing platforms from the ground up to address those requirements while reducing the manual stitching together of individual point solutions, lowering costs for customers while delivering critical insights in an ever-complex multi-cloud world. See the power of Zscaler Posture Control with our free cloud security risk assessment. Mon, 13 Mar 2023 08:00:01 -0700 Scottie Ray What’s Next for ZTNA? New Insights from the Enterprise Strategy Group Do you feel like you’ve squeezed a decade's worth of digital transformation into the last three years? 
I certainly do. As evidence, take a look at the before and after of my home office. It went from a makeshift dining table turned desk to a fully loaded workspace with lights, mics, and cameras - a pandemic innovation! During this time, zero trust network access, or ZTNA, technology benefited from a “perfect storm” of market conditions that drove a wave of innovation: the mass shift to remote work, the surge in digital tools and apps, and the accelerated cloud push. This rare convergence of events forced many organizations to rethink their access models, many of which relied on traditional remote access VPNs. From that storm, ZTNA emerged as a viable alternative to VPNs. In a new paper titled “Making Sense of the Quickly Evolving ZTNA Market,” the Enterprise Strategy Group (ESG) examines the trends, challenges, and evolution of the growing and crowded market for ZTNA. According to ESG, ZTNA has become so popular that more than two-thirds of companies are replacing their VPNs with ZTNA tools or are interested in doing so. First-generation ZTNA – Just a VPN replacement? John Grady, ESG principal analyst and author of the paper, says that while most ZTNA tools provide clear advantages over VPNs, few can effectively address the challenges network security teams face today to protect and secure access to applications. The key issues organizations encounter with first-generation tools are: 1. Inconsistent user experience: While cloud-only tools were well suited to the remote access use case, they introduce a sort of inverse backhauling model when users in offices have to access on-premises resources and are routed through the cloud to do so. 2. Access is still too broad: Most ZTNA tools provide basic separation to reduce the attack surface. However, when legitimate users are compromised, this does not prevent attackers from moving laterally and accessing other resources. 
ZTNA tools should go further and extend segmentation mechanisms across workloads and devices to more effectively prevent lateral movement inside a cloud or data center environment. 3. Limited security services: ZTNA took a significant step forward in protecting applications when compared to VPN, but most tools fail to address the issue holistically. It is common for ZTNA solutions to establish a connection between a user and application, and then step out of the way rather than continuing to scan the traffic for security threats, leaving the door open to attackers. According to Grady, secure access and ZTNA should not be operated in a silo and must be part of a larger, integrated platform focused on protecting distributed users and applications. This means being part of a broader secure access service edge (SASE) or security service edge (SSE) platform. For many organizations, ZTNA is not just a part, but a foundational aspect, of these architectures, with 58% of organizations that have begun to implement SASE/SSE citing ZTNA as the starting point for their project. “Ultimately, tools supporting this broader type of approach require a more substantial set of features and capabilities compared to those focused solely on the remote access use case,” says Grady. Tools that address only remote access miss the massive shift toward comprehensive zero trust architectures. ZTNA, evolved - Overcome first-gen hurdles with Zscaler Zscaler supports broader security transformation initiatives and emerging use cases with its next-generation ZTNA, Zscaler Private Access. It provides any-to-any connectivity, zero trust segmentation, and integrated, continuous security. This combination helps network security teams reduce their attack surface, minimize lateral spread, and ensure a consistent and seamless user experience that just works. Next-generation ZTNA Zscaler Private Access has immediate benefits when replacing current access approaches that are ill-matched for digital transformation. 
It brings significant advantages to user experience, scalability, and agility, and improves an organization’s overall security posture. As a result of zero trust initiatives, 77% of companies have realized at least one security and one business benefit.¹ To find out how to evolve your secure access strategy, read ESG’s new white paper or register for our webcast with ESG - and get ready to take your zero trust security to the next level. ¹Source: Enterprise Strategy Group, The State of Zero Trust Security Strategies, May 2021 Wed, 08 Mar 2023 08:00:02 -0800 Linda Park How to Cut IT Costs with Zscaler Part 4: Improving User Productivity Being responsible with financial resources has always been a concern for companies the world over. But with the economic uncertainty of the last few years, organizations have increased their focus on the need to save money. As a result, networking, security, and IT teams have been tasked to do more with fewer resources. Naturally, reduced headcounts and shrinking budgets make it difficult for these teams to do their jobs. So what’s the good news? This blog series details the various ways that Zscaler, the One True Zero Trust Platform, helps its customers to save money. Part one explains how enhancing security posture helps with avoiding breach-related costs. Part two reveals the way that Zscaler can simplify infrastructure and optimize technology costs. Part three covers how organizations can increase their operational efficiency and reduce management overhead. This fourth installment revolves around: Decreased user productivity Poor user experiences are a leading cause of impaired productivity. When employees’ digital experiences slow to a crawl, they are unable to perform their work duties effectively. In other words, time, money, and value-creation opportunities are wasted. Unsurprisingly, perimeter-based architectures are inherently unfriendly to user experiences. 
Under yesterday’s architectures, user traffic flows through stacks of appliances in the data center—including remote user traffic destined for cloud apps. This traffic backhauling means that users are not able to connect directly to applications, and the additional network hops add significant latency that slows down digital experiences and disrupts productivity. At the same time, the appliances in the data center lack the levels of scalability necessary to handle traffic surges. So, when the load on an appliance increases beyond what it is designed to handle, it can lead to poor performance and outages. As a result, user productivity comes grinding to a halt. Making the situation more difficult is the fact that traditional tools for monitoring user experiences are optimized for data centers. They lack the end-to-end visibility required in our cloud-first, hybrid-work world, and ultimately impede troubleshooting. Figure 1: Challenges to user experience Zscaler for superior digital experiences With its unique zero trust architecture, Zscaler eliminates traffic backhauling. The Zscaler Zero Trust Exchange acts as an intelligent switchboard that securely connects users, workloads, and devices. Direct-to-app connectivity and cloud security delivered from 150 points of presence mean that traffic no longer needs to be routed to the data center. What’s more, Zscaler automatically optimizes the flow of traffic to apps via the shortest path to ensure the best user experience. And as the world’s largest security cloud, the Zero Trust Exchange boasts the scalability necessary to service customers of all sizes so they can avoid outages and performance issues. This highly differentiated architecture means that employee time (which would have been wasted previously) is freed up for performing work duties and generating value for the organization. Figure 2: The Zscaler Zero Trust Exchange In addition to the above, Zscaler provides integrated digital experience monitoring (DEM). 
Unlike traditional tools that lack full context on experience issues created outside their purview in the data center, Zscaler has end-to-end visibility from the device up to the cloud. And with continuous monitoring and real-time alerts, user experience issues can be resolved proactively—before they turn into widespread productivity problems. With streamlined detection and resolution, as well as advanced AI-powered root cause analysis, help desk teams can keep users working, minimize tickets, and benefit from increased productivity themselves. Figure 3: User experience monitoring with Zscaler Digital Experience (ZDX) To further illustrate these advantages, the ESG Economic Validation study on the Zero Trust Exchange found that the average organization leveraging Zscaler annually recovers $5.2 million worth of productivity that would otherwise have been lost due to poor user experiences. Overall, the average Zscaler customer experiences an ROI of 139%. To learn more about the economic value of the Zero Trust Exchange, download our white paper and be sure to stay tuned for the next installment in this blog series, which will discuss how Zscaler can save money for your organization during mergers and acquisitions. Mon, 06 Mar 2023 10:08:40 -0800 Jacob Serpa New IPSec Traffic Forwarding Guidance for Zscaler Customers Key points: Zscaler recommends using the IKEv2 protocol wherever possible, as it is faster, more secure, and more resilient than IKEv1; Zscaler also recommends using AES-GCM encryption rather than NULL encryption. Background Zscaler has been supporting IPSec as a traffic forwarding mechanism for many years. During this time, we have introduced multiple options to forward traffic to the Zscaler cloud. These have included Z-tunnel 1.0, aka HTTP-based tunnels, and Z-tunnel 2.0, which brought in support for TLS/DTLS-based encrypted tunneling mechanisms. These Z-tunnels are established by the Zscaler Client Connector or the Cloud Connector. 
GRE or IPSec tunnels have been the methods of choice to forward traffic from office locations, and have been employed by thousands of customers. IPSec tunnels are preferred by organizations that need the added security of encryption, integrity, and authentication of the traffic when it is forwarded to the Zscaler cloud. Internet Key Exchange (IKE) guidance IPSec peers negotiate the authentication and encryption algorithms using the Internet Key Exchange (IKE) process. IKEv1 (RFC 2409), introduced in 1998, was succeeded by IKEv2 (RFC 4306) in 2005, which was updated in 2014 and continues to be updated—with the latest being RFC 8983 in 2021. The IKEv2 updates brought numerous advantages over v1, including:
- It requires fewer messages to be exchanged between the peers to establish the modalities for IPSec
- It takes up less bandwidth than IKEv1
- It is more resistant to Denial of Service (DoS) attacks
- It natively supports NAT Traversal (NAT-T)
- It automatically detects the liveness of the IPSec tunnel, so the IKE connection can be reestablished if necessary
- It supports newer encryption algorithms such as AES-GCM
Some customers have endpoint devices that support only the older IKEv1 standard. Zscaler supports IKEv1, although our recommendation has always been to use IKEv2 for the aforementioned reasons. Most modern implementations of IPSec support both IKEv1 and IKEv2. Some operating systems, such as Cisco’s IOS and Juniper’s JUNOS, may need manual configuration to enable the use of IKEv2. We recommend that customers use IKEv2 when available. Encryption algorithm guidance Zscaler supports the NULL, AES-CBC, and AES-GCM encryption algorithms. Historically, we have recommended NULL encryption because the traffic forwarded to the Zscaler cloud egresses to the destination using either HTTPS (> 90% of the traffic) or HTTP without being encrypted any further.
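For tunnels that do encrypt, the practical difference between AES-CBC and AES-GCM is that GCM is an authenticated (AEAD) cipher: a single pass produces both the ciphertext and an integrity tag, so no separate HMAC step is needed. A minimal sketch using the third-party Python `cryptography` package (illustrative only, not how any IPSec data plane is implemented):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key = AESGCM.generate_key(bit_length=256)   # AES-256-GCM key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce, the GCM standard size

# encrypt() returns the ciphertext with a 16-byte authentication tag appended;
# the third argument is authenticated-but-unencrypted associated data.
ciphertext = aesgcm.encrypt(nonce, b"packet payload", b"esp header")

# decrypt() verifies the tag before releasing the plaintext...
plaintext = aesgcm.decrypt(nonce, ciphertext, b"esp header")

# ...and raises InvalidTag if the ciphertext was modified in transit.
tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 1])
try:
    aesgcm.decrypt(nonce, tampered, b"esp header")
except InvalidTag:
    tamper_detected = True
```

With AES-CBC, the same integrity guarantee requires a second HMAC-SHA pass over the data, which is part of why GCM is faster in practice.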
However, thousands of customers chose to encrypt the traffic being forwarded to the Zscaler service. Support for AES-GCM, a newer encryption standard, was added a few years ago. AES-GCM has the following advantages over the older AES-CBC standard:
- It provides built-in authentication, removing the need for a separate HMAC-SHA hashing function
- It is faster than AES-CBC because it allows parallelization and uses hardware acceleration such as AES-NI
- It is the more modern algorithm; TLS 1.3, for example, no longer supports AES-CBC
- It is not susceptible to some of the attacks that can exploit AES-CBC, such as padding oracle attacks
For these reasons, Zscaler is now updating its recommendation to use the AES-GCM encryption algorithm. Summary Zscaler strives to maintain a high level of security in the service offered to its customers. If you are currently using IKEv1 as the key exchange mechanism for IPSec forwarding to Zscaler, you should upgrade to IKEv2 as soon as possible. If you are using AES-CBC as the encryption algorithm and AES-GCM is available for your on-premises IPSec device, you should move to AES-GCM to take advantage of the improved performance and capabilities. If you are using NULL encryption as per our earlier recommendation, note that you will need to purchase an additional subscription to enable encryption; you can choose to continue using NULL encryption at no additional cost. Learn more about Zscaler guidance on traffic forwarding here. Thu, 02 Mar 2023 08:00:02 -0800 Mithun Hebbar Zero Trust Citizen Access As government applications move to the cloud and more and more services are provided digitally, the number of citizens accessing services online has increased dramatically.
With many organizations adopting work-from-home policies, and COVID-19 encouraging people to spend less time in public places, government agencies are under pressure to provide online services. One of the top priorities is to modernize application infrastructure and mitigate risks from lateral movement. Another priority is to improve user experience and efficiency, especially for citizen access to services. The goal is to enable constituents to interact efficiently and securely from anywhere, rather than having to come to a physical location and interact with paper. The 2021 Adobe Digital Government Services Survey showed that 77% of citizens surveyed would use more government services if they were accessible from the Internet. At the same time, few government agencies say they have the capabilities to deliver their services from the Internet in a secure and effective manner. Most government agencies host public-facing services for their citizens. Because there is no defined standard for consolidated agencies to comply with, each agency runs its own individual security program. Traditionally, organizations have protected data centers and offices with the castle-and-moat architecture. That was great when people drove to work every morning and citizens came to a physical location to interact with the government. Yet as people started working remotely, VPNs became a necessity, extending the network and creating more risk of compromise. Eventually, organizations started shifting to cloud infrastructure, leading to a natural evolution of extending the network into the cloud, exposing multiple attack surfaces. As network access has changed and expanded beyond the on-premises data center, the traditional architecture creates more opportunity for attacks and compromise. Who’s on the other end of the VPN, where are the connections coming from, and how do we prevent lateral movement across the network from a single access point?
We’re creating doors and windows of access within these protected networks, and it has become very challenging to protect data while delivering a good user experience for employees, partners, or citizens. The First Step Traditionally, citizens access government services by typing a URL into a web browser. There are inherent security risks with this approach, as citizens (users) are connected directly to the web application over the internet. The firewalls protecting the application present attack surfaces for bad actors, and vulnerabilities in firewalls or applications can be exploited. Some agencies have taken a first step to addressing these challenges by adopting some concepts of a “Zero Trust Architecture”. Zero Trust states, among other things, that no user should have access to any resource without first being authenticated. When citizens are authenticated prior to gaining access to services, agencies gain significant visibility and control over application access. As a result, many agencies have adopted identity programs that allow citizens to authenticate via social media platforms (Instagram, Google, LinkedIn) as well as using multi-factor authentication (MFA). This eliminates the need for agencies to manage login credentials for all citizens, while still providing the ability to manage access. This is a great first step that is essential in the journey to Zero Trust, but it still leaves some gaps. Even after authentication, an agency’s firewalls, applications, and services are still visible to the internet, and bad actors can still discover and attempt to compromise those components. We can begin to establish identity and understand who our users are, but the network is still reachable. The infrastructure is still on the Internet. Is there a way to make a secure connection without spending substantial resources on rearchitecting and redesigning the infrastructure or making massive changes in the near term?
Zscaler thinks there is, and that’s what leveraging zero trust strategies is all about. Zero Trust Citizen Access - How it Works ZTCA is a new capability in Zscaler Private Access (ZPA) that has been designed specifically to provide citizens with simple, secure and highly scalable access to any public web or legacy applications. Zscaler’s Zero Trust Citizen Access provides a complete Zero Trust architecture for government agencies by not only requiring user authentication for every citizen, but also removing firewall and application attack surfaces by “hiding” web applications - including legacy apps - from the internet, making them undiscoverable to potential attackers. Using ZPA’s - Browser Access solution functionality, Agencies do not need to maintain heavy security infrastructure. ZPA, serves as an Embedded Application Security architecture, and helps replace the following: Remote access services SSL portals External FW / IPS infrastructure DDoS protection Global Load Balancing Dedicated Internet & WAN circuits In contrast to the traditional approach discussed prior, citizens do not establish direct connections to firewalls, web servers, or applications. Citizen users are connected only to the Zscaler cloud once they are authenticated. The Zscaler cloud validates access policy for that user and leverages a technology called “Application Connectors” to connect that user to their requested application. Application Connectors sit in front of applications and allow them to communicate with the Zscaler cloud, but not the public Internet. This means that applications are not visible or discoverable from the internet, and because they can’t be seen, they can’t be attacked. Application Connectors effectively remove the attack surface from the process. When a citizen successfully authenticates, Zscaler proxies the connection between the citizen and the Application Connector – it “stitches” the connections together temporarily, for only the duration of the web session. 
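The authenticate-then-stitch flow described above can be modeled in a few lines. This is a toy sketch with hypothetical names, not Zscaler code: connectors register outbound with a broker, so the applications behind them expose no listener, and a user session is stitched to a connector only after authentication succeeds.

```python
# Toy model (hypothetical names, not Zscaler's implementation).
class Broker:
    def __init__(self):
        self.connectors = {}      # app name -> connector handler
        self.authenticated = set()

    def register_connector(self, app, handler):
        # Outbound-only registration: the app itself exposes no listener.
        self.connectors[app] = handler

    def authenticate(self, user):
        self.authenticated.add(user)

    def request(self, user, app, payload):
        if user not in self.authenticated:
            return "denied: unauthenticated"
        if app not in self.connectors:
            return "denied: no such app"   # invisible, not merely refused
        # "Stitch" the user session to the connector for this request only.
        return self.connectors[app](payload)

broker = Broker()
broker.register_connector("permits-portal", lambda req: f"response to {req!r}")
broker.authenticate("alice")
print(broker.request("alice", "permits-portal", "GET /"))    # served
print(broker.request("mallory", "permits-portal", "GET /"))  # denied
```

The key property the sketch illustrates is that an unauthenticated caller never reaches application code at all; the broker answers before any connection to the app exists.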
This means the application is only accessible to authorized users (citizens) who have authenticated to the Zscaler cloud. This approach also allows for a number of other measures to be taken that provide mitigation against a range of network-based attacks. Zero Trust is a cybersecurity strategy that focuses on restricting access to only those who need it at a given time. This approach has traditionally been used for internal networks, but it can also be applied to external-facing websites to create Zero Trust Citizen Access. This approach involves adding a layer of Zero Trust outbound-only connections to constituent-facing websites, reducing the vectors of compromise and increasing security. Implementing Zero Trust citizen access does not require a complete infrastructure overhaul, nor does it take years to implement. It is a layer that can be added onto existing infrastructure to provide near-term value. Citizen access remains the same, with users going to the same URLs and experiencing the same user interface. The only change is the addition of an identity prompt that allows for explicit usernames and passwords or social logins. ZTCA leverages application segmentation, a key facet of ZPA that creates a segment of one between a named citizen and a named application. It means that citizens are never brought onto the network and the application is never exposed to the public Internet. Public agencies can rely on ZTCA to deliver real-time visibility into citizen activity, identify citizens who access applications via browsers, eliminate the public attack surface, and reduce the risk of lateral movement, all while greatly increasing the scalability of their services. Standardizing critical infrastructure is essential for creating a robust cybersecurity posture, ensuring efficient communication between government agencies and their constituents.
Organizations should shift focus to modern cybersecurity maturity models that reduce attack surfaces, implement best practices, and standardize critical infrastructure. As technology evolves, it is essential to maintain the security and efficiency of critical infrastructure. For more information about Zero Trust Citizen Access: Download the Zero Trust Citizen Access solution brief Watch the Zero Trust Citizen Access webinar on demand Schedule a deep dive session with one of our zero trust experts Wed, 01 Mar 2023 10:49:21 -0800 Jose Padin Private Equity and Cyber Resiliency - A Zero Trust Approach After a record-breaking M&A year in 2021, 2022 saw many investors slowing the pace of deal activity and bringing transaction volumes back down to pre-pandemic levels. One area that has not seen a significant slowdown is within the private equity (PE) space. According to a McKinsey report, in just the first half of 2022, PE deal value contributed 26% of total deal value, and it is on track to outperform 2021 figures. While PE firms have already deployed approximately $2T of capital in 2021, they continue to raise funds and have substantial dry powder to influence valuations and premiums in 2023 and beyond. In fact, certain technology-oriented PE funds, like Thoma Bravo, continue to exceed funding expectations - raising a record $32B in 2022. This suggests that PE activity is likely to remain strong going into 2023. With public attention and record-breaking deal volume, PE firms and their portfolio companies have become targets for cybercriminals. According to a report from Lockton, PE firms from the UK, to Hong Kong, to the US are all seeing increased cyberthreats. 
Specifically, these bad actors are becoming more sophisticated in how they approach PE firms and their portfolio companies, with targeted attacks and cyberthreats like:
- Email spoofing
- Social engineering
- Targeted phishing attacks
- Malware
- Ransomware
- Denial of service
The news of these attacks has not typically made headlines to the same extent as public company breaches; however, the impact on the affected company, its valuation, and its operations is no less severe. Studies from Performance Improvement Partners have shown that PE firms and their portfolio companies face an equally challenging cyber landscape, with:
- 300% more cyberattacks on financial services organizations than peers in other industries
- 71% of organizations victimized by ransomware
- 63% of organizations paying the ransom
- SEC 4A rule changes - the regulatory environment is becoming more complex, with regulators imposing cyber disclosure requirements on private equity funds
These challenges–either in isolation or in aggregate–can significantly impact the entire PE investment lifecycle (e.g., from initial due diligence, to value creation, to eventual exit). In an era of rising interest rates leading to depressed valuations, and with investment horizons now extending to 7 or 8 years (from the historical 4- to 5-year period), PE firms need to find innovative ways to mitigate risk and create value. So, how are leading PE firms addressing these challenges? Most firms agree that the best cyber defense is preventing cyberthreats in the first place. Industry experts agree that the zero trust approach is the best technique to secure both PE firms and their portfolio companies. Zero trust is a set of technologies and functionalities that enable secure access to internal applications for remote users. It operates on an adaptive trust model, where trust is never implicit, and access is granted on a need-to-know, least-privileged basis defined by granular policies.
Please read our whitepaper to explore this topic further. Fri, 24 Feb 2023 08:57:37 -0800 Akshay Grover How to Cut IT Costs with Zscaler Part 3: Increasing Operational Efficiency In the current economic climate, it is not surprising that the majority of IT leaders are actively looking for ways to shore up their organizations to weather financial downturns, while also strengthening their security postures. In our blog series, “How to Cut IT Costs with Zscaler,” we are examining IT cost challenges and how organizations around the world are solving them with zero trust. In the first blog of this series, we discussed how to curb the rising cost of data breaches. Next, we looked at how organizations can optimize technology costs to drive economic value. In this third installment, we’ll explore how increasing operational efficiency reduces costs, and why perimeter-based security approaches fail to deliver the desired results. Let’s dive in. The rules have changed For the majority of businesses around the world, the pandemic propelled organizations from a slow and steady walk toward embracing digital transformation to an all-out sprint to make the shift to remote work and deploy cloud applications and services to enable their employees to work from anywhere. This transition changed the rules around where data and applications are accessed, how they are accessed, and on what devices. And in this new paradigm where data, users, and applications are broadly distributed, the risk increases exponentially. The result is a complicated dynamic where organizations must enable users to access a multitude of applications hosted in the cloud—connecting from personal devices that neither users nor businesses intended to ever connect to the corporate network—all while being provided consistent protection, regardless of their connection location. 
Things have gotten complicated This foundational shift has left organizations that use perimeter-based architectures to secure their hybrid workforce more vulnerable to attack. In addition to increased risk and the high CapEx investment required (as described in earlier blogs in this series), using a perimeter-based model that leverages firewalls and VPNs leads to a number of significant costs and challenges related to operational efficiency. Operational complexity of managing multiple hardware solutions Perimeter-based security models utilize a wide range of appliances at each internet access point to provide security and connectivity. Sometimes the appliances are physical hardware, sometimes they are virtualized, but often you find a mixture of both. Regardless of the form factor, you can’t simply set it up and forget it. Managing a perimeter-based security architecture is a hands-on sport that requires significant overhead to deploy, configure, provision, and test solutions, and then troubleshoot issues post-deployment. That doesn’t even take into account the cost and complexity of patch management, security updates, software upgrades, and constantly refreshing aging equipment as organizations grow, users and applications become more dispersed, and the variety of solutions expands. It’s enough to exhaust even the largest and most efficient IT teams. The bigger the network, the more operational complexity and time required. Duplicating and updating policies across disjointed security tools On top of the operational complexity mentioned above is the challenge of uniformly applying consistent security policies across your organization. It requires manually replicating policies in multiple dashboards across disjointed tools with varying capabilities, and ensuring that they are continually kept up to date. At best, it’s an extremely onerous exercise. 
In truth, it places an unrealistic burden on your IT teams to tailor policies to each individual solution—a costly burden that ultimately leaves the company with fragmented policy configurations, soaring operational costs, and considerable security gaps. Tedious management oversight from skilled and expensive labor Addressing both of these challenges requires a team of highly-skilled employees who know the ins and outs of getting disjointed tools to work together. In the current market—where this skillset is in high demand—it is going to cost organizations a lot of money. Otherwise, businesses will be left asking already stretched IT teams to do more. That simply isn’t a sustainable option. Tackling these challenges with the Zscaler Zero Trust Exchange Fortunately, organizations can tackle these challenges and reduce costs by increasing operational efficiency with a zero trust architecture. The Zscaler Zero Trust Exchange is an integrated, cloud-delivered platform of services that delivers direct-to-cloud connectivity that enables organizations to consolidate point products and eliminate security appliances. Figure 1: The Zscaler Zero Trust Exchange According to a recent ESG Economic Value Report, organizations can eliminate up to 90% of their security appliances by implementing the Zscaler Zero Trust Exchange. Minimizing the number of point product appliances using the cloud-based Zero Trust Exchange dramatically reduces the operational complexity of policy management and appliance maintenance requirements. Figure 2: How Zscaler Cuts Operational Costs This, combined with the fact that Zscaler handles change implementation like patches and updates in its cloud and automates repeatable tasks, further simplifies operations and frees up 74% of an admin’s time to focus on more strategic projects that drive value to the organization. 
Figure 3: Practitioner Time Savings ESG Economic Validation Study Stats based on customer experiences–results may vary To further illustrate these benefits, let’s take a look at two Zscaler customers. Baker & Baker reduced their administrative and overhead costs by 70% over firewall and VPN infrastructure with the Zero Trust Exchange. Similarly, Commonwealth Superannuation Corporation (CSC) reduced infrastructure complexities by 90% and achieved a 30% reduction in management overhead. Ultimately, embracing a zero trust architecture with Zscaler helps customers reduce operational complexity and costs, contributing to the average customer achieving an ROI of 139%. Take the first step Discover how a zero trust architecture can help your organization reduce costs through improving operational efficiency by downloading our white paper, “Delivering Unparalleled Security with Superior Economic Value: The Power of the One True Zero Trust Platform.” Also, explore how Zscaler customers like you have reduced operational complexity and eliminated inefficiencies to capture real economic value with the Zero Trust Exchange in our ebook, “How Companies Reduce Costs with the One True Zero Trust Platform.” Stay tuned for the next blog in this series, where Jacob Serpa will explore how Zscaler delivers superior economic value by enhancing productivity and collaboration. Thu, 23 Feb 2023 10:30:09 -0800 Jen Toscano The Top Data Protection Challenges for an Enterprise As organizations navigate the digital landscape, protecting sensitive data from breaches and insider threats, while adhering to regulatory compliance, has become a paramount concern. As more and more data is migrated to the cloud, the challenges of maintaining visibility, security, and governance over that data have become increasingly complex. 
In this blog, we will delve into three of the top challenges organizations face as they strive to protect their data, and how Zscaler can help reduce risk while also increasing productivity. Challenges: 1. Enabling cloud app productivity while reducing risk In today’s digital world, there is an increasing emphasis on user productivity and collaboration. This means users can work from anywhere and have the ability to access and share data as needed. This presents a big challenge to IT teams: how to enable the best user experience without compromising security. With data widely distributed and accessible over the internet, legacy data center security just can’t keep up. IT teams need a more modern way to secure these connections and data. Data protection technologies like DLP and CASB are an important ingredient in solving this challenge. 2. Preventing accidental data exfiltration Accidental data exfiltration is another big challenge when it comes to data protection, often caused by users forgetting security best practices. One of the biggest examples is the GitHub credential exposure problem. There have been numerous cases where developers inadvertently commit sensitive information—such as passwords, API keys, and other credentials—to a GitHub repository. Once the sensitive information is on GitHub, it can be easily discovered by bad actors using automated tools, who can then use the credentials for malicious purposes such as accessing sensitive data, stealing identities, or launching attacks on other systems. Another issue is collaboration on SaaS applications. SaaS data can be easily shared with unauthorized users. It takes literally two clicks to share SaaS data at rest, which can cause users to accidentally share sensitive data. 3. Protecting data from insider threats Insider threats can pose a significant risk to organizations, as insiders (employees, partners, contractors, etc.)
have authorized access to company systems and may have knowledge of the organization's policies and procedures. Insider threats can come from malicious users who want to steal the “secret sauce,” but are often simply due to user error. Here is an example of a user error that recently exposed sensitive patient information: In one large breach, a global organization blamed user error for leaving a list of credentials online for more than a year, exposing access to sensitive patient data. The developer left the credentials for an internal server on GitHub in 2021. The credentials allowed access to a Salesforce cloud environment containing sensitive patient data. Another example, where the exposure happened due to a malicious insider: In July 2020, it was revealed that an employee of another large organization had stolen valuable proprietary data and trade secrets over a period of eight years. This employee, who was seeking to use the information for their own professional gain and to start a rival company, gradually exfiltrated more than 8,000 sensitive files from the company's systems. It was discovered that the employee had convinced an IT administrator to grant them access to the files and had emailed commercially sensitive calculations to a co-conspirator. How do we solve these challenges? Solving all three of these challenges starts with the right security architecture, which revolves around a unified cloud platform, as defined by Gartner’s Security Service Edge. Let's explore the key steps needed to transform your data security with this architecture. Visibility Visibility is the starting point of any data protection plan. Unless you have visibility into the “what, where, and how” of your applications and data, you cannot implement a strong data protection program. Visibility covers a broad spectrum of use cases to make sure you do not have any blind spots.
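Visibility work of this kind typically starts from traffic logs: aggregate which applications are actually in use, then rank them by risk to find blind spots. A toy sketch with a hypothetical log schema and illustrative risk scores (a real deployment would pull both from its security platform, not hardcode them):

```python
from collections import Counter

# Hypothetical proxy-log entries and illustrative app risk scores.
logs = [
    {"user": "alice", "app": "chat.example"},
    {"user": "bob",   "app": "chat.example"},
    {"user": "bob",   "app": "filesharing.example"},
]
risk_scores = {"chat.example": 60, "filesharing.example": 85}

usage = Counter(entry["app"] for entry in logs)   # app -> observed hit count
by_risk = sorted(usage, key=lambda a: risk_scores.get(a, 0), reverse=True)
print(by_risk)   # highest-risk discovered apps first
```

Even a minimal inventory like this reveals unsanctioned apps and their usage volume, which is the input the visibility steps below depend on.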
Visibility into applications With thousands of cloud applications being used—many of which are not IT approved—the first challenge is to efficiently get visibility into all the applications in use across the organization and review their potential for risk. Visibility into application instances You also need visibility into different application instances (e.g., determining whether an application being used is a personal or corporate instance). Can you see across different tenants? For example, due to an M&A you may have multiple corporate instances of the same application across the parent and acquired company. Visibility into data Organizations need visibility into what kind of data is being uploaded to SaaS applications. Often, organizations don’t want to block every single application, but do want control over the data being uploaded. For example, sensitive data may be uploaded to sanctioned applications, or malware may be downloaded from them. Organizations also need visibility into what kind of data exists on corporate applications, and must ensure that it is appropriately classified, not overshared, and in compliance with regulations like GDPR, CCPA, and HIPAA. In 2018, a healthcare center reported a data breach in which the threat actor managed to access the PHI of more than 300,000 patients. To prevent such incidents, an organization needs to first understand where its most sensitive data is stored and the risks associated with it, then put appropriate controls in place to safeguard the data. This is easily implemented with data discovery and DLP classification to identify, classify, and secure sensitive data across your organization. Visibility into user activity Another important element of visibility is understanding user activity (e.g., are there sudden download spikes from a particular user?). Visibility into user activities can help companies gain insight into potential threats or breaches.
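The download-spike question above can be reduced to a simple baseline comparison: flag any user whose volume today exceeds some multiple of their trailing average. A toy sketch (hypothetical data, and real anomaly detection would use far richer signals):

```python
# Flag users whose download volume today exceeds k times their trailing mean.
def spikes(history, today, k=3.0):
    flagged = []
    for user, volumes in history.items():
        baseline = sum(volumes) / len(volumes)   # trailing mean in MB
        if today.get(user, 0) > k * baseline:
            flagged.append(user)
    return flagged

history = {"alice": [100, 120, 90], "bob": [200, 180, 220]}  # MB per day
today = {"alice": 2000, "bob": 210}
print(spikes(history, today))   # alice's 2000 MB is far above her baseline
```

A threshold on a per-user baseline, rather than a single global limit, is what lets the same rule catch a quiet user suddenly exfiltrating data without flagging naturally heavy users.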
Visibility into application settings Visibility into application settings is another important aspect of data protection. Some of the key elements of application settings that you might want visibility into are: 1. SaaS application posture It’s imperative to understand the posture of all the SaaS applications being used in your organization and ensure all security configurations align with the latest compliance frameworks. For example, a weak password policy or disabled MFA for some users can make the application vulnerable to attacks. Manually assessing hundreds of corporate applications in an organization is a challenging and lengthy process. 2. Third-party applications Organizations need visibility into all the third-party applications that have been enabled using corporate credentials. This is important because when an employee logs in, the third-party application asks for permission to access data (e.g., read access to Google Drive, Gmail, etc.). When the employee grants these permissions, the application gains access to their corporate Google Workspace account without the IT department knowing about it. This creates issues because employees can enable a number of applications with their corporate account, and some applications are not safe (e.g., if granted access to Gmail, an application can send rogue emails). At-scale inspection of all traffic In addition to ensuring visibility, organizations should be able to inspect SSL traffic at scale; without that, organizations would still have blind spots. In addition, all ports and protocols should be covered by the inspection to gain full visibility. Granular Controls Another important prerequisite for solving these challenges is the ability to have granular controls in each of these areas: 1.
Integrated shadow IT visibility and control
- View usage of all cloud applications based on the risk score
- Identify risky apps with high volumes
- Consider blocking high-risk apps for file sharing and webmail categories
- Restrict access to corporate applications using tenancy restrictions where possible

2. Data classification and remediation
- Data protection without content inspection
- Data protection with content inspection for data in motion
- Data discovery and exposure for data at rest in sanctioned apps

3. Application settings
- SaaS security posture management controls
- Third-party OAuth control

4. Bring your own device (BYOD) controls

Now, that’s quite a list. So, the question is: how does someone start? We recommend a crawl, walk, and run strategy to implement data protection in your organization. Let’s go over how you can implement this strategy successfully and overcome the various challenges discussed earlier.

Challenge: Enabling Cloud App Productivity while reducing risk
- Crawl phase (understand your environment): Monitor the applications employees use, with risk scores and security attributes, to assess risk exposure and identify necessary controls; identify the top unsanctioned applications that have the most file uploads; generate a report on applications’ admin settings and misconfigurations.
- Walk phase (prevent dangerous events): Implement policies to block complete access to apps based on risk score; block access to applications based on certain risk attributes such as poor terms and conditions, suspicious locations, or anonymous access; use tenant restrictions to block access to personal instances of SaaS and IaaS apps where business-sensitive data can be copied; have admins assign misconfigurations to their respective owners for manual fixes.
- Run phase (implement advanced controls): Implement granular cloud application controls, such as allowing viewing but blocking uploads, posts, etc.; restrict data access from BYOD/unmanaged devices for sanctioned applications; prevent download, copy, or print of data when sanctioned apps are accessed via unmanaged devices; block access for anomalous users and devices.

Challenge: Prevent accidental data exfiltration
- Crawl phase: Scan your most critical applications; identify sensitive data that is externally or publicly shared; monitor for any malware in your environment; identify all corporate and personal code repositories; scan corporate repositories for hardcoded AWS, Azure, GCP, SSH, and other keys; make sure there are no public repositories; discover third-party applications.
- Walk phase: Notify and coach your end users on violations; identify bulk downloads of data; manually remediate high-risk violations; quarantine malware; create exclusions for executives and highly sensitive data.
- Run phase: Scan all of your applications; automatically remediate sharing violations; identify third-party OAuth access and block rogue applications.

Challenge: Protecting Data from Insider Threats
- Crawl phase: Monitor the applications in use and which users access each app; identify which unsanctioned applications have the most file uploads; monitor the sensitive data types being uploaded (these can vary by industry); look for tagged files (AIP/MIP) and password-protected files being uploaded; identify bulk downloads of data.
- Walk phase: Create an incident management program to monitor files; block high-risk exfiltration to unsanctioned applications; educate and coach end users to use sanctioned applications; recalibrate your rules.
- Run phase: Gain better detection through EDM, IDM, and OCR; automate and recalibrate your rules; create a honeypot of sensitive data and watch for anyone trying to steal it; create a user group for departing employees and enforce tighter controls.

The Zscaler Data Protection Solution is a simple but powerful way to secure all channels, ensuring the protection of all users anywhere and controlling data in SaaS and the public cloud, all backed by a robust and intuitive data discovery engine.
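The SaaS posture checks called out earlier (a weak password policy, MFA disabled for some users) can be modeled as simple rules evaluated against each tenant's settings. The field names, baseline values, and config shape below are illustrative assumptions, not Zscaler's SSPM data model:

```python
# Hypothetical SSPM-style posture check: flag SaaS tenant settings that
# violate a security baseline. Fields and thresholds are illustrative only.

BASELINE_MIN_PASSWORD_LENGTH = 12

def assess_posture(tenant_config):
    """Return a list of misconfiguration findings for one SaaS tenant."""
    findings = []
    if not tenant_config.get("mfa_required", False):
        findings.append("MFA is not enforced for all users")
    if tenant_config.get("min_password_length", 0) < BASELINE_MIN_PASSWORD_LENGTH:
        findings.append("Password policy is weaker than baseline")
    if tenant_config.get("anonymous_sharing", True):
        findings.append("Anonymous link sharing is enabled")
    return findings

# Example tenant with disabled MFA and an 8-character password minimum:
risky = {"mfa_required": False, "min_password_length": 8, "anonymous_sharing": False}
print(assess_posture(risky))
```

Running checks like these continuously, rather than during periodic manual audits, is what makes assessing hundreds of applications tractable.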
With Zscaler’s data protection solution, you get an integrated platform providing you with:

Cloud Data Loss Prevention - Prevents data loss to the internet by inspecting all internet and SSL traffic across all ports and protocols. The Zscaler DLP solution is backed by an advanced data classification engine that supports techniques like machine learning, EDM, IDM, and OCR.

Cloud Access Security Broker (CASB) - With Zscaler’s integrated CASB, organizations can restore SaaS app control without the cost and complexity of third-party overlays. Get complete shadow IT visibility, block risky apps, and quickly identify dangerous data sharing—all with a single, unified DLP policy.

Security Posture Management - Zscaler Cloud Security Posture Management (CSPM) and SaaS Security Posture Management (SSPM) scan public and SaaS clouds for risky settings or compliance violations and enable rapid remediation.

Cloud Browser Isolation - Zscaler Cloud Browser Isolation restores data control over BYOD without requiring a problematic reverse proxy deployment. With Cloud Browser Isolation, you can stream data to BYOD as pixels only, enabling safe access and viewing while preventing download, copy, and printing.

Thu, 23 Feb 2023 08:00:01 -0800 Megha Bindal

How to Balance Cloud App Productivity and Security with Zscaler

Your users access hundreds of sanctioned and unsanctioned cloud applications every day. They constantly upload and download data that needs protection. This forces IT and security teams to decide: should they take a sledgehammer approach and block the application, or should they allow access and deal with the consequences later? One of IT’s biggest concerns is how to safely enable access to cloud applications without affecting user productivity.
With the Zscaler Data Protection solution, IT can achieve this goal by providing granular controls for cloud applications to block only the risky activities instead of the entire application. Cloud App Control policies let organizations control access to cloud applications at a granular level, based on users, tenants, domains, and activities. Zscaler’s inline CASB identifies corporate vs. personal tenants along with activities such as upload, download, share, edit, post, view, login, logout, and more in real time—and provides the control needed to manage application access. Let’s look at how Zscaler solves some different use cases. Tenancy restrictions Many users have personal instances of corporate-approved SaaS apps, such as OneDrive, Outlook, Gmail, and others. Separately, they’re fine—but using both their corporate and personal instances of an app at once could lead to a data leak. Imagine an employee uploading next quarter’s sales projections to a personal Google Drive instead of a corporate one. It could lead to not only a data leak, but also possible damage to the company's reputation. Similarly, organizations that use AWS want to restrict users’ access to certain critical accounts. Schools that use YouTube EDU want students to have access only to the content the school has selected. In scenarios like these, you can use Zscaler's tenancy restriction feature to restrict access to personal accounts, corporate accounts, or both for certain cloud applications. Simply create a Tenant Profile, specify the allowed tenants in it, and associate it with the respective Cloud App Control policy. SaaS apps deny access to all tenants not explicitly mentioned in the Tenant Profiles. Tenant Profiles also provide options to restrict access to personal Microsoft 365 tenants, consumer access to Google accounts, and so on. 
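For Microsoft 365 specifically, tenant restrictions work by having an inline proxy inject headers on traffic bound for the Microsoft login endpoints. The header names below are Microsoft's documented tenant-restriction headers; whether any given proxy (Zscaler included) uses exactly this plumbing is an assumption here, so treat the sketch as conceptual:

```python
# Conceptual sketch of tenant restriction enforcement for Microsoft 365:
# an inline proxy adds the tenant-restriction headers to requests headed
# for the Microsoft login endpoints. Proxy plumbing is illustrative only.

LOGIN_HOSTS = {"login.microsoftonline.com", "login.microsoft.com", "login.windows.net"}

def apply_tenant_restrictions(host, headers, allowed_tenants, directory_id):
    """Return headers with tenant restrictions applied for MS login hosts."""
    if host in LOGIN_HOSTS:
        headers = dict(headers)  # do not mutate the caller's headers
        headers["Restrict-Access-To-Tenants"] = ",".join(allowed_tenants)
        headers["Restrict-Access-Context"] = directory_id
    return headers

out = apply_tenant_restrictions(
    "login.microsoftonline.com", {}, ["corp.example.com"],
    "00000000-0000-0000-0000-000000000000",
)
```

Because the headers only permit the listed tenants, a user on the corporate network can sign in to the corporate instance but not a personal one.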
Tenant Profile configuration

Domain-based access to SaaS applications

In today’s highly connected world, most organizations can’t work in silos—they work with multiple partners and third-party vendors. To collaborate with these partners, you need to give them access to your corporate collaboration or file-sharing applications. However, giving partners full access to corporate domains may lead to data exfiltration. In another scenario, a developer syncs their corporate GitHub repository with their personal GitHub account, leading to exfiltration of the source code, along with any hardcoded credentials such as AWS keys and passwords. To ensure that employees, partners, and other users can access only allowed instances of an application, Zscaler provides the Cloud Application Instance feature. With it, you can create multiple instances of a cloud application based on different domains (corporate, partner, trusted/untrusted) and add them as criteria in Cloud App Control policies. For example, you could allow partners access only to their partner Box instance, or allow developers to log in to corporate GitHub accounts only from corporate networks, preventing accidental data leakage. The Zscaler DLP solution also supports cloud application instances: you can choose cloud application instances and create rules based on the content. For example, allow employees to upload files to their personal Box accounts, but block the upload if a file contains sensitive data such as PII, PHI, or PCI data.

Cloud Application Instance configuration

Activity-based access to SaaS applications

You need fine-grained control over what users are allowed to do in a SaaS application. For example, your organization may not want employees to rename or add comments to files uploaded to corporate OneDrive. You might want to restrict employees from posting sensitive content on social networks but still let them view those sites.
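The instance-, activity-, and content-based controls described above can be pictured as a toy policy evaluation. The app names, rule shapes, and the single PII pattern are illustrative assumptions, not Zscaler's actual policy model:

```python
import re

# Toy policy evaluation combining application instance (corporate vs. personal),
# user activity, and content inspection. Rules and patterns are illustrative.

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # simplistic PII example

def evaluate(app, instance, activity, content=""):
    """Return 'allow' or 'block' for one user action."""
    # Block posting to social networks, but allow viewing.
    if app == "social" and activity == "post":
        return "block"
    # Allow uploads to a personal Box instance unless the file contains PII.
    if app == "box" and instance == "personal" and activity == "upload":
        return "block" if SSN_RE.search(content) else "allow"
    return "allow"

assert evaluate("social", "corporate", "view") == "allow"
```

The point of the sketch is the decision order: instance and activity narrow the rule, and content inspection makes the final call only where it is needed.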
Zscaler Cloud App Control policies provide fine-grained, activity-based access controls for critical applications across all categories. You can easily create rules to block activities like file renaming and posting on social media while still allowing other activities across these applications.

Fine-grained rule configuration

In addition to Cloud App Control, Zscaler monitors millions of web and SaaS applications across hundreds of categories with powerful URL filtering. Leverage this to allow or block users’ access to these applications as well as any custom applications. To find out more about the capabilities of Zscaler Data Protection, you can explore our other resources or watch these short demo videos.

Tue, 21 Feb 2023 12:55:32 -0800 Niharika Sharma

Achieve Dynamic Zero Trust Services Automation with Zscaler and Consul-Terraform-Sync

Migrating to the cloud offers organizations greater scale and agility for deploying applications. But with that agility comes greater complexity and a higher volume of manual tasks. These challenges prevent operators from taking full advantage of the benefits the cloud offers and increase the strain on their teams. To address these challenges, operators need a way to automate and optimize their existing processes to move at the speed that cloud networking demands. The complexity of managing security policies and compliance for those applications, exacerbated by the technical difficulties faced by security teams who use manual processes for change management, may lead to delays in implementation and operations, as well as security risks. Applications can be made continuously secure and reliable through closer collaboration between the DevOps and DevSecOps teams, via practices that reinforce security at every stage of the development pipeline. Transparent security promotes expedited application deployment and makes the DevOps and platform teams equal stakeholders in producing highly resilient and secure applications.
The DevOps Promise and Reality

The promise of DevOps is to deliver fully automated continuous integration, delivery, and deployment all the way to the point where users can consume those services. The reality in many organizations, however, is that they still cannot implement a fully automated process when deploying applications into their production environment. Instead, many organizations rely on manual processes and multiple teams to manage their day-N operations, which hinders making those applications available to the end user quickly, consistently, and securely. Application developers who need to scale their applications are typically required to create change management tickets that flow through multiple teams in the organization, such as system admins, network admins, and security operations, each with its own timelines and requirements to ensure a change can be deployed. In some cases this process may require multiple change requests, and in the end there is still a need to manually configure the required changes for final consumption. These manual handoffs increase the risk of mistakes, slow the process, and prevent a standardized deployment model.

Consul-Terraform-Sync

Network Infrastructure Automation is how HashiCorp Consul addresses the complexities of cloud-based services, enabling dynamic updates across a multi-cloud environment to ensure consistent security and compliance at the speed applications are developed, deployed, and made available for user consumption. One way Consul provides infrastructure automation is through Consul-Terraform-Sync (CTS). CTS runs a daemon that watches Consul state changes at the application layer (service health changes, new instances deployed, etc.) and forwards the data to the Zscaler Terraform modules, which are then automatically triggered.
CTS uses Terraform as its underlying automation tool and leverages the Terraform Provider ecosystem to drive relevant changes. Combined, these capabilities allow organizations to automate their day-N operations so that the infrastructure is in constant alignment with the application state, while the entire process is abstracted into a declarative model, as displayed in the picture below. In addition to all the benefits mentioned thus far, CTS guarantees that your automation process across the Zscaler platform is easily repeatable with consistent results.

Zscaler + Consul-Terraform-Sync (CTS)

Many organizations leveraging Zscaler to secure their cloud environments and control their users’ access via zero trust policies are adopting more of a DevOps mindset every day. These organizations require agility and tools that will enable their teams to deploy applications quickly and securely regardless of the environment where those applications will be hosted. Zscaler’s integration with Consul-Terraform-Sync (CTS) provides three different modules for complete automation of day-N operations.

Manage ZPA Application Segments: With the CTS module for ZPA, you can automate application segment and application server creation based on access requirements originating from the Consul Services Catalog.

Orchestrate Firewall Management: Dynamically automate IP source group changes in the ZIA Cloud Firewall with the CTS module for ZIA to ensure strict adherence to security and compliance policies.

Using Consul-Terraform-Sync (CTS), Zscaler and HashiCorp Consul can facilitate day-N dynamic updates across the Zscaler platform based on application and security teams’ demands. This joint solution was designed with scalability in mind while maintaining a zero trust model.
As new services are registered or deregistered from the Consul catalog, Consul-Terraform-Sync updates application segments or application server IP addresses, FQDNs, and TCP ports for the relevant applications in the Zscaler Private Access platform. The module is also designed to update IP Source Groups in the Zscaler Internet Access Cloud Firewall module to ensure only authorized IP addresses monitored by Consul are filtered via predefined Cloud Firewall rules.

Zscaler Private Access with Consul-Terraform-Sync (CTS)

Zscaler Private Access provides two CTS automation modules, which leverage the ZPA Terraform Provider.

ZPA Application Segments: From a ZPA perspective, an application is a fully qualified domain name (FQDN), local domain name, or IP address that is defined by an administrator on a standard set of ports. An application segment resource groups a set of defined applications based on access type or user privilege. This CTS module is designed to add, update, or delete application entries in an application segment as new applications are registered in the Consul catalog.

ZPA Application Servers: Although Zscaler Private Access can dynamically discover each application in the environment as users access them, there are specific cases in which organizations may want to individually define each application server construct. The Application Server CTS module will add, update, or delete individual application server objects based on Consul catalog updates.

Zscaler Internet Access with Consul-Terraform-Sync (CTS)

The Zscaler Internet Access CTS module utilizes the ZIA Terraform Provider to create, update, or delete Source IP Group entries. Source IP Groups let you group and control source IP addresses within the Zscaler Cloud Firewall by specifying individual IP addresses. This CTS module will dynamically add, update, or delete individual application IP addresses within a Source IP Group.
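Conceptually, what these modules automate is a reconciliation: diff the current Consul service catalog against the entries in a ZPA application segment (or a ZIA Source IP Group) and apply only the additions and removals. Real CTS drives this through Terraform; the dict-and-set sketch below is purely illustrative, with made-up service names:

```python
# Conceptual reconciliation in the spirit of Consul-Terraform-Sync:
# compute the adds/removes needed so the Zscaler-side entries match the
# Consul catalog. CTS itself does this via Terraform modules.

def reconcile(catalog_entries, segment_entries):
    """Return (to_add, to_remove) so segment_entries matches catalog_entries."""
    catalog = set(catalog_entries)
    segment = set(segment_entries)
    return sorted(catalog - segment), sorted(segment - catalog)

# A new instance was registered and a stale one deregistered in Consul:
to_add, to_remove = reconcile(
    ["web-1.service.consul:8080", "web-2.service.consul:8080"],
    ["web-1.service.consul:8080", "web-old.service.consul:8080"],
)
```

Because the desired state is always the catalog, the loop is idempotent: running it again after convergence produces empty add and remove sets, which is what makes the automation safely repeatable.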
Benefits

Eliminate manual ticketing processes: Consul-Terraform-Sync is designed to automate many different tasks across cloud environments that are traditionally handled manually by DevOps teams—for example, updating entries at scale in a Zscaler Private Access application segment, or updating IP source group entries in Zscaler Internet Access that are automatically reflected in cloud firewall rules.

Native integration between CTS and Terraform Cloud: Organizations leveraging Consul Enterprise and Terraform Cloud can integrate Consul-Terraform-Sync via its native “terraform-cloud” driver. Leveraging Terraform Cloud with CTS brings multiple benefits, such as creating different project folders and workspaces for different requirements, as well as moving workspaces between projects. Terraform Cloud provides the ability to configure notifications to external systems such as webhooks, Teams, email, and Slack. If the organization requires a review of a particular configuration before it is applied to the production environment, customers can configure these notification capabilities to send a webhook notification to an ITSM system such as ServiceNow for incident creation and approval.

Adopt best practices and reduce risk: Minimize the impact of misconfiguration errors across multiple ZPA application segments and ZIA IP Source Groups. This CTS integration will not only help organizations reduce their risk but also ensure that their ZPA and ZIA constructs are kept up to date with the state of their real application environment as changes occur.
Related Resources:
- Zscaler Terraform Providers
- Zscaler and HashiCorp Partnership
- Consul-Terraform-Sync Announcement

Tue, 21 Feb 2023 08:00:01 -0800 William Guilherme

The Impact of Public Cloud Across Your Organization

In this second blog of our six-part series on The Impact of Public Cloud Across Your Organization, we are going to look at Cloud Native Application Protection Platforms (CNAPP) through the lens of a cloud operations team. Hopefully, we can answer the question, “how can CNAPP help CloudOps teams do their jobs more effectively?” DISCLAIMER: I know, I know, I know…The term “cloud operations” has countless variations. What is under that moniker at one organization may not be 100% aligned with yours. I get it, I do. I am going to use some generalities here and focus on critical capabilities, with the hope that many of those capabilities fall within your cloud operations team’s organizational responsibility. As they say, your mileage may vary. In this post, we are going to focus on the job functions listed below. Other responsibilities (e.g., DR planning and testing, training, etc.) are out of the scope of this discussion.
- Establishing and maintaining a catalog of approved assets (e.g., approved images for compute deployments)
- Asset management, including both initial deployment and configuration of cloud assets and ongoing management
- Monitoring of the cloud estate for usage, configuration drift, and other operational issues
- Patching and updating of cloud resources as required
- Coordinating with internal Lines of Business (LOB) owners, security, and network teams for incident resolution
- Troubleshooting
- General access and security of cloud resources
- Identity and access management responsibilities

As public cloud operating models become more mainstream for enterprises, identifying and delivering an approved service catalog to internal teams is of paramount importance.
The pace of new services rolled out every month in the public cloud space is much faster than traditional enterprise software. Ungoverned consumption of these services can lead to cost overruns, entitlement complexities, insecure configurations, and potentially new threat vectors. It is not enough that the cloud operations team have principal responsibility for defining this catalog; they must also manage the existing cloud estate to identify drift from those approved services.

Understanding What You Have

“You cannot protect what you don’t know” is one of the first principles of any enterprise cybersecurity strategy. It follows that clarity of consumption is the first and primary step in any operations team’s ability to protect the enterprise in the public cloud space. At the core of this need is the ability to quickly and intuitively survey the entire multi-cloud estate in real time. The ability to see what resources are deployed and into what regions, and to identify appropriate and required tagging frameworks, is simply table stakes for any cloud operations team. Understanding and visualizing the changes that have occurred on a given asset, and the identities responsible for those changes, allows the operations team to either remediate or assign the remediation task(s) to the appropriate department. All of the major Cloud Service Providers (CSPs) have consoles to see deployed resources and services, but for many organizations, consuming multiple cloud service providers is the reality. Being able to cleanly pull all the asset data from each of these disparate environments is critical to a speedy evaluation of those deployed assets and services. Cloud operations teams also operate through a policy lens. The ability to programmatically investigate and report on unsanctioned deployments should also be a critical capability, not only for reducing risks but for saving time and costs as well.
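Pulling asset data from disparate CSPs cleanly usually means normalizing each provider's records into one common schema, which can then be queried for policy violations such as missing required tags. The field names below mimic (but do not exactly reproduce) each cloud's API responses, and the tag policy is a hypothetical example:

```python
# Sketch of multi-cloud asset normalization plus a simple tag-policy check.
# Provider field names and the required-tag policy are illustrative only.

REQUIRED_TAGS = {"owner", "environment"}

def normalize(provider, raw):
    """Map a provider-specific asset record to a common schema."""
    if provider == "aws":
        return {"id": raw["InstanceId"], "region": raw["Region"], "tags": raw.get("Tags", {})}
    if provider == "azure":
        return {"id": raw["name"], "region": raw["location"], "tags": raw.get("tags", {})}
    raise ValueError(f"unknown provider: {provider}")

def missing_tags(asset):
    """Return the required tags this asset lacks, sorted for stable output."""
    return sorted(REQUIRED_TAGS - set(asset["tags"]))

asset = normalize("aws", {"InstanceId": "i-0abc", "Region": "us-east-1", "Tags": {"owner": "lob-a"}})
```

Once every provider's inventory lands in the same shape, a single report over the combined list replaces one investigation per console.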
Prioritizing the Focus

Cloud operations teams also focus on patching and updating existing assets and services. For many, the difficulty lies in understanding which services should be the focus of these activities. If an organization has 10,000 compute instances across multiple clouds, and 80% of them have a brand-new vulnerability just reported in the news, that data point, in and of itself, does not help identify the instances that pose an imminent risk. A cloud security operating platform should be able to understand and correlate other risk signals to help focus the team’s response. Upon learning of the new vulnerability, operations teams should be able to identify which vulnerable assets have other attributes such as:
- Public exposure
- Access to storage accounts or buckets containing sensitive data
- Strong identities and entitlements
- Environmental tags

These additional signals are what allow cloud operations teams to prioritize the assets that represent the highest risk.

Who Has Access?

Cloud operations teams also need to look beyond asset configurations, understanding the identities and entitlements related to those assets. Many organizations start their “cloud journey” (come on, you know I had to use that term at least once) with a LOB asking whether an application currently running on-prem can run effectively in a public cloud environment. The benefits of scale, elasticity, global reach, and costs can drive these evaluations. Security often does not. In fact, application owners might not have deep background knowledge of cloud infrastructure concepts. The differences between identity and access management (IAM) in a traditional data center and a public cloud environment can also be extremely daunting. Roles are assigned not only to humans but to services (e.g., compute, secrets, storage), and the interplay between them can be complex.
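One way to reason about that interplay is as reachability over "can access / can assume" edges between identities, roles, and resources. The identities, edges, and names below are hypothetical; a real CNAPP builds this graph from actual IAM policies:

```python
from collections import deque

# Sketch of entitlement analysis as graph reachability. Edges mean
# "can access or can assume"; all nodes and edges here are hypothetical.

def can_reach(edges, source, target):
    """Breadth-first search: can `source` transitively reach `target`?"""
    queue, seen = deque([source]), {source}
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

edges = {
    "compute-admin": ["instance-1"],    # console access to the instance
    "instance-1": ["instance-role"],    # instance runs with this role
    "instance-role": ["data-bucket"],   # role grants bucket access
}
```

Here `can_reach(edges, "compute-admin", "data-bucket")` is true even though the admin has no direct bucket entitlement: the path runs through the instance and its role, which is exactly the kind of transitive access that is hard to spot by reviewing each policy in isolation.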
An application owner may simply find it expedient in these early evaluations to give themselves (or cloud services) “root”-like privileges just to remove some of these complexities. Over time, as more applications move to the public cloud, these robust entitlements become a threat vector in and of themselves. As the enterprise matures and develops a robust cloud operations structure, it becomes incumbent on them to go back and work toward a least-privilege model. On the surface, IAM and asset management seem like different disciplines, and one can make the argument that they are. However, it is critical for the operations team to integrate their view and understanding of identity and asset management into a single platform. It is incredibly difficult, if not impossible, to do this with two different solutions. For example, consider a scenario where a compute administrator has full access to the console of various instances, but no direct entitlement to a storage bucket. If one or more compute instances have access to that bucket, the compute administrator can potentially assume the role of an instance, giving that admin inadvertent access to the bucket. Operations teams need to be able to see and identify that weakness, where a full compute admin may elevate or assume a role on the compute instance and leverage that role to access and exfiltrate data from a cloud storage object. No easy task.

Conclusion

There are myriad requirements for the modern cloud operations team today. Understanding their daily responsibilities to the enterprise, and mapping those requirements to platform capabilities, is the approach that guides the development of Zscaler Posture Control. Please check out the other parts of this series as we examine the requirements of other teams within a public cloud enterprise.
We will continue to examine how Zscaler is designing platforms from the ground up to address those requirements while reducing the manual stitching together of individual point solutions, lowering costs for customers while delivering critical insights in an ever more complex multi-cloud world. See the power of Zscaler Posture Control with our free cloud security risk assessment.

Mon, 20 Feb 2023 08:00:01 -0800 Scottie Ray

Zscaler Posture Control and Splunk Integration: Cloud Transformation in the SOC

Cloud technologies with new development practices have significantly increased the velocity of fixes, enhancements, and features. In some environments, developers are making changes and releasing code to production several times a day. CI/CD tools make it easy to spin up new cloud resources and deploy code quickly. Infrastructure as Code (IaC) is widely adopted by organizations to quickly deploy, manage, and provision cloud resources repeatedly. Infrastructure is defined using code to consistently scale, update, delete, and provision infrastructure using automation tools. An inadvertently misconfigured template could affect thousands of cloud workloads. This amplification effect expands the attack surface, paving the way for new attack vectors. The focus on speed combined with IaC automation can cause a rapid spread of security issues, and security teams are unable to keep up with these changes. The latest audit of infrastructure may be out of date the day it’s released. The SOC plays a key role in keeping the enterprise safe. They need visibility into the vulnerabilities being introduced and the ability to communicate needed changes to development teams. With Zscaler Posture Control integrations, Splunk customers can rest assured their cloud-native applications remain secure and compliant in each phase, from development to deployment.
These enhancements enable Splunk customers to get complete control over cloud-native environments with fast risk detection and response as they accelerate their secure digital transformation journey. Let’s take a closer look at this integration and how it helps to address some of the challenges.

Comprehensive visibility and control: The integration simplifies operations for security teams with the ability to easily view actionable security data using a single console, reducing the need to pivot across disjointed management tools for point products. The new panels include IaC alerts and the top policies that generated them. This is useful for a security team to understand the risks of new templates coming out. Security exposure and attack vectors are also exposed in the dashboard, helping the SOC team identify gaps and threats in a single pane.

Streamlined incident response: By combining the power of Zscaler and Splunk, customers can improve security while streamlining incident response. It provides the SOC valuable insight into IaC vulnerabilities and misconfigurations in cloud infrastructure from a new dashboard within the Zscaler App for Splunk.

Reduced MTTR: It also helps in reducing MTTR with closed-loop workflows. In most cases, logged security incidents lack context, which makes it extremely difficult for security or cross-functional teams to quickly triage and remediate risk in real time. Together, Zscaler Posture Control’s advanced risk correlation and Splunk can accelerate security incident workflows. Public exposure, vulnerabilities, and misconfigurations can be identified, investigated, and remediated to help reduce the risk of a breach. The dashboards provide risk prioritization and visibility, and can reduce cross-team friction when discussing a security issue. This helps to reduce response time and increase productivity.
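The kind of IaC misconfiguration these alerts surface can be illustrated with a toy template scan: walk the parsed resources and flag rules that open management ports to the world. The template shape and rule fields are a generic simplification, not any one provider's schema:

```python
# Toy IaC scan: flag security-group-like resources that expose management
# ports (SSH/RDP) to 0.0.0.0/0. The template structure is illustrative only.

RISKY_PORTS = {22, 3389}

def scan_template(resources):
    """Return (resource name, port) pairs for world-open management ports."""
    findings = []
    for res in resources:
        for rule in res.get("ingress", []):
            if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") in RISKY_PORTS:
                findings.append((res["name"], rule["port"]))
    return findings

template = [
    {"name": "web-sg", "ingress": [{"cidr": "0.0.0.0/0", "port": 443}]},
    {"name": "mgmt-sg", "ingress": [{"cidr": "0.0.0.0/0", "port": 22}]},
]
```

Catching this at template scan time matters because of the amplification effect described earlier: one bad template can stamp the same exposed port onto thousands of workloads.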
Fig: Main Posture Control dashboard with top IaC critical alerts

Fig: Attack Vector alert: Identities without MFA and risk of permission elevation

Fig: Cloud infrastructure alerts - exposed management ports and snapshot data

We hope you are as excited as we are about the Zscaler integration with Splunk. We are confident that this integration will complement your security automation initiatives and streamline your multi-cloud estate’s overall operational workflow and security. We encourage you to learn more about the integration or sign up for a free cloud security risk assessment today.

Wed, 15 Feb 2023 08:00:02 -0800 Rahim Ibrahim

Giving Data Protection a good (MIP) Label

Whether you’re a Microsoft shop or not, chances are you’ve heard of Microsoft Information Protection (MIP), or its former designation, Azure Information Protection (AIP). Designed to help identify and label sensitive data, Microsoft’s suite of data protection technologies is often a popular option for organizations trying to maximize their Microsoft ecosystem and license. To be clear, AIP was rebranded to MIP, and then subsumed into Microsoft Purview Information Protection (MPIP). All the name-switching confusion aside, the goal of the technology is admirable: empower users to adopt a data protection program. Within Microsoft 365, MIP allows users to easily label documents as sensitive or internal, providing the document with valuable meta information that can help the rest of the M365 platform handle it appropriately. Once labeled, policy can be created to help control data—e.g., preventing it from being shared inappropriately, or only allowing “view” and not “edit” access.

The importance of integrating with SSE

One of the main reasons MIP has become so popular is that the challenge of securing data has never been greater. With users regularly off-network and working remotely, and cloud apps distributing data far and wide, organizations are now faced with new challenges.
Gone are the old days when centralized data center protections could solve your problems. They can’t follow your data off-network and are losing more visibility every day. As such, today’s forward-leaning companies are embracing a more modern approach, namely Gartner’s Security Service Edge (SSE). Built from the ground up for the world of cloud and mobility, this cloud-delivered platform helps unify security, data protection, and visibility across any connection or location. With the rise of SSE, it’s only natural that organizations look to marry its advantages with their existing Microsoft deployments. So, while MIP is a powerful concept, there are some aspects of its approach that need to be supercharged. Enter Zscaler.

How Zscaler supercharges MIP

As the operator of the world’s largest security cloud and the pioneer of the Zero Trust Exchange, Zscaler sits strategically alongside sensitive data and can significantly uplevel MIP functionality. Since Zscaler natively sits inline and can scale to inspect all SSL, it has an eagle’s eye view of the comings and goings of all sensitive data. The first MIP integration leverages this fact to help further enforce policy. The integration starts by syncing Microsoft MIP labels into Zscaler’s cloud. Once Zscaler receives these labels, it can use its best-in-class DLP engine to enable policy blocking across any sensitive or internal labels attached to outbound data. Doing this inline, leveraging the metadata attached to the document, enables organizations to add valuable real-time blocking and control to MIP-labeled data. This ensures sensitive data doesn’t leave the organization for unsanctioned or risky SaaS apps (outside M365). The second Zscaler MIP integration focuses on data at rest inside of M365. Since Zscaler already knows the MIP labels and also has a full multi-mode CASB, it can go hunting inside of your Microsoft 365 platform for improperly labeled data.
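The inline half of this integration can be pictured as a label-to-action lookup applied to outbound documents. The label names, policy table, and the sanctioned-destination rule below are hypothetical illustrations; real MIP sensitivity labels travel as metadata attached to the document:

```python
# Illustrative inline policy decision keyed on a document's sensitivity
# label. Label names and the policy table are hypothetical examples.

POLICY = {
    "Public": "allow",
    "Internal": "allow",        # allowed, but only toward sanctioned apps
    "Confidential": "block",
}

def decide(label, destination_sanctioned):
    """Return 'allow' or 'block' for one outbound document."""
    action = POLICY.get(label, "block")  # unknown labels fail closed
    if action == "allow" and label == "Internal" and not destination_sanctioned:
        return "block"
    return action

assert decide("Confidential", True) == "block"
```

Failing closed on unknown labels is the conservative choice here: a document whose label never synced is treated like the most sensitive class rather than waved through.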
Together, Zscaler CASB and DLP can find sensitive data at rest and check if it has been properly labeled. If it hasn’t, it can update the document with the proper label, and moving forward it will be treated accordingly by MIP policy within the M365 platform. Seeing the Zscaler MIP integration in action While the integration story sounds great, a picture is worth a thousand words. To that end, we’ve compiled a demo of the integration so you can see how it looks and feels inside the Zscaler console. Ready to learn more about Zscaler Data Protection? While Microsoft MIP integration is a powerful feature, rest assured that Zscaler’s Data Protection solution has much, much more to offer. Thousands of organizations have looked to embrace Zscaler for their data protection initiatives, and have successfully deployed it to help further protect and empower their M365 ecosystem. To learn more about how Zscaler can help transform your data security, visit our solution page, or feel free to talk to one of our experts. Mon, 13 Feb 2023 08:00:02 -0800 Steve Grossenbacher Enterprise Security Predictions for 2023 Across industries, the trend toward hyper-distribution of people, applications, and data is a phenomenon that holds both promise and peril. On one hand, we can connect talent, software, and information faster than ever to innovate and create value. On the other hand, we’ve opened a sprawling attack surface that changes daily and is prone to cyberthreats and data loss. It’s a landscape that challenges the very effectiveness of the network security we trusted for many years. On the world stage, economic uncertainty, conflicts, and rivalries are on the rise, putting pressure on security and IT leaders to create flexible but secure, resilient, and cost-effective infrastructure to help their organizations navigate and operate through whatever storms may appear on the horizon. 
With this landscape as a backdrop, I wanted to share a few predictions to help organizations prepare for the year ahead:

1. The Crimeware-as-a-Service (CaaS) model will thrive - From software to cloud computing, the as-a-service model has become so ubiquitous and lucrative that it’s been adopted as a viable model for carrying out cyberattacks, such as phishing, malware, and ransomware campaigns. Many of the same benefits of an as-a-service model can also be applied to cyberthreats - threat actors at any technical skill level can significantly reduce upfront costs, including development time, and gain the specialized support and expertise needed to carry out successful attacks. Because threat actors no longer need special skills in order to carry out attacks, a life of cybercrime is accessible to nearly anyone with a computer and internet connection. As a result, CaaS offerings are here to stay, and as the frequency and magnitude of cybercrime increase, the risk to enterprises has never been higher.

2. Insider threats will become more prevalent - As organizations brace for a turbulent year ahead amidst a fluctuating macroeconomic environment, shifting workplace styles, and talent shortages, it is critical that security teams take a closer look at safeguarding their organizations against the intentional and unintentional threats posed by insiders. Contributing factors such as the increased use of third-party contractors and greater employee movement through hiring and attrition exacerbate this threat. In particular, the rise in hybrid work environments that are still using antiquated VPN technology - which cybercriminals are adept at exploiting through social engineering to gain access to the corporate network - further compounds this threat, which can cause the devastating loss of sensitive information, productivity, revenue, and reputation. 
Once the network is compromised, attackers can easily move laterally across a routable network to infect applications and find high-value targets, which is why a zero trust security approach - in which users are only connected to specific applications and never to the network - is the only way to ensure the security of any mobile, cloud-centric organization.

3. Cybersecurity talent shortages will continue - Just over a year ago, CyberSecurity Ventures estimated there were 3.5 million unfilled cybersecurity jobs open globally and predicted that we would have the same gap in 2025. What’s clear is that, as leaders, we need to invest in retaining and developing our security teams and deploy technologies to help them scale even as threats grow in volume and diversity. Shifting to modern architectures, like zero trust, that minimize the attack surface to reduce the volume of attacks is one approach. Another approach is reducing the noise-to-signal ratio with innovations like deception technology, which creates high-fidelity alerts when an attacker triggers a decoy or honeypot inside your environment, versus older technologies that produce a flood of low-fidelity alerts that overstretched security teams either ignore or turn off.

4. Advanced AI will allow organizations to more intelligently and proactively stop cyberthreats - In 2023, to help their cybersecurity teams scale, enterprises will also continue to capitalize on AI, machine learning (ML), and intelligent automation to advance cyber defenses. Advancements in AI and ML provide high-fidelity intelligence and contextualization that results in better threat detection and helps organizations speed investigations and automate response for faster and more effective remediation. 
AI engines are also adept at finding and categorizing data distributed across many locations so busy administrators can rapidly apply granular protection policies to guard against external exfiltration or even inadvertent data loss. As more security vendors incorporate AI and ML into their offerings, it will become easier for enterprises to take advantage of these benefits, leading to increased adoption.

5. Successful organizations will look to consolidate security point products into an integrated cloud security platform - For years, enterprises have had to cobble together a myriad of security point products in an attempt to build a “best-of-breed” system to address all their business needs. However, with the explosive influx of products that cannot effectively integrate and operate seamlessly, the planning, implementation, and management of multiple security products is too complex and resource-intensive for IT professionals, while still leaving the organization vulnerable to attacks. Because security is such a critical part of any organization’s operations, a fully integrated cloud security platform approach is the most practical and effective architecture since it allows for faster deployment, unified management, easier service upgrades, and more strategic software lifecycle management while incorporating pre-tested API-based integrations of adjacent tools in the security landscape.

6. Zero trust architecture adoption will accelerate - CISOs and CIOs will start appreciating the true benefits of a zero trust architecture. A recent Zscaler study published in December 2022 indicated that 90% of global enterprises are adopting zero trust, yet have not unlocked its full business potential. In 2023, the broader market will realize that zero trust is simply not achievable by spinning up virtual machines of firewalls and VPNs in the cloud because this still requires users to connect to the network, which, by definition, does not constitute zero trust. 
Zero trust security dictates that no user is inherently trusted and access policies are enforced based on context - including the user’s role, location, device, and the data they are requesting - to block inappropriate access and lateral movement throughout an organization’s environment. With technology advancements and the emergence of solutions and services that will make zero trust security easier to implement, I expect that increasing numbers of CISOs and CIOs will regard a cloud-native zero trust architecture as the preferred means of securing and connecting their distributed organizations. Do these predictions validate what you’re seeing and experiencing in the security industry today? When you think about how security will evolve in 2023, what are your top concerns and considerations? For a customized demo on how Zscaler may be able to address your organization’s security needs today and in the future or to speak with a security expert about how digital transformation can support modern organizations, please get in touch with us here. Forward-Looking Statements This blog contains forward-looking statements that are based on our management's beliefs and assumptions and on information currently available to our management. The words "believe," "may," "will," "potentially," "estimate," "continue," "anticipate," "intend," "could," "would," "project," "plan," "expect," and similar expressions that convey uncertainty of future events or outcomes are intended to identify forward-looking statements. 
These forward-looking statements include, but are not limited to, statements concerning: predictions about the state of the cyber security industry in calendar year 2023 and our ability to capitalize on such market opportunities; anticipated benefits and increased market adoption of “as-a-service models” and Zero Trust architecture to combat cyberthreats; and beliefs about the ability of AI and machine learning to reduce detection and remediation response times as well as proactively identify and stop cyberthreats. These forward-looking statements are subject to the safe harbor provisions created by the Private Securities Litigation Reform Act of 1995. These forward-looking statements are subject to a number of risks, uncertainties and assumptions, and a significant number of factors could cause actual results to differ materially from statements made in this blog, including, but not limited to, security risks and developments unknown to Zscaler at the time of this blog and the assumptions underlying our predictions regarding the cyber security industry in calendar year 2023. Risks and uncertainties specific to the Zscaler business are set forth in our most recent Quarterly Report on Form 10-Q filed with the Securities and Exchange Commission (“SEC”) on December 7, 2022, which is available on our website at and on the SEC's website at Any forward-looking statements in this release are based on the limited information currently available to Zscaler as of the date hereof, which is subject to change, and Zscaler does not undertake to update any forward-looking statements made in this blog, even if new information becomes available in the future, except as required by law. Fri, 10 Feb 2023 07:00:01 -0800 John Knightly How to Cut IT Costs with Zscaler Part 2: Optimizing Technology Costs Fiscal responsibility and cost-effective security have always been a priority for businesses. 
During times of economic uncertainty, the pressure IT leaders face to tighten budgets, reduce costs, and make every penny matter–all while stopping sophisticated cyberthreats–increases significantly. In the first blog of this series, we began our examination of IT cost challenges and how organizations around the world solve them by looking at the rising cost of data breaches enabled by perimeter-based architectures. In this second installment, let’s take a look at how organizations can optimize technology costs and why perimeter-based security approaches increase infrastructure costs and simply don’t make financial sense in today’s cloud and hybrid work environment. Hubs & spokes, and castles & moats, oh my! Hub-and-spoke architectures were designed to route traffic to data centers to connect users, devices, and workloads to the network so they could access the applications and resources they needed. This approach worked when users and applications resided in the corporate office/data center. However, as users increasingly work from everywhere and applications migrate to the cloud, the network must continuously be expanded to provide private connections to every application, branch site, user, and device in every location. This interconnected network is protected by castle-and-moat security architectures that only offer organizations costly options that are a poor fit when it comes to securing the modern business. Stacks of security appliances everywhere The first option is to deploy point product security appliances—including firewalls, virtual private networks (VPNs), intrusion prevention systems, sandboxes, and more—to every office and remote location. The upfront CapEx investment in buying and deploying stacks of security appliances would be untenable (not to mention the cost and complexity of managing them). Plus, this approach still leaves organizations vulnerable when it comes to home offices and traveling users. 
Smaller + cheaper does not equal better Few organizations can afford to replicate the HQ gateway security stack at all locations due to the cost of purchasing, configuring, managing, and maintaining such a complex deployment. Instead, organizations often compromise by deploying smaller, less expensive firewalls and security appliances to branch offices and remote locations. While that may reduce a portion of large upfront costs, it still drives the same management requirement. More importantly, it leaves the organization vulnerable to attack. Why? These smaller, less expensive devices are also less capable as they provide fewer security controls. This compromise also leaves your offices and remote users vulnerable, and you are only as strong as your weakest link. Organizations cannot afford the risk of this alternative. Call it a backhaul, boomerang, trombone, or hairpin - the result is the same The other option is to backhaul traffic to data centers via Multi-Protocol Label Switching (MPLS) or VPN, and to run that traffic through large, centralized stacks of security appliances, which still entail massive CapEx costs. But this approach also leaves organizations struggling to keep MPLS and bandwidth spending in check. Backhauling traffic to the data center before ultimately sending it to cloud or SaaS applications introduces a hairpin effect. Essentially, you end up paying twice for your internet, SaaS, and cloud-bound traffic—once to carry your traffic over a costly private connection from the office or remote user to the data center, and again for it to go over the web to the requested resource–only to make the same trip in reverse. Additionally, as users become more widely distributed and the organization expands in size or geography, the user experience degrades due to traffic bottlenecks and latency, and costs grow exponentially (but more on that in a future blog). 
Figure 1: Infrastructure requirements of perimeter-based architectures Capacity planning requires a crystal ball Choosing any of the aforementioned approaches leaves CIOs and CISOs facing the arduous task of capacity planning, which is a tricky balancing act. Legacy solutions, even virtual appliances, cannot scale the way the cloud can. This creates the need to predict traffic volume and organizational demands over the appliance lifecycle. Capacity planning calculations encompass many issues, including number of users, devices, platforms, operating systems, locations, and applications, as well as bandwidth consumption, edge and WAN infrastructure, traffic patterns across global time zones, and more. And that’s just for near-term operations. Plans must also account for annual growth in cloud-bound traffic looking three years or more into the future. Beyond routine operations, capacity planning forces you to forecast for accommodating sudden, unplanned spikes in bandwidth that cause slowdowns, frustrating users and customers alike. Underestimating capacity requirements yields poor performance and a poor user experience, hindering an organization’s ability to fulfill its business mission. Overestimation leads to unnecessarily high costs and equipment sitting idle. Either way, resources are wasted. There is a better way Unfortunately, these tactics only provide a temporary and costly band-aid because firewalls, VPNs, and other legacy security approaches are not designed for the scale, service, or security requirements of modern business. Instead of costly hardware refreshes and the hefty infrastructure costs of perimeter-based architectures, organizations can reduce costs and capture superior economic value by embracing a zero trust architecture. With the Zscaler Zero Trust Exchange, an ESG economic value study found that organizations can reduce MPLS spend by 50% and cut up to 90% of their appliances, contributing to an ROI of 139%. 
The Zscaler Zero Trust Exchange is an integrated platform of services that securely connects users, devices, workloads, and applications. It delivers fast, secure, direct-to-app connectivity that eliminates the need to backhaul traffic and minimizes spending on MPLS. As a cloud-delivered platform, Zscaler enables organizations to consolidate point-product hardware and eliminate the need for CapEx investments in firewalls, VPNs, VDI, and more. Figure 2: The Zscaler Zero Trust Exchange Because the Zero Trust Exchange is built upon a cloud-based architecture designed to scale seamlessly with customer demand, it also eliminates capacity planning and over-provisioning, and frees up vital capital for more pertinent investments. Where to next? To explore in detail how a true zero trust architecture can help you eliminate the financial burdens of costly infrastructure, download our white paper, “Delivering Unparalleled Security with Superior Economic Value: The Power of the One True Zero Trust Platform.” Alternatively, dive into real-world success stories and gain insights into how organizations like yours cut IT cost and complexity with the Zero Trust Exchange in our ebook, “How Companies Reduce Costs with the One True Zero Trust Platform.” Don’t forget to keep your eyes open for the next installment in this blog series, which will explore how Zscaler delivers superior economic value by improving operational efficiency. Tue, 14 Feb 2023 08:00:02 -0800 Jen Toscano Cal-Secure: How California is Charting a Course for Whole of State Cybersecurity Zscaler recently hosted a webinar on Cal-Secure, the State of California’s first multi-year cybersecurity roadmap. 
The webinar featured several distinguished speakers, including Vitaliy Panych, State CISO at the California Department of Technology, Dylan Pletcher, CISO at the State of California Department of State Hospitals, and Angelo Di Carlo, Senior Systems Engineer at Zscaler, and was moderated by Carlos Ramos, Principal Consultant at Maestro Public Sector and former State of California CIO. The complete webinar can be watched on-demand here. Cal-Secure program overview Vitaliy Panych, State CISO at the California Department of Technology, provided an excellent overview of the Cal-Secure program, a cybersecurity roadmap developed by the state of California to address the increasing threats posed by cyber attackers. With the growing sophistication of attacks and the ease with which they can be carried out, it has become increasingly important to have a clear and well-defined cybersecurity strategy in place. Cal-Secure was developed in collaboration with state agencies, military departments, vendors, local entities, and former CISOs, with more than 450 hours of workshops dedicated to its creation. The Cal-Secure roadmap has three pillars: people, process, and technology. The first pillar, people, focuses on building a world-class cybersecurity workforce, as well as addressing the talent gap currently facing the cybersecurity field. In California alone, there are 80,000 open cybersecurity positions across the public and private sectors, and the true number is likely even higher. Building a confident and empowered multidisciplinary team is essential to ensuring the success of Cal-Secure. The second pillar, process, focuses on building a federated cybersecurity oversight program that will allow the state to keep itself accountable, track maturity, and work with partners at the state, federal, public, and private sector levels. 
The third pillar, technology, focuses on delivering clear, fast, secure, and dependable public services, making technology access easy and repeatable across the state, building security and privacy controls into everything that the state does, and centralizing security control services. A key aspect of the program is how to measure and govern state agencies so that they're tackling the right priorities at the right time. Cal-Secure includes a four-year and a two-year risk oversight cycle. Multiple engagements are applied to state agencies to assess each department's policies, procedures, and policy framework. The California Maturity Metric Model is then applied to measure the effectiveness of how security controls are implemented and test the various security controls applied within the framework. The Cal-Secure roadmap is not just about technology, but also about the people and processes that support it. By adopting Cal-Secure, state agencies can leverage it as an overarching framework and adopt the security control categories that are most important to their specific business objectives and mission. The goal is to integrate all the strategies and objectives into a cohesive whole, with a focus on building a diverse and cybersecurity-minded workforce, making technology easy to access, building a confident and empowered multidisciplinary team, and building security and privacy controls into everything the state does. Vitaliy’s complete presentation and slides are available on demand here. The agency perspective on Cal-Secure Dylan Pletcher, CISO, State of California Department of State Hospitals, provided his perspective on Cal-Secure as a powerful tool that has been leveraged in three different ways by organizations looking to improve their security posture. First, Cal-Secure serves as a justification for spending, providing validation that the investment is critical and has the support of the state CISO and the governor. 
Second, Cal-Secure helps organizations strategize and plan their next steps, providing guidance and aligning their strategy and roadmap with industry best practices. Lastly, Cal-Secure helps tell a story to non-IT personnel, making it easier for organizations to communicate the importance of security and the steps they are taking to improve it. However, Cal-Secure should not be seen as a panacea for all security problems. IT professionals often focus too much on the technology side of security and not enough on the people and process. Cal-Secure provides advice on reaching out to different people, including visiting job fairs and talking about security as a profession. Good security analysts are more important than good technicians, as security is about thinking things through and being able to analyze and assess threats. Organizations should look for employees with strong analytical skills and not just focus on those with a background in IT security. This includes individuals from different fields, such as physics, who can bring unique perspectives and strengths to the security team. Cal-Secure provides valuable guidance and support for organizations looking to improve their security posture. By leveraging Cal-Secure, organizations can justify their investments, strategize and plan their next steps, and communicate the importance of security to their personnel. However, organizations must also focus on the people and process side of security and look for employees with strong analytical skills, not just those with a background in IT security. The full Q&A with Dylan Pletcher is the focus of our next blog. The technical roadmap To complete the presentation, Angelo Di Carlo, senior systems engineer at Zscaler, reviewed a Zscaler Cal-Secure Alignment white paper on how Zscaler can help agencies to comply. Zscaler has a product portfolio focused on three pillars: Zscaler for users, Zscaler for workloads, and Zscaler for IoT. 
Zscaler for Users provides secure internet and SaaS access and secure private application access to deliver zero trust security around the remote workforce, remote users, data centers, and branch offices. Zscaler for Workloads addresses cloud security, secure internet and SaaS access, secure workload-to-workload communication, and posture control for cloud environments. Zscaler for IoT/OT delivers secure IoT device connectivity and privileged remote access for OT devices. The white paper includes a detailed technology alignment component and a visual representation of Zscaler's level of alignment with the Cal-Secure framework. Next steps Cal-Secure is a ground-breaking program for a whole-of-state approach to collaboratively tackling cybersecurity threats across state, local, county, education, and private sectors. The full webinar can be found on-demand here. For more information on the program, view the Cal-Secure Multi-Year Information Security Maturity Roadmap. We encourage anyone interested in more information to reach out to their account team for a deeper engagement and to request a workshop with Zscaler's architecture specialists and consulting engineers. We are also happy to provide a copy of the Zscaler Cal-Secure Alignment white paper or webinar slides for those interested. Simply contact us with your request. For more information about Zscaler’s full suite of public sector solutions, visit our State and Local government page. Thu, 09 Feb 2023 14:55:01 -0800 Ian Milligan-Pate Microsoft Outlook Outage Detected by Zscaler Digital Experience (ZDX) Zscaler Digital Experience detects outage At 7:35 p.m. PST on February 6, 2023, Zscaler Digital Experience (ZDX) saw a substantial, unexpected drop in the ZDX Score for Microsoft Outlook services in North America. Upon further analysis, we noticed high page fetch times highlighting a Microsoft Outlook outage. 
ZDX Dashboard indicates most impacted applications The ZDX heatmap details the impacted locations: ZDX: Outlook outage across North America With ZDX, customers can proactively identify service issues and quickly isolate them, giving IT teams confidence in the root cause while reducing mean time to detect (MTTD) and mean time to resolve (MTTR). ZDX Score highlights Microsoft outage A ZDX Score represents all users in an organization, across all applications, locations, and cities. You can see the score on the ZDX Admin Portal dashboard. Depending on the time period and filters selected in the dashboard, the ZDX Score will adjust accordingly on a scale of 1 to 100, with the low end indicating a poor user experience. As we drill into North America users, we can see the ZDX Score drop into the poor experience range. That correlates to higher page fetch times. To get additional details, you can simply mouse over the ZDX Score to understand the page fetch times. In this instance, the times range from 52,743 to 106,838 ms, eventually giving way to web probe timeouts, which render the application unusable and degrade overall end user performance. ZDX Dashboard shows low ZDX Score with high page fetch times ZDX Dashboard shows web probe timeout With further analysis, you can see the ZDX Score for Microsoft Outlook dropped to poor around 7 p.m. PST on February 6 and began to slowly recover, but continued to face issues until 8 a.m. PST on February 7. From within ZDX, service desk teams can easily see that the service degradation is limited to a single region and quickly begin analyzing the root cause. ZDX dashboard indicates the Microsoft Outlook outage and slow recovery ZDX can quickly identify the root cause of user experience issues with its AI-powered root cause analysis capability. This spares IT teams the labor of sifting through fragmented data and troubleshooting, thereby accelerating resolution and keeping employees productive. 
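The mechanics of a web probe and a 1-to-100 experience score can be sketched in a few lines. To be clear, the scoring thresholds and interpolation below are invented for illustration — ZDX's actual scoring model is more sophisticated and is not public — but the sketch shows why fetch times like 52,743 ms land firmly in the "poor" range.

```python
import time
import urllib.request

def page_fetch_ms(url, timeout=5.0):
    """Fetch a URL and return (elapsed_ms, ok). A timeout or network
    error counts as a failed probe (a 'web probe timeout')."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
        return (time.monotonic() - start) * 1000.0, True
    except OSError:  # URLError and socket.timeout are OSError subclasses
        return (time.monotonic() - start) * 1000.0, False

def experience_score(fetch_ms, ok, good_ms=2000.0, bad_ms=10000.0):
    """Map a page fetch time onto a 1-100 scale, 1 being poor.
    Hypothetical thresholds: at or under good_ms scores 100; at or
    over bad_ms, or a failed probe, scores 1; linear in between."""
    if not ok or fetch_ms >= bad_ms:
        return 1
    if fetch_ms <= good_ms:
        return 100
    frac = (fetch_ms - good_ms) / (bad_ms - good_ms)
    return round(100 - 99 * frac)
```

Running such probes from every user's device, rather than from a central monitoring server, is what lets a score distinguish "the app is down for everyone" from "one office's network is slow."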
With a simple click in the ZDX Dashboard, you can analyze a low score, and ZDX will provide insight into potential issues. As you can see in the Microsoft Outlook outage, ZDX highlights the low region score and poor performance for Microsoft Outlook. ZDX AI-powered root cause analysis indicates the reason for the outage When there’s an application outage, many IT teams turn to the network as the root cause. However, as you can see above, ZDX AI-powered root cause analysis already verified that the network transport wasn’t the issue; it was actually at the application level. You can verify this by looking at the CloudPath metrics from the user to the destination. ZDX CloudPath indicates no networking issues According to the Microsoft Outlook status page, the outage was reported at 7:30 p.m. PST, which correlates to the ZDX data above. The outage was confirmed in a tweet suggesting that a “recent change” contributed to the impact. Source: Twitter As Microsoft Outlook services started to recover, there were still residual impacts detailed on the Microsoft Service Health page until 8:05 a.m. PST. Source: Microsoft With ZDX Alerting, our customers were proactively notified about end user problems, and incidents were opened automatically with our ServiceDesk integration long before users started reporting them. From a single dashboard, customers were able to quickly identify this as a Microsoft issue, not an internal network outage, saving precious IT time. Zscaler Digital Experience successfully detected the Microsoft Outlook outage along with its root cause, giving our customers clarity on which users, networks, and devices were impacted, averting critical impact to their business. Try Zscaler Digital Experience today ZDX helps IT teams monitor digital experiences from the end user perspective to optimize performance and rapidly fix offending application, network, and device issues. To see how ZDX can help your organization, please contact us. 
Tue, 07 Feb 2023 14:48:49 -0800 Rohit Goyal How Customers Capture Real Economic Value with the Zero Trust Exchange Hub-and-spoke networks and castle-and-moat security architectures were designed for days gone by, when users, apps, and data all resided on premises. But in today’s world, endlessly extending the network to more branch offices, remote users, and cloud apps, and defending network access through ever-growing stacks of point product hardware appliances breeds significant costs. The Zscaler Zero Trust Exchange provides a different kind of architecture (zero trust) that not only enhances security, but reduces costs while doing so. With increasingly sophisticated cyberattacks and prolonged economic uncertainty, organizations need this zero trust architecture both now and in the future. In this blog, you will learn a few of the ways Zscaler delivers greater economic value than perimeter-based architectures. You will also see examples of the benefits reaped by customers of the One True Zero Trust Platform. Optimizing technology costs Perimeter-based architectures entail significant infrastructure costs. Whether it’s for expensive MPLS networks or complex stacks of hardware security appliances for defending network access, massive spend is inevitable. Fortunately, as a comprehensive, cloud-delivered platform, the Zero Trust Exchange decouples security from the network via direct-to-app connectivity, minimizes bandwidth requirements, and eliminates the need for hardware appliances and point products. As an illustration of these benefits, a Fortune 500 oil and gas company deployed Zscaler and saved tens of millions of dollars in forgone MPLS equipment, branch hardware appliances, and more. Another organization, an independent agency of the U.S. federal government, cut its security costs by 70% by retiring perimeter-based security tools, like VPN, in favor of Zscaler. 
Enhancing user productivity Under yesterday’s architectures, traffic is backhauled to the data center for network access and security. This breeds unnecessary latency that harms user experiences and disrupts productivity. Unfortunately, traditional monitoring tools lack full visibility and impede troubleshooting. But with direct-to-app connectivity optimized by the world’s largest, highest-performance security cloud, user experience is accelerated for enhanced productivity. At the same time, end-to-end digital experience monitoring integrated into the Zero Trust Exchange ensures rapid detection and resolution of any user experience issues, saving time for end users and administrators. With Zscaler, one state government achieved secure app access that was five times faster than that of VPNs, eliminating hundreds of monthly help desk tickets and contributing to nearly $1 billion in savings. Similarly, a telecommunications company reduced app latency by 20%, cut help desk tickets in half, and quadrupled its ability to quickly resolve user experience issues (from 25% of the time to 95%). Improving security posture Castle-and-moat security only serves to increase risk in modern environments. It expands the attack surface, enables lateral threat movement, and fails to stop compromise. This facilitates expensive data breaches that increase costs and decrease revenue through system downtime, customer churn, and more. The good news is that a zero trust architecture powered by Zscaler minimizes the attack surface, prevents lateral threat movement, and blocks compromise in real time. This ensures that breaches and their costly consequences are stopped. One high-tech customer of Zscaler’s reduced the cost of its web security by 40% while enhancing its defenses through full inspection of encrypted traffic, as well as real-time alerting and behavioral analysis. 
Another organization that deployed Zscaler boosted its protection by 90%, stopping 4 million policy violations and 14,000 threats monthly, blocking ransomware and its impact on productivity. To learn more about how Zscaler delivers greater economic value than perimeter-based architectures, read our new ebook. It walks through the success stories of several customers, giving concrete examples of how Zscaler helps real organizations overcome their challenges and reduce their costs. Tue, 07 Feb 2023 08:00:01 -0800 Jacob Serpa Why You Should Care About Enterprise DLP Have you seen the latest Gartner DLP Magic Quadrant? No? That’s because it doesn’t exist anymore. Does that mean that one of the mainstay protection technologies of the last 20 years is no longer relevant? Before we answer that, let’s get a quick history lesson. DLP blasted onto the scene quite some time ago. Built upon a sturdy foundation of signatures and regex, it suddenly gave organizations across the land impressive visibility into the movement of data across their environments. Once the technology was proven valuable, it eventually found its way into various form factors, like Network DLP, Endpoint DLP, NGFW, and finally CASB. While it was available in all varieties, the end goal was always the same - move the technology to where the data is located, in order to improve visibility and reduce risk. But like any good technology, it can spiral out of control and get overused or misused. With the proliferation of DLP across multiple products and channels, SOC teams started to buckle under the pressure of alerts. Too much noise and complex daily operations add to the challenge of operating a successful data protection program. This led to failed programs and unhappy organizations that never fully realized the value of data protection. But back to the original issue of Gartner discontinuing the DLP Magic Quadrant. 
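The "signatures and regex" foundation mentioned above is easy to illustrate with a toy Python scanner. The patterns here are deliberately simplified assumptions; production engines layer on validation (e.g., Luhn checks), proximity rules, and ML-based classifiers to cut down the alert noise described in this section.

```python
import re

# Toy illustration of classic signature/regex DLP: pattern-match outbound
# text for sensitive tokens and report which data types were detected.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text):
    """Return the set of sensitive-data types detected in the text."""
    return {name for name, pat in PATTERNS.items() if pat.search(text)}

print(scan("Ship report to ana@example.com, SSN 123-45-6789"))
```

This also hints at why naive DLP drowned SOC teams: a bare regex fires on any lookalike string, so every refinement (validation, context, ML) exists to push precision up without losing recall.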
To understand why they did this, all you have to do is look to the Security Services Edge (SSE) Magic Quadrant. Focused squarely on a cloud and mobile world, the SSE MQ addresses one of the main challenges organizations are now facing. Cloud apps have distributed your data far and wide outside the perimeter, so a new model is needed to secure your data. Because users are connecting from everywhere and accessing your sensitive data over the internet, the days of traditional DLP are over. Enterprise DLP explained But what is Enterprise DLP and how does it fit into this new cloud-first security architecture? Enterprise DLP is the term used to define a solution that entails everything an organization may need for data security, all located in one cloud-delivered offering. It should protect all relevant data channels, and do it in a way that drives down complexity while solving the challenges of cloud and mobility. This is a worthwhile goal, and one that organizations are leaning into as they struggle with distributed data and point product proliferation. So while the DLP Magic Quadrant is no longer maintained, the concept of Enterprise DLP is very much alive and well. What do you need in an Enterprise DLP offering to properly secure this new hyper-distributed world? The easiest way to approach this is to think about the big three: data in motion, data at rest, and distributed devices. If an Enterprise DLP solution focuses on all the aspects of securing these three concepts, you’re in great shape. Securing data in motion For data in motion, you want to protect all the destinations data would move to out of your organization. This first requires a best-in-class DLP engine that can find and classify all types of data, leveraging ML-powered detection, where possible, to improve speed and accuracy. The web, SaaS, email, and private apps are all common destinations for data. You should have control over these to secure sensitive data and determine if it should leave or not. 
You should also assume there will be some level of customization to your data protection efforts, so advanced techniques like Exact Data Match, Indexed Document Match, and Optical Character Recognition should also be available. These ensure you can protect custom data like customer lists, forms, or images like screenshots. How to protect data resting in cloud apps Once you have control of what data leaves your organization, you will want to track the data you allow to leave and land in your sanctioned SaaS apps. Since you own these apps, you have control over their APIs, which is the main reason CASB comes in handy. Using these APIs, CASB can scan these apps for sensitive data being used in risky ways. You can quickly find out if data is being shared outside the organization, or shared with dangerous open internet links. In order to identify sensitive data, CASB relies on a DLP engine, so it’s important that any Enterprise DLP uses one unified engine and policy across data-in-motion and CASB at-rest scanning. Another aspect of protecting data at rest is protecting the application hosting the data. Many cloud apps have configuration settings that can be easily misconfigured. This is where SaaS Security Posture Management (SSPM) comes in. You can scan these cloud applications for dangerous misconfigurations and get recommendations on how to resolve them. Some of the largest data breaches we’ve seen to date have been due to misconfigurations like this, so this is an important aspect of an Enterprise DLP solution. Protecting devices The last area of focus for an Enterprise DLP solution is securing data on devices. There are a few key use cases that organizations should focus on. First is preventing data from leaving the device over connections that aren’t covered in our data-in-motion use case (internet, SaaS, email, and private apps). Common channels here are Bluetooth, USB removable media, network shares, etc. 
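To make the Exact Data Match idea concrete, here is a hedged Python sketch of the general technique (not Zscaler's implementation; the salt handling and tokenization are illustrative assumptions): fingerprint your actual records as salted hashes, then check outbound text for any token whose hash appears in the index. Only hashes, not the raw records, need to live in the scanner.

```python
import hashlib

SALT = b"demo-salt"  # illustrative; real deployments manage salts securely

def fingerprint(value: str) -> str:
    """Normalize a cell value and hash it with the shared salt."""
    return hashlib.sha256(SALT + value.strip().lower().encode()).hexdigest()

def build_index(records):
    """records: iterable of dicts, e.g. rows exported from a customer CSV."""
    return {fingerprint(v) for row in records for v in row.values()}

def edm_hits(text, index):
    """Return outbound tokens that exactly match an indexed record value."""
    tokens = text.replace(",", " ").split()
    return [t for t in tokens if fingerprint(t) in index]

index = build_index([{"name": "acme-corp", "account": "991-220-441"}])
print(edm_hits("Invoice for acme-corp account 991-220-441", index))
```

Unlike the generic regexes earlier, this only fires on your data, which is why EDM produces far fewer false positives on things like customer lists.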
Usually addressed by Endpoint DLP, these channels must be secured so data can’t leave the devices - oftentimes when employees are leaving the company. Again, use the same DLP policy you’ve used before, but apply it to these “sideways” device connections. The other common use case is bring your own device (BYOD). There are often legitimate reasons to allow BYOD access to cloud apps with sensitive data. Whether for vendors or for employees using their own devices, BYOD security is a challenge many organizations struggle to solve. Most CASB vendors have tried to solve this with the concept of the reverse proxy. However, this can be problematic. In order to secure the connection between the BYOD device and the cloud app, CASBs jump into the middle of the connection and then proxy the cloud app webpage back down to the BYOD device, inspecting it along the way. This process often breaks the functionality and usability of the webpage and only supports certain common cloud apps. A better way to secure BYOD is through browser isolation. With browser isolation, only pixels are streamed down to the BYOD device, not actual data. By using a cloud-based isolated browser, BYOD users can leverage the cloud app and access data, but be prevented from cutting, pasting, downloading, or printing. Users get full functionality of the cloud app through the isolated browser, while organizations get to prevent sensitive data from landing and walking away on a non-corporate-owned device. When shopping for an Enterprise DLP, ensure it supports browser isolation for the BYOD use case. Is Enterprise DLP right for you? Many organizations focus on threat prevention before they turn to data protection initiatives; however, this is somewhat backward. Today’s ransomware and cyberattacks are going after data like never before. A good majority of these attacks now feature double extortion, which squarely focuses on stealing data. 
It’s imperative now that organizations begin a concerted effort to improve data security across the board. The right Enterprise DLP solution can help tip the scales in your favor. Instead of dealing with point product sprawl or complex protection operations, organizations can realize the dream of a unified, simplified approach to data protection that works in concert to drive down risk and cost. If you’re ready to learn more about how Zscaler can help you supercharge your data protection and deliver airtight protection across all users, devices, and locations, we’d love to hear from you. Thu, 02 Feb 2023 08:00:01 -0800 Steve Grossenbacher A New and Critical Layer to Protect Data: SaaS Supply Chain Security Today, Zscaler added another layer of security to protect customer data with the acquisition of Canonic Security, an innovative startup focusing on a critical new technology space: SaaS Supply Chain Security. There’s a major gap in your data security strategy Most organizations have thousands of potential backdoors as employees interconnect third-party applications and browser extensions. It’s no wonder companies are seeing an increase in data loss activity caused by employees. To increase their productivity, employees are unwittingly opening backdoors in SaaS platforms such as Microsoft 365, Google Workspace, Slack, and Salesforce, creating potential risk of data loss and cyberthreats. According to the ThreatLabz Data Protection 2022 Report, 94% of threats reside in these cloud platforms with direct access to sensitive data. Consider the seemingly benign applications utilized daily: for example, calendar apps that integrate with Google Calendar, used by sales to book external meetings. Other examples are storage apps that integrate with messengers, used by marketing to conveniently access and share content, or email widgets integrated with browsers to help send and track customer emails. 
Some of these applications, like MailChimp and Box plugins, are business-critical and are a part of the SaaS supply chain. Some apps and widgets may not be approved by IT. Regardless, the problem is a lack of visibility into which applications employees have granted access to. Every application that is authorized into your secured SaaS supply chain environment is a potential threat to your organization. For example: A third-party calendar app will have direct access to your employees’ meeting content and attachments that may contain sensitive M&A information, financial data, product roadmaps, or sensitive customer information. Applications may have configuration privileges allowing for software injection such as ransomware or credential theft. An application may have been compromised, creating a backdoor for bad actors and introducing a myriad of new threat risks. SaaS supply chain attacks and vulnerabilities have been overlooked by most organizations. Here is a quick litmus test to assess whether your organization has the proper security measures to counter these threats. How many third-party integrations and plugins have your employees enabled? What level of privilege do these applications have in your environment? Can you maintain regulatory compliance in your SaaS supply chain? Do these applications have access to your sensitive data? Do they make copies or store your sensitive data? If you can’t answer these questions with certainty, then your organization is at risk of data breaches that compromise its ability to protect sensitive data such as intellectual property, personally identifiable information, healthcare information, and business and financial data. The good news is that protecting against SaaS supply chain attacks is achievable. However, like any security measure, it requires a layered approach. Organizations that have adopted a Security Service Edge (SSE) or zero trust platform approach are well on their way. 
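The litmus test above can be partially automated. The sketch below assumes a hypothetical export of OAuth grants from your SaaS platforms (the record format and scope names are invented for illustration, not any real platform's API) and flags unapproved integrations that hold high-risk scopes.

```python
# Hypothetical inventory format: one record per app integration, listing
# the OAuth scopes it was granted and whether IT vetted it.
HIGH_RISK_SCOPES = {"mail.read", "files.read.all", "calendar.read", "admin"}

def assess_grants(grants):
    """grants: list of {"app": str, "scopes": set, "it_approved": bool}.
    Returns (app, risky_scopes) pairs for unapproved, high-privilege apps."""
    findings = []
    for g in grants:
        risky = g["scopes"] & HIGH_RISK_SCOPES
        if risky and not g["it_approved"]:
            findings.append((g["app"], sorted(risky)))
    return findings

inventory = [
    {"app": "meeting-scheduler", "scopes": {"calendar.read", "mail.read"},
     "it_approved": False},
    {"app": "crm-connector", "scopes": {"contacts.read"}, "it_approved": True},
]
print(assess_grants(inventory))  # flags only the unapproved scheduler
```

Even this crude pass answers the first two litmus questions (how many integrations, and with what privilege); the remaining questions require inspecting what each app actually does with the data.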
Preventing threats from SaaS supply chain attacks is only possible with an integrated data protection architecture that protects sensitive data and defends against malware, with integrated SaaS application security and user behavior monitoring. A layered and integrated approach allows for true analytics and adaptive policy controls, which can only be accomplished with strong analytics, AI, and machine learning technology analyzing your environment in real time. Take, for example, the Zscaler platform approach to protecting organizations against sensitive data loss with the addition of an integrated SaaS supply chain security layer. For organizations to holistically protect their sensitive assets in SaaS platforms, different steps are required: Securing sensitive data inline in real time: Zscaler provides real-time inline inspection for all cloud traffic, providing full visibility and policy control for outbound sensitive data and blocking malicious activity and threats with inline traffic monitoring. All ingress and egress traffic is inspected and auto-classified for sensitive data and risk, with adaptive policies implemented utilizing advanced AI/ML. Zscaler inspects billions of artifacts daily, protecting financial institutions, large healthcare providers, governments, and more. These organizations depend on Zscaler daily to protect against sensitive data leaving their organizations. Read more: Zscaler's cloud-delivered enterprise data loss security solutions Securing collaboration with OOB CASB: When users are sharing sensitive assets with external links and external collaborators, IT needs to have proper visibility and enforce appropriate policies to ensure the organization’s sensitive data is not exposed. This is where Zscaler has API integrations with out-of-band Cloud Access Security Broker (CASB). 
Zscaler CASB is integrated directly within SaaS platforms, monitoring user behavior, inspecting files for sensitive data created and shared between SaaS applications, preventing malware, and blocking potentially malicious user activity. Protecting against misconfigurations with SaaS Security Posture Management (SSPM): Furthermore, organizations can maintain SaaS configuration policies and protect against misconfigurations through posture management. This helps organizations protect against the human error that happens during routine configuration changes and ensure new applications maintain consistent policies. For example, ensuring an admin doesn’t accidentally turn off multi-factor authentication or allow link sharing outside the organization. SaaS Security Posture Management automatically resets misconfigurations to adhere to company policies. Read more: the Zscaler CASB and SSPM solution Integrating SaaS supply chain security: Zscaler’s new supply chain security capabilities will be integrated into its data protection services, strengthening its CASB and SSPM solutions by enabling companies to consolidate point products, increasing security posture, and preventing malicious applications from injecting malicious software or exfiltrating sensitive data, with the following functionality: Discover and Assess First-, Second-, and Third-Party Apps and Extensions: Gain full visibility over first-, second-, and third-party apps and API integrations across the enterprise business application estate. Uncover rogue and vulnerable apps and assess each integration’s posture, behavior, and the risk involved with its API access and browser extensions. Reduce Attack Surface: Quarantine suspicious apps, reduce excessive and inappropriate privileges, and revoke and block access if necessary. Enforce Access Governance: Enable app integrations by automating app-vetting and app access recertification processes. Learn more about Canonic Security and SaaS supply chain security here. 
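The SSPM behavior described above (detecting drift from policy and resetting it) boils down to comparing live tenant settings against a baseline. The sketch below is a hedged illustration; the setting names, baseline format, and auto-reset flow are assumptions for the example, not the Zscaler API.

```python
# Hypothetical policy baseline: MFA must stay on, external link sharing off.
BASELINE = {"mfa_enabled": True, "external_link_sharing": False}

def check_and_remediate(settings, auto_fix=False):
    """Report settings that drifted from the baseline; optionally reset them."""
    drift = {k: settings[k] for k, v in BASELINE.items() if settings.get(k) != v}
    if auto_fix:
        settings.update(BASELINE)   # reset drifted values to the baseline
    return drift

tenant = {"mfa_enabled": False, "external_link_sharing": False}
print(check_and_remediate(tenant, auto_fix=True))  # reports the MFA drift
print(tenant["mfa_enabled"])                       # baseline restored
```

A real SSPM applies the remediation through each SaaS platform's admin API and maps baselines to compliance frameworks, but the detect-compare-reset loop is the core of the technique.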
SaaS security inherently requires an integrated platform approach: A layered approach is crucial to protecting the SaaS supply chain. Standalone DLP, CASB, and SSPM tools require a massive amount of resources to configure, maintain, and manage, which can be costly and can take months to implement, putting organizations at risk in the interim. Concurrently, the lack of automated workflows prevents security teams from managing critical risks, leading to elongated mitigation timelines and unresolved incidents. To make matters worse, the reliance on separate point products causes increased risk, reduced visibility, and inconsistent policies. SaaS platforms today host companies’ crown jewels and most sensitive data, which are fully distributed in the cloud. Organizations need to consider a fully integrated platform that addresses data security at all times for all channels. Tue, 14 Feb 2023 19:14:07 -0800 Salah Nassar The Impact of Public Cloud Across Your Organization Effective digital transformation enables organizations to move at market speeds. The ability to bring new products, customer experiences, and capabilities to market is what sets competitors apart from each other. In order to make that happen efficiently, people and processes need to be realigned to focus on the line of business (LOB) objectives. In particular, centralized teams often struggle with this realignment. Information technology, compliance, risk management, enterprise architecture, network, and security operations centers historically have been centralized to provide enterprise-wide services. These teams defined the frameworks for the enterprise because they built, operated, maintained, and secured the environment that the LOBs used to achieve their business objectives. Each central discipline chose the platforms, tool sets, and processes that each LOB would need to conform to in the pursuit of those objectives. Public cloud changed this dynamic. The change was innocuous at first. 
Typically, the LOB wanted to investigate whether an application could even run in a public cloud environment. The promise of global reach, elastic scalability, and lower costs fueled those early movements. It was normally the application owners from the business unit who were charged with this “lift & shift” evaluation. Application owners with limited skill sets in infrastructure or security could easily provision minimally required infrastructure thanks in part to the abstraction afforded by new Infrastructure as a Service (IaaS) providers. The nuances of permissions and entitlements were set aside initially to focus on the question of application delivery in this new model. Armed with early success, the business unit began to look at its entire catalog, bringing the core benefits of reach and scale to multiple applications. It began to leverage Platform as a Service (PaaS) offerings, increase the use of automation frameworks, and embrace emerging cloud native application paradigms to pursue digital transformation at scale. As more LOBs consumed this new paradigm, it was apparent that existing people and processes needed to be adapted for this new reality. Previously centralized IT teams were faced with multiple new challenges: Defining and standardizing a cloud operating model Ratcheting back over-provisioned identities and entitlements used during “pilot” phases that were never cleaned up Extending compliance tooling to evaluate new services running in public cloud service providers Reducing misconfiguration-induced roll-backs of new applications Dealing with the fact that different LOBs chose different cloud service providers (CSPs), resulting in multiple interfaces, query languages, and other tooling required to secure, operate, and maintain these new environments The impact of these challenges has been far-reaching. Entire consulting practices emerged to deal with these challenges across the industry. 
Tooling exploded to address these specific challenges, much of it targeting specific teams and personas. Asset management needed to be able to understand what was deployed and where. Compliance needed to understand the configuration of those assets as measured against multiple (and growing) industry benchmarks. Incident and response teams needed not only the ability to identify threat vectors but also the means to remediate these issues across disparate CSPs. Entire ITSM processes needed to be reworked to ingest signals that spanned sometimes disparate and often uniquely configured environments. And, as is always the case in a new security domain, an entire class of cloud native open source and commercially available tools grew to automate security testing and reporting. These tools had many different classifications. Cloud Security Posture Management (CSPM) tools look at the configuration of cloud assets (e.g., compute instances, security groups, databases, cloud storage) to ascertain potential threat vectors. Cloud Infrastructure Entitlement Management (CIEM) offerings look at accounts and their IAM roles to highlight over-provisioned or even stale accounts that could be exploited. Vulnerability management platforms extended their agents to run on cloud workloads to ensure CVE reporting. Compliance tool sets take cloud infrastructure and compare it to established industry and regulatory benchmarks like CIS, NIST, PCI, and others. Cloud Workload Protection Platforms (CWPP) were created to monitor cloud assets for configuration drift, infections, and imminent threats. Infrastructure as Code security tools were built to extend security awareness across the software development life cycle ecosystem. The end result was rising costs and complexity to stitch together an increasingly brittle framework to allow historically centralized teams to operate in the new public, hybrid, and multi-cloud reality. 
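As one concrete illustration, the CIEM idea above reduces to comparing granted privileges against actually used privileges, plus how recently an account was active. This toy Python sketch assumes IAM and access-log data have already been exported into simple records; the field names are invented for the example.

```python
from datetime import date

def stale_or_overprovisioned(accounts, today, dormant_days=90):
    """Flag accounts with never-used privileges or long-dormant activity."""
    findings = []
    for acct in accounts:
        unused = set(acct["granted"]) - set(acct["used"])
        dormant = (today - acct["last_active"]).days > dormant_days
        if unused or dormant:
            findings.append({"name": acct["name"],
                             "unused": sorted(unused),   # candidate revocations
                             "dormant": dormant})
    return findings

accounts = [
    {"name": "ci-runner", "granted": ["s3:*", "ec2:Describe*"],
     "used": ["ec2:Describe*"], "last_active": date(2023, 1, 20)},
]
print(stale_or_overprovisioned(accounts, today=date(2023, 2, 1)))
```

Real CIEM products derive "used" from months of cloud audit logs and understand permission wildcards and role-assumption chains, but the granted-minus-used comparison is the heart of ratcheting back over-provisioned entitlements.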
While some of these tools were extremely effective in their silo, they lacked a holistic platform approach. Cloud Native Application Protection Platforms (CNAPP) offer complete security coverage, replacing multiple point products. A CNAPP provides comprehensive visibility and insights across your entire multi-cloud footprint while reducing friction between security and DevOps teams to better support DevSecOps. This post is the first of a six-part series in which we will explore how organizations can leverage Zscaler Posture Control, our CNAPP solution, to tackle not only the technology challenges, but the people and process challenges that arise as an organization matures along its public cloud journey. Uncover critical risks across your cloud environment Sign up for a free automated Cloud Security Risk Assessment to assess your cloud environment security posture and expose any looming threats. Mon, 06 Feb 2023 07:14:07 -0800 Scottie Ray Upgrade your Infrastructure-as-Code Security with CNAPP Securing cloud-native infrastructure has undergone significant change over the past few years with the rise of DevOps tools and automation frameworks, which make it almost effortless to write a few lines of code that spin up entire environments in the cloud from scratch. However, the challenge with making things so simple is that a single piece of misconfigured code can cause a cascading effect downstream (or upstream) and create significant security issues if there are no specific guardrails or guidelines in place. Security teams often face this dilemma with Infrastructure-as-Code (IaC) processes and tools. Any automation framework used correctly can provide significant advantages in time to value and time saved, but used incorrectly it can also massively amplify misconfigurations in IaC artifacts in cloud architectures. These can quickly propagate through the proverbial supply chain. What is Infrastructure-as-Code (IaC)? 
To put it simply, Infrastructure as Code (IaC) is an approach to setting up and defining all the required assets in a cloud environment using automation, as opposed to configuring each resource manually through the cloud provider’s console. A typical analogy is to think of IaC templates as building blueprints. Just as there are building blocks and codes that one needs to consider during construction, IaC helps templatize and modularize the approach to provisioning. IaC allows us to apply a single configuration file over and over in a consistent, repeatable manner, so the setup across environments remains consistent. Typical examples of using IaC are test environments, where teams stand up and tear down production-like environments or set up multiple instances on demand. Using IaC prevents manual intervention and also ensures that there are no deployment issues or configuration drifts. Shift-Left Security Mitigating these code-level issues and eliminating misconfigurations requires that organizations factor security into the engineering process. By “shifting left,” organizations and engineers can identify and eliminate these issues at a much earlier stage in the lifecycle, before the artifacts are deployed, which reduces overall risk exposure. This approach requires scanning the code at various stages, such as during code creation, at commit, and within CI/CD pipelines. To make this approach work, let us examine how we can achieve it using a Cloud Native Application Protection Platform (CNAPP) such as Zscaler Posture Control. Choose Your Adventure There are a few approaches to securing IaC, but broadly they fall into three categories. To help visualize this, consider the following workflow and process diagram. There are various stages where we can plug in IaC scanning capabilities. 
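A minimal, illustrative IaC check (nothing like the full Posture Control engine) can be written in a few lines: walk a CloudFormation-style template and flag publicly readable storage buckets before the template is ever deployed. The template below is a made-up example for the sketch.

```python
import json

# A tiny CloudFormation-style template with one deliberate misconfiguration.
TEMPLATE = json.loads("""
{
  "Resources": {
    "LogsBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {"AccessControl": "PublicRead"}
    },
    "DataBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {"AccessControl": "Private"}
    }
  }
}
""")

def scan_template(template):
    """Flag S3 buckets whose ACL allows public access."""
    findings = []
    for name, res in template.get("Resources", {}).items():
        if res.get("Type") == "AWS::S3::Bucket":
            acl = res.get("Properties", {}).get("AccessControl", "Private")
            if acl in ("PublicRead", "PublicReadWrite"):
                findings.append(f"{name}: bucket ACL '{acl}' allows public access")
    return findings

print(scan_template(TEMPLATE))  # flags LogsBucket only
```

Production scanners evaluate hundreds of such rules across Terraform, CloudFormation, and ARM, resolve variables and modules, and map each finding to a benchmark, but every rule is ultimately a structured check like this one, run before deployment instead of after.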
First and foremost is to plug into the IDE environment, such as Visual Studio Code, to alert the DevOps engineer of security violations locally before the code is committed to a source code repository such as GitHub. The second stage of evaluation is the pull request, where we can evaluate the change for security compliance. Here, we have the ability to fail the pull/merge request if it is not in line with the security policy framework we have established. Lastly, we can integrate these capabilities with a CI/CD framework such as Jenkins, where we can fail a build and only allow it to proceed once we can ensure security compliance. IaC Scanner for Visual Studio Code The IaC scanner in Posture Control for Visual Studio Code enables you to scan template files written in Terraform, CloudFormation, Azure Resource Manager, and several other formats, scanning individual IaC files and directories to find and fix configuration errors before committing the code for deployment. IaC Scanning for GitHub, Azure Repos, and GitLab To leverage the IaC scanning capabilities in a source control repository such as those mentioned above, we use the native integration capability to set up and allow access to the IaC source code repositories. Whenever we add or update code, make a pull request, or use the push command to commit the code, the IaC scanning capabilities take effect: the platform will automatically scan the template to identify security misconfigurations and policy violations, and display the scan results within the code. We can then take steps to fix the configuration issues in the IDE, ensure the code is secure and compliant, and then merge the code with the main branch. Scanning with CI/CD tools - Jenkins, GitHub Actions, Terraform Cloud Using the IaC scan plugin for Jenkins as an example allows us to identify security misconfigurations in Terraform and CloudFormation templates for both freestyle and pipeline jobs in Jenkins. 
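Regardless of the CI system, the build gate described above usually works the same way: run the scan, report findings, and return a nonzero exit code so Jenkins or GitHub Actions fails the job. The finding format below is invented for illustration; the real plugin's interface may differ.

```python
def gate(findings, fail_on=frozenset({"HIGH", "CRITICAL"})):
    """Print all findings; return 1 (fail the build) if any are blocking."""
    blocking = [f for f in findings if f["severity"] in fail_on]
    for f in findings:
        print(f"[{f['severity']}] {f['rule']} in {f['file']}")
    return 1 if blocking else 0   # nonzero exit code fails the CI job

findings = [
    {"severity": "HIGH", "rule": "open-security-group", "file": "main.tf"},
    {"severity": "LOW", "rule": "missing-tags", "file": "vpc.tf"},
]
exit_code = gate(findings)
print("build", "FAILED" if exit_code else "PASSED")
# in a real pipeline step, sys.exit(exit_code) would propagate this result
```

The severity threshold is the key policy knob: teams typically start by failing only on critical findings and tighten it as the backlog of low-severity issues is cleaned up.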
To get started with this integration, the administrator installs and authorizes the Zscaler IaC scan plugin on Jenkins to access the code repositories. This short demonstration video shows examples of what "shift-left security" looks like in practice, where CNAPP findings are integrated directly into the tools that development and DevOps teams are already using, maximizing efficiency. Link to video. See It For Yourself: Free Cloud Security Assessment Posture Control is 100% agentless and can scan all of your AWS, Azure, and GCP environments to help identify and prioritize the assets that require your attention. It combines the power of multiple point solutions, such as CSPM, CIEM, and CWPP, and correlates findings across multiple security engines to prioritize hidden risks caused by misconfigurations, threats, and vulnerabilities across your public cloud stack, reducing costs, complexity, and cross-team friction. Find more details on how to request a Cloud Security Assessment from the team. Wed, 01 Feb 2023 19:14:07 -0800 Prabhu Barathi The economic challenges of perimeter security and how to solve them Driving secure digital transformation is a challenge in the best of times. But when the winds of economic conditions shift, strategic IT leaders face increased obstacles in achieving cost efficiency while maintaining a robust security posture. IT teams are continually pushed to do more with less, budgets are cut, and resources eliminated. Unfortunately, their legacy security architectures that use firewalls and VPNs only serve to make the situation worse. Let’s take a closer look at four challenges of perimeter security that inhibit an organization's ability to successfully provide future-ready protection and deliver superior economic value, and how to overcome them. Challenge 1: Cost and complexity For the last 30 years, organizations have been building castle-and-moat security architectures for defending their network perimeters and everything inside them. 
These perimeter models leveraging firewalls and VPNs worked well when users, data, and applications resided on premises. Today, however, users are working from everywhere. Applications and data are becoming hyper-distributed across disparate data centers, clouds, SaaS applications, and the internet. This dramatically expands the attack surface and pushes firewalls, VPNs, and other point product appliances beyond their useful limits. Attempting to force-fit these legacy architectures to support a hybrid workforce necessitates large capital expenditures and extensive management overhead. Unfortunately, this approach only provides a temporary, costly band-aid because firewalls and VPNs are not designed for the scale, service, or security requirements of modern business. Challenge 2: Productivity Using these perimeter- and network-based approaches to protect a user base that increasingly works outside of the corporate network negatively impacts productivity and collaboration. Think about SaaS tools like Microsoft 365 and ServiceNow, or collaboration tools like Zoom and Teams. To secure this traffic, legacy architectures leverage VPNs and MPLS to backhaul traffic to a data center and route it through a centralized security stack before sending the traffic to its destination, only to route traffic back over that same path to return to the end user. This approach introduces unnecessary bandwidth demands and maintenance expenses and creates a choke point that increases latency and brings productivity to a screeching halt. Users are left frustrated and looking for ways to circumvent the system. Challenge 3: The rising costs of cybercrime As if the first two challenges weren’t enough, organizations are facing unprecedented cyberthreats and, as a result, a rapid increase in the cost of cybercrime. In 2021, global cybercrime damages reached $6 trillion, and they’re expected to grow 15% per year, reaching $10.5 trillion by 2025. 
The costs of experiencing, responding to, and mitigating a cyberattack impact an organization with operational disruption and downtime, direct losses tied to remediation, brand damage, and revenue loss. Regrettably, perimeter architectures are unable to defend properly against today’s cyberthreats. Challenge 4: Delayed realization of M&A value Mergers and acquisitions can present game-changing opportunities for organizations, but, according to Harvard Business Review, 70% of M&A transactions never deliver upon their expected deal value. Why? Integrating legacy networks, castle-and-moat security infrastructure, resources, and applications across the two combining entities—not to mention granting employees access to the appropriate assets—is an extremely complicated, technically challenging, and time-consuming process. As a result, the M&A process is often laden with unforeseen costs and delays in the ability to begin value-creation activities. Overcoming the challenges For IT leaders tasked with protecting the organization from threats while driving value in a difficult economic climate, these challenges may seem insurmountable at first. But overcoming them is certainly possible with a zero trust architecture. The Zscaler Zero Trust Exchange delivers a modern, unified approach to securing today’s cloud-first, hybrid workplaces. It reduces cost and complexity while minimizing the risk of cyberattacks and breaches—so an organization can endure and even thrive in trying economic times. To examine these challenges in detail and learn how a true zero trust architecture can help you overcome them, read our white paper, “The One True Zero Trust Platform: Delivering Unparalleled Security and Superior Economic Value with the Zscaler Zero Trust Exchange.” Fri, 03 Feb 2023 07:14:07 -0800 Jen Toscano