Zscaler Blog — News and views from the leading voice in cloud security.

The Top Challenges Faced by Organizations Implementing DevSecOps

DevSecOps stands for development, security, and operations. DevSecOps emerged in response to observations that many DevOps processes failed to integrate security properly. The idea is to identify and fix security issues as early as possible in the development lifecycle, helping to reduce risk while maintaining agility and speed. DevSecOps is as much about cross-team collaboration as it is about technology, as it makes application and infrastructure security a joint responsibility of development, security, and operations teams.

Challenges associated with DevSecOps implementation

Implementing DevSecOps comes with several challenges. In this blog, we will focus on some of the key ones.

Infrastructure challenges

Complexity in the cloud: According to the 2021 Flexera State of the Cloud Report, 92% of organizations are using multiple public clouds. These multi-cloud deployments typically use a wide range of cloud services and heavily leverage automation, both of which make it difficult for security to keep up. Continuous infrastructure security, compliance assurance, and data security pose big challenges.

Tool sprawl and alert fatigue: Along with a rapidly expanding set of cloud services, the industry has responded with a rapidly expanding set of cloud security services. The result? Security professionals are flooded with high volumes of alerts from each tool, making it difficult to focus on the most important fixes. Without risk-based prioritization, developers and security teams may spend time on issues that don't even represent real risk to the organization.

Compatibility issues: The DevOps team uses many open source tools that include repositories of frameworks, code, libraries, and templates.
While these tools boost productivity, they can also introduce security issues if they are not audited or used properly. A common challenge is providing continuous and consistent security mechanisms, compatible with the tools and techniques used in the DevOps process, to prevent and mitigate security issues as they emerge across the development process.

Identifying and fixing vulnerabilities: As per this report from Security Boulevard, 50% of apps are always vulnerable to attack at organizations that have not adopted DevSecOps, as opposed to 22% at organizations with a mature DevSecOps approach. With security testing typically taking place at the end of the development cycle, developers end up patching or rewriting code very late in the process, causing costly rework and delays.

Balancing speed and security: DevOps is all about speed and agility, and every team, including security, needs to keep pace in order to keep the innovation engine humming. Keeping up with DevOps means creating a security foundation that's agile, adaptable, and fast. Legacy security tools and processes aren't up to the challenge of securing modern deployments, and they negatively impact the pace of development and deployment.

Regulatory compliance and audit mandates: Organizations are subject to a stringent, evolving compliance landscape and time-consuming audits. Failing to follow compliance and regulatory standards can lead to financial loss as well as reputational damage. Maintaining audit readiness and a constant state of compliance is challenging in a dynamic DevOps environment.

Organizational culture

Security is considered a bottleneck: According to Gartner, "71% of CISOs say their DevOps stakeholders still view security as an impediment to speed-to-market." A common perception among Dev and DevOps teams is that security slows things down, so security checks are considered a bottleneck.
Lack of resources and a knowledge gap: Recent stats show that 70% of organizations lack adequate working knowledge of DevSecOps practices. Alongside limited staff, tools, and budget allocations, organizations face the challenge of bridging the knowledge gap. Developers' lack of security and compliance expertise is one of the most common DevSecOps challenges; similarly, security and operations teams are often unfamiliar with infrastructure and software development environments. The knowledge gap, and the lack of a common platform for sharing knowledge, are barriers to successful DevSecOps implementation.

Friction among cross-functional teams: Developers aren't usually security experts; they predominantly focus on development and faster deployment driven by tight delivery timelines. Security teams, however, are primarily concerned with ensuring that the environment, as well as the code, is safe. Often these cross-functional teams work in silos. Their goals and agendas are dissimilar, which leads to operational friction. It is challenging to forge common goals and practices and mitigate tension between cross-functional teams so that they can function as one team.

Roles and responsibility alignment: It is challenging to align roles and responsibilities because the DevOps environment is dynamic and teams are constantly changing. Developers often think the security team is responsible for security and risk mitigation; in practice, the security team's role is to create security policies, guide developers and operators in understanding security requirements and best practices for delivering secure code, and serve as advisors. The people and organizational structure may be the hardest part of adopting DevSecOps.

Guidelines for successful DevSecOps implementation

Implementing DevSecOps differs with each organization's domain and requirements. What's common? Teamwork is critical when implementing DevSecOps. It needs buy-in from different stakeholders and organization-wide acceptance.
The strategies below can help with the successful implementation of a DevSecOps culture:

Knowledge sharing: All stakeholders need to understand security challenges, risks, implications, and the importance of addressing them. Continuous knowledge sharing through online forums, training, guidelines, documentation, and more can help bridge the knowledge gap.

Collaboration: Building effective collaboration between cross-functional teams will ensure effective communication and efficient response.

Guardrails enforcement: Continuous monitoring, compliance checks, and implementation of guardrails will help streamline the overall DevSecOps process.

Automation: Formulating a DevSecOps strategy, implementing a comprehensive yet developer-friendly platform like Zscaler Posture Control (ZPC), and following a best-practices approach can make security integration with DevOps easier.

Conclusion

DevSecOps can be extremely beneficial, improving both security and organizational efficiency. But the most challenging part of DevSecOps adoption is making security complement existing business processes, culture, and people. Security leaders need to develop cross-functional collaboration and unite developer, security, and operations teams around a culture of security as a shared responsibility. With a successful DevSecOps strategy and an automated cloud security platform, teams practicing a DevSecOps methodology can overcome the above-mentioned challenges and work together to improve security across any cloud, reducing risk, complexity, and cost while achieving secure, faster deployment. Zscaler can help organizations close culture gaps and accelerate DevSecOps adoption. Learn more here.

Wed, 18 May 2022 08:00:02 -0700 Mahesh Nawale

How to Resolve Data Protection Woes With SSE

The outdated security approaches of yesterday are no longer a good fit for protecting today's data.
These traditional security tactics were centered around the data center, but users, apps, data, and even infrastructure (in the form of IaaS) have left the building for good. Consequently, backhauling traffic no longer makes sense in today's dynamic, work-from-anywhere world.

The rise of SSE offerings

SSE, or security service edge, is a framework for integrating complementary security technologies to provide consistent, consolidated, and easily manageable data protection that follows users away from the corporate network, applying security policy at every step. The term was originally coined by Gartner; true SSE solutions successfully integrate CASB, SWG, ZTNA, DLP, and other future-forward security technologies. Following the SSE framework, cloud security is typically delivered at the edge—as close to the user as possible. This eliminates the need for backhauling and ensures that security is everywhere, providing fast, seamless application and data access.

The Zscaler Zero Trust Exchange is a Leader (and positioned highest in 'Ability to Execute') in Gartner's new SSE Magic Quadrant. What sets our technology apart is the ease with which it secures data across all transactions, regardless of application, device, or location. To highlight this, here are a couple of key data protection use cases with which we help our customers:

Preventing data loss in encrypted traffic

When traffic is encrypted, it obfuscates the movement of data therein. As a result, much of corporate data loss today occurs via SSL traffic. Unfortunately, inspecting encrypted traffic for this data loss takes massive amounts of computing power. And whether they are hardware or virtual, outdated appliances have fixed capacities to service users and lack the scalability needed to inspect this traffic at scale. This means that organizations relying upon a legacy security architecture built on appliances typically have little to no inspection of encrypted traffic.
Obviously, with 95% of traffic today being encrypted, this isn't enough for modern security. Powered by the world's largest security cloud, consisting of more than 150 points of presence around the globe and processing 200 billion transactions daily, Zscaler can easily inspect all encrypted traffic and does so for some of the world's largest organizations, including more than 25% of the Forbes Global 2000. This means that Zscaler's platform can find and stop data loss inline and in real time—wherever it may be flowing—through leading DLP with advanced capabilities like exact data match (EDM), indexed document matching (IDM), and optical character recognition (OCR).

Securing unmanaged devices such as BYOD

Unmanaged devices are phones, tablets, laptops, and countless other internet-facing endpoints that do not belong to, or were not issued by, the enterprise. In particular, the use of employees' personal devices to access corporate data has been increasing over the years to enhance productivity—in part due to the global pandemic. In addition to BYOD, unmanaged devices can also belong to technology-partner organizations and third-party contractors, both of which need secure access to an organization's data or business-critical applications.

Legacy tools, such as endpoint agents, aren't a good fit for data protection on unmanaged devices, where mandating software installations is typically infeasible. Similarly, reverse proxies (which are agentless) regularly break and impede user productivity. Blocking unmanaged devices altogether is also a poor strategy because it disrupts normal business operations.

For SaaS and private apps alike, Zscaler Cloud Browser Isolation isolates app sessions in the Zero Trust Exchange and streams only pixels to the end user's device. This allows access on unmanaged devices but prevents download, copy, paste, and print to stop data leakage.
Because Zscaler can do this agentlessly, it is a perfect fit for unmanaged devices and a superior alternative to agents and reverse proxies.

Uncover more about SSE with Zscaler

To see the other top data protection use cases that Zscaler customers use us to address, download our new ebook for free: The Top SSE Data Protection Use Cases. You can also see demos of more specific Zscaler data protection technologies here.

Mon, 16 May 2022 11:00:01 -0700 Jacob Serpa

CISA Must Update Its Latest OT Security Warning and Guidance to Include Zero Trust

On April 14, 2022, CISA published a warning regarding potential denial-of-service attacks that could exploit vulnerabilities in certain OT assets. Specifically, CISA warned that an OpenSSL TLS server may crash if sent a maliciously crafted renegotiation ClientHello message from a client. According to the warning, servers with the default configuration (TLSv1.2 with renegotiation enabled) are vulnerable, and the vendors were releasing patches.

As mitigation, CISA recommends isolating the OT network from the IT network and the internet, and suggests that for remote access, companies use VPNs to securely access industrial manufacturing areas. Yet CISA also cautions that VPNs themselves aren't infallible and can contain vulnerabilities as well.

To me, this advice seems limited and outdated. NIST and many other reputable expert bodies have advocated eliminating the use of VPNs and replacing them with a zero trust architecture. We must remember that the Colonial Pipeline ransomware attack took place by stealing VPN credentials, getting on the corporate network, moving laterally to find high-value billing applications, encrypting them, and demanding ransom. The biggest risk of VPN access is that it puts people on the network, enabling lateral threat movement. In contrast, a zero trust architecture connects authorized users to specific applications, not to the network.
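Patching remains the primary fix for the OpenSSL flaw CISA describes, but server operators can also blunt this specific attack class directly. As an illustrative sketch (not part of CISA's published guidance), Python's `ssl` module, which wraps OpenSSL, can refuse renegotiation outright:

```python
import ssl

# Hypothetical hardening sketch for a TLS server: reject all renegotiation
# attempts (the vector in the advisory) and set a modern protocol floor.
# OP_NO_RENEGOTIATION requires OpenSSL 1.1.0h+ under the hood (Python 3.7+).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.options |= ssl.OP_NO_RENEGOTIATION  # drop renegotiation ClientHellos in TLS <= 1.2

print(bool(ctx.options & ssl.OP_NO_RENEGOTIATION))  # True
```

Note that TLS 1.3 removed renegotiation from the protocol entirely, so raising the floor to TLS 1.3 where OT devices support it eliminates this class of issue altogether.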
Beyond the fact that these mitigation strategies are not fail-proof, they can also restrict progress toward factory modernization. Forever hiding the OT network from the IT side and from the internet can mean factories must pass on a whole host of benefits that could otherwise be gleaned from adopting Industry 4.0. This includes OT/IT convergence, which yields more comprehensive asset management, as well as artificial intelligence-driven production line automation, which yields efficiency gains, better factory uptime, and higher output.

Fortunately, Zscaler and Siemens have teamed up to design and offer a zero trust approach for secure access to OT assets, including Siemens' devices. The solution increases security and, at the same time, maximizes uptime to keep the shop floor, robotics, and automated assets running smoothly even in the face of cyberthreats. Specifically, the Zscaler Private Access app connector runs alongside the Siemens SCALANCE LPE, offering enterprises the opportunity to layer in zero trust connectivity alongside traditional perimeter-based methods. In most cases, VPNs can be replaced with zero trust.

Advantages of the Zscaler solution, based on a zero trust architecture, include:

Secure remote access to plants and machines — Microsegmentation based on zero trust policies allows convenient and secure access to OT/IIoT systems, reducing reliance on VPNs, which, as CISA points out, often aren't updated and can contain vulnerabilities.

Privileged remote access for internal and third-party users — Browser-based access allows authorized admins to execute commands from remote endpoints to OT systems over secure and fully isolated connections, without the need to install an agent on the OT systems or any software on users' endpoints.
Seamless integration into existing OT networks — Docker-based app connectors make it easy to deploy secure remote access on industrial control systems (ICS) and industrial network components such as the Siemens SCALANCE LPE, as well as other Arm- and Intel-based devices.

Distributed, multi-tenant OT/IIoT security exchange — Zscaler's solution runs on the largest security cloud, with over 150 data centers worldwide, which enables the fastest connections between users and assets and supports factories no matter where they are in the world.

Jump-host alternative — It is often recommended to create a jump host server in the DMZ so that every external connection terminates in the DMZ, and a separate internal connection is then made from the jump host to the end devices. However, jump hosts can be hijacked, giving the attacker access to everything. A zero trust secure remote access solution, in contrast, removes the need for the jump host and is a far more secure alternative. Powered by a cloud-native zero trust exchange, there is no attack surface for an attacker to target in the DMZ, rendering a setup that is far more resilient with very low risk of disruption.

Security and stability — Unlike other OT-specific secure remote access solutions, Zscaler Private Access has been in the market for six years, and the Zscaler Zero Trust Exchange for 14 years, yielding a proven and reliable exchange service that governs access.

Siemens already considers layering in zero trust part of defense-in-depth. Here you can read more about it. I am proud of the work Siemens and Zscaler have done to modernize security for factories. CISA, we strongly recommend you update your guidance to add the zero trust defense layer as well.
Thu, 12 May 2022 08:00:01 -0700 Jay Chaudhry

Securing Infrastructure by Embedding Infrastructure as Code (IaC) Security into Developer Workflows

Infrastructure as Code (IaC) is widely adopted by organizations to easily manage and provision their infrastructure in the cloud and automate their deployment processes. It allows engineers to quickly build, provision, scale, update, or delete infrastructure resources on cloud platforms using automation tools. With great automation comes the potential for great risk. While infrastructure as code has brought exponential efficiency gains to development teams, it has also brought new security risks. Fortunately, with the right approach, these risks can be mitigated successfully.

Security risk associated with IaC

Developers, who aren't typically security experts, are under constant pressure to release new applications and updates. This focus on "shipping" new products requires them to put speed and innovation first, often at the expense of security. Developers' focus on speed, combined with the automation that IaC provides, creates a recipe for the rapid spread of security issues. A mistake made in an IaC template ends up being propagated across all infrastructure provisioned from that template. While this provides fantastic developer efficiency, it also amplifies mistakes, including security mistakes. A single IaC template misconfiguration might be automatically applied to hundreds, or even thousands, of cloud workloads, magnifying the impact of that misconfiguration 100x or more.

Moreover, insecure IaC templates can expand the attack surface and pave the way for critical attack vectors. Security groups, open ports, publicly accessible services, and internet-accessible storage and databases are some of the critical things that must be monitored continuously. Continuously changing environments and the use of multiple tools may lead to configuration drift and compliance violations.
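To make the security-group example above concrete, here is a simplified sketch (not Zscaler's implementation) of the kind of check an IaC scanner applies: flagging ingress rules in a parsed template that expose ports to the entire internet. The resource shapes and names are hypothetical, loosely modeled on parsed `aws_security_group` blocks:

```python
# Flag security-group ingress rules that are open to 0.0.0.0/0,
# i.e. reachable from anywhere on the internet.
def find_open_ingress(resources):
    """Return (resource_name, port) pairs for rules open to the world."""
    findings = []
    for res in resources:
        for rule in res.get("ingress", []):
            if "0.0.0.0/0" in rule.get("cidr_blocks", []):
                findings.append((res["name"], rule["from_port"]))
    return findings

# Illustrative input shaped like parsed security-group resources:
resources = [
    {"name": "web_sg", "ingress": [
        {"from_port": 443, "cidr_blocks": ["0.0.0.0/0"]}]},
    {"name": "db_sg", "ingress": [
        {"from_port": 5432, "cidr_blocks": ["10.0.0.0/16"]}]},
    {"name": "ssh_sg", "ingress": [
        {"from_port": 22, "cidr_blocks": ["0.0.0.0/0"]}]},
]
print(find_open_ingress(resources))  # [('web_sg', 443), ('ssh_sg', 22)]
```

Run against a template before `apply`, a check like this catches the misconfiguration once, at the source, instead of after it has been stamped onto every workload provisioned from that template.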
Due to the risk of misconfigurations in cloud infrastructure, it is essential to give developers visibility into, and real-time feedback on, their IaC before they build cloud environments. As an added benefit, identifying and fixing security issues early in the development cycle is faster and requires fewer resources.

Solution: achieve better security outcomes with security built in

With this context in mind, it is important for the security and compliance teams to work hand in hand with developers to integrate infrastructure as code security into development and DevOps tools and day-to-day processes across distributed environments without slowing release velocity or performance. With the right cloud security platform and policy framework, all teams can better work together using the same policies at every stage of the cloud infrastructure lifecycle. It also enables all teams involved to meet their objectives and goals.

Key benefits: drive consistent and secure releases with strong team collaboration

Benefits for developers:

Automated security reviews: Developers are able to stay in their tools and deliver secure code. They can scan their code against standard policies and configuration checks to validate it for misconfigurations and violations. This helps them easily identify new violations and misconfigurations that can be prevented, with pass/fail results, the exact policies violated, the non-compliant resources, and remediation guidance.

Accelerated innovation: Developers can spend more time innovating and less time collaborating with security teams on issues, including trying to understand security standards and documenting compliance reports.

Benefits for security teams:

Continuous monitoring: Security teams can continuously assess risk, speed up reviews, detect violations, and prevent insecure IaC code from reaching production.
Risk prioritization and alert fatigue elimination: Automatically prioritize risk with rich context so that developers can focus on the violations that are most critical. Easily notify code owners of critical violations with near-real-time alerts on IaC security issues by integrating with existing tools.

Enriched developer experiences: Guide developers to remediate issues quickly, saving time and resources while keeping pace with new security risks and regulatory compliance changes. Security teams can also enforce consistent policies and controls to prevent configuration drift and the tampering of IaC configurations through unauthorized access.

Reduced cross-team friction: Significantly reduce the friction between security and development teams, as security feedback is provided in the development environment (IDE) as the code is composed, giving the developer confidence that the build will not fail due to security violations.

Reduced workloads and consistent security: Automated guardrails reduce the burden on security teams and resources in their efforts to prevent the provisioning of risky code, even if it is not addressed by the developer.

Benefits for the GRC/compliance team:

Continuous compliance assurance: With IaC security controls in place, any code that violates compliance requirements can be flagged and addressed early in the infrastructure lifecycle. Thus, compliance and security processes become streamlined, enabling the compliance team to achieve continuous compliance with minimal effort and manual intervention.

Securing IaC with Zscaler

Zscaler IaC scanning supports popular IaC tools including Terraform. It helps integrate and embed IaC security directly into developer workflows within minutes. Moreover, IaC scanning can: Scan infrastructure as code (IaC) templates (e.g.
HashiCorp HCL, AWS CloudFormation templates, Kubernetes app manifest YAML) before they are committed to source control for default variables, configuration errors, vulnerabilities, and insecure deployments that violate security standards. Benchmark configurations against IaC security best practices and compliance controls. Identify misconfigurations, vulnerabilities, and policy violations, and aid risk prioritization. Integrate with ticketing systems to generate near-real-time alerts on violations and misconfigurations, kick off notification workflows, and provide guidance to developers on remediation and code fixes for rapid resolution and secure deployments.

Conclusion

It is better to automate the IaC security process by embedding IaC security in developer workflows so that security responsibility is shared between developers, security, and GRC teams. It's a win-win scenario for the DevOps, security, and GRC teams: it increases the speed of secure deployment and reduces misconfiguration and compliance errors while improving the organization's overall security posture.

Learn More

Start with a Zscaler IaC demo and a free assessment of the security of your DevOps pipeline.

Fri, 06 May 2022 12:50:52 -0700 Mahesh Nawale

Best Practices for Securing Infrastructure as Code (IaC)

Organizations are rapidly adopting Infrastructure as Code (IaC) to automate the process of deploying, configuring, and decommissioning cloud-based infrastructure. IaC helps avoid configuration drift through automation and increases the speed, consistency, and agility of infrastructure deployments—as compared to traditional IT infrastructure—by allowing infrastructure to be defined as code and enabling repeatable deployments across environments. But like many new technologies and processes, organizations must follow several best practices to ensure that they don't introduce new security risks into their cloud deployments with IaC.
Security challenges with IaC

As IaC usage grows across teams, the chances of configuration errors and other mistakes grow with it, leading to security loopholes. Developers have strong expertise in building applications, but their experience varies when it comes to provisioning, testing, and securing IaC. Minor configuration errors in infrastructure as code can quickly propagate misconfigurations across the entire cloud infrastructure, turning isolated issues into widespread weaknesses. Adhering to several key IaC security best practices is an effective way to secure infrastructure against the risk of cyberattacks and breaches.

Best practices to secure IaC

Here are some of the security best practices for IaC that can be easily integrated into the development lifecycle:

Gain comprehensive visibility into asset inventory

During IaC operations, it is necessary to identify, tag, monitor, and maintain an inventory of deployed assets. Untagged resources should be carefully monitored, as they are difficult to track and cause drift. Whenever resources are retired, their associated configuration must be deleted, and their data should be secured or deleted as well.

Scan IaC templates for errors

The most crucial element of IaC is the template. There is a high likelihood that IaC templates contain insecure default configurations and vulnerabilities. By integrating checks into developer and DevOps workflows, and regularly monitoring IaC templates for misconfigurations, insecure default configurations, publicly accessible cloud storage, or unencrypted databases, developers can find and remediate potential issues before they make their way into production environments. The earlier an issue is identified, the faster it can be addressed. Zscaler has built native integrations into development tools, such as VS Code, as well as into a broad range of the most popular version control systems and CI/CD tools. The result?
Security alerts are raised as soon as issues are identified, directly in native tools and workflows.

Identify and fix environmental drift

Ideally, configurations across developers' environments are uniform. But application owners sometimes need to make modifications to their applications and the underlying infrastructure. As those modifications happen, the configuration of the applications and infrastructure changes, leading to a "drift" between the intended state specified in the IaC template and the observed state actually running in the cloud. Without proper monitoring or tools, the unchecked accumulation of these changes leads to configuration drift, which can leave the infrastructure exposed and create gaps in security and compliance. Sometimes, fixing configuration drift is complicated and can be expensive in terms of business downtime. One of the benefits of converging IaC scanning and CSPM onto a single platform is that it can help identify drift, remediate it, and keep it to a minimum.

Secure hard-coded secrets

Sensitive data such as private keys, SSH keys, access/secret keys, and API keys hardcoded in IaC can provide easy access to underlying services or operations and help attackers move laterally. Hardcoded secrets are commonly mismanaged and can be uncovered with limited effort. Exposed credentials spread through IaC code that is committed to source control (e.g., GitHub) can pose great risk to organizations. The best approach is to prevent these hard-coded secrets from ever making it into the version control system by scanning commits before they are merged into the main branch and/or highlighting the presence of these secrets to the developer in the IDE.

Secure developer accounts

Developers' accounts need to be secured from attackers. It is important to harden and monitor developers' accounts, track changes in IaC configurations, and verify that the changes are sanctioned and intentional.
Unauthorized changes can tamper with IaC templates or configurations and may result in a code leak.

Restrict access to environments

Development environments contain privileged credentials used by human users (developers, tool admins, site reliability engineers, cloud admins, etc.) as well as by applications, automated processes, and other machine and non-human identities. These environments can be complex and unfamiliar to security teams, who are often unaware of how privileged credentials are being used in them, and of how effectively those credentials are being secured—or not. Attackers targeting DevOps tools and platforms can exploit unprotected credentials to gain access to data and other sensitive resources and launch attacks such as cryptojacking and data exfiltration, or cause application downtime. Hence, to secure development environments, both human and non-human identities must be protected. Security teams need a single point of control that enables consistent management of privileged accounts, credentials, and secrets across each of the development and compute environments. This enables them to govern current and future privileged credential usage, detect access configuration issues with the required context, right-size identity access and permissions, and enforce consistent least-privilege policies.

Activate alerting

Don't wait until it's too late. Notifications should be configured to send alerts when code checks fail, which allows misconfigurations to be identified early in the development process. The responsible owner and team members should be notified of failures, and of the process to remediate issues, as soon as they occur so that developers can take care of the problem quickly.
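The alerting practice above can be sketched as a small CI gate. This is a minimal illustration, not a real product integration; `notify()` is a hypothetical stand-in for an email, chat-webhook, or ticketing hook, and the policy names are made up:

```python
# Gate a pipeline on IaC check results: alert the owning team on any
# failure and return a nonzero code so the CI job fails fast.
def notify(owner, failures):
    """Print one alert per failed check; a real system would page or file a ticket."""
    for f in failures:
        print(f"ALERT for {owner}: policy '{f['policy']}' failed in {f['file']}")

def gate(results, owner="platform-team"):
    """Return 1 (fail the CI job) if any check failed, else 0."""
    failures = [r for r in results if not r["passed"]]
    if failures:
        notify(owner, failures)
        return 1
    return 0

# Illustrative check results, as a scanner might emit them:
results = [
    {"policy": "no-public-s3", "file": "storage.tf", "passed": True},
    {"policy": "encrypt-db-storage", "file": "rds.tf", "passed": False},
]
print(gate(results))  # 1 -> a CI wrapper would exit nonzero and block the merge
```

The point of the sketch is the ordering: the owner is told about the exact policy and file at the moment the check fails, while the change is still on a branch.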
Enforce guardrails

Security teams should enforce cloud-native policy guardrails that incorporate checks to secure multi-cloud infrastructure against configuration drift, alert on violations, enforce consistent security policies during build and runtime, and deliver clear guidance to developers on how to resolve vulnerabilities and risks. For instance, one may want the CI/CD build to fail if a certain security threshold is not met.

Benefits of adhering to best practices

Developers are able to detect, track, and fix misconfigurations as part of their normal workflow, before those issues make their way to production, by verifying that IaC templates adhere to security and compliance frameworks. Security and compliance teams can guide developers with guardrails in the tools they are already using to resolve vulnerabilities and risks and ensure secure and compliant deployments.

Reduced complexity: Adhering to best practices gives organizations a comprehensive view of the IaC security landscape and decreases the response time to fix security issues that may arise. With stringent security policies, it becomes harder for attackers to compromise the developer environment and find hard-coded secrets that can lead to breaches. Moreover, guided remediation and prompt notifications help to prioritize risk more effectively with the right context across multiple events and tools.

Putting IaC security best practices to work

The sections above covered at a high level what IaC is, its most common security challenges, and best practices to resolve the associated security risks. Arm your development teams with IaC best practices to help them strengthen IaC security and establish strong collaboration among development, security, and compliance teams—because when security and development teams work together to defend against attacks, drive operational efficiencies, and satisfy audit and compliance requirements, everyone wins.
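The secrets-scanning practice covered earlier can be illustrated with a toy pre-commit check. The two patterns below are deliberately simplistic (the AWS access key ID shape is well known; real scanners combine large rule sets with entropy analysis), and the Terraform snippet uses an obviously fake key:

```python
import re

# Credential-shaped patterns to look for before a commit lands in
# version control. Illustrative only, not an exhaustive rule set.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def scan_for_secrets(text):
    """Return the names of secret patterns found in the given text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

# A Terraform-style line with a fake hard-coded key for demonstration:
snippet = 'variable "key" { default = "AKIAABCDEFGHIJKLMNOP" }'
print(scan_for_secrets(snippet))  # ['aws_access_key_id']
```

Wired into a pre-commit hook or merge check, a scan like this blocks the credential before it ever reaches source control, which is far cheaper than rotating a key after exposure.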
How Zscaler can help with IaC security Zscaler provides a comprehensive solution for securing infrastructure as code (IaC) that helps prevent misconfigurations, code leaks, environmental drift, and other common issues, all in a single integrated platform. Once integrated into the developer workflow, each commit is scanned for issues including hard-coded secrets and potential misconfigurations. Near-real-time alerts and guided remediation are available both through a GUI and within the PR to ensure faster incident response. Overall, it helps detect potential security vulnerabilities in infrastructure code early and fix them before they reach production, minimizing risk and maintaining cloud compliance. Learn More Start with a Zscaler IaC demo and a free assessment of the security of your DevOps pipeline. Fri, 06 May 2022 12:51:09 -0700 Mahesh Nawale Welcome to Networkrassic Park In Jurassic World, where giant dinosaurs ruled, cultivated the land, and maintained the natural balance, these creatures were perfectly suited to an environment in which each served a purpose. Similarly, decades ago in the business world, firewalls, VPNs, switches, and routers roamed the data center, using routable transports to maintain ingress and egress balance and providing security against external threats. This multi-layered security ecosystem protected all the crown jewels under a castle-and-moat trusted habitat. Then the cloud meteor hit the network security world. As applications fled to the cloud and users and data scattered everywhere, the office became a vague notion and the internet became the new corporate network. The extinction of the networkrassic era has begun. Welcome to the cloud age Cloud-first enterprises cannot rely on network security dinosaurs. Legacy firewalls and VPNs are network-dependent, making them slow, expensive to maintain, and ineffective at protecting against sophisticated threats and stopping lateral movement. 
More than 85% of network professionals surveyed said that firewalls are better delivered via the cloud (this is why you need a new approach), and 72% of enterprises are actively adopting a zero trust approach to minimize the attack surface and stop lateral movement. It's pretty simple: don't trust anyone. Only allow verified, direct, and secure access from users to intended applications, regardless of location or transport layer. Consider some additional data points from a recent survey we conducted: 67% strongly agree firewalls are unable to effectively provide fast, secure access for remote users 64% agree that firewalls are unable to prevent lateral movement within a network 75% say it is challenging to manage firewall hardware, upgrades, and deployments* Source: Virtual Intelligence Briefing (ViB) | Networks Security Survey 2021 Digging up fossils keeps you from zero trust Now you might be thinking: how can I tell if my infrastructure is living in the networkrassic world? Here are five signs that will help you identify and dig up fossils in your network: 1 - Crumbling under heavy load: Backhauling all internet-bound user and branch traffic through a corporate network stacked with various security appliances is not giving you the desired results. Users are frustrated, and support tickets are piling up and getting out of hand. 2 - Unable to look for hidden threats: More than 84% of global internet traffic is encrypted, and hackers take advantage. You cannot inspect it, because you know that turning on SSL decryption on firewalls results in severe performance degradation. 3 - Out-of-control management: We all agree that a zero trust approach to network security is the right thing to do, but your current network with firewalls and VPNs makes it almost impossible. To mimic a zero trust architecture, you have to configure hundreds, if not thousands, of policies on a swarm of internal firewalls across the network, cloud, and remote users. 
4 - All-you-can-eat hacker buffet: Users and applications are moving to the cloud quickly. You extend your network perimeter to the cloud to protect them by adding more firewalls. Now you have created an all-you-can-eat buffet for attackers. They can move freely inside your network and now have access to all your cloud applications. You have just increased your attack surface. 5 - Let it through, we'll check it later: Firewalls and VPNs were made to play nice with your existing network plumbing. By design, they adopt a pass-through architecture that lets traffic through without inspecting it for sophisticated attacks. Sure, they can analyze out-of-band traffic and determine malicious intent, but by then it's already too late. Thrive in the cloud age with the Zscaler Zero Trust Exchange The Zscaler Zero Trust Exchange takes a fundamentally different approach that naturally fits the cloud age. It enables secure, direct connections from users to applications without relying on underlying IP-based networks. Imagine your life with no routing complexity, no performance issues, worry-free policy management, protection from the world's largest security cloud, and no dinosaurs. Complete security with: Infinite scale and performance with full inspection on all ports and protocols, including SSL. Secure local internet breakouts with a fantastic user experience. Effortless policy management built for the cloud age. Cloud-delivered, AI-powered protections close to every user, device, and application, not the network. Register for our upcoming webinar, "7 Reasons Why Legacy Firewalls Are Unfit For Zero Trust," to understand what zero trust is and isn't and why firewalls are not built for the cloud age. Fri, 06 May 2022 09:00:02 -0700 Kamil Imtiaz VPN vs ZTNA: Five Lessons Learned by Making the Switch from VPN to Zero Trust Network Access In the late 1990s, VPN technology took the corporate world by storm. 
The network could be extended into every household, and users could work from home as if they were in the office. But just like dial-up modems, pagers, and VCRs, corporate VPNs are a relic of another era and fail to meet the needs of today's cloud- and mobile-first world. One large healthcare company realized it needed to rethink its traditional VPN strategy when the pandemic hit and mass remote work became imperative overnight. Secure was no longer sufficient when it came to remote access: it needed to be secure and usable. Seeking to strike that balance, the company turned to zero trust network access (ZTNA). In my recent conversation with the company's head of information security architecture, he shared five important lessons learned from switching from legacy VPN to ZTNA. User satisfaction skyrocketed with ZTNA compared to VPN Unlike VPN, which requires backhauling user traffic through a corporate data center and slows down internet performance, ZTNA connects users directly to private applications. The company learned that while everyone tolerated VPN, no one actually loved it. With ZPA, user satisfaction shot through the roof thanks to faster, easier access to applications. Users gave rave reviews, with an average rating of 4.8 out of 5.0, compared to 3.0 for VPN. Supporting zero trust is different from supporting traditional VPN With a traditional VPN, users are authenticated once and then placed on the network. It's just as if they were sitting in the office, where they can access everything. But with zero trust, users and devices are continuously validated and only granted access to specific, authorized applications. To get to this concept of minimum necessary access, you need to build profiles for everyone so they can reach only the applications they need to do their jobs. That means a mobile developer is different from a web app developer, and a finance user is different from an IT user. How much risk are you willing to accept? 
Zero trust is a balancing act between the amount of risk you are willing to accept and the effort needed to build and enforce policies. You need to ask yourselves hard questions and think deeply about the answers. How much access is sufficient? What policies do you need for HR? Finance? Legal? Marketing? IT? How many different policies do you need for each group’s different access needs? How much risk can you tolerate? How much management overhead are you willing to take on to achieve that level of risk? Zero trust doesn’t happen overnight. It’s a journey. Implementing zero trust is a continuum of paring back access over time to get to your goal of minimum trust and least-privileged access. This company gave its first zero trust users greater access than they would have liked, but when they compared the level of risk, even then, to that of using VPN, they were still way better off. Initially, they allowed access to *, for instance. Then they used Zscaler’s application analytics capabilities to see who was using which applications and which were the chattiest. With this information in hand, they then narrowed user access and continued to refine policies over time. Zero trust goes beyond secure remote access. Improving the security and user experience of your remote workforce can be the driver for implementing zero trust, but, ideally, you want to instill a zero trust mindset across your entire enterprise. Remote access is an essential component of zero trust, but you should also think about what zero trust means for ALL access. What does it mean in your cloud environments? What does it mean for on-prem access? Ultimately, you want to reduce the attack surface and protect data across your entire organization and every single user, no matter where they reside. In short, rethink the risk and reward of trying to stretch your existing network-centric controls into today's cloud and mobile use cases where they don't serve well. 
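The iterative "paring back" described above, starting broad and narrowing access based on observed application usage, can be sketched in a few lines. The data shapes here are hypothetical illustrations; real ZTNA policies are managed in an admin console, not with a script like this.

```python
def narrow_policy(current_allow, observed_usage, min_sessions=1):
    """Pare back an access policy toward least privilege.

    current_allow:  list of allowed app names, or ["*"] for the initial
                    wildcard grant (a deliberately broad starting point).
    observed_usage: {app_name: session_count} from application analytics
                    (hypothetical format).
    Apps with fewer than min_sessions observed sessions are dropped,
    shrinking the allowed set with each policy review cycle.
    """
    if current_allow == ["*"]:
        # First narrowing pass: keep only apps the group actually used.
        return sorted(app for app, n in observed_usage.items()
                      if n >= min_sessions)
    # Subsequent passes: keep only previously allowed apps still in use.
    return sorted(app for app in current_allow
                  if observed_usage.get(app, 0) >= min_sessions)


# Cycle 1: wildcard shrinks to the apps finance actually touched
finance_allow = narrow_policy(["*"], {"erp": 120, "expenses": 45,
                                      "legacy-tool": 0})
# Cycle 2: an app that fell out of use is removed
finance_allow = narrow_policy(finance_allow, {"erp": 80})
```

Each pass trades a little management effort for a smaller attack surface, which is the risk-versus-overhead balance the questions above are meant to surface.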
If you were designing an architecture from scratch to meet your needs, not just today but in the future, would you use yesterday's technology? Like many of our customers, you're likely to find opportunities to move away from legacy controls and toward a more modern approach that can simplify your zero trust journey. To learn more about making the transition from legacy VPN to ZTNA and how Humana deployed Zscaler Private Access, watch the recent webinar Finding Zero Trust Alternatives to Risky VPNs. Wed, 04 May 2022 08:00:02 -0700 Linda Park AI in Cybersecurity: The Hardware Problem Artificial Intelligence (AI) has gotten a lot of attention in recent years as its adoption has skyrocketed across industries and use cases, effectively forging its way into mainstream technology. Cybersecurity, however, is one area where you may not have heard much about it. Well, actually, you've probably heard about it in cybersecurity, but I'll bet it has sounded a bit buzzword-y, right? Much of the talk about AI has been forward-looking, focusing on what could be rather than what is actually available today. Attackers are increasingly using sophisticated AI to bypass traditional defenses, so it's time for AI to take center stage in cybersecurity. However, there's a clear reason why AI has remained on the tip of every cybersecurity marketer's tongue (myself included) but hasn't been fully realized: the hardware problem. AI's hardware problem From our point of view: Cybersecurity products, past and present, are largely hardware-based, with only a handful being truly cloud-native. Cybersecurity products must be built for and delivered from the cloud to fully realize the benefits of AI and realistically deploy it. The future benefits of AI for cyber defense rely heavily on the adoption of cloud-based cybersecurity solutions. 
In this blog, I'll cover the components of AI's hardware problem in cybersecurity: data; deployment and compute power; out-of-band AI; and how Zscaler is positioned to bring AI to cybersecurity in a meaningful way. There's no AI without good data Data is at the heart of AI; without good data, AI models are inaccurate and essentially useless. Good data spans a wide range of scenarios and is delivered in a constant stream, helping ensure the model can render accurate verdicts. You might be thinking, "my hardware security appliances collect a lot of data, isn't that enough?" The short answer is no; let's talk through what a broad set of good data looks like. Security appliances do collect data so they can perform their intended function; however, that data is "stuck" on a single appliance, or within your organization's set of appliances. Your organization's data, or the data a single appliance can collect, isn't nearly enough for an AI model to continuously learn and adapt from. Even if you store this data in a SIEM for analysis, which is great for incident response and investigation, the data is still "stuck" in that you can't directly enforce security controls based on the AI model's verdicts. Ultimately, the goal is to create a network effect of shared but differentiated data points that gives the AI model more to work with. Security appliances are not built to create such a pool of high-quality data. Deployment and compute power AI is compute-heavy, meaning it takes a lot of computing power for an AI model to run its calculations before reaching a verdict. The security appliances of today may have the necessary compute power to effectively run AI models, but can you say that about appliances from five years ago? Three years ago? Even one year ago? Hardware vendors recognize these limitations and are moving their AI/ML functionality to the cloud, creating a slow, out-of-band analysis and enforcement flow. 
As AI models grow in sophistication, and with them the need for compute power, appliances find themselves consistently behind the needs of today and tomorrow. That leads to our next topic: deployment. It's hard to imagine a world where organizations can realistically keep every single one of their appliances up to par with the requirements of advanced AI. Take a second to think about what it would cost your organization to purchase and consistently update your appliances to meet the needs of AI. Note: security appliances with the necessary compute power for AI usually fall into the "really expensive" category. The burden of managing compute power for AI cannot fall on organizations; it must fall on the vendor to deliver that kind of capability natively. Without this responsibility model, AI will not be put to effective use in cybersecurity. Zscaler & AI Let's bring this all together into how AI can actually work in cybersecurity. The bottom line: cloud-delivered security. Zscaler was born in the cloud and for the cloud, eliminating the hardware problem for AI in cybersecurity. How do we do it? Simply put, Zscaler: Has great data: By analyzing 300+ trillion signals from 200+ billion transactions every single day, we have a robust data set for our AI models to learn from (for reference, that's 35x the number of Google searches per day). We get this data from our customers (network effect) across all industries and geographic regions, which enables our AI models to see a large array of situations, making them better at determining verdicts for their intended purpose. Is cloud-delivered: Gone are the days of worrying about whether your security appliances can handle the compute power necessary to implement AI in your cyber defenses. Zscaler is delivered from the cloud, ensuring that you'll always have the resources necessary to implement AI and that you won't have to go through tedious update or upgrade cycles. 
Performs inline AI and enforcement: Don’t let threats pass through your appliances while out-of-band analysis gives you retroactive verdicts. Zscaler’s revolutionary zero trust architecture terminates every connection inline, performing AI analysis and enforcement of security policy inline. To summarize, cloud-delivered security is required to effectively and realistically bring innovative AI to the cybersecurity market. Without it, there are only empty vendor promises and headaches on the horizon. Zscaler is making massive strides in delivering innovative AI-powered solutions that you’ll want to take advantage of. To learn more about new and exciting AI-powered innovations coming to Zscaler, register for Zenith Live. Fri, 29 Apr 2022 10:00:01 -0700 Adam Roeckl The Four Key Drivers of Data Loss and How You Can Respond Protecting data in the modern business world is no small task. The widespread adoption of cloud-based resources like SaaS apps, the rise of bring your own device (BYOD), and much more have introduced a myriad of new data security gaps. Unfortunately, legacy tools lack the needed functionality to address these novel challenges. They were built for a long-gone era of security when use cases revolved around on-premises-only users, apps, and data; not to mention the performance and scalability issues that arise with legacy tools’ appliance-based architectures (you can read more about why that matters here). In particular, there are four primary data leakage paths that organizations must address if they are to stay secure as they embrace digital transformation. You can find them below along with the technologies that are needed to address them—technologies that are readily available with Zscaler, which was designed from the beginning to secure any transaction and address the data protection needs of the future. 
Data loss to the internet The internet has become the new corporate network, connecting users around the world to resources like websites and SaaS apps so that they can do their jobs. But this gives ample opportunity for data to leak to these web destinations, particularly for organizations attempting to backhaul traffic to security appliances that lack scalability and provide little to no inspection of SSL traffic. With the vast majority of web traffic now encrypted, this is a significant issue when it comes to securing data. Full SSL inspection at scale is critical for stopping leakage today. Zscaler is built upon the world's largest security cloud, with over 150 data centers around the world processing over 200 billion transactions daily. As a result, Zscaler's inline DLP can easily scale to inspect all SSL traffic for data loss. With advanced measures like EDM, IDM, and OCR, the integrated solution can find and secure any sensitive data in motion while reducing IT complexity and removing the need for DLP point products. Data loss to unmanaged devices From employee personal devices (BYOD) to the third-party endpoints of business partners, unmanaged devices are often used to access IT resources like SaaS apps. Once these devices download data, however, IT loses control. Unfortunately, choosing how to respond can be challenging: blocking such devices disrupts enterprise operations, installing security software on them is rarely feasible, and agentless reverse proxies frequently break and hamper user productivity. Agentless Cloud Browser Isolation (of which Zscaler is the pioneer and continued innovator) enables unmanaged-device access that enhances productivity while preventing leakage, without the need for agents or reverse proxies. Users' app sessions are isolated within the Zero Trust Exchange (Zscaler's security cloud), and only a stream of pixel-perfect images is sent to the endpoint. 
This ensures a native user experience while securing access to apps and preventing functions like download, copy, paste, and print—no data is able to be pulled down to the end user device. Data loss via risky sharing within SaaS SaaS applications are an incredible boon to enterprise productivity and are designed to facilitate collaboration and sharing. While this enhances organizational dynamism, it can also lead to unauthorized oversharing if the proper security measures are not put in place. Because of this, organizations must be sure that they can identify and respond to risky shares of sensitive data at rest in the cloud. This is a common cloud access security broker (CASB) use case. Zscaler’s multi-mode CASB is complete with API integrations for scanning apps and their contents. This allows Zscaler to leverage its leading DLP to classify data at rest, and automatically revoke shares of sensitive information. This out-of-band functionality can also be used to find and remediate malware and ransomware at rest. With Zscaler, customers get CASB functionality as one part of a leading security service edge (SSE) platform that is complete with secure web gateway (SWG), zero trust network access (ZTNA), and more, alleviating the complexity and cost of point products. Data loss via poor cloud resource security posture Cloud resources like SaaS, IaaS, and PaaS instances need to be set up properly if they are to function correctly and securely. But these solutions’ deployments can be complex and are often performed by people who are not security experts, leading to poor security postures. Specifically, misconfigurations and excessive permissions can easily expose data. News headlines about public-facing S3 buckets leading to breaches are an all-too-common illustration. A variety of technologies are needed to address these challenges—technologies which can be placed under the umbrella of posture management or larger DevSecOps approaches like CNAPP. 
With Zscaler, organizations receive market-leading SaaS security posture management (SSPM) for finding SaaS misconfigurations, cloud security posture management (CSPM) for finding IaaS and PaaS misconfigurations, and cloud infrastructure entitlement management (CIEM) for finding risky or excessive permissions. Together, these capabilities can protect public cloud data and workloads. Where to go from here The four key drivers of data loss shared above must be addressed if enterprises today are to remain secure. While it can feel overwhelming to tackle these challenges, Zscaler is a complete data protection platform that has made the endeavor simple and pain free for thousands of global organizations. Take a look at SANS’ review of our data protection offering to see what we can do in more detail. It’s a thorough walkthrough of our solution, complete with screenshots of configurations in the user interface. There is also a corresponding webinar. Or, to see short video demos of features like those discussed above, click here. Thu, 28 Apr 2022 09:00:02 -0700 Jacob Serpa The Top 3 Lessons We’ve Learned from Embracing the Zero Trust Exchange When you’re in the business of producing confections that bring a ray of sunshine into people’s everyday lives, disrupting your manufacturing operations to recover from a malware attack is unacceptable. That’s why we turned to Zscaler to improve our security posture at Baker & Baker a little more than five years ago. After our initial deployment of Zscaler Internet Access (ZIA) and, later, Zscaler Private Access (ZPA), we’ve progressively expanded our Zscaler deployment to protect and streamline our enterprise, including Industry 4.0 processes on our shop floors. Along the way, we learned important lessons that we continue to apply today. In this blog, I’d like to share our top three takeaways in hopes that they can assist you during your Zscaler journey. 
Lesson one: invest in premium support services – it's totally worth it Although Zscaler solutions are amazingly easy to deploy, we discovered that properly preparing our policies and our IT environment paid significant dividends. We gained this insight because we elected to invest in premium support from the start, which netted us a Technical Account Manager (TAM). Like all Zscaler TAMs, ours is drawn from a pool of experienced senior support engineers and functions as an extension of our internal team. He immediately took a deep dive into our IT environment and then provided us with invaluable advice. We credit our TAM with ensuring we could complete our initial ZIA deployment within a month across 4,500 users in 48 countries worldwide. That's fast by any enterprise technology measure, and we strongly recommend taking this path at the outset. Lesson two: remember your end users deserve great security experiences too Deploying ZIA and ZPA was life-changing for our IT and InfoSec teams. Threat incidents plummeted, management frustrations melted away, and infrastructure savings skyrocketed. As technologists, our experiences were nothing short of amazing. However, we also realized our existing troubleshooting tools couldn't give us sufficient insight into user experiences to isolate issues or proactively fine-tune experiences. So we turned to our TAM for an answer. When we learned Zscaler Digital Experience (ZDX) was in the pipeline, we made plans to adopt it as soon as it became available. To say it's been everything we'd hoped for is an understatement. ZDX enables us to reduce days, weeks, and, in some cases, months of issue troubleshooting to minutes or seconds. ZDX even helped us get to the root cause of a long-term performance challenge at one of our locations. In short, deploying ZDX has been an investment that's rewarded us, ensuring there's no tradeoff between user experience and security. 
Lesson three: keep building out your platform to maximize business and IT value At the start of our Zscaler journey, we set out to solve two pain points: getting beyond the limitations of firewall technology to ensure our workers could securely access cloud-delivered applications (ZIA) and eliminating VPNs to significantly improve worker productivity while reducing our attack surfaces and protecting against lateral movement within our network (ZPA). However, as the wins mounted with each new Zscaler adoption, we experienced the benefits of taking a platform approach and embraced the Zscaler Zero Trust Exchange. We’ve demonstrated the value to our company leadership – not just upon deployment, but continuously over time as new needs have arisen – and we intend to stay the course. If you’re new to Zscaler, we encourage you to look beyond the individual use cases for securing your IT infrastructure—whether cloud-based or a hybrid—and consider your situation holistically. This reduces the likelihood you’ll adopt point-based solutions that add cost and complexity. Instead, you get fully-integrated solutions with innovations that are manageable from a single intuitive dashboard. In good company As we’ve watched Zscaler spread to many more world-class enterprises, from some of the most venerable brands to rapidly-expanding start-ups, we’ve also benefited from the robust user community that Zscaler supports at every level, from engineers and infrastructure administrators to technology directors and CXOs. Overall, the Zero Trust Exchange platform has proven to be a game-changing solution at our enterprise that has delivered consistent performance, scalability, and measurable value. We fully expect Zscaler’s pace of innovation will continue and can’t wait to see what comes next. To learn more, I invite you to read the accompanying case study about how our zero trust journey in partnership with Zscaler is helping us achieve our business goals. 
Tue, 26 Apr 2022 23:00:02 -0700 Steffen Erler Beyond the Perimeter 2022: Defending Against Ransomware with a Zero Trust Ecosystem Accommodating a hybrid workforce is now a reality for many organizations. But giving remote workers anywhere, anytime, any-device access to enterprise assets and data raises a new systemic challenge for security teams. Hybrid work disperses users and data across locations, resulting in an ever-expanding attack surface. Eager to take advantage, threat actors have changed tactics specifically to exploit strained enterprise defenses and sneak past traditional security solutions. Ransomware attacks, in particular, have increased 500% year over year, with high-impact, headline-making incidents continuously growing in volume and scope. In other words, it's time to get serious about zero trust. In the insecure, boundary-free world of hybrid work, cybercriminals are increasingly bold and are developing more sophisticated attacks that result in bigger ransom payments. No industry is off-limits anymore, and it's no longer a question of if you'll be attacked… it's when. Unfortunately, outdated anti-malware and anti-ransomware tools are simply incapable of handling the complex threatscape of the modern hybrid workforce. Modern cyberattackers use sophisticated tactics to bypass conventional ransomware detection, often embedding attacks in trusted and encrypted traffic. The global surge in sophisticated ransomware threats, including nation-state attackers and dedicated ransomware gangs, demands nothing less than the comprehensive defense strategy that only an extended zero trust architecture offers. To safely and responsibly provide the frictionless remote access needed to support hybrid work, enterprises need a new, and completely different, approach to cybersecurity. The best time to transition to zero trust was a year ago, but if your company is not there yet, the second-best time is right now. 
At Beyond the Perimeter 2022, Zscaler and CrowdStrike will demonstrate how they work together to guard hybrid enterprises against new, sophisticated attacks, establishing a secure dynamic perimeter that supports hybrid workforce productivity, powered by sophisticated technology for thwarting fast-evolving cyberthreats. This event features integration demos and real-life scenarios that show how our joint solution prevents, protects against, and remediates ransomware and other threats, no matter where you conduct business. Additionally, industry leaders will share the holistic approaches they use to keep hybrid work flowing while safeguarding enterprise assets, data, and identities. Beyond the Perimeter 2022 will showcase how network and security leaders can prevent ransomware by helping their organizations easily navigate the road to zero trust. Private drive. Protect your enterprise with the industry's most comprehensive zero trust platform. It delivers all key security controls as an edge service, close to every end user, branch, or enterprise headquarters. Dead end. Eliminate the risk of lateral movement by directly connecting users and devices to apps, not the network. Merge with caution. Employ the industry’s most holistic data protection solution that spans managed and unmanaged devices, servers, public cloud, and cloud apps. Together, Zscaler and CrowdStrike provide a best-in-class, cloud-delivered, end-to-end joint solution purpose-built for modern cloud-first enterprises. Ironclad protection across workloads, endpoints, applications, and identities supports efficient and frictionless hybrid work no matter where your users are or what device they’re using. The collaboration provides adaptive, risk-based access control to all applications and shares telemetry and threat intel between the platforms, enabling the zero-day malware detection you need to significantly reduce risk. 
The joint solution gives your admins real-time, contextualized insights into the evolving threat landscape, allowing them to dynamically change access policies based on user context and device security posture. As a part of the CrowdStrike XDR Alliance, Zscaler integrates directly with CrowdStrike to give organizations like yours extended detection and response (XDR) powered by zero trust, with end-to-end visibility based on shared telemetry across endpoints, networks, and cloud applications. Come to Beyond the Perimeter 2022 on May 10th to discover exactly how Zscaler and CrowdStrike work together to deliver comprehensive zero trust protection for applications and endpoints while cutting through the traditional complexity of managing a mobile workforce. Join us and see for yourself how you can increase visibility, harden security, and reduce detection and remediation time by going "Beyond the Perimeter" with Zscaler and CrowdStrike. Register now. Mon, 18 Apr 2022 09:00:01 -0700 Leena Kamath The Next-Gen Firewall is Dead. Long Live Cloud-Gen Firewall! For a large part of the last two decades, I have been designing, developing, and deploying firewalls. Initially, the industry was happy with 5-tuple, port-based stateful firewalls. In the mid-2000s, next-generation firewalls were born, adding other dimensions such as users, groups, and applications. URL filtering and threat and data protection techniques evolved and became integral add-ons to the next-generation firewall. But as applications moved to the cloud and employees logged in from anywhere, these next-generation firewalls soon became ineffective, requiring a third wave of evolution: the cloud-generation firewall. So, why do next-gen firewalls need to be replaced, aside from being regarded as a "last-generation" solution? And what can replace them? We'll answer these questions from the point of view of security and network operations teams. 
Firewalls cannot do zero trust First, let's begin with the fact that next-gen firewalls do not conform to zero trust principles. The most basic tenet of a zero trust architecture is least-privileged access: the idea that no entity, whether a security solution, a network, or even a person, should ever be granted inherent trust. Firewalls simply can't do this at the level it takes to inspect encrypted traffic while making access decisions for users working on various devices, from a myriad of locations, over countless unprotected networks. In contrast, the Zscaler Zero Trust Exchange quickly and securely connects a user or device to a specific application or workload by leveraging least-privileged access defined by context-based identity and policy enforcement, allowing employees to work from anywhere without ever placing them on the corporate network. Without this, the risk of lateral movement is too pronounced. Next-gen firewalls lead to broad network access and unwanted lateral movement Traditionally, enterprise firewalls provide zone-based network segmentation and implement rudimentary anti-spoofing techniques. If an internal IP address originates from the internet or Demilitarized Zone (DMZ), it is deemed “spoofed” and is blocked. However, zone-based segmentation still allows broad network access and lateral movement by “allowing” intra-zone traffic. Unfortunately, if an attacker has access to one DMZ server, they have access to all of them. Therefore, a compromised internal print server could propagate malware across all users and devices within the “trust zone” through implicit intra-zone “allows.” In fact, “trust zone” is a misnomer and a contradiction of the zero trust principle mentioned above. Next-gen firewalls fall short when preventing compromise Physical firewalls and appliances are incredibly prone to misconfigurations. In fact, Gartner reports that 95% of firewall breaches are caused by misconfigurations. 
According to one firewall management vendor, some of the most common misconfigurations include lax 'allow any-to-any' service rules, open SMTP access, inbound ICMP or ping, and so on. Moreover, running multi-vendor firewalls amplifies misconfiguration risk and can do more harm than good. Because firewalls have to be connected to the internet, they themselves are vulnerable. Moreover, allowing inbound HTTP/HTTPS can open the door to distributed denial-of-service (DDoS) attacks, even if inbound access is permitted to just one server at a branch location. Additionally, enterprise firewalls are not web application firewalls and do not have load balancers to bear the brunt of DDoS attacks. Shodan, a search engine popular with hackers, crawls the web to list exposed and vulnerable devices, shining a light on the number of exposed devices, including firewalls with default passwords and wide-open services. As long as humans configure firewalls, misconfigurations may never go away. The move to cloud applications further underscores the need for cloud protection. So, what is the recommended approach? A cloud-gen firewall that provides zero trust security helps reduce the attack surface on-premises. Point your users, devices, and edge routers to the Zscaler Zero Trust Exchange. The cloud-gen firewall in the Zero Trust Exchange examines all traffic—both web and non-web. As a default gateway, it enforces the right access policies and hands off traffic for additional web security and threat and data protection. It is less error-prone thanks to centrally orchestrated, uniform policies, and it is much better at preventing compromise alongside the DNS and IPS Control services. All services are inline, and inspection happens using Single-Scan, Multi-Action (SSMA) technology, which ensures that there is no incremental latency in inspecting packets. Compliance challenges An enterprise firewall is a hardened device, and may have additional compliance enforcements like Common Criteria. 
But most firewalls are not subject to proactive security compliance checks. Sure, security teams commission penetration testing on firewalls. But how proactive and effective is it in every enterprise? Compare this with a fully certified cloud that is ISO 27001, FedRAMP, SOC 2, and CSA STAR compliant. No single appliance compliance standard can offer this level of confidence. Lack of horizontal scale, reduced availability If you look at a typical firewall datasheet, you will see limits of X Mbps. Once you turn on TLS inspection, this listed performance drops by about half. Threat prevention further brings it down to one-third of its original performance. With 90% of traffic being TLS-encrypted, your firewalls will always deliver a fraction of their advertised performance. Sudden bursts of new sessions/sec can also lead to congestion and latency, as firewall appliances do not scale horizontally like a cloud service. Assuming quarterly planned and unplanned patching and maintenance windows of 1-3 hours each, that adds up to at least 6-12 hours of downtime per year, which works out to an availability of 99.86% to 99.93% at best. A redundant high-availability firewall pair improves the situation slightly, but it is not enough. Compare this with a cloud service offering effectively infinite horizontal scale. Sure, there are limitations in cloud hardware too. But they can be easily overcome by adding more load balancers and public service edge instances. Zscaler is ISO 27001-certified and provides 99.999% availability guarantees, with additional SLAs on latency and security. No other vendor can match this. Check out our guide to demystifying cloud SLAs for more insight.

Comparison: Legacy Firewalls vs. Zscaler Zero Trust Exchange

Users & Devices
- Legacy firewalls: When users move, firewalls do not move. Road warriors are not protected by legacy firewalls.
- Zero Trust Exchange: Users connect to any of the 150+ data centers worldwide, protecting users and devices anywhere, anytime.

Scale & Performance
- Legacy firewalls: Limited by fixed scale, with performance that drops when SSL and threat inspection are added.
- Zero Trust Exchange: Load balancers distribute new sessions/sec to service edges based on user proximity, current performance, and scale.

Operational Cost
- Legacy firewalls: Patching and maintenance result in loss of availability and high operational costs, including specialized IT staff at every location to maintain, patch, and upgrade firewalls.
- Zero Trust Exchange: Based on studies from the report Secure Cloud Transformation: The CIO’s Journey, enterprises save on average 50-85% on security appliances such as firewalls at the branch when they move to the Zscaler Zero Trust Exchange. Zscaler engineers maintain, patch, manage incidents, and issue advisories for all cloud services per cloud SLAs, eliminating the operational cost of maintaining firewalls at every site.

It is prime time for cloud-gen firewalls on the Zscaler Zero Trust Exchange. So, when are you replacing your next-gen firewall? Fri, 15 Apr 2022 09:00:01 -0700 Anusha Vaidyanathan The Latest Sandworm Botnet Attack Shows Why Firewalls Can’t Do Zero Trust US Attorney General Merrick Garland announced Wednesday that US officials have disrupted a two-tiered global botnet of thousands of infected firewall devices allegedly controlled by the threat actor called Sandworm, which has previously been connected to the Main Intelligence Directorate of the General Staff of the Armed Forces of the Russian Federation (the GRU). The attack operation effectively converted the infected firewalls into malicious hosts to be used for command and control of the botnet. Sandworm has a long history of globally disruptive malicious cyber activity, and is attributed with such campaigns as NotPetya in 2017 and attacks against the Winter Olympics and Paralympics in 2018. This latest botnet is known as Cyclops Blink, and is an evolution of the VPNFilter botnet framework. 
VPNFilter was the fourth-most popular IoT malware payload in Zscaler ThreatLabz’ 2021 study of IoT devices, despite its operations being severely disrupted by the US Justice Department in 2018. In a statement, the US Justice Department said that they “copied and removed malware from vulnerable internet-connected firewall devices that Sandworm used for command and control (C2) of the underlying botnet.” The threat actor targeted firewalls built by WatchGuard and ASUS, both of whom released guidance on how to detect and remediate issues related to the malware. Despite the remediation work done by the Justice Department, as of mid-March, the DOJ said “a majority of the originally compromised devices remained infected.” When Firewall Security Backfires The Cyclops Blink botnet is just the latest example of why firewalls are inadequate for modern enterprise security. Firewalls are designed to keep threats out by securing the network perimeter. This is based on an outdated “castle-and-moat” security model that relies on implicit trust: everything inside of the perimeter is trusted, and everything outside of the perimeter is untrusted. But firewalls are vulnerable to exploits just like any other device. Internet-connected firewalls, such as the ones used in this attack, are easily discoverable by any attacker with an internet connection, giving adversaries easy access. It’s horrifying to think of a security tool not just failing, but actually being used as a host for malicious activity. But having the device taken over for use in a botnet is not nearly the most damaging potential consequence to a victim organization. Firewalls rely on networks by design, and in fact force network connections – either physically in an office, over MPLS from a branch, or remotely via VPNs. Once an attacker is on the network, they have all the access that your legitimate users have, and can move laterally to network assets or downstream devices. 
This allows the delivery of malware and ransomware, theft of data, or access to applications. “These network devices are often located on the perimeter of a victim’s computer network, thereby providing Sandworm with the potential ability to conduct malicious activities against all computers within those networks,” the Justice Department explained. 64% of security decision-makers feel that firewalls are unable to prevent lateral movement within the network. Source: Virtual Intelligence Briefing (ViB) Networks Security Survey 2021 Firewalls don’t even have to be exploited to be vulnerable – often they can be bypassed outright. With more than 80 percent of attacks now happening over encrypted channels, inspecting encrypted traffic is more critical than ever. However, firewalls and their pass-through architectures are not designed to inspect encrypted traffic inline, making them incapable of identifying and controlling data in motion and data at rest. As a result, many businesses allow at least some encrypted traffic to go uninspected, increasing the risk of cyberthreats and data loss. Additionally, organizations no longer operate within a predefined perimeter that can be easily ring-fenced with firewalls. Applications, users, and data are everywhere, and are too often exposed to the internet where they can be exploited by malicious actors. Virtual firewalls and other cloud-based perimeter tools attempt to secure these use cases, but are no different from their physical hardware counterparts—the location of the firewall moves from the data center to the cloud, but the overall security model remains the same, and carries the same security, scalability, and performance downsides. Even worse: by putting virtual machines (VMs) in the cloud, you are actually expanding the attack surface outward, making it possible for attackers to exploit your cloud assets. 
A Better Approach: Zero Trust Just about every security vendor these days will tell you that they enable “zero trust,” because they know it’s what organizations need to protect their distributed businesses from increasingly sophisticated threat actors. As NIST states it, “Zero trust is a cybersecurity paradigm focused on resource protection and the premise that trust is never granted implicitly but must be continually evaluated.” Based on the above definition, firewalls, by their very nature of network reliance, cannot do zero trust. Any concept of a ‘trusted network’ is in direct opposition to zero trust principles. And by using security models that include implicit trust, you’re taking on unnecessary risk. A true zero trust architecture connects your users only to the data and applications that they need, without exposing anything else. Establishing a zero trust architecture requires visibility and control over the environment's users and traffic, including that which is encrypted; monitoring and verification of traffic between parts of the environment; and strong multifactor authentication (MFA) methods beyond passwords, such as biometrics or one-time codes. In a zero trust architecture, a resource's network location is no longer the biggest factor in its security posture. Instead of rigid network segmentation, your data, workflows, services, and such are protected by software-defined microsegmentation, enabling you to keep them secure anywhere, whether in your data center or in distributed hybrid and multicloud environments. The Zscaler Zero Trust Exchange Zscaler delivers zero trust with its cloud-native platform, the Zscaler Zero Trust Exchange. Built on proxy architecture, the Zero Trust Exchange directly connects users to applications, and never to the corporate network. 
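To make the brokered, least-privileged access model concrete, here is a minimal conceptual sketch of a context-based access decision. The policy table, posture labels, and function names are illustrative assumptions for this blog post, not Zscaler's actual implementation; a real broker evaluates far richer context.

```python
# Conceptual sketch of a zero trust access broker: every request is
# evaluated against identity, group membership, and device posture,
# and the default answer is "deny". All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool
    user_groups: set
    device_posture: str  # "unmanaged", "managed-at-risk", or "managed-healthy"
    app: str

# Hypothetical per-app policy: allowed groups and minimum device posture.
POLICY = {
    "payroll": {"groups": {"finance"}, "min_posture": "managed-healthy"},
    "wiki": {"groups": {"finance", "engineering"}, "min_posture": "managed-at-risk"},
}

POSTURE_RANK = {"unmanaged": 0, "managed-at-risk": 1, "managed-healthy": 2}

def decide(req: Request) -> str:
    """Return 'allow' or 'deny' for a single connection request (default-deny)."""
    rule = POLICY.get(req.app)
    if rule is None or not req.user_authenticated:
        return "deny"  # unknown app or unauthenticated user: no access
    if not (req.user_groups & rule["groups"]):
        return "deny"  # user is not in any permitted group
    if POSTURE_RANK[req.device_posture] < POSTURE_RANK[rule["min_posture"]]:
        return "deny"  # device posture below the app's minimum
    return "allow"
```

The point of the sketch is the shape of the decision: a finance user on a healthy managed device reaches the payroll app, while the same user on an unmanaged device, or a user outside the finance group, is denied, because access is granted per app and per context rather than per network.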
The Zero Trust Exchange sits as a policy enforcer and decision maker in between endpoints or other entities that are trying to connect (at the bottom of the below graphic) and the resources that they are trying to connect to, such as the internet and applications (at the top). The Zero Trust Exchange applies policy and context in a variety of ways to come to an enforcement decision, then brokers authorized connectivity to the requested resource. This architecture makes applications non-routable entities which are invisible to potential attackers, so your resources can’t be discovered on the internet. It reduces the attack surface, prevents lateral movement, inspects and protects all traffic, and stops sensitive data from leaving to suspicious destinations. The Zero Trust Exchange delivers cloud-native, transparent zero trust access—offering seamless user experience, minimized cost and complexity, increased visibility and granular control, and enhanced performance for a modern approach to zero trust security. Zscaler’s leading zero trust network access (ZTNA) offering is one of the reasons that we were rated highest in execution in the 2022 Gartner® Magic Quadrant™ for Security Service Edge (SSE). For more on this topic, watch this webinar: Why Firewalls Cannot do Zero Trust. You'll learn what zero trust is, what it isn’t, and how you can reduce your risk of falling victim to attacks like Cyclops Blink. Fri, 08 Apr 2022 00:06:02 -0700 Mark Brozek The Top 5 Benefits of a Cloud-Native Application Protection Platform (CNAPP) CNAPP platforms help an enterprise integrate security principles and standards across the development lifecycle by implementing security controls at each stage—development, integration, deployment, and production operations. CNAPP can be used to consolidate security tools while providing increased visibility into enterprise workloads and control over security and compliance risks in cloud environments. 
The overall idea is to identify security issues as early as possible, which helps save costs, avoids costly rework, and ensures that cloud workloads are “born secure,” having been secured prior to deployment. According to the Gartner Innovation Insight for Cloud Native Application Protection Platforms report, “CNAPPs are an integrated set of security and compliance capabilities designed to help secure and protect cloud-native applications across development and production.” CNAPP replaces several cloud security point products CNAPP consolidates many of the most important features from siloed point products into one streamlined platform that comprehensively identifies and helps mitigate cloud risks. CNAPP provides functionality previously implemented by:

- Cloud Security Posture Management (CSPM)
- Cloud Infrastructure Entitlement Management (CIEM)
- Infrastructure as code (IaC) security
- Data protection
- Vulnerability scanning
- Compliance and governance (part of CSPM)
- Cloud Workload Protection Platform (CWPP)

Cloud Security Posture Management (CSPM): CSPM continuously scans cloud environments, surfacing potential threats, ensuring adherence to compliance policies, and reducing risk. It offers comprehensive controls across cloud infrastructure, resources, data, and identities. Cloud Infrastructure Entitlement Management (CIEM): CIEM secures human and machine identities while enforcing a least-privilege access model—significantly reducing the risk of breach from internal and external sources. Infrastructure as code security (IaC security): Also known as shift left, IaC security empowers developers to deliver code securely by integrating security with developer and DevOps workflows to identify and fix vulnerabilities and compliance issues before they move into production. Data protection: A data protection module combined with data loss prevention helps secure confidential data across multiple cloud repositories while maintaining visibility, control, and compliance. 
Cloud Workload Protection Platform (CWPP): CWPP secures hosts, containers, virtual machines, and serverless functions across the full application lifecycle. The top five benefits of CNAPP Data breaches, zero-day vulnerabilities, and compliance violations continue to grow, making it imperative for enterprises undergoing digital transformation or building new cloud apps to streamline security processes that can identify and remediate application vulnerabilities early in the development process, rather than incurring the high cost of remediating issues in production or, even worse, recovering from a breach. Siloed cloud security tools struggle to provide complete coverage since they only focus on single aspects or specific risks. CNAPP integrates end-to-end cloud-native security to identify, prioritize, and remediate the most critical security risks. The top five benefits of implementing a CNAPP include: 1. Bringing it all together Challenge - Securing public cloud environments, applications, and confidential data requires strong collaboration between different teams: security, development, infrastructure, and operations. Unifying these teams and their processes can be challenging, as undefined roles and policies can lead to gaps in security. Benefit - CNAPP delivers a unified approach to securing heterogeneous, cloud-native applications deployed across distributed clouds. It brings all team members together on a single platform, improving collaboration and efficiency. It identifies and correlates minor issues, individual events, and hidden attack vectors into powerful, unified, and intuitive visual attack flow graphs, with quick alerts, recommendations, and remediation guidance so that security and non-security experts alike can make informed decisions. 2. Reduce costs and operational complexity Challenge - Multiple, non-integrated traditional security tools create complexities that lead to security gaps and increased overheads. 
Benefit - CNAPP platforms help enterprises replace multiple point products–CSPM, CIEM, CWPP, vulnerability scanning, IaC scanning, DLP, and CMDB–with a complete picture of critical risk via comprehensive visibility into configurations, assets, permissions, code, and workloads. They improve team efficiency by analyzing millions of attributes to prioritize the risks the security team should focus on first, while reducing the noise, complexity, and cost of maintaining point solutions. 3. Comprehensive cloud and services coverage Challenge - Enterprises rely on multiple clouds to deploy applications and run workloads. As a result, using the native security controls of the different public cloud providers results in limited visibility and a diversified collection of tools that creates security silos, varying levels of protection, inconsistent security policies, and fragmented reporting of a diverse threat landscape. Benefit - CNAPP provides visibility and insights across the entire multi-cloud footprint, including both IaaS and PaaS services, extending across VM, container, and serverless workloads and into development environments to identify risks early in the deployment cycle. It continuously monitors cloud resources for misconfigurations, vulnerabilities, and other security threats, and enforces consistent security and compliance policies. 4. Security at the speed of DevOps Challenge - Rapid release cycles can cause coding mistakes to go undetected and be exploited. In a traditional development environment, the security team performs security testing after the development stage, before sending the application into production. This waterfall-like process can be very time consuming and slows down the fast pace of the DevOps process. Security teams find it difficult to keep up with the pace of deployments and continuously changing environments with limited resources. 
Benefit - CNAPP platforms typically integrate with popular IDE platforms like VS Code and DevOps tools such as GitHub, Jenkins, and more to identify misconfigurations or compliance issues during development and CI/CD, giving security and developer teams a chance to investigate and remediate risks before they are exploited by bad actors and cause significant disruption. They also integrate with SecOps ecosystems such as ServiceNow, Zendesk, and Splunk to trigger alerts, tickets, and workflows on violations so that teams can act immediately and effectively with the embedded remediation guidance. As a result, cloud environments remain safer, and enterprises can deploy new programs with minimal disruption. 5. Guardrails help distribute ‘security’ responsibility Challenge - DevOps teams expect the freedom to innovate with security automation so that security does not become a bottleneck. Security automation helps to identify vulnerabilities sooner, saving time for developers and DevOps engineers. DevOps environments are usually complex, with different platforms, code bases, open source components, and languages. Credentials, tokens, and SSH keys are openly shared in this environment. Even applications, containers, and microservices share passwords and tokens. DevOps engineers and developers aren’t usually security experts, with many unaware of enterprise or industry-specific security policies and the latest compliance mandates and associated penalties. Benefit - CNAPP helps integrate security principles and standards in the DevOps cycle, i.e., injecting security controls at each level of the DevOps cycle, with native integrations into existing development and DevOps tools. The result is that infosec teams are able to implement much-needed guardrails that developers are able to take ownership of in their day-to-day work, reducing unnecessary noise and friction between security and the DevOps team. Making security a shared responsibility is a recipe for success. 
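As a rough sketch of the CI/CD integration described above, an IaC scan can run on every pull request and fail the build on high-severity findings. The workflow below uses real GitHub Actions syntax, but the `iac-scanner` CLI and its flags are hypothetical placeholders standing in for whatever scanner your CNAPP vendor provides.

```yaml
# Hypothetical CI job: block merges that introduce high-severity
# IaC misconfigurations. "iac-scanner" is a placeholder CLI, not a
# specific product's interface.
name: iac-security-scan
on: [pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Scan infrastructure-as-code templates
        run: iac-scanner scan ./infrastructure --fail-on high
```

Gating the pipeline this way is what makes the guardrail enforceable: a misconfigured Terraform template never reaches production, and the developer gets the finding in the same pull request where the change was made.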
Learn more about best practices for securing cloud-native applications, infrastructure, and data. Talk to our experts today! Thu, 07 Apr 2022 10:35:27 -0700 Mahesh Nawale Machine Identity in the Cloud - Bypassing All Security Controls Modern public cloud environments provide great flexibility, agility, and benefits to companies of all sizes. In addition to operational benefits and cost reductions, the public cloud offers great security benefits, if managed properly. If not, the scale and flexibility of the public cloud can cause a severe security risk. Management of cloud identities, both human and non-human, is a key area of focus for organizations looking to have their cloud deployments act as a security asset rather than a liability. In this blog, we’ll review the concept of machine identity - what it is, why it is important, and the security risks and benefits associated with it. Machine identity - why do we need it? In multi-cloud environments, machine identities, such as virtual machines, serverless functions, roles, containers, applications, scripts, etc., play a pivotal role in driving digital transformation. They help enterprises scale up workloads, run scripts, patch holes, complete repetitive tasks effectively, and increase productivity at the speed of agile DevOps, at lower cost and with fewer errors. With these capabilities, machine identities are often tasked with advanced responsibilities where they make decisions on behalf of human identities that are part of autonomous and automated processes. The recent increase in machine identities requires new ways of managing risk. The average enterprise today has more than 1,000 applications in use, often supporting tens or hundreds of thousands of machine identities—each with varying access requirements that change constantly based on business needs. 
This is a lot to keep track of for a fast-moving enterprise, but pair this with numerous human identities and a complex multi-cloud environment, and the security challenge is considerable. Traditional methods of adequately tracking, managing, and protecting identities no longer work in the case of machine identities. According to Gartner, "Managing machine identities is becoming a critical security capability" Source: Smarter With Gartner, “Top 8 Security and Risk Trends We’re Watching”, 15 November 2021. According to Gartner, “It is impossible to keep pace with this change, and therefore manual methods for determining least-privilege access are neither feasible nor scalable. To address this adequately, organizations need a more identity-centric view of their cloud infrastructure entitlements. Furthermore, as organizations begin to understand appropriate access, the ability to efficiently remove unneeded entitlements and adjust access policies is essential.” Source: Gartner, “Managing Privileged Access in Cloud Infrastructure”, Paul Mezzera, Refreshed 7 December 2021, Published 9 June 2020. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. It is essential for enterprises to recognize where and how machine identities are used in multi-cloud environments and to put the necessary systems and processes in place to manage them properly. Failure to do so increases the attack surface, exposes critical resources to security and compliance risk, and jeopardizes business operations. Enterprises using multiple cloud platforms sometimes utilize cloud-native security tools provided by cloud service providers to manage and control human and machine identities, access, and permissions across infrastructure, applications, and data. But relying on native tools cannot provide a single unified view of access or support a global set of cloud access policies. 
Enterprises may not be able to address some of the challenges related to machine identities, such as fine-grained visibility, control, and enforcement of consistent least-privilege policies in multi-cloud environments. Why is it critical to securely manage machine identities in the public cloud? Watch the demo video below: Consider a simple web application hosted in the public cloud (e.g. AWS). It is likely to have a web interface, hosted on a web server (e.g. Nginx installed on a virtual machine/EC2 instance), which has to communicate with a database (e.g. DynamoDB). When the web server is trying to access DynamoDB, the request must include valid AWS credentials. One way to accomplish this is to hard code the AWS credentials in the application code, an environment variable, a text file, etc. While this is easy to do during initial application deployment, it creates an operational challenge, as one has to take care of credential rotation across one or multiple instances of the application server. It also contradicts the AWS Preventative Security Best Practices, which state: “You should not store AWS credentials directly in the application or EC2 instance. These are long-term credentials that are not automatically rotated, and therefore could have significant business impact if they are compromised.” AWS also provides a recommendation in the same document: “An IAM role enables you to obtain temporary access keys that can be used to access AWS services and resources.” Cloud platforms allow us to associate an identity with different entities, like virtual machines, cloud functions (e.g. AWS Lambda), and other workloads. Returning to our web application example, instead of hard coding the API credentials, we can simply specify that our web server has access to specific DynamoDB tables. AWS will then automatically handle the required authentication, allowing our developers to focus on delivering business value, instead of handling infrastructure issues. What is the catch? 
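Before getting to the catch, here is what the role-based alternative described above might look like in practice. The region, account ID, and table name below are illustrative placeholders, not values from the example environment.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LeastPrivilegeDynamoDBRead",
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/webapp-data"
    }
  ]
}
```

Attached to the EC2 instance's IAM role (via an instance profile), a policy like this lets the AWS SDK on the web server pick up automatically rotated temporary credentials, so no long-term secrets ever appear in code, environment variables, or files, and access is scoped to a single table and two read actions.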
As described above, leveraging machine identities in a public cloud environment can greatly improve security posture and minimize the risks associated with long-term credentials. The catch is that one should be extremely prudent when creating and allocating IAM permissions to non-human entities, as deviating from the least-privilege concept can cause severe damage. After all, IAM permissions can bypass most, if not all, traditional security controls. Let us consider an example of a simple AWS account implementing several security best practices:

- Different Virtual Private Clouds (VPCs) for production and R&D sandboxing.
- Confidential data stored in S3 buckets which explicitly deny public access.
- Production workloads that are allowed to communicate with the S3 buckets only via a VPC endpoint.

The account setup is depicted below: The main idea behind the setup illustrated above is that while we allow our developers the freedom to provision any workloads in the R&D Test VPC, we protect our confidential data stored in the S3 buckets by preventing any access to it from outside of the production VPC. Let us review the AWS setup from the AWS console: Review results: All public access is explicitly denied: S3 bucket policy allows read access only from a specific VPC endpoint: Security assessment with AWS Security Hub As security professionals, we cannot rely on manual verification of our setup, so we use cloud provider native tools like AWS Security Hub to make sure that our S3 buckets—and the data contained in them—are secure. Let us check if there are any security findings related to one of our sensitive S3 buckets: We can see that there are no high or critical severity issues, therefore we can assume that our data is reasonably safe. How poorly-configured machine identities bypass security controls Poorly-configured and weakly-secured machine identities can introduce critical risks. 
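For reference, the kind of bucket policy described above, allowing reads only through a specific VPC endpoint, might look roughly like the following. The VPC endpoint ID is a placeholder; the bucket name matches the one used later in the example.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyViaProductionVpcEndpoint",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::financeconfidential/*",
      "Condition": {
        "StringEquals": { "aws:sourceVpce": "vpce-0123456789abcdef0" }
      }
    }
  ]
}
```

One subtlety worth noting: an Allow statement in a bucket policy grants additional access, but it does not revoke access that an IAM identity policy already grants within the same account; only an explicit Deny does that. That distinction matters for what happens next in the example.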
In our example (refer to the diagram above), a developer deployed a popular server image for web application hosting - the LAMP (Linux, Apache, MySQL & PHP) stack, in the R&D sandboxing environment. As there are no strict security requirements for the sandboxing environment, the server contained a PHP vulnerability, which allowed an attacker to inject custom PHP code into the server. This code would allow a remote attacker to execute shell commands by wrapping them in a standard HTTP GET request. Considering the security controls outlined above, the impact of such an exploit should be limited to the sandboxing environment, which is strictly isolated from the rest of our cloud estate. We can start by verifying that the exploit is working by running an ‘ls’ command, to list the directory content: Now we can try and see what S3 buckets the server has access to: Note the financeconfidential bucket highlighted above. Let’s try to access its content: As you can see, by exploiting a vulnerable server in an isolated sandboxing environment we managed to easily obtain access to an S3 bucket containing confidential information, despite implementing several best practices (different VPCs, VPC endpoint, blocking public access to S3 & bucket policy) and performing a security audit using AWS Security Hub. How did it happen? What went wrong? Further analysis identified the culprit - an overly permissive IAM role was associated with the EC2 instance hosting the test web server: This role allocation allowed the EC2 instance to bypass all access restrictions and exposed confidential data. Conclusion Cloud provider native security solutions (e.g. AWS Security Hub) provide great value by identifying basic misconfigurations. Unfortunately, they lack two critical components - context (i.e. what is important vs. what is less important) and identity awareness. 
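The actual role policy from the example is not reproduced in the text, but a wildcard policy of the following shape would produce exactly the bypass described: because identity-based permissions apply account-wide, it grants the sandbox EC2 instance access to every S3 bucket regardless of VPC boundaries or endpoint-scoped bucket policies.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OverlyPermissiveRole",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
```

Compare this with the least-privilege ideal: a role scoped to specific actions on specific resource ARNs would have left the sandboxed server with no path to the confidential bucket, even after the PHP vulnerability was exploited.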
In our next blog post, we’ll discuss how the Zscaler cloud protection offering can help secure your cloud environment from the risk of excessive permissions that lead to data exposure. Thu, 07 Apr 2022 09:00:02 -0700 Max Shirshov Data Protection: Outside is Where All the Fun Happens Now that summer is approaching, it’s time to go outside and play. Outside is where all the fun happens, or that’s what I keep telling my boys. Pay no attention to those addicting video games - the real action is outside on bikes, trampolines, or electric scooters. Of course, with these activities, the risk increases dramatically. But we put up with the occasional bump and bruise because outside is important and it helps build a well-rounded child who feels confident in their abilities to handle whatever comes their way. When it comes to data protection, you should think of your organization the same way. Outside is just as important as inside, and helps build a well-rounded data protection strategy. With me, or lost? Let me explain. Take a moment to think about where the risk of data loss resides in your organization. Is it an inside threat or an outside threat? One of the industry’s biggest missteps is that data protection has become too heavily focused on insider threats. This partially comes from the popularity of CASB and cloud apps, which are squarely focused on the employee and their actions with cloud data. While this is important, we must not lose sight of the bigger picture. Data is the lifeblood of any organization, and it’s often what adversaries are after. Additionally, when data is lost, outside breaches are usually far more impactful than smaller events attributed to individual user loss. With that in mind, a data protection strategy cannot be complete without a well-thought-out plan that unifies protection against malicious data breaches and accidental loss. Are you fully prepared to address outsider threats to your data?
Before we get into the details, let's explore what defines the difference between an inside and outside threat. The big difference is identity. An insider threat is carried out by an entity with a valid identity. An external threat is a risk from an adversary that does not have a valid identity (yet). Let’s now explore both concepts and what you should be thinking about when it comes to unifying data protection. Great outside and inside data protection starts with inline visibility. If you can’t peel back SSL to get visibility across all your user connections, you’ve got nothing. Remember, most of your data is headed to either a sanctioned cloud app, or worse yet, completely off the grid to the internet or risky app locations. All these destinations require inline SSL inspection that can scale across all users and devices, on- or off-network. This is where Security Service Edge (SSE) comes in. Delivered from a security cloud, your inline SSL inspection can go everywhere with ease. But most importantly, if you don’t have a proven, scalable way to do this, you’re going to run into problems down the line, from business interruptions and downtime to scalability issues. Get this one fundamental building block right, and you’re on your way to fantastic control over external breaches and internal data loss. Once you have that foundational component of scalable inline SSL inspection, you're ready to start thinking about outside threats. With the right SSE service, you can feel confident that adversaries, state-sponsored attacks, and all the other crazy stuff currently going on in the outside world are easily controllable. SSL inspection lets you find threats where they live, and cloud-delivered policy ensures every user gets consistent protection, regardless of whether they’re on or off the network.
Pair this with a strong secure web gateway, cloud-delivered sandboxing, and other advanced threat protection approaches like AI, ML, and browser isolation, and you’ve got a recipe for success—everything in one place, with a unified policy and drastically reduced complexity. Remember though, to properly stop unknown threats and ransomware, your security platform must be able to inspect all ports and protocols, inbound and outbound, across every connection, on- and off-network. Now, what about insider threats? If a bad actor compromises an identity, you certainly have an insider problem, but the casual employee can often be your worst enemy. Your organization is rife with all kinds of accidental dangers. Users often mean well, but inadvertently share data in dangerous ways or utilize risky apps, which ultimately puts your sensitive data at risk. Even IT administrators on your own team can misconfigure cloud services in ways that leave your security posture open to exploitable issues. While often accidental, these insider events require focused attention. But worry not, because SSE comes to the rescue again. Integrated into this must-have architecture are all the technologies you need, including: Full inline inspection with cloud DLP to help secure and block sensitive data that shouldn’t leave the organization, including off-network connections. Integrated multi-mode CASB to help you quickly identify and block risky apps, personal apps, and email. With a fully multi-mode CASB, you can also ensure data at rest in SaaS apps is shared properly, by governing users as they collaborate across your sensitive data. And what about the compromised identity from the outside? One of the great things about the right cloud platform is that you can bring User and Entity Behavior Analytics (UEBA) into the equation to quickly identify anomalous exfiltration activities like impossible travel logins, unusual upload patterns, or encrypted uploads.
So, now that we’ve recalibrated our perspective on data protection, does outside sound like where all the fun is? Certainly, the risk outside is greater, but you can’t stay inside forever! A great data protection strategy knows that outside is just as important as inside and understands the right approach to tip the scales in your favor. So, go outside with confidence and embrace the elements! If you want to hear more about insider and outsider threats, watch our LinkedIn Live that explores the topic in depth. Wed, 06 Apr 2022 08:00:01 -0700 Steve Grossenbacher 5 Tips for Leveraging Zero Trust to Up Your Work-from-Anywhere Game Today’s competitive talent environment requires bold moves for companies seeking the best workers. That’s why Careem has committed to a remote-first workplace and borderless hiring with a global reach. We believe this strategy is the most effective way to acquire talent for building out the everyday super app we’ve developed for the Middle East region, which includes the robust payments system we engineered to serve a market where the majority of individuals don’t own a credit card. As the pandemic demonstrated to many enterprises, there is a vast divide between the concept of work-from-anywhere (WFA) hiring and the reality of enabling individuals to do so in a manner that is simultaneously secure and productive. With the goal of eliminating that divide, we’ve joined other leading companies in adopting a zero trust security model and deploying the Zscaler Zero Trust Exchange platform. In this blog, I’d like to share a few tips from our journey in hopes that the lessons we’ve learned can assist you with getting the most out of your zero trust deployment. Tip 1 – Take a holistic platform approach Whenever you embark on a security initiative, it’s easy for stakeholders to focus on protecting their individual use cases. This leads to considering solutions in isolation and, often, deploying multiple point-based products. 
In the end, this undermines your ability to gain either comprehensive security or management simplicity. Instead, it’s critical to take a platform approach at the outset and evaluate how well each platform vendor addresses your use cases. We suggest you pay particular attention to the integrations among the services within the platforms you’re considering. A “platform” that’s simply a collection of individual products, with little or no integration, will create inefficiencies not only for your IT and InfoSec teams but also for your business users. Taking a platform approach ensures you adopt a comprehensive solution for today and streamlines future deployments as security technologies evolve and new use cases arise. If your enterprise is like ours, anytime you can smoothly add capabilities to an existing solution you’re better off than when you’re forced to rip-and-replace. Tip 2 – Leverage the right pain point As getting buy-in from the business is always key to adoption success, we discovered the best strategy is leveraging the right pain point. For example, our Zero Trust Exchange adoption addressed multiple use cases, ranging from supporting our WFA model to easing geopolitical compliance requirements. However, for our business users, the most pressing issue was eliminating the daily frustrations associated with accessing the applications they needed with our existing VPN solution. This led to positioning our Zscaler adoption as a VPN replacement because that solved the business problem in a manner that business users could understand.
Tip 3 – Seek a platform provider with a deep regional partner network Although complying with data-related regulations is particularly complex in a region like ours, which lacks an overall governing body, the truth is that every geographically-dispersed enterprise faces regional nuances for complying with laws and practices – a fact driven home recently when the global geopolitical environment became even more complex than ever. This reality makes local partners invaluable for the process of adopting and, depending upon your application management strategy, administering a comprehensive zero trust platform. Local partners can help you navigate the regional situation to ensure you’re accounting for all of the nuances. In addition, local partners are invaluable resources for your platform provider to ensure its security solutions evolve to keep pace with the local environment. Therefore, we sought a platform provider with a deep and growing partner network. To us, committing to a robust partner network demonstrated Zscaler’s commitment to ensuring that every customer has access to the relationships that contribute to a successful zero trust journey. Tip 4 – Market zero trust advantages to prospective employees Much the same as any other technology tool, we can market our Zero Trust Exchange deployment to prospective employees. For technical staff, we’re offering an opportunity to expand their skills using the latest cloud-enabled security solution as well as its integrations with AWS, CrowdStrike, Okta, VMware, and other cloud applications. Similarly, business users appreciate working for a forward-thinking enterprise that streamlines technology processes so they can focus on their jobs. We believe that high-quality workers will find our approach attractive and we plan to further leverage our Zscaler adoption as we continue scaling up our workforce.
Tip 5 – Treat zero trust as a journey, not a destination Regardless of the current benefits we receive from our Zero Trust Exchange adoption, we consider our zero trust approach to be an ongoing journey rather than a destination. We’ll continue adding new capabilities to stay ahead of threats, protect our super app platform, and safeguard our customers. Near term, we’re excited about exploring the new ZPA AppProtection solution that checks for potentially malicious activity embedded within encrypted traffic destined for our private applications. We think of it like air travel: before people are allowed to board an airplane, officials inspect their passport, visa, and bags. Another item on our radar is Zscaler Deception, which blankets an IT environment with decoys and false paths that lure sophisticated attackers and detect advanced threats without adding operational overhead or false positives. It’s like setting up honeypots within our environment so we can proactively uncover and stop adversaries like organized ransomware operators. We’re also interested in extending the same protections we gain for applications and users to workloads using the various new services within the Zero Trust Exchange. Ready for whatever comes next No matter how we evolve our Zero Trust Exchange deployment, it will continue to be the heart of our secure WFA business model, always ensuring our far-flung workforce can be productive regardless of their time zone. To learn more, I invite you to read the accompanying case study about how our zero trust journey and partnership with Zscaler is ensuring we’re able to meet our business goals. Mon, 04 Apr 2022 08:00:01 -0700 Peeyush Patel Wise Organizations Learn from the Successes of Others Enterprises worldwide are producing and delivering products enjoyed by customers in countless industries, and though the products may vary widely, behind the scenes, almost all these enterprises operate similarly.
They utilize on-prem data centers and cloud service providers to deliver applications for use by employees, sold by partners, purchased by customers, or a combination of all the above. Of course, these applications need to be resilient, reliable, and secure. Otherwise, employees cannot be productive, partners will have a hard time selling, and customers will purchase something better. Meeting these application criteria is a constant challenge for enterprises. It doesn’t help that many enterprises are using outdated network security solutions—like VPNs and firewalls—to secure their applications. Unfortunately for organizations, these solutions of yesteryear are no longer adequate for the cloud. A commitment to success Zscaler is different. We’ve built a platform that rises to the challenge of securing user connectivity to applications and the internet. And now, the same security Zscaler customers use to protect their users can be applied to their workloads. Zscaler for workloads provides the protection and performance required to enable secure cloud workload connectivity, whether it is to the internet, to another workload on a different cloud platform, or to another workload within a VPC/VNet. Enterprises can secure access to applications for employees, partners, and customers to enjoy. An example of success One longtime Zscaler partner and customer understood these aforementioned challenges all too well. In one conversation, they shared their challenges and explained how they were able to successfully overcome them with Zscaler solutions. Here’s a quick summary: Desired Business Outcomes: Overall, our customer’s key objective was to deliver secure workload access to the internet and private applications. As part of their key objective, they also wanted to: Effectively scale the delivery and management of secure internet access.
Consolidate the services they use in order to more easily support the growth in cloud infrastructure, user base, and businesses. Ensure security compliance as more of their workloads move to the cloud. Enable flexibility in choice of cloud service provider for their business units. Reduce capital and operational costs related to legacy infrastructure and security. Challenges: Like many large organizations, this customer managed thousands of workloads across thousands of different cloud accounts (10k+ workloads, 1k+ cloud accounts). The sheer volume of workloads and accounts they manage and secure inherently translated to high costs and complexity. Additionally, the outcomes they wanted could not be sufficiently fulfilled by legacy network security solutions. When breaking things down, common themes started to appear, including: Managing the high cost and complexity of using and managing multiple products like squid proxies, NAT gateways, firewalls, and URL filtering. Maintaining approved denial lists and open source proxies. Applying consistent security policy enforcement across different cloud infrastructures and managed cloud applications. Controlling rising operational costs from workload traffic being backhauled from the public cloud to data centers. The solution: By working with this customer, we found that Zscaler for Workloads was the right choice to overcome the challenges they had and achieve their key objective of providing secure workload access to the internet and private applications. The benefits they gained included: Stronger security: all global internet-bound workload traffic, both encrypted and unencrypted, is inspected. They were able to provide consistent security and data protection policies by removing the need for multiple products. Simplified connectivity: direct-to-internet connectivity and multi-cloud connectivity via the Zero Trust Exchange eliminated their need to backhaul traffic from the cloud to their data centers for inspection.
Our ability to provide automated setup and rollout also simplified many of their tasks. Scalability: they can now leverage the scale and availability of Zscaler’s global presence to support the 10,000+ workloads they have in AWS and Azure. Reduced cost and complexity: our customer was also able to consolidate multiple products like squid proxies, NAT gateways, firewalls, and URL filtering. They have a centralized control plane that simplifies automating, managing, and enforcing their security policies. They also were able to reduce costs by avoiding traffic backhaul to data centers. Gain zero trust for your workloads Our customer’s challenges and objectives are not uncommon. And existing network and security solutions found in the market can partially help organizations, like our customer, with their cloud security. However, there is a reason our customer specifically chose Zscaler to help secure their cloud workloads. To learn more about Zscaler for workloads and how it can help you secure workload access to the internet and to private applications, please visit our solution page. For a deep dive into Zscaler products and services and how our platform can uniquely benefit your business, schedule a chat with one of our experts. Fri, 01 Apr 2022 10:00:01 -0700 Franklin Nguyen A Modernized Approach to M&A Mergers and acquisitions (M&A) form a key part of many growth strategies. As a result, every M&A deal is high stakes, with due diligence and rigor essential to mitigate risk and maximize value. The utmost priority is a smooth IT transition, to get all users of systems and applications working without interruption, and the merged company on its way to realizing value. M&A IT integrations are traditionally perceived to be difficult, but they needn’t be—because the right technology approach can speed up activities by removing complexity whilst safeguarding the user experience.
The low cost of cash and the expansion aspirations of companies that recognize acquisitions as a fast route to growth have contributed to a thriving M&A market. So much so that 2021 witnessed a reported 62,000 deals at a total deal value of $5.1 trillion. High activity levels are likely to continue this year too; in a Deloitte M&A trends survey released in January, 92 percent of executives said they expect deal volume to increase or stay the same over the next 12 months. The challenges of M&A Zscaler’s conversations with companies undergoing M&A reveal a range of challenges including time to value, risk and compliance, and value capture. It is a board-level concern that the business realizes its return on M&A investment as quickly as possible. However, this is challenging to deliver because essential activities that include due diligence and integration planning take time to execute. IT must consolidate complex, legacy networks and security infrastructure while connecting users and standardizing IT monitoring and controls. Users need to be connected—with the right profile and access—to all necessary systems and applications as quickly as possible for day-to-day operation of the business. With limited time, many deals wind up failing. Additionally, risk is a huge challenge in M&As, as the acquirer assumes all threats and vulnerabilities, together with the cost of remediation. If security threats are not identified and mitigated early, subsequent breaches can cause catastrophic financial and reputational damage and potentially negate deal value. Deal value has eroded in recent times due to inherited poor security postures and their associated risks that threaten the timeline and cost to achieve planned synergies. Furthermore, the complexities of on- and off-premises applications, customer globalization, global supply chains, and global service delivery can quickly impede an acquirer’s ability to act resolutely on its inorganic growth opportunities.
Many deals hinge on synergy savings, which are important drivers of success. However, capturing this value can be elusive. According to one PwC report, 83 percent of successful deal makers realize their synergy expectations, while only 47 percent of unsuccessful ones do. A traditional vs. modern M&A approach Merging companies should pursue an IT integration approach that drives out costs, reduces risks, accelerates time to value, and simplifies operations. The traditional M&A approach requires significant upfront planning, investment, and effort to achieve sub-par results with respect to risk management and efficient user access. It involves a significant investment of time in risk profiling and connectivity and access, with limited amounts of these activities running concurrently. Modern, cloud-based security services instead enable IT to work fast as part of M&A activities to improve the user experience and secure access to applications. A modern approach makes a quantum leap forward in time and effort efficiency while maintaining or elevating risk posture and easing the user experience during integration. It reduces the length of time required for risk profiling and connectivity and access, with the majority of these activities able to run at the same time. Zscaler brings speed to M&A integrations… IT integration is not a valued effort in itself within M&A; it is just a means to an end. The sooner businesses can get past it, the more value they will get out of the newly-integrated business capabilities. Zscaler's cloud-native approach reduces the integration and separation timeline to achieve planned value within weeks, versus the months or even years that have become the expected norm. A cloud-based security service such as the Zscaler Zero Trust Exchange enables IT to work fast as part of M&A activities to improve the user experience and secure access to applications, utilizing the internet as the backbone of connectivity for all corporate assets.
The user has a single experience because there is no VPN and no on-/off-network differentiation. It is a demand-based model, with network and security functions leveraging scalable cloud service capabilities that only require administration and operations to plan and run. This changes the game for IT integrations, reducing deployment time while minimizing risk and simplifying operations. …accelerating the time to achieve deal value Secure connections for users to any app, anywhere, at any time safeguard the user experience while expediting time to value and mitigating risk. Part of the modernized M&A playbook is to integrate only what is needed, deploy a simplified and secure access solution, and move rapidly to planned value capture opportunities. That way, users can continue to collaborate, manage operations, execute business strategies, and innovate. The time and effort required to connect users is more easily predicted due to simplified access and removed external dependencies, while synergy savings and benefits are realized sooner. As information security needs to support the business, IT needs to work in alignment with the C-level team in charge of M&A, and therefore be brought into discussions early. The new role of the CIO as enabler of digitized enterprises is underlining the strategic function of IT, which has the potential to support M&A successfully as well. To find out how to accelerate your next M&A, take a look at Zscaler solutions for M&A and Divestitures. Tue, 29 Mar 2022 14:30:01 -0700 Stephen Singh Digital-First Insurance Solutions at Tower Even before the onset of the COVID-19 pandemic and rapid growth of the work-from-anywhere trend, Tower’s goal was to meet the 21st-century head-on with customer-focused, digital-first insurance solutions. Our board took a strategic decision to focus its efforts on becoming a digitised leader in the insurance industry to enable employees to work from anywhere.
Our digital transformation aims to ensure the business is more agile and competitive, while at the same time ensuring secure access and a more efficient user experience. We understood that digital transformation could not be achieved using traditional hub-and-spoke networks and that zero trust needed to be part of the process. A number of different products were trialled and evaluated before Zscaler was selected due to its superior performance and the support provided. Unlike legacy network security technologies that leverage firewalls or VPNs, Zscaler delivers zero trust with its cloud-native Zero Trust Exchange platform. Built on proxy architecture, it securely connects users, devices, and applications using business policies over any network. This is important to us because some of the client companies with which we interact don’t offer secure access and, of the Pacific Islands where Tower has branches, only Fiji offers a digital platform. Zscaler Internet Access has allowed us to not only improve our security without the cost and complexity of appliances, but also simplify our Microsoft 365 deployment, meaning that we are now in a better position to realise the benefits of cloud and mobility. Zscaler Cloud Firewall provides firewall controls and advanced and consistent security to all our users, regardless of their location, for all ports and protocols. It enables fast and secure local internet breakouts and, because it is in the cloud, there is no hardware to buy, deploy, or manage. Zscaler Advanced Cloud Sandbox is built on the Zscaler Zero Trust Exchange and detects, prevents, and quarantines unknown attacks, including threats hiding in TLS/SSL traffic. Driven by advanced artificial intelligence and machine learning, it stops patient-zero attacks with instant verdicts for common file types and automatic quarantine for high-risk unknown threats.
Our core insurance platform is now cloud-based, which provides an additional layer of security and means employees are able to download files when necessary. We have seen a significant reduction in malware incidents, with malware attempts either blocked or sent off for analysis without impacting users, who are assured of continuous and seamless service. My mantra is “visible service but invisible operations” and we have certainly achieved this with Zscaler. Our employees are able to work securely from anywhere as a result of the security layers in place, without compromising the user experience. We have reduced our management overhead with automated reports which provide insights into malware threats, thus saving the IT team significant time. WAN congestion is easily managed through Zscaler, which prioritises Microsoft 365 traffic over recreational or less critical traffic. At the same time, data traffic can be directed to a localised data centre. This level of flexibility was a surprise. Not only have we enabled a better user experience, but we’ve achieved greater efficiency as we’re able to create a number of different security policies all under one portal. Key to the successful deployment of Zscaler has been the support of highly qualified technical experts, which has meant that I have avoided the aggravation of being referred to a website to troubleshoot issues. I’ve worked with a number of technology vendors in the past and never had the kind of relationship with them that I enjoy with Zscaler. Although our digital transformation and zero trust journey is far from over, we’re confident that we’re on the right road with Zscaler. To learn more about how we've achieved visible service with invisible operations, I invite you to read the accompanying case study.
Tue, 29 Mar 2022 12:00:02 -0700 Darren Beattie What You Need to Know About the LAPSUS$ Supply Chain Attacks Join the ThreatLabz research team and our product experts on Tuesday, 3/29/22 at 9:30am PT for an analysis of the LAPSUS$ Okta attack and strategies for assessing and reducing the impact to your organization. The extortion threat group LAPSUS$ arrived on threat researchers' radar back in December 2021, with a burst of erratic attacks that represent a notable departure from the business-like operations of ransomware gangs. This brazen group uses smash-and-grab methods to extort organizations, with techniques that include island-hopping supply chain attacks, phone-based vishing scams, targeting personal email accounts, buying compromised credentials, and even paying employees or business partners to gain access to permissioned accounts. At first, LAPSUS$ threat activity was focused on companies in South America but has since expanded to high-profile attacks on some of the world’s largest tech companies including LG, Microsoft, NVIDIA, Okta, Samsung, Ubisoft, and Vodafone. The latest data leaks from LAPSUS$, including partial source code from Microsoft and data of up to 366 Okta customers, have launched this group into the media spotlight and captured the attention of the cybersecurity industry. The Okta breach could be categorized as a supply chain attack that used a compromised user account from a third-party service contractor to access sensitive systems and clients. Also known as “island hopping,” this technique requires only a single account as an entry point to exploit an integrated ecosystem of connected organizations. Following these events, it is important that security leaders task themselves with anticipating how a similar attack would impact their own organizations and use this mindset to develop an effective defense strategy. This mentality of preparing for the worst instinctively lends itself to deploying a zero trust strategy.
The rest of this article is focused on methods to assess your defenses and break down how zero trust can help you improve your security posture and reduce the impacts of targeted supply chain attacks, insider threats, and data breaches. Mitigating a supply chain attack or compromised user with zero trust Stopping an upstream supply chain attack or compromised user can be one of the toughest tasks in security. While there are no silver bullets, a zero trust architecture can dramatically reduce the blast radius of a successful attack by ensuring you can: Minimize the attack surface: Make apps (and vulnerable VPNs) invisible to the internet, and impossible to compromise, ensuring an attacker can’t gain initial access. Prevent initial compromise: Inspect all traffic inline to automatically stop zero-day exploits, malware, or other sophisticated threats. Enforce least-privileged access: Restrict permissions for users, traffic, systems, and applications using identity and context, ensuring only authorized users can access named resources. Block unauthorized access: Use strong multi-factor authentication (MFA) to validate user access requests. Eliminate lateral movement: Connect users directly to apps, not the network, to limit the blast radius of a potential incident. Shut down compromised users and insider threats: Enable inline inspection and monitoring to detect compromised users with access to your network, private applications, and data. Stop data loss: Inspect data in motion and data at rest to stop active data theft during an attack. Deploy active defenses: Leverage deception technology with decoys and perform daily threat hunting to derail and capture attacks in real time. Cultivate a security culture: Many breaches begin with compromising a single user account via a phishing attack. Prioritizing regular cybersecurity awareness training can help reduce this risk and protect your employees from compromise.
Test your security posture: Get regular third-party risk assessments and conduct purple team activities to identify and harden the gaps in your security program. Request that your service providers and technology partners do the same and share the results of these reports with your security team. Zscaler helps defend your organization from supply chain attacks Supply chain attacks continue to be an effective tool for attackers. Because you can’t manage the security posture of all your partner organizations, it’s important to have multiple layers of protection and visibility across your environment. As part of the Zero Trust Exchange, our integrated platform helps you: Identify and stop malicious activity from compromised servers by routing all server traffic through Zscaler Internet Access. Restrict traffic from critical infrastructure to an “allow” list of known-good destinations. Ensure that you are inspecting all SSL/TLS traffic, even if it comes from trusted sources. Turn on Advanced Threat Protection to block all known command-and-control domains. Extend command-and-control protection to all ports and protocols with the Advanced Cloud Firewall, including emerging C&C destinations. Use Advanced Cloud Sandbox to prevent unknown malware delivered in second stage payloads. Safeguard crown jewel applications by limiting lateral movement using Zscaler Private Access to establish user-to-app segmentation policies based on the principles of least privileged access, including for employees and third-party contractors. Prevent exploitation of private applications by compromised users with full in-line inspection of private app traffic via Zscaler Private Access. Limit the impact from a potential compromise by restricting lateral movement with identity-based microsegmentation. Detect and contain attackers attempting to move laterally or escalate privileges by luring them with decoy servers, applications, directories, and user accounts with Zscaler Deception. 
Additional Resources Read the ThreatLabz security advisory: Lapsus$ Attack on Okta: How to Evaluate the Impact to your Organization for a technical analysis of the threat, practical SOC playbook, and recommended detection rules from Zscaler’s threat research team. Learn more: join a live ThreatLabz briefing on Tuesday, March 22 at 9:30am PT for updated information on the LAPSUS$ attack on Okta, a walkthrough of our SOC playbook, and zero trust strategies for preventing and mitigating damage from similar compromises in the future. Register now. Fri, 25 Mar 2022 17:31:01 -0700 Emily Laufer 6 Steps for Speeding M&As with the Zscaler Zero Trust Exchange It’s one thing to acquire a company, it’s another to complete integrations rapidly enough to make your newly acquired employees productive. That’s why we’re using zero trust for M&A here at Sanmina, where we provide integrated Industry 4.0 manufacturing solutions to serve the global Electronics Manufacturing Services (EMS) market. As a market leader growing organically and by acquisition, our Zscaler Zero Trust Exchange platform deployment has become an integral part of our overall M&A strategy for quickly achieving synergies. In fact, the platform has helped us ensure productivity on day one during four recent acquisitions in the U.S., Eastern Europe, and Latin America. For this blog, we’d like to share a few of the key steps we recommend drawn from the lessons we’ve learned over the course of those acquisition experiences. Step 1 – Replace VPN with ZPA for rapid user onboarding Like many companies, prior to adopting Zscaler we used VPNs to grant employees remote access with firewalls and other technologies for working on site. When we acquired a company, we’d bring in every employee’s laptop for reimaging, to ensure application and security compliance, before we could connect them by VPN or permit them to access our network when on site. 
Completing the reimaging process requires weeks, months, or longer, depending upon the acquired company’s size. Instead, we developed a plan to use Zscaler Private Access (ZPA). With ZPA, we grant acquired employees immediate access to our internal applications while they wait in the laptop reimaging queue. This enables people to begin contributing to our company immediately, without connecting them to our network, benefitting our bottom line. We also reduce enterprise risk to nearly zero, as ZPA prevents an existing infection from coming along with the acquisition. Step 2 – Obtain an application and user inventory Although obtaining a list of employees and the applications they’ll require sounds simple, this can take the acquired company longer to produce than you might expect. Many times, employees can quickly identify the applications they use often, but don’t always remember to include other critical solutions they access less frequently. As inventories are key to success, start the generation process early to net the most complete data by the time you’re ready to move forward with other tasks. In addition, develop an efficient system for new employees to request access beyond the inventory you’re provided. This reduces frustrations around resolving any mismatches or changes in job needs. Step 3 – Configure an onboarding policy group in ZPA The first time we used ZPA as an M&A enabler, a security engineer developed the appropriate least-privilege access specifications for each of our applications. Once the specs were established, plugging them into ZPA and configuring an onboarding policy group took minutes. With the policy group created, we dropped the inventory of acquired employees and applications into the group. Then, at the start of business on the appointed day, the acquired employees downloaded Client Connector onto their devices and they were up and running within an hour. 
Step 4 – Stay the course for acquired company access Because our existing employees also require access to acquired company applications, such as ERP systems, we developed a similar process to address this scenario. It includes obtaining an inventory of your existing employees and the privileges they need and deploying App Connector for the necessary applications at the acquired firm. In our experience, using ZPA to access acquired company applications saves weeks of approval and technology processes for them to provide us with traditional VPN access. What’s more, using VPNs opened us up to greater risk as we had no control over the security environment on their side. Using our zero trust platform eliminates that risk. Step 5 – Get visibility into your app security posture Moving forward, we’re excited about the ability to boost the security posture of our applications, whether existing or acquired, using ZPA AppProtection for its inspection capabilities. By checking for potentially malicious activity embedded within encrypted traffic destined for those applications, ZPA AppProtection can help us further reduce the risk of compromise and disruption to our business. In other words, Zscaler is a lot like airport security: before you’re allowed to board a plane, they check your passport, they check your visa, and they inspect your bags for suspicious contents. Step 6 – Hand the process off to your admins Once we established the process for onboarding M&A employees, ongoing execution is like anything else with Zscaler – it’s so straightforward we no longer need to dedicate any significant engineering resources to it. Today, the heavy lifting is just another job that our Zero Trust Exchange platform handles, with only a minimal amount of time required for our systems administrators to manage. Accelerating M&As increases competitive advantage Overall, leveraging Zscaler for onboarding acquired employees is now integral to our M&A strategy. 
By gaining synergies faster, we’re able to accelerate acquisition completion. This, in turn, provides our enterprise with the agility to take competitive advantage of new M&A opportunities as they arise. To learn more, I invite you to read the accompanying case study about our zero trust adoption and how our partnership with Zscaler will help take us into the future. Wed, 23 Mar 2022 08:00:01 -0700 Matt Ramberg The Zero Trust Exchange – The Only Road to Zero Trust In my previous blog, I explained how firewalls and other perimeter-based network security solutions are incapable of delivering zero trust. Be it firewalls and VPNs, cloud-based perimeter models like virtual firewalls, or cloud-based point solutions – none of them comply with a true zero trust framework (as defined by NIST and other leading agencies). But the question remains: if these models cannot deliver zero trust, what can - and how? The short answer is the Zscaler Zero Trust Exchange. With its unique architecture, this cloud-native platform can guarantee zero trust – unlike legacy network security technologies. Built on proxy architecture, the Zero Trust Exchange, as depicted in Figure 1, acts like an intelligent switchboard that securely connects users to apps, apps to apps, and machines to machines - for any device, over any network, at any location. It mandates verification based on identity and context before granting any communication request to an application. The Zero Trust Exchange provides zero trust for users, applications, and workloads by securing access to the internet and SaaS as well as private applications wherever they are hosted – on the internet, in data centers, or in private or public clouds. Figure 1: Zero Trust Exchange Overview Let’s review in detail how the Zero Trust Exchange delivers zero trust at scale with its proven architecture. In figure 2, the Zero Trust Exchange sits as a policy enforcer and decision maker in between the entities like mobile devices, IoT, etc. 
that are trying to connect (at bottom) and the resources like cloud applications, SaaS applications, internet applications, etc. that the entity is trying to access (at top). The Zero Trust Exchange applies policy and context in a variety of ways to come to an enforcement decision, then brokers authorized connectivity to the requested resource. Figure 2: Zero Trust Exchange Architecture Identity Verification - The first step is to establish identity. To do so, the Zero Trust Exchange first terminates any connection. Terminating the connection at the very first step might seem like an odd thing to do, but there is a good reason for it. The Zero Trust Exchange halts the session and checks the connection by comparing identity from Identity and Access Management (IAM) systems to verify who this user/person is and what context is associated with their identity. Depending on the type of application, the Zero Trust Exchange can enforce authentication requirements such as ID Proxy, SAML assertion, and MFA (with IdP option). If the identity check fails, or the user is not allowed to access that specific resource based on identity context, the connection is terminated right there. This is done through a proxy architecture - in contrast to the pass-through architecture of firewalls, which lets the data pass and then performs out-of-band analysis, allowing unknown threats to pass through undetected. The Zero Trust Exchange has API integration with all major identity providers like Okta, Ping, Active Directory / Azure AD, and others to establish that identity. Device Verification - The next step is to build context around the posture of the device - is it a corporate device or personal? Managed or unmanaged? Compliant or not? Device context is combined with other forms of context, such as the user’s role, what application they are trying to access, what content they are exchanging, and more. Those conditions determine the level of access that gets granted. 
The Zero Trust Exchange integrates with major endpoint protection solutions such as Microsoft Defender, VMware Carbon Black, Crowdstrike Falcon, and more for context and endpoint security. Application Policy - The Zero Trust Exchange identifies whether the requested application is a public application or a private application, and further categorizes SaaS applications as sanctioned (apps that the company has purchased, such as M365) or unsanctioned (employees using on their own). Based on the type of application, it classifies the risk of the application and handles access policy by leveraging an application risk index with solutions such as URL filtering, Cloud Access Security Broker (CASB) protections, and more. The Zero Trust Exchange also determines the closest application source available to the user, which is then used to establish the connection. Security Posture - The ultimate goal of any security technology is to protect sensitive data, including encrypted data. Many attackers are hiding malware in SSL, knowing that firewalls cannot inspect encrypted traffic at scale, so they can go undetected. More than 90% of traffic is encrypted today, and while firewalls are incapable of inspecting all encrypted data inline, the Zero Trust Exchange can decrypt traffic and see what’s inside. It provides data loss prevention (DLP), as well as cyberthreat protection for inline data via sandboxing. The context collected in the previous steps is used to look for anomalous behavior. These verifications help identify the risk levels of the users at each step. If the user passes all of these gates, the key question is: do we want to broker that connection to the requested resource? Policy Enforcement - Companies determine their business policy to designate at a high level what their employees can and cannot access. Based on those policies, along with the context of the individual request, the Zero Trust Exchange permits or denies access to the applications. 
Private applications are not exposed to the internet, and access is brokered via outbound-only connections, whereas public applications have conditional access. Consider an employee from the finance department using a managed device to access financial data - the exchange would allow this transaction if all context required by policy is fulfilled. However, if the employee uses an unmanaged device, full access will not be given. An alternate policy can instead offer access via a remote browser session that streams data as pixels from an isolated session in a containerized environment, but will not allow the data itself to be accessed, downloaded, cached on the device, etc. The Zero Trust Exchange establishes a granular connection from the entity to the resource or application for which access is authorized. This is a true zero trust connection. Even if there is a security threat, it is limited to that connection between the specific requesting entity and the application it is accessing, rather than the entire network. This architecture is fully compliant with the tenets defined in NIST architecture - essential for any security solution to deliver trusted access. The Zero Trust Exchange eliminates the need for complex MPLS networks, complex perimeter-based firewall controls, and VPNs - with fast, secure, direct-to-cloud access and secure cloud-to-cloud connectivity that eliminates backhauling, route distribution, and service chaining. Instead of multiple hardware-based or virtual security solutions that are hard to manage and maintain, an integrated zero trust solution secures all internet, SaaS, and private applications with a single, comprehensive platform. The Zero Trust Exchange delivers cloud-native, transparent zero trust access - offering seamless user experience, minimized cost and complexity, increased visibility and granular control, and enhanced performance for a modern approach to zero trust security. 
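The decision flow described above - verify identity first, then combine device posture and application context before brokering or denying a connection - can be sketched as a simple policy function. The Python below is an illustrative sketch of the general pattern only, not Zscaler's implementation; every name, role, and rule in it is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # identity verified via the IdP (e.g., SAML + MFA)
    device_managed: bool       # device posture: corporate/managed device?
    app_sensitivity: str       # "public", "internal", or "restricted"
    user_role: str             # role supplied by the identity provider

def broker_connection(req: AccessRequest) -> str:
    """Return an enforcement decision for a single access request.

    Mirrors the flow in the text: the connection is held until identity
    is established, then device and application context decide the outcome.
    """
    # Step 1: identity - fail closed if authentication did not succeed.
    if not req.user_authenticated:
        return "deny"
    # Steps 2-3: device posture combined with application policy.
    if req.app_sensitivity == "restricted":
        if req.device_managed and req.user_role == "finance":
            return "allow"       # e.g., finance user on a managed device
        if not req.device_managed:
            return "isolate"     # offer pixel-streamed browser isolation only
        return "deny"            # managed device, but role not authorized
    # Lower-sensitivity apps: broker the connection (still inspected inline).
    return "allow"
```

This matches the worked example in the text: a finance employee on a managed device is allowed through, while the same employee on an unmanaged device is steered to an isolated remote browser session instead of direct access.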
To learn more about zero trust, watch this webinar: Why Firewalls Cannot do Zero Trust. You'll learn what zero trust is, what it isn’t, and best practices for implementation. Tue, 22 Mar 2022 12:00:02 -0700 Ankit Gupta ZTNA, Evolved: Introducing the Industry’s First Next-Gen ZTNA Platform Just the news: Experience the future of zero trust at 10am PT / 1pm ET during our Zero Trust Live virtual event, also available in EMEA- and APAC/Japan-friendly time zones, or access key resources here. What’s next for the workplace? It’s clear for many employees the era of five-days-per-week in the office is over. A hybrid workforce is the future for digital enterprises that want to retain and attract the best talent from an expanded global pool - where flexibility and time shifting, from any location, are the norm, not the exception. The rise of Security Service Edge (SSE) frameworks, and Zero Trust Network Access (ZTNA) within it, has charted a path for progressive IT and security leaders to retire their legacy VPNs and firewalls for a fundamentally better approach to supporting this new reality, delivering what many thought was impossible to achieve: superior security with an exceptional user experience. First-generation ZTNA: Revolutionizing remote access beyond legacy VPNs and firewalls When we invented first-generation ZTNA, the problem space was clear: VPNs were too slow, too risky, and everyone hated the experience of using them. Backhauling traffic to a data center that was becoming more and more irrelevant no longer made sense, exposing apps (and VPNs themselves) to the internet opened up a massive attack surface, and putting users on the network allowed unconstrained lateral movement. First-generation ZTNA changed all this with fast, secure, and direct user-to-app segmentation built on identity and policy, including: Least-privileged access: Granting access with zero trust policies ensured only the right users could access the right apps. 
Minimized attack surface: Eliminating exposed VPNs and making apps invisible to the internet made them impossible to attack. Lateral movement prevention: Connecting users directly to apps, not the network, in a segment of one, prevented adversaries from moving laterally to progress their attack. With ZTNA’s foundational benefits compounded by the pandemic, Gartner predicts that by 2025, 70% of new remote access deployments will be served predominantly by ZTNA as opposed to VPN services, up from less than 10% at the end of 2021. However, simply being better than three-decade-old technology isn’t nearly good enough. The massive adoption of ZTNA over the past 24 months has brought forth a new problem space that needs to be solved: what happens when the very tenets of identity have been subverted by an advanced attacker or insider threat? These threats loom even larger in a hybrid environment: consider a recent study from Stanford University that showed that 88% of breaches are caused by human error, and that 57% of remote workers admit they are more distracted when working from home. We are grateful for having the trust of our customers across their zero trust transformation journeys, which has allowed us to listen to their needs, learn directly from them as well as other leaders across the industry, and synthesize this insight into a reinvention of both ZTNA and SSE. Introducing next-gen ZTNA: Stop compromised users and the most sophisticated cyberattacks With this release, we are extending our Zero Trust Exchange with three industry-first innovations to stop cyberattacks resulting from compromised users and insider threats, and expanding the scope of zero trust across new areas of the enterprise, including: Safeguarding private apps from compromised users: Extending Zscaler’s inline inspection framework to private app traffic to stop advanced attackers from exploiting the most critical web application security vulnerabilities (e.g., OWASP top 10). 
Enhancing lateral movement detection to stop breaches: Native app deception intercepts the most advanced adversaries and prevents lateral movement with built-in decoys and automated containment across the Zero Trust Exchange and third-party security operations tools. Reducing the attack surface of privileged users: Enhanced agentless access with RDP/SSH support simplifies troubleshooting of industrial systems and private apps from unmanaged devices while eliminating lateral movement and replacing burdensome VDIs. Together, these innovations evolve ZTNA into the next generation of the category, delivering an extensible architecture to connect, segment, and protect private apps so they can be accessed by any user, on any device, and from any location. It’s a future-proof approach that can help you begin—or extend—your journey to achieve both zero trust and a secure hybrid workforce. These new capabilities, available as part of Zscaler Private Access, bring zero trust network access into an even more secure future. Matt Ramberg, Vice President of Information Security at Sanmina, shared that the new capabilities helped him “enable remote access [and] spot potentially malicious activity, [saving] us time and headaches. It’s easy having visibility and control we need in a single place as there is no context-switching.” As a leader in the 2022 Gartner MQ for SSE, positioned furthest in Ability to Execute, and the inventors of the ZTNA category, we helped spark the revolution to switch from traditional network security architectures that no longer support the needs of today’s cloud and mobile-first organizations. I invite all of you to join us in the zero trust revolution. Be sure to attend Zero Trust Live, or watch it on-demand, to hear directly from leaders at Salesforce, Humana, Guaranteed Rate, NTT DATA, Fannie Mae, Crowdstrike, and Okta. 
You can also learn more about the cutting-edge capabilities that allow the Zero Trust Exchange to deliver the world’s only next-gen ZTNA offering in our resource center and press release. Tue, 22 Mar 2022 05:00:02 -0700 Scott Simkin Understanding the Assignment: Defending Against Ransomware The education industry has unceremoniously emerged as the second most common target for ransomware. In 2020, at least 1,681 schools, colleges, and universities of all sizes and prestige were infected. Institutions face the difficult challenge of preserving academic freedom, easy access to information, and open collaboration while defending from threat actors who exploit these same characteristics. The adoption of public cloud, software as a service, and the Internet of Things adds additional layers of complexity onto IT architectures not designed to support these applications. Network architectures within higher education are largely designed as hub-and-spoke, mirroring the physical topography of campuses. While this enables the prioritization of academic freedom and access to information, the legacy security controls designed for this architecture lose effectiveness as workloads move to public cloud and SaaS services. The growth in remote learning has also provided a great opportunity for bad actors. It has created an incredibly wide attack surface, providing seemingly endless entry points to the network. Additionally, digital uptime is now paramount for classes and work to happen, making schools more willing to quickly pay higher ransoms to just get back online. Finally, the variety of users presents a further challenge of competing missions, charters, and personas. Security policies and technologies must work for a user base that includes students, faculty, researchers, medical staff, private industry partners, and more. The relationships between users—who may have multiple personas—and the institution further complicate security guidance and structure. 
For example, when I was at UC Davis I had to simultaneously fulfill the mission of a university, hospital, and a research organization that collaborated with other institutions and government agencies. Transforming the legacy Legacy security is based on the data center being the center of gravity containing all apps and services. The data center was the core of hub-and-spoke networks. Users connected via VPN and networks were segmented with firewalls. This worked when the number of people connecting was relatively low and the applications were fairly simple. But as more people connected and the data being accessed grew, latency and complexity grew along with it. Frustrated with complex change and slow IT turnarounds, business users found ways around roadblocks caused by the complexity. Adding to the issues, universities adopted cloud, mobile, and IoT/OT into their technology stack to meet user needs, further complicating and diversifying the already vast IT surface. How ransomware beats the system Ransomware is often less about technological sophistication and more about exploitation of the human element. The initial access points for the malware that kicks off a ransomware attack include: Phishing emails Exploiting vulnerabilities of legitimate websites and trusted cloud-based apps like DropBox, Google Drive, etc. Buying access – by purchasing credentials over the darknet or using previously leaked credentials with credential stuffing or brute force attacks The breach often starts with a single compromise, leading to the subsequent deployment of commercial and open-source ethical hacking tools made available through a malware loader. The ransomware is designed to maximize the impact on business operations by encrypting as many files as possible. The malware leaves behind a ransom note notifying the victim how to contact the threat actor to negotiate and pay a ransom. If you can’t control it, contain it Organizations need to focus on stopping lateral movement. 
The main goal of lateral movement in a cyberattack is to compromise additional systems, elevate access, and steal secrets. The domain controller, or identity infrastructure, allows the threat actor access to nearly all systems. These are used to stage the next phases of the attack, which include performing reconnaissance to identify data to exfiltrate, identifying the company’s backup systems (to prevent file recovery), and rummaging through finance and HR systems to find important documents (such as intellectual property and trade secrets), key people, and the organization’s account balances to determine how much cash it has on hand. After the recon and exfiltration phase, ransomware is deployed across the organization. The best way to contain a threat is to never let it on your network. The Zscaler Zero Trust Exchange allows organizations to do just that. Using Zscaler, users can get to any application or data they need (and are permitted to access) without ever getting on the network. Zscaler was built from the ground up to enable customers to move securely in a world where the cloud is the new data center and the internet the new network. The Zero Trust Exchange was developed to ensure that organizations can operate under any conditions, at any scale, anywhere in the world, regardless of user device or location. We invite you to join us at Educause’s Cybersecurity and Privacy Professionals Conference this May and hope you will participate in an interactive session I will be hosting. Thu, 17 Mar 2022 08:00:02 -0700 Bryan Green The Zscaler Data Protection Tour: How to Protect Data in Image Files In this blog series, we are taking our readers on a tour of various challenges to enterprise data security. 
As we do so, we will detail the ins and outs of each subject, describe why they all matter when it comes to keeping sensitive information safe, and explain how your organization can thoroughly and easily address each use case with Zscaler technologies—like its cloud access security broker (CASB), data loss prevention (DLP), and more. In each installment of this series, a brief video will accomplish the above while presenting a succinct demonstration in the Zscaler user interface, concretely showing how you can protect your data. Prior topics include shadow IT, risky file sharing, SaaS misconfigurations, noncorporate SaaS tenants, sensitive data leakage, reducing DLP false positives, securing key documents, notifying users of DLP policy violations, and securing unmanaged devices. This blog post’s topic is: Securing data in image files Sensitive data appears as more than just plain text within spreadsheets and other documents. It also exists within image files like PNGs and JPEGs. As an illustration, users can take screenshots of their desktops to capture the sensitive information appearing on their various browsers and applications. So, to prevent data leakage completely, organizations need a solution capable of identifying sensitive information within these types of files. Zscaler DLP comes complete with optical character recognition, or OCR. This allows Zscaler to extract text from images and inspect it for sensitive data, leveraging all of its advanced data classification measures like exact data match and indexed document matching, as well as more basic regex and keyword matching. In other words, Zscaler boasts the comprehensive suite of data protection technologies needed to stop all data loss. To learn more about Zscaler OCR, watch the demo below. Zscaler’s integrated, multi-mode CASB can address all of your cloud security use cases. To learn how, download the Top CASB Use Cases ebook. 
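To make the OCR-plus-classification idea concrete, here is a minimal, hypothetical Python sketch of the step that runs after text has been extracted from an image (an OCR engine such as Tesseract would supply that text). It pairs a regex candidate match with a Luhn checksum so that random 16-digit strings don't trigger alerts - the same false-positive reduction idea mentioned earlier in the series. This is an illustration of the general technique, not Zscaler's implementation; all names are made up.

```python
import re

# Candidate pattern for 16-digit payment card numbers, allowing the
# spaces or dashes that OCR output often preserves between groups.
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum - filters out arbitrary 16-digit strings."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    """Scan OCR-extracted text for likely payment card numbers."""
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits
```

For example, scanning the string "invoice: 4111 1111 1111 1111, ref 1234567812345678" flags only the first number: the reference number matches the regex but fails the Luhn check, so it is dropped rather than raised as a false positive.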
Wed, 16 Mar 2022 08:00:02 -0700 Jacob Serpa How Government Agencies Can Make the Internet Their New Network To meet the goals outlined in the President’s Executive Order on Improving the Nation’s Cybersecurity, follow-on guidance specific to the move to zero trust was issued in the Federal Zero Trust Strategy memorandum. One specific task requires agencies to select “at least one FISMA moderate system” to make internet-accessible. Agencies should consider including the selection of this application in their implementation plan, due to OMB in March 2022. Once submitted, the work has to begin to “allow the secure, full-featured operation of the internet,” by January 2023. This task, and all of the other tasks outlined in OMB M-22-09, are steps needed to meet the goal to accelerate agencies toward a shared baseline of early zero trust maturity. Zero trust is the right approach for the scope, scale, and mission of today’s government. Allowing for secure, efficient access to data and systems from anywhere is paramount to the functionality of a modern, responsive government enterprise. However, making applications internet accessible doesn't mean leaving them open to the internet. A new path to secure apps The overarching vision for applications and workloads is that “agencies treat all applications as internet-connected, routinely subject their applications to rigorous empirical testing, and welcome external vulnerability reports.” Many agencies are meeting this by transitioning private applications that once ran solely in the data center to public clouds while maintaining high levels of security that do not impact user experience or performance. Traditionally, this access has been hairpinned through the Agency’s network via Virtual Private Networks (VPNs), but as the memorandum states, agencies can no longer rely on VPNs to make these connections. 
Not only are VPNs cumbersome for end users and administrators alike, but they also open up attack surfaces by exposing IP addresses. Zscaler was born out of the idea that there had to be a better way than VPN to connect remote users to the applications they need, and we’re ready to help agencies meet the internet-accessible mandates. What does internet-accessible look like? A zero trust approach is a wholesale break from “how we’ve always done it.” It embraces the idea that the federal government can no longer depend on conventional perimeter-based defenses to protect critical systems and data. Instead, the internet becomes the Agency’s new transport network. Applications are individually secured and invisible to unauthorized users. Application access is based on context - consumed from existing identity and access management (IAM) solutions - and should not require network access. When a user tries to access an application, policy is checked and, if authorized, they are pushed to the closest instance of that application, leading to better performance and experience. This approach achieves application segmentation and limits lateral movement. In this model, remote users can leverage the internet as untrusted transport for secure, encrypted zero-trust access that connects the user only to authorized applications, rather than connecting the endpoint to the network. This eliminates the need for complex network-centric controls to prevent unauthorized lateral movement. Additionally, existing protections such as multi-factor authentication (MFA) and endpoint security can be integrated into the context for access decisions. How do you manage the internet as your network? With zero trust, you remove the need for users to be on the network - regardless of whether the application is in a traditional data center or a modern cloud environment. Agencies can standardize on a single cloud security service, simplifying access across multi-cloud environments. 
With applications in the cloud, administrators can gain a clear understanding of who is accessing what and when. It also enables real-time views into activity and the health of applications, servers, and connectors. This approach provides a consistent access experience whether users are remote or in the office, and regardless of how applications are hosted. The zero trust principles of context-based, least-privileged access are applied in a modern framework that integrates existing security elements such as IAM, MFA, and endpoint protection. Agencies can meet the requirement for internet-based access while retaining full visibility and granular control over all user access. With the right approach, any application can be accessed with these zero trust principles. Tue, 15 Mar 2022 09:20:03 -0700 Jeremy James What is the OWASP Top 10? In the first installment of this blog series on private application protection, we’re discussing the OWASP Top 10, which represents the most critical risks to modern web applications and is widely recognized in the IT industry. Stay tuned in the coming weeks for deeper technical dives on how to prevent these security risks from compromising your applications. OWASP, short for the Open Web Application Security Project, is an international non-profit organization dedicated to improving software security through open source initiatives and community education. Among its core principles is a commitment to making projects, tools, and documents freely and easily accessible so that anyone can produce more secure code and build applications that can be trusted. What is the OWASP Top 10? The OWASP Top 10 is a threat awareness report that ranks the most critical security risks to web applications. Simply put, it has been considered the industry standard for application security since its introduction in 2003. The 2021 OWASP Top 10 is based on an analysis of more than 500,000 applications, making it the largest and most comprehensive application security data set. 
As of 2021, the OWASP Top 10 includes the following web application security risks: Source: OWASP Foundation Read more about each risk below: 1. Broken access controls are common in modern web apps and attackers regularly exploit them in order to compromise users and gain access to resources. Authentication and authorization flaws can lead to exposure of sensitive data or unintended code execution. Common access control vulnerabilities include failure to enforce least-privileged access, bypassing access control checks, and elevation of privilege (e.g., acting as an admin when logged in as a user). 2. Cryptographic failures are the root cause of sensitive data exposure, which can include passwords, credit card numbers, health records, and other personal information. The most common mistake is when encryption is not implemented correctly, or at all, such as transmitting data in clear text, using old or weak cryptographic algorithms, or transporting sensitive data over insecure protocols such as HTTP, SMTP, or FTP. 3. Injection refers to a broad class of vulnerabilities that allow an attacker to supply hostile, untrusted data to an application (via a form input or other data submission) that tricks the code interpreter into executing unintended commands or accessing data without proper authorization. Some of the most commonly used and easily exploitable flaws are SQL, OS command, and LDAP injections. 4. Insecure design focuses on risks related to design and architectural flaws and represents a broad category of weaknesses. It calls for greater use of pre-coding activities critical to the principles of Secure by Design. 5. Security misconfiguration vulnerabilities occur when application components are configured insecurely or incorrectly, and typically do not follow best practices. They can happen at any level of an application stack, including network services, web servers, application servers, and databases. 
Security misconfiguration flaws can be in the form of unnecessary features (e.g., unnecessary ports, accounts, or privileges), default accounts and passwords, and error handling that reveals too much information about the application. 6. Vulnerable and outdated components occur when a software component is unsupported, out of date, or vulnerable to a known exploit. Component-heavy development can result in development teams not knowing or understanding which components they use in their applications. 7. Identification and authentication failures occur when functions related to a user's identity, authentication, or session management are not implemented correctly or adequately protected, allowing attackers to gain access and assume the identity of a user. 8. Software and data integrity failures relate to code and infrastructure that do not protect against integrity violations. When you use software plugins, libraries, or modules from untrusted sources, repositories, and content delivery networks (CDNs), they can introduce the potential for unauthorized access, malicious code, or system compromise by attackers. Examples include unsigned firmware, insecure update mechanisms, or insecure deserialization. 9. Security logging and monitoring failures are a factor in nearly every major incident. Attackers rely on insufficient monitoring and slow response to gain a foothold in your application and achieve their objectives while remaining undetected. On average, it takes companies 287 days to detect and contain a new breach, giving attackers plenty of time to cause disruption and damage. 10. Server-side request forgery flaws occur when a web application does not validate the user-supplied URL when fetching a remote resource. This allows an attacker to coerce the application into sending a crafted request to an unexpected destination, even when the application is protected behind a firewall, VPN, or another type of network access control list (ACL). 
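To make the injection risk (#3) concrete, here is a minimal sketch of the classic SQL injection flaw and its standard fix, using Python's built-in sqlite3 module. The table and the hostile input are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"  # hostile input crafted by an attacker

# VULNERABLE: string concatenation lets the input rewrite the query,
# turning a lookup for one name into a condition that is always true.
leaked = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: a parameterized query treats the input strictly as data,
# never as SQL syntax.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(leaked))  # 2 -- the whole table leaks
print(len(safe))    # 0 -- no user has that literal name
```

The same principle (keep untrusted data out of the command being interpreted) is how OS command and LDAP injections are prevented as well.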
How to Get OWASP Top 10 Protection The OWASP Top 10 provides a great starting point to learn about the most critical security risks to web applications. But achieving application security remains challenging as systems become more complex, and as attackers focus more efforts on targeting the application layer. Zero trust can help secure your web applications against vulnerability-targeting attacks. Our industry-leading zero trust network access solution, Zscaler Private Access, offers private application protection against the most prevalent layer 7 (L7) attacks with complete coverage of the OWASP Top 10 and fully customizable signatures to virtually patch zero-day vulnerabilities. It provides inline inspection and prevention capabilities so you can automatically detect and block malicious active content embedded in user traffic destined for your private apps. Private application protection along with capabilities like app discovery, user-to-app microsegmentation, and agentless access are all part of a complete zero trust network access solution. To learn more, join us on March 22 as our product experts discuss AppProtection and more next-generation innovations in zero trust network access. Register now for Zero Trust Live. Can’t wait? Watch this demo to see the end user experience and the behind-the-scenes admin setup for use cases like OWASP Top 10 protection, app-level visibility, and virtual patching against zero-day threats and CVEs: Zscaler Private Access: AppProtection. Fri, 11 Mar 2022 08:00:01 -0800 Linda Park Shift Left and Shift Down with CWPP In recent years, Cloud Workload Protection Platforms (CWPPs) have become an integral part of many organizations’ cloud security strategies. CWPPs provide visibility and control over the behavior of cloud workloads, helping to protect against malware and other threats. The challenge, however, is that CWPP technology has primarily relied on the use of agents installed on cloud workloads. 
For many cloud-native services, agents are not only disliked by developers, but in many cases cannot be installed at all. To overcome this challenge, capabilities provided by CWPP are increasingly shifting in two directions: to the left, with tighter integration into development and DevOps pipelines, and downwards, into the network. Challenges with CWPP agents In the early days of an organization’s cloud journey, where cloud projects often consist of lift-and-shift of traditional applications, CWPP agents can be deployed on the corresponding VMs and provide protection. As organizations mature in their cloud journeys, they increasingly adopt cloud-native services, many of which are offered as serverless. Think managed container services like AWS Fargate, or Function-as-a-Service (FaaS) offerings like Azure Functions or AWS Lambda. With these services, the customer has no access to the underlying host, and therefore no ability to install an agent. Several attempts have been made to recreate CWPP functionality on these types of services, but none can be universally applied to all services, leading to a quagmire with many point products and different policy models for each. Key characteristics of cloud-native workloads Fortunately, several key characteristics of cloud-native workloads open the door to a fundamentally different approach to CWPP. First, cloud adoption often brings process changes: security gets involved in application development to help identify and remediate risk early, with the objective of instantiating workloads that are already secure. Second, the footprint of the application code running in microservices is significantly smaller and single-purpose, making behavior more predictable and deviations easier to detect. Finally, many such workloads have a very short lifespan, making it difficult for an attacker to gain persistence before the workload is decommissioned and a new one deployed. What does all of this mean for CWPP? 
It means you can stop struggling to force-fit agent-based technologies and start shifting left and shifting down. Shifting left in the public cloud The objective of shifting left is to ensure that all cloud workloads are born secure. Here, you’ll move security into IDEs and into the CI/CD pipeline to integrate security into the application development process, minimizing the likelihood that vulnerabilities and other security weaknesses are introduced into your production cloud environments. Applications that are built securely are far less likely to be compromised. This approach also has the tremendous benefit of being far more time- and resource-efficient by minimizing costly rework and delays associated with finding security issues in deployed workloads. This functionality is typically offered via a combination of cloud security posture management (CSPM) and cloud infrastructure entitlement management (CIEM) technologies that are increasingly being integrated into Cloud Native Application Protection Platforms (CNAPP). Step one, complete. Shifting down in the public cloud With your workloads built and deployed securely, the next step is to shift down. Even with vulnerabilities eliminated and a workload deployed into a securely configured environment, there is still a need to monitor behavior and guard against threats. But, as mentioned previously, traditional agent-based approaches won’t apply to many cloud services. Shifting down means moving many of the capabilities traditionally provided by a CWPP agent into the network. Runtime enforcement capabilities provided by solutions like Zscaler’s Zero Trust for Workloads allow for behavioral monitoring and control, threat prevention, and data loss prevention across all services, with no agents. Together, these two approaches can help you eliminate the complexity of protecting cloud workloads, while simultaneously improving the speed and efficiency with which your development organization can build and deploy secure cloud workloads. 
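A shift-left gate can be as simple as a policy check that runs in the pipeline before deployment and fails the build on findings. The rules and config fields below are invented for this sketch; real CSPM/CIEM tooling ships far richer, continuously updated rule sets:

```python
# Illustrative pre-deployment policy check, the kind a CI/CD pipeline
# might run before a workload is allowed into production. Every field
# name and rule here is hypothetical.

def check_workload_config(config: dict) -> list[str]:
    """Return a list of findings; an empty list means the build may proceed."""
    findings = []
    if config.get("public_ingress", False):
        findings.append("workload is exposed directly to the internet")
    if config.get("run_as_root", False):
        findings.append("container runs as root")
    if not config.get("encrypt_storage", True):
        findings.append("attached storage is unencrypted")
    return findings

# Blocking the pipeline on findings keeps insecure workloads from ever
# reaching production -- far cheaper than fixing them after deployment.
proposed = {"public_ingress": True, "run_as_root": False, "encrypt_storage": True}
for finding in check_workload_config(proposed):
    print("BLOCKED:", finding)
```

The value is in where the check runs: at build time, against every proposed workload, so issues are caught before they ever become runtime risk.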
Thu, 10 Mar 2022 08:00:01 -0800 Rich Campagna Zscaler SSE Insights Part 3: How to Stop Ransomware Zscaler was recently named a leader in the 2022 Gartner® Magic Quadrant™ for Security Service Edge, positioned with the industry’s highest ability to execute. This marks 11 consecutive years of Zscaler leadership in the Gartner® Magic Quadrant™. Security Service Edge (SSE) is a fairly new category. Depending on how you look at it, it’s either a consolidation of three existing security categories—Secure Web Gateway (SWG), Zero Trust Network Access (ZTNA), and Cloud Access Security Broker (CASB)—or, it’s a deconstruction of SASE that separates security capabilities from network plumbing. Either way, SSE is not just an arbitrary addition to the security industry’s alphabet soup: it’s an extremely relevant evolution of enterprise security that recognizes what organizations need to protect their distributed users, applications, and workloads against today’s threats. In this blog series, we’re outlining three case studies that showcase why SSE matters. You can find a blog for securing hybrid work here, and one for stopping data breaches here. In this blog, we’ll pull from the full SSE feature set with a case study around something on the top of most security teams’ list of concerns these days: ransomware. How SSE stops ransomware SSE delivers important protections across the ransomware attack lifecycle. A ransomware attack starts with attackers infiltrating an endpoint or application from the internet, whether through a phishing attack, exploit, or brute force. The secure web gateway capabilities of SSE help prevent this with inspection, malware protection, and least-privilege access control. 
Today’s attackers are sophisticated and can whip up new encrypted malware variants with ease, so it’s important that your security controls can inspect all traffic in-line (whether encrypted or unencrypted) and use tools like sandboxing and isolation to quarantine and analyze unknown threats. Stage 1 of a ransomware attack: Initial compromise Next, attackers move throughout your network to escalate their privileges and access your valuable data. A zero trust network architecture can mitigate damage at this stage by stopping attackers from moving laterally, granting access only to specific applications, not to other endpoints or your organization’s crown jewels. By stopping lateral movement, if an attacker does manage to infiltrate an endpoint, the attack is contained – which makes it much easier to mitigate, and much less likely to disrupt your business in any meaningful way. Stage 2 of a ransomware attack: Lateral movement Finally, ransomware actors execute their attack. Most ransomware attacks today include double-extortion tactics, where attackers steal data before encrypting as many valuable files as they can access across various endpoints and network assets. Attackers will threaten to publish the files that they steal, which gives them lots of leverage, as you can no longer just restore encrypted files from backup and be done with it. CASB and data loss prevention (DLP) capabilities identify vulnerable data and inspect outgoing traffic to make sure your assets stay safe, stopping attempted exfiltration to malicious servers. Stage 3 of a ransomware attack: Action to objective The Zscaler Zero Trust Exchange is the industry’s most complete SSE solution. Zscaler’s protections start before the attack even begins: its cloud-native, proxy-based architecture reduces the attack surface by making internal apps invisible to the internet, thus eliminating potential attack vectors. 
Zscaler then delivers full inspection and authentication of all traffic, including encrypted traffic, to keep malicious actors out. Zscaler safely connects users and entities directly to applications—not networks—to eliminate the possibility of lateral movement, and surrounds your crown jewel applications with realistic decoys for good measure. Then, it again inspects all traffic headed outbound to cloud applications to prevent data theft. By unifying these technologies through the Zscaler Zero Trust Exchange, organizations gain unmatched ransomware protection and visibility from a single platform that reduces IT complexity and optimizes performance. Zscaler is proud to be recognized for the comprehensive risk reduction that we deliver to our customers, and we’re improving every day. Our experts are continuously building new capabilities to stay ahead of attackers using advanced AI fed by data from the world’s largest inline security cloud. You don’t have to take our word for it: Download your own complimentary copy of the 2022 Gartner® Magic Quadrant™ for Security Service Edge and learn how the Zscaler Zero Trust Exchange can protect your organization. Wed, 09 Mar 2022 14:00:01 -0800 Mark Brozek Speeding Deal Velocity for Healthcare M&A Healthcare organizations are under pressure to stay competitive in a highly dynamic market while meeting the demands of patients, payers, and regulations. Many in the healthcare industry are turning to mergers, acquisitions, and divestitures (M&A) to take advantage of the market and stay ahead of the curve. The challenge for any M&A activity is time to value. And to succeed, organizations must have a clear strategic plan not just for the legal or business services activities, but for integrating business and technical processes. With a modernized, cloud-based M&A playbook, healthcare organizations can achieve greater M&A time to value while maintaining critical patient and organization data security. 
The reasons behind rapid consolidation in healthcare Several years ago, Deloitte predicted that M&A would increase. The increase would come from technology, market dynamics, and regulatory environments that make it harder for independent hospitals to remain profitable. As predicted: Virtual health usage is 38 times higher than in 2019. COVID forced many providers and systems to adopt telemedicine, and its use will continue. The CARES Act expanded telemedicine, and 40% of patients surveyed by McKinsey reported they would continue to use telehealth. Those factors, combined with COVID overload in hospitals today, are driving care outside the hospital walls and toward greater use of technology. Payer pressure to reduce costs is increasing. The focus of the Affordable Care Act on value-based care is pushing healthcare systems to lower the total cost of care. Smaller hospital systems often struggle to achieve cost savings goals at the level of larger organizations. They also have a harder time optimizing economies of scale in healthcare support services like IT and billing. Small and independent hospitals are struggling to meet revenue goals. High rates of uncompensated care and uninsured patients are compounded by an inability to match innovation at larger hospitals. This has fueled a target-rich acquisition pool. Demand for healthcare isn’t declining. It will go up as more Baby Boomers reach Medicare eligibility between now and 2030. But care will shift away from inpatient facilities as payers and policymakers push value-based care (VBC) initiatives. Other factors influencing the need for M&A include: Technological advancements disrupt the status quo. Healthcare has been slower than other industries to adopt new technology, but the pace of innovation and change is rapidly increasing. Facilities unprepared to adapt will have to make changes to stay competitive. They will seek investment or acquisition, or put themselves up for bid, to fill the gap. 
Regardless of the direction, capturing synergy benefits will be a challenge amid the technical complexities and debt of legacy systems. Resource imbalances between small and large organizations. Larger organizations have more resources to invest and scale up technology, data security, and healthcare expertise. A lack of resources leaves smaller organizations with no way to catch up except to consider M&A. Three ways to supercharge your healthcare M&A activities Healthcare systems that leverage inorganic growth to create more innovative and effective care opportunities also need a solid plan to execute once a merger or acquisition is underway. Traditional strategic IT plans for integrations are often designed around conservative timelines, with many efforts executed serially; these methodologies are loaded with supply chain and logistical challenges, requiring heavy capital spend and effort to deploy for a low return with respect to time to value. This legacy playbook needlessly extends M&A timeframes, making it harder to achieve the full potential benefits of the deal value as it pushes more value capture activities toward the end of the integration timeline. Three key concepts modernize the M&A playbook to drive value in the healthcare market: 1. Transform access and scalability. Zscaler’s cloud-based platform can securely connect users, systems, and applications from any location to any location and make any M&A transition much easier. Provide seamless and secured access to providers, patients, partners, and other professionals by using zero trust architecture and start integrating critical systems on day one. This enables patient care to continue uninterrupted throughout integration activities. Allow applications, data, and processes to flow without the burden of addressing underlying technical complexities. 
The applications or users of a healthcare organization can access another organization’s applications or systems no matter where they are hosted, without the need for network integrations or VPNs. Cloud-based SaaS platforms are infinitely scalable to accommodate spikes in demand during M&A. Just as important, the costs scale accordingly: integrate only what you need and pay only for what you integrate, meaning no more capex-laden integration project spend for securing and connecting healthcare resources. 2. Enable a modern healthcare workforce and increase productivity. The Affordable Care Act shifted healthcare from fee-for-service to VBC. Improving the quality of care and reducing costs requires the ability to take advantage of a diverse set of cloud and on-premises capabilities for things like telehealth. Move applications and systems to a more cost-friendly environment while maintaining a persistent security posture no matter where they are hosted. Extend that same flexibility to any future integration and mesh newly acquired application ecosystems without skipping a beat or lowering your pulse on security posture. Remove the need for healthcare workers to navigate different experiences when they work remotely and when they work from healthcare facilities. This enhances both the worker’s and the patient’s care experience. Give seamless and secured access to anyone in the healthcare network partner ecosystem without having to backhaul traffic for compliance. The Zscaler platform has helped meet many M&A time-to-value demands. Allow practitioners to gain great visibility across the entire combined ecosystem by granting access to all yet-to-be-integrated systems on day one. Let them be productive as the integration activities begin. 3. Maintain security throughout M&A transitions. Healthcare providers must protect patient data or face financial penalties and a loss of trust. 
Cybercriminals know that legacy systems are vulnerable, and they often take advantage of these vulnerabilities after M&A announcements when IT teams are in transition. Zscaler can help avoid cybersecurity risks by quickly and securely integrating new providers and facilities without exposing applications, data, or ecosystems to anything or anyone not directly permitted. Zscaler provides central, robust auditability tied to all access and internet traffic and, along with deceptive honeypots, allows quick identification and remediation of potential internal or external threats starting day one, remaining shields-up during the entire integration lifecycle. As a cloud platform, Zscaler can overlay any network and provide a flexible and scalable way to both protect and secure many diverse integration or separation scenarios. The Zscaler Zero Trust Exchange removes nearly all the complexities in connecting and securing disparate companies’ technologies, allowing integrations to move at the speed and security of the cloud rather than the speed of legacy IT approaches. Learn more about the modern approach to integrating healthcare businesses Eliminate time-consuming M&A integration complexities and transaction challenges and gain a competitive advantage in the healthcare landscape. Learn how the Zscaler Zero Trust Exchange modernizes and accelerates M&A by downloading our datasheet and scheduling a demo. Fri, 04 Mar 2022 12:00:01 -0800 Mike Cuneo Preparing For the Log4j Long Haul: How to Mitigate Log4Shell Risk It has been several months since the discovery of the pervasive Apache Log4j / Log4Shell vulnerability, but the end of managing this threat is not yet in sight. Moderate estimates predict that security teams will continue to manage this vulnerability and the associated risk for years to come. 
Although software vendors have issued thousands of updates with patches to the vulnerability, there are potentially millions more instances where the flaw will likely take months or years to be patched, or may never be fixed at all. There are two main issues contributing to the longevity of this vulnerability: firstly, the broad-scale adoption of the Log4j open source Java logging library, and secondly, the difficulty of identifying where it is currently in use. Even security vendors are still discovering previously unknown instances of Log4j layered within their products, and the long list of unsupported software used by other developers offers no hope for security teams depending on updates alone. Making matters worse, the Log4j vulnerability enables remote code execution, providing easy access for attackers to gain control and carry out their goals. Unlike vulnerabilities that enable only a limited subset of attacks, Log4Shell exploits work like universal keys that open the door to anyone, adding to the bitter reality that unpatched instances of Log4j can still be found across nearly every organization in the world. Most security teams jumped on the response to the Log4j crisis when the news first came out and set about mitigating the Log4Shell flaw by scanning their environments, isolating discovered instances, and installing updates where applicable. If this sounds like your organization, kudos to you for all the hard work you’ve already done! But you are not in the clear yet, and unfortunately, there is still a hill of dark discoveries to climb before your SOC team will have full control over the situation. The many exploits we have seen so far will be followed by more sophisticated threats that will have a larger impact on the organizations they target. To prevent your organization from being compromised by this vulnerability, you’ll need to develop a strategy to mitigate risk and improve rapid detection, isolation, and response to these types of threats. 
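To see why finding Log4j is so hard, consider that the library is usually buried inside application JAR files rather than installed as a visible package. A naive inventory scan can be sketched as below; the marker is the JndiLookup class path that ships in Log4j 2, the scan root is an assumed path, and real scanners also unpack nested "fat" JARs and check version metadata:

```python
import zipfile
from pathlib import Path

# Class file that ships inside Log4j 2 JARs; its presence flags a
# bundled (and possibly vulnerable) copy of the library.
MARKER = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def find_log4j_jars(root: str) -> list[str]:
    """Return paths of JAR files under root that bundle the JNDI lookup class."""
    hits = []
    for jar in Path(root).rglob("*.jar"):
        try:
            with zipfile.ZipFile(jar) as zf:
                if MARKER in zf.namelist():
                    hits.append(str(jar))
        except zipfile.BadZipFile:
            continue  # skip corrupt archives rather than abort the scan
    return hits

if __name__ == "__main__":
    # "/opt/apps" is a placeholder scan root for this sketch.
    for path in find_log4j_jars("/opt/apps"):
        print("possible Log4j instance:", path)
```

Even this simple scan illustrates the core problem: the inventory is only as good as its coverage, which is why the steps that follow stress completeness and continuous re-checking.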
Start by following these critical steps: Establish a complete asset inventory You must account for every asset to stay one step ahead of the most recent exploits by quickly installing the latest updates. In addition to continuously monitoring your external applications, you need to examine your in-house applications that use Log4j libraries and keep them updated, and look for scheduled batch jobs and other common sources that you may have missed during your initial scanning activities. A complete inventory of your assets will help you streamline the process of efficiently tracking and implementing new updates. Develop a proactive plan As you work to align your organization for the long-haul marathon to come, the highest imperative is to continuously identify and update all vulnerable systems and applications in your environment as quickly as possible. Assist your team in discovery by updating your SIEM rules and conducting regular threat hunting activities to detect new exploits and active threats, and subscribe to trusted information sources for the latest notifications and best practices to help inform your strategy and improve your plan. Implement zero trust Eliminate lateral movement so that adversaries who gain an initial foothold on a vulnerable server (or pivot to one from within) can’t progress to other parts of the network. Transform your approach to perimeter security to become invisible to attackers and eliminate lateral movement by adopting zero trust policies: establish strong IAM policies with least-privileged access controls and MFA; connect users directly to applications, not the corporate network; minimize your attack surface; and inspect all traffic, unencrypted and encrypted. Use deception to mitigate risk In the case of zero-day attacks like Log4j, attackers tend to go after testbed applications commonly hosted on subdomains such as dev, test, uat, etc. 
These internet-facing assets are further down on the patch priority list, which makes them a likely target for adversaries exploiting a zero-day vulnerability. By creating decoys that mimic these vulnerable testbed applications, you can intercept attempts to exploit Log4Shell and inform containment actions. These also help to future-proof against any other zero-days, as the decoys deployed today will continue collecting telemetry and trigger when newer threats are disclosed. Develop your long-term Log4j response strategy by joining our live event Managing the Long Tail of Log4j on March 1, at 11 a.m. PT. Learn how to create an effective SecOps plan for handling Log4j that includes a list of common sources you may have overlooked, how digital transformation can help you eliminate threat vectors, and best practices to proactively expose active threats. Register here. Fri, 04 Mar 2022 08:00:01 -0800 Emily Laufer Zero Trust by the Numbers: Why Firewalls and VPNs Don’t Make the Cut The Greek philosopher Heraclitus once said, “Change is the only constant in life.” This statement certainly applies to the modern workplace. Your users, data, and applications were once confined to the corporate office and data center. But, today, 50 percent of all corporate data is stored in the cloud, and 70 percent of the applications used by businesses are SaaS-based. And no one can ignore the fact that, while already in motion, the global pandemic rapidly accelerated the shift to remote work. Overnight, organizations saw a 300 percent increase in the percentage of total employees that are remote workers. While the numbers may adjust to some extent when we eventually emerge on the other side of the pandemic, the reality is this forward momentum will not reverse. Remote work is here to stay, at minimum in the form of a hybrid workplace. Data will become even more distributed across clouds, applications, and user devices spread around the globe. 
And, businesses will continue to take advantage of the benefits of moving applications out of the data center and into public clouds and SaaS. These shifts demand change beyond what can be seen at first glance. Organizations embracing digital transformation quickly realize that the old ways of protecting the perimeter and trusting everything inside of it may have worked in the past, but just don’t cut it in their new reality. There is no perimeter anymore. So, how can you stake your security on firewalls and perimeter-based tactics? Ninety-two percent of organizations realize you can’t, and recognize the need to upgrade their security to better protect this new hybrid-workforce reality. But the question remains: how do you achieve this? According to Cybersecurity Insiders, 72 percent of organizations are prioritizing the adoption of a zero trust model. Zero trust is a strategy—a foundation for your security ecosystem—based upon the principle of least-privileged access, combined with the idea that users should not be inherently trusted. Organizations are quickly coming to the realization that you can’t do zero trust with firewalls and VPNs. In fact, 47 percent of organizations don’t have confidence that their existing technologies will help them achieve zero trust. Unfortunately, the other 53 percent will place users on the corporate network, mistakenly trusting their existing technologies. Connecting users to the corporate network creates significant challenges to achieving zero trust. VPN gateways have an open inbound listener to allow remote users to connect. Unfortunately, this also makes it easy for attackers to discover and gain access to the network. Once on the network, the inherent trust placed in the user makes it easy for threats and attackers to move laterally across the network to locate and exploit valuable assets and data. 
To avoid connecting users to the network, organizations will sometimes choose to publish applications on the internet instead, exposing IP addresses to make applications easier for employees to find and access. But this just moves the attack surface from the connectivity infrastructure to the application itself, which is not an improvement and often carries even more risk. Inspection of encrypted traffic is more critical than ever, as attacks over encrypted channels increased 314 percent in the last year. But solutions leveraging passthrough firewalls have limited ability to inspect encrypted traffic inline and at scale. Even if they could inspect traffic, the passthrough approach allows content to reach its destination before analysis is complete, only signaling a problem after it has occurred and potentially spread across your entire network. So, if your firewalls and VPNs aren’t fit for zero trust, what can you do? According to Socrates, “The secret of change is to focus all of your energy not on fighting the old, but on building the new.” As such, zero trust requires a fundamentally different approach. Instead of blind trust and complex network segmentation, the hallmarks of firewall-based solutions, zero trust authorizes connections using identity and context, where context is based upon business policy and dynamically reassessed as conditions change. Zero trust requires an inline platform possessing three critical capabilities. First, it must be able to eliminate the lateral movement of threats by connecting users directly and only to the applications they need to do their job, while never placing users on the corporate network. Second, it must minimize the attack surface by making users and applications invisible to the internet, because you cannot attack what you cannot see. Finally, it must stop threats and data loss, which means full inspection of all traffic, including encrypted traffic, without choking application performance.
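The identity- and context-based authorization described here can be sketched in a few lines of code. This is a minimal illustration of the decision logic, not Zscaler's implementation; the user names, app names, and policy fields are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    app: str                 # the single app requested -- never "the network"
    device_managed: bool
    risk_score: int          # 0 (low) to 100 (high), reassessed per request

# Hypothetical policy: which users may reach which apps, under what context.
POLICY = {
    ("alice", "payroll"):  {"require_managed": True,  "max_risk": 30},
    ("alice", "intranet"): {"require_managed": False, "max_risk": 70},
}

def authorize(req: AccessRequest) -> bool:
    """Least-privileged check: deny by default, allow one user-to-app
    connection only when the current context satisfies policy."""
    rule = POLICY.get((req.user, req.app))
    if rule is None:
        return False  # no rule means no access -- nothing is inherently trusted
    if rule["require_managed"] and not req.device_managed:
        return False
    return req.risk_score <= rule["max_risk"]
```

Note that the decision grants access to a single application, never to a network segment, and that context (device posture, risk score) is evaluated on every request, so a change in conditions changes the outcome.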
“Change is the law of life. And those who look only to the past or present are certain to miss the future.” - John F. Kennedy To successfully achieve digital transformation, organizations must adopt a new approach to networking and security, one that enables true zero trust. Every day, the Zscaler Zero Trust Exchange helps IT teams embrace zero trust on their transformation journey. The Zero Trust Exchange secures over 200 billion transactions and prevents more than 7 billion security incidents and policy violations every day, all while processing over 200,000 unique security updates to ensure your organization is secure. To find out more about how Zscaler can help you transform securely using zero trust, download our complimentary whitepaper, “The Top Five Risks of Perimeter Firewalls and the One Way to Overcome Them All” and register for our webinar, “Why Firewalls Cannot Do Zero Trust.” Tue, 08 Mar 2022 08:00:02 -0800 Jen Toscano Zscaler SSE Insights Part 2: How Modern Companies Stop Data Breaches In this blog series, we are detailing how digital transformation necessitates security transformation and how security service edge offerings are the ideal solution for modernizing enterprise cybersecurity. Our previous topic revolved around securing hybrid work; this post is focused on stopping data breaches with SSE—just like our upcoming webinar (don’t miss it!). This subject is critical because legacy data protection strategies and technologies no longer suffice in the modern business world. How has data protection changed? Back in the olden days, an enterprise’s users, apps, and data were all housed on-premises and connected via a private network. In this scenario, traditional appliances like firewalls were used to establish a security perimeter around the network and the resources therein. This style of protecting data was known as castle-and-moat security.
While many organizations still attempt to rely on a traditional moat strategy, the castle has all but vanished. With the rise of software-as-a-service (SaaS), infrastructure-as-a-service (IaaS), and platform-as-a-service (PaaS) offerings, IT applications (and the data within them) are no longer on-premises or owned by the enterprise. Additionally, because of the global pandemic, users have also left the perimeter en masse. Connections between all of these entities are now facilitated by the public internet, which has effectively become the new network for modern organizations. In other words, while IT could once protect corporate data through traditional appliances that secured the enterprise network and the data center, that is now a distant memory. In today’s world, the legacy security stack fails to address modern cloud use cases, breeds unnecessary complexity, demands traffic backhauling, impedes user experience and productivity, and lacks the scalability needed to find data loss hiding in SSL traffic on the public web. SSE to the rescue Security service edge (SSE) offerings are what organizations now need to protect their data. These comprehensive security platforms provide the wealth of integrated functionality needed to defend data consistently in any transaction across the web, cloud resources, and the network. This removes the need for a fleet of hardware appliances in the data center, reduces complexity through consolidation, and enhances security through technologies designed for modern use cases. Additionally, SSE platforms deliver their security through the cloud and at the edge (as close to the end user as possible). This means that hardware appliances and backhauling are avoided, user experience and productivity are maximized, and unprecedented scalability powers SSL inspection for even the largest enterprises. To hear more about how SSE can help stop data breaches, watch our SSE Insights video below.
Zscaler Security Service Edge is the ideal solution for organizations that want to address their modern data protection challenges. That’s why Zscaler is a Leader in the brand-new Magic Quadrant for SSE and has the highest ability to execute out of all evaluated vendors. To learn more, download the Magic Quadrant for SSE, courtesy of Zscaler, or register for our upcoming webinar, “How to Ensure Data Protection with Security Service Edge.” To see what real-world customers have to say about Zscaler, check out our latest SSE infographic here. Thu, 03 Mar 2022 08:00:01 -0800 Jacob Serpa The Seven Pitfalls to Avoid When Selecting an SSE Vendor As seen in Gartner’s recent Magic Quadrant for Security Service Edge (SSE), not all SSE vendors are created equal. Undertaking a network and security transformation requires careful thought and consideration, as well as careful avoidance of common pitfalls during the selection process. As a refresher, security service edge (SSE) is the convergence of security services like cloud access security broker (CASB), secure web gateway (SWG), and zero trust network access (ZTNA) into one cloud platform delivered at the edge. It is the security component of Gartner’s secure access service edge (SASE) model. Given the complexities in navigating this new Gartner market segment, we have compiled the ‘Seven Pitfalls to Avoid When Selecting an SSE Vendor,’ where we provide real-world guidance for SSE decision-makers to ensure they are making the right decisions for their network and security transformation. This guidance encourages SSE decision-makers to consider SSE solutions that are: Born in the cloud with best-in-class resilience, infrastructure, geographic diversity, functional capabilities, and optimal user experience. This allows SSE services to be delivered in-line at carrier-neutral data centers. Built on the foundation of Zero Trust, which only allows access for contextually validated identities, regardless of location/network.
This least-privileged path is for all services, not just users. By connecting authorized sources through the correct SSE controls to valid destinations and nothing more, enterprises remove lateral movement, which is often exploited by threat actors. Able to provide SSL/TLS inspection of traffic at production scale with minimal impact on performance, which requires a scalable proxy architecture. The deep insights gained from inspection apply advanced threat protection for encrypted traffic and advanced data classification policies for data loss prevention. Able to offer flexible deployment models for protecting users and applications wherever the application is hosted, with protections extended to third parties and workload-to-workload communications within the same or across multiple clouds. Transparent, easy to authenticate, and always on, ensuring end users have a great experience on the SSE platform, verified with objective measures. Degradations in user experience should be monitored and diagnosed. Able to integrate via robust APIs with other best-of-breed ecosystem players to ensure optimal protection and user experience. Able to seamlessly pilot their solution with a single unified agent, access to a global set of service edges (close to the user), and a centralized, easy-to-use UI. During the critical SSE selection process, remember that the abilities to manage a global cloud at scale, provide a holistic Zero Trust architecture, inspect encrypted traffic at scale, deliver the best user experience, allow flexible deployment modes, and support an ecosystem of best-in-class partners are all critical factors to consider. Zscaler offers the ideal SSE solution to round out a SASE architecture with an all-in-one security platform built on the world’s largest security cloud.
Zscaler SSE:
- Eliminates the attack surface by hiding apps behind the Zscaler Zero Trust Exchange™
- Prevents compromise by securing all user-to-app, app-to-app, and machine-to-machine communications
- Stops lateral movement by connecting users to apps and not the network, isolating threats
- Minimizes cost and complexity while maximizing user experience and performance
To learn more about how SSE can transform your security strategy, read our new ebook that details the pitfalls and challenges organizations should avoid. Wed, 02 Mar 2022 08:00:01 -0800 Nathan Howe Zscaler SSE Insights Part 1: How Modern Companies Secure a Hybrid Workforce This blog is the first in a three-part series that details using security service edge (SSE) to modernize enterprise cybersecurity. In this blog, we will focus on how SSE can secure a hybrid workforce in which users are connecting both from the office and remote locations while accessing applications that reside either in the data center or in the cloud. SSE is a framework introduced by Gartner in 2021 that decouples security capabilities from network configuration. Within those security capabilities, three key solutions—ZTNA, CASB, and SWG—are consolidated into one offering, now named SSE. Zscaler was recently named a Leader in the first-ever 2022 Gartner® Magic Quadrant™ for Security Service Edge, positioned highest in Ability to Execute. Modernization with hybrid work The last few years have seen a seismic shift toward people working remotely. Outdated technologies like VPNs and firewalls were designed to secure corporate networks at a time when applications mostly resided in the data center. VPNs and firewalls solved app access issues when there were fixed assumptions about where employees worked and where applications were hosted. Today, a collective workforce that includes employees, third-party contractors, and suppliers is no longer limited by location.
The last two years have accelerated this, with the continuous evolution and adoption of work-from-anywhere. A recent survey by PwC suggests as many as 78% of employees prefer a hybrid work approach for their future work. In addition to workplace modernization, digital transformation has been a main point of discussion, especially in the last decade. Organizations moving applications to the cloud have seen several benefits in scale and reductions in complexity and cost. Gartner analysts said that more than 85% of organizations will embrace a cloud-first principle by 2025. As a stopgap, many organizations have used VPNs to provide application access to these mobile users, and in doing so have been backhauling traffic, resulting in poor user experience and connectivity issues. As workplace modernization continues with the adoption of hybrid work and SaaS, security solutions need to be able to keep up. The Zscaler Zero Trust Exchange is the industry’s most complete platform to secure a hybrid workforce Zscaler provides a comprehensive security solution allowing organizations to confidently embrace hybrid work. The Zscaler Zero Trust Exchange provides a platform that unifies the SSE components required to securely access private apps, SaaS apps, and the internet. Zero Trust Network Access (ZTNA) works on the principle of least-privileged access, providing application access only to authorized and authenticated users and safeguarding private apps. The platform offers a cloud access security broker (CASB) to safely access SaaS apps by enabling granular data protection and access policies, even guarding against insider threats. The secure web gateway (SWG) capabilities of the Zscaler Zero Trust Exchange offer secure internet connections without degrading user experience.
The three main components to consider for a well-rounded solution for hybrid work are:
- Reducing the attack surface by making applications invisible to the internet and allowing access only to authorized users or devices, preventing attackers from discovering them.
- Stopping lateral movement by connecting users and devices directly to apps without ever exposing the network.
- Providing a superior user experience with fast, direct access to apps, without connecting the user to the network, enforcing policy at the edge closest to the user.
Zscaler provides a comprehensive offering to secure applications no matter where users are connecting from. Get started on your zero trust journey with Zscaler. Zscaler Security Service Edge can help you protect your hybrid workforce. To learn what real-world customers have to say about Zscaler, check out our latest SSE infographic. Stay tuned for two more blogs in our SSE series to learn how to stop data breaches and ransomware. Tue, 01 Mar 2022 12:30:01 -0800 Kanishka Pandit Are You Prepared for Russian Cyberwarfare Attacks? On Thursday, February 24th, Russia invaded Ukraine with a combination of physical strikes and cyberattacks on banks, security services, and government websites in Ukraine. Government officials have issued multiple warnings that retaliatory cyberattacks on critical infrastructure will hit the US and countries in the European Union imposing sanctions against Russia. As we wait and hope for a diplomatic resolution to this political conflict, nations are gearing up for the potential that the situation will escalate further, setting the world stage for the possibility of war. But what form will that war take? Will this global conflict be characterized as the first official cyberwar in history?
While state-sponsored APTs are already a constant concern for security professionals, the implications of the current political situation between Russia and Ukraine lead us to consider the possibility of hybrid warfare and large organized strikes occurring across our virtual fronts. This concern is supported by the Gerasimov Doctrine, an outline of how to support Russia’s political goals using hybrid warfare tactics, which began in 2008 with cyberattacks on Georgia and continued with attacks on Ukraine since 2014 and Syria since 2015. Russian cyberattacks on critical infrastructure are nothing new, whether from criminal groups or state-sponsored APTs, but what constitutes an act of “cyberwarfare” is a bit of a gray area. While last year’s attack on the Colonial Pipeline turned heads by demonstrating Russian hackers’ capacity to breach critical infrastructure, the breach lacked a clear state actor and the use of aggressive force required under international law to deem it an act of war. Similarly, the ongoing DDoS and malware attacks on Ukraine, including HermeticWiper and WhisperGate, provide a shroud of plausible deniability for Russia and show restrained force causing disruptive outages calibrated to generate confusion and internal conflict without undue risk. Russia has a demonstrated history of using covert strategies such as spreading disinformation and interfering with communications channels to fragment political alliances. Now that the format is virtual, it is easier than ever for Russia to compromise legitimate companies and information sources for use as launching platforms to achieve broader goals like gaining access to more strategic targets and better intel. The February 16 CISA Alert AA22-047A details how Russia has used the aforementioned tactic to compromise small defense contractors and gain sensitive intel from US defense networks since January 2020.
Security professionals must gear up for a potential increase in attacks, whether or not they’re deemed “cyberwarfare.” For many organizations, this is a difficult ask: cyber risk management can already be a complex and overwhelming issue. Below, we share some guidance to help you simplify and improve your security risk posture to drastically reduce the odds that your organization will be compromised. Protect your organization from attack by reducing cyber risk Anticipating the fallout from Russia’s advances toward Ukraine, CISA Alert AA22-011A, issued back in January, advises organizations to increase their security postures and defenses against the impending threat of future cyberattacks on the US supply chain. The alerts come as a warning of the potential for a Russian-backed wave of retaliatory attacks on critical US infrastructure that could eclipse the Colonial Pipeline shutdown that grabbed headlines last year. Accompanying the alert, CISA, the FBI, and NSA released the AA22-011A Joint Cybersecurity Advisory, Understanding and Mitigating Russian State Sponsored Cyber Threats to U.S. Critical Infrastructure, which includes guidance and recommendations for companies seeking to strengthen their security posture. In summary, these government affiliates provide broad advice to:
- Be prepared. (Establish critical business and security continuity plans)
- Enhance your organization’s cyber posture. (Implement strong security practices, policies, and technology)
- Increase organizational vigilance. (Stay informed)
Using CISA’s official checklist gives organizations a place to assess their current security posture and start hardening defenses, with the advice to implement zero trust architecture, prioritize threat hunting, and leverage key information sharing sources for the most up-to-date threat intel.
Supporting the collective mission of helping organizations improve their security postures and develop strong practices and programs, Zscaler has a proven track record of providing our customers with innovative solutions and intel that simplify security transformation and speed broad cyber risk reduction. Implement zero trust architecture to defend against cyberattacks To protect your organization from collateral damage, it is important to lay a solid foundation for addressing cyber risk and eliminating attack vectors. While there are many existing security tools that can protect against specific types of threats, implementing a complete zero trust strategy is the most effective way to reduce risk overall. Zscaler helps organizations adopt zero trust. The Zscaler Zero Trust Exchange for users and workloads delivers enhanced cyber protection and user experience for secure access across your internal and external applications, to help you:
- Minimize the attack surface. Make apps invisible to the internet and impossible to exploit.
- Prevent compromise. Stop attacks with full inline inspection and threat intel from the world’s largest security cloud.
- Eliminate lateral movement. Connect users to apps without ever exposing the network.
- Stop data loss. Prevent data theft and accidental exposure across managed and unmanaged devices, public cloud, and SaaS.
Consider the attack chain of a ransomware attack: first, attackers perform reconnaissance to understand your assets and security controls. Then, they compromise your system (perhaps using phishing, an exploit, or credential stuffing), move laterally to escalate privileges and infect as many systems as possible, exfiltrate any data that they’d like to use for extortion purposes, and then encrypt your data. Zero trust uses inspection and policy-driven conditional access to minimize the success of each of these steps and maximize resiliency.
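The ransomware attack chain lends itself to a toy model of defense in depth: the attack succeeds only if every stage succeeds, so a control that disrupts any one stage breaks the chain. The stage names and control descriptions below simply restate the text; this is an illustration, not a product API.

```python
# Each stage of the ransomware attack chain, paired with the zero trust
# control that disrupts it. In this toy model, an attack succeeds only if
# every stage succeeds, so breaking any single stage breaks the chain.
ATTACK_CHAIN = [
    ("reconnaissance",   "hide the attack surface"),
    ("compromise",       "inline inspection of all traffic"),
    ("lateral movement", "connect users to apps, not the network"),
    ("exfiltration",     "block traffic to command-and-control servers"),
]

def chain_succeeds(disrupted_stages: set) -> bool:
    """The attack succeeds only if no stage has been disrupted."""
    return not any(stage in disrupted_stages for stage, _ in ATTACK_CHAIN)
```

With no controls in place the chain completes; disrupting even a single stage, such as lateral movement, is enough to stop it, which is why layered controls at every stage yield the greatest reduction in risk.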
In the above example, the Zscaler Zero Trust Exchange hides your attack surface, inspects and analyzes all traffic to prevent intrusion, keeps attackers from moving laterally, and stops sensitive data from being exfiltrated to command-and-control servers. These multi-layered defenses disrupt every stage of the ransomware attack chain and help you quickly uncover and stop advanced threat actors before they can cause harm. To learn more about how a zero trust architecture can help you protect against cyberattacks and harden your security posture, visit this page. Threat hunting is an important part of a mature security strategy With novel threats like WhisperGate, a wiper malware that masquerades as ransomware, being discovered all the time, you need enhanced visibility and expert determinations to detect new exploits and advanced threat behavior. The majority of security teams don’t have dedicated headcount for threat hunting, so they typically organize threat hunting activities with key members across the organization on an ad-hoc or periodic basis. To help make this strategy a success, it is important to leverage additional insight that can help verify and contextualize your team’s findings. If you are a Zscaler customer, you have access to real-time telemetry from the Zscaler cloud, the world’s largest security cloud and threat feed, which provides the context your threat hunting program needs to stay on track and close gaps in your security program. Leverage trusted sources of information It is essential to stay up to date on the latest discoveries and advances with updates from information sources you can trust. CISA recommends subscribing to their mailing list and feeds to receive notifications about new security topics and threats. One way that Zscaler supports our customers and the larger SecOps community is by sharing our latest security research findings from ThreatLabz.
It is important to put a formal SecOps plan in place that includes how your team will collect and triage intel from trusted sources and how it will respond by making critical updates as efficiently as possible. Check out the latest threats and discover critical insights from Zscaler Security Advisories, backed by ThreatLabz’s continuous threat research across millions of real-time samples. Zscaler is your security ally As the security landscape grows in complexity and new threats evolve across the globe, the Zscaler team is here to provide direction and guidance on the best ways to keep your organization safe. In the face of this political conflict, our mission is to help support the whole community of cybersecurity professionals tasked with the difficult job of preparing their defenses against the world’s most advanced threats. To this end, we remain committed to diligent research, timely information sharing, and rapid assessments to help secure our customers, community, and all the people, infrastructure, supply chains, and services that they protect. For more detailed information on the cyberattacks against Ukraine visit: ThreatLabz Technical Analysis: HermeticWiper & resurgence of targeted attacks on Ukraine ThreatLabz Technical Analysis: PartyTicket Ransomware linked to HermeticWiper Fri, 25 Feb 2022 10:37:28 -0800 Emily Laufer Uncovering the Truth of Firewalls and Zero Trust Join us for a webinar as Zscaler experts explore why zero trust is necessary, how firewalls and VPNs are failing, and what it takes to successfully implement zero trust. Businesses have changed significantly since the introduction of the firewall. Today’s enterprise employees can work anywhere and everywhere—at a home office, in shared workspaces, at branch offices, and beyond—as long as there’s an internet connection and a power source. Users and applications are exposed and cannot be trusted in this distributed workplace.
Zero trust is a holistic approach to securing modern organizations, based on least-privileged access and the principle that no user or application should be inherently trusted. Connections are authorized using identity and policy based on business context. Implementing zero trust is essential to effectively secure all these users and applications, yet many security vendors mislead organizations by claiming to deliver zero trust solutions. In reality, firewalls and other legacy solutions are incapable of delivering zero trust. Join us for a webinar and read on to explore why. Perimeter models using firewalls and VPNs cannot do zero trust Hub-and-spoke and castle-and-moat architectures leveraging firewalls and VPNs need to be extended to users outside the defined perimeter to enable remote access to applications, thereby dramatically expanding the attack surface. These large attack surfaces—in a data center, cloud, or branch—get exposed as applications get published on the internet and can be found by users and also by bad actors. These architectures are based on the principle of “verify, then trust,” which fully trusts any verified user and allows them access to all applications on the network. It’s an obsolete model that fails to block bad actors that imitate legitimate users and doesn’t effectively secure remote users, data, and cloud-based applications outside the network perimeter. On the contrary, the zero trust model is based on the principle of “never trust, always verify” and least-privileged access, which assumes that no user or application should be inherently trusted. Virtual, cloud-based perimeter models like virtual firewalls cannot do zero trust Cloud-based perimeter models like virtual firewalls are no different from their physical hardware counterparts: the firewall simply moves from the data center to the cloud, but the overall security model remains the same.
Operators still need to define perimeter policies, and threats can still move laterally across the organization. Virtual firewalls also expose IPs on the internet, meaning they can be discovered, attacked, and exploited; they therefore cannot deliver zero trust. Most virtual solutions are adaptations of hardware predecessors ported to general-purpose cloud infrastructure, resulting in significant performance limitations. Backhauling traffic to security applications hosted in public clouds like GCP or AWS chokes user application performance. This model lacks the flexibility to grow and restricts capacity planning for remote and branch users, as application availability depends on the underlying cloud infrastructure and its availability. Cloud-based point solutions (CASB, SWG) cannot do zero trust Most “cloud-based” point solutions started with a specific security feature or capability and then attempted to compete as a platform by adding capabilities to their existing framework, with zero trust as an afterthought. They are immature solutions with limited experience in enterprise security and lack the depth and breadth of a comprehensive security platform, including capabilities such as cloud firewall, sandbox, intrusion prevention, cloud SWG, ZTNA, browser isolation, dynamic risk assessment, and more. Scalability of point products is dependent on, and at times limited by, the availability of data centers on which these solutions are hosted. Zero trust with Zscaler Unlike network security technologies that leverage firewalls, VPNs, or cloud-based solutions, Zscaler delivers zero trust with its cloud-native platform: the Zscaler Zero Trust Exchange. Built on a proxy architecture, the Zero Trust Exchange connects users directly to applications, never to the corporate network. This architecture makes applications non-routable entities that are invisible to potential attackers, so your resources can’t be discovered on the internet.
Operating across 150 data centers worldwide, the Zero Trust Exchange ensures that the service is close to users, co-located with the cloud providers and applications they are accessing. Many organizations need zero trust but are unsure of the best way to implement it. Tune into our webinar, Why Firewalls Cannot do Zero Trust, to understand what zero trust is, what it isn’t, and best practices for implementation. Webinar dates:
- Americas: Tuesday, March 8, 2022 | 11:00 AM PT | 2:00 PM ET
- EMEA: Wednesday, March 9, 2022 | 10:00 AM GMT | 11:00 AM CEST
- APJ: Wednesday, March 9, 2022 | 10:00 AM IST | 3:30 PM AEDT
Wed, 23 Feb 2022 08:00:01 -0800 Ankit Gupta Cloud Security Modernization - as Easy as 1-2-3 Most companies can't just "jump" into the cloud and successfully move off-prem cold turkey, so they need a plan to get there over time. Often, the starting point is a “lift and shift” for a specific use case, and from there the cloud footprint gradually expands. Maintaining a consistent security posture across both on-prem and cloud, i.e. a hybrid approach, can help such businesses migrate and adapt at the speed that makes sense for them. Many have turned to Zscaler’s cloud-delivered Zero Trust Exchange to consistently secure both on-prem data centers and cloud workloads and to migrate workloads over time, while reducing the network engineering involved. In the process, they’ve gained the advantages of a flexible, scalable zero trust security platform. Here is an example of how to work your way from a secure network solution to a cloud-enabled transformational environment in stages:
A. Deploy Zscaler as your path to the internet and to private applications, with no infrastructure changes. You get increased security with very simple implementation.
B. Next, simplify your network by removing gateway appliances as they age out. Watch cost and overhead fall.
C. Finally, cloud-enable your network.
Deliver secure SD-WAN and private app access to cut networking costs and ensure a better user experience. This three-step process means you can achieve your cloud and zero trust vision at a pace that makes sense for your business. Upgrade your connectivity with a solution that has a path to your future. Reduce your foundational networking costs while increasing your security:
- Eliminate your attack surface. If they can’t find you, they can’t attack you.
- Stop lateral movement. Connect users only to what is required.
- Defend against modern threats. Full traffic inspection protects users and workloads.
These security benefits come with improved user experience, since the user can go directly to their cloud-based solutions (like Microsoft O365) without taking a hairpin turn through your data center. Position your business for the cloud. One step at a time. Zscaler has a clear path for transformation that makes it attainable for all businesses. If you have questions about how to transform your network for the cloud and increase your security with zero trust solutions, reach out to us for help! Tue, 22 Feb 2022 09:00:05 -0800 Lisa Lorenzin Zero Trust That Rogue Host “What if a rogue host gets stood up in my public cloud environment, either by accident or by a malicious attacker? And then, what if that rogue host is used as a launchpad to inflict further damage on my environment or to steal data?” Several customers have described this same scenario involving unauthorized virtual machines (VMs) or rogue instances, which has been made more likely by the automated and simplified provisioning capabilities offered by public cloud service providers. A developer could unintentionally instantiate a host that may not be properly configured or secured, and the host could be taken over by a malicious actor. Or the actor, if he or she has the right privileges, could stand up a rogue host. A good approach to security should protect the environment no matter the cause.
Read on to learn how a zero trust approach can reduce risk. A multi-layered approach to securing the environment is required. The approach should: Eliminate external attack surface. Applications exposed to the internet present an attack surface. Applications should not be discoverable. Prevent initial compromise. Attackers need to gain a foothold in the environment. Attack techniques could include phishing, exploits of zero-day vulnerabilities, or other means of unauthorized access. End-user devices and access need to be secured. Stop lateral movement of threats. After gaining a foothold, attackers move laterally to their ultimate targets. Stopping threats from spreading across the network is paramount in limiting damage. In this instance, a zero trust approach to securing communications is key. Prevent exfiltration of data. Securing outbound communications of workloads from cloud and data center environments is a critical final step. External communications must be authorized and inspected to prevent data loss. Perceptive readers will recognize the above steps as a simplified version of a kill chain. Breaking any link in the chain could stop an attack, but having controls at every stage yields the greatest reduction in risk. The remainder of this blog focuses on the third approach: lateral movement prevention. Stopping lateral movement of threats with zero trust Today, the term “zero trust” is broadly applied and at risk of losing its value. Narrowing the term to “zero trust networking” is useful and actionable. Simply put, with zero trust networking, we assume the network is hostile and untrusted; on this untrusted network, the identity of any communicating entity (e.g. applications or workloads) must be verified and every communication flow must be authorized. Furthermore, policies must be automated to ensure no gaps in coverage, especially in dynamic environments.
The question is: how does zero trust relate to stopping lateral movement and rogue hosts? If an attacker instantiates a rogue host or even compromises a legitimate host, that is just the second step, initial compromise, in the attack kill chain described earlier. To cause greater damage, attackers must connect to other systems and then move laterally over the network toward the most valuable assets in the environment. Attackers do this by installing malware that can spread in the environment, or by exploiting dual-use administrative tools such as PowerShell to propagate across the network, which is often flat, i.e., unsegmented. The traditional approaches to stopping lateral movement have involved using firewalls to segment environments. However, attackers can piggyback on approved firewall rules. Firewalls inspecting IP addresses, ports, and protocols have no knowledge of the software behind the address. Is it good or bad software? This is why we need to apply the concepts of zero trust networking: to move beyond the address-based security approaches of firewalls, and instead verify the identity of the communicating software and host. In a zero trust network, even if a rogue host is instantiated, it has no ability to communicate with anything else in the environment, regardless of whether the communication is within a VPC or across VPCs. If a rogue host attempts to communicate, no other host will accept the connection, because it is outside of policy. The identity of the host and software must be verified; it is not enough for the rogue host to be in the same network or use approved ports or protocols. The zero trust network allows for even more fine-grained control. If a host has been compromised, all verified software on the compromised host will continue to be allowed to communicate, while the malicious software is not. This approach ensures business continuity in a secure way, even if there has been a compromise.
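To make the policy model concrete, here is a minimal sketch of identity-based flow authorization. This is a generic illustration, not Zscaler's implementation; the IP addresses, identity names, and policy table are invented for the example.

```python
# Sketch of zero trust flow authorization: a flow is allowed only if an
# explicit policy permits that exact source-identity -> destination-identity
# pair. Network location and port are deliberately not sufficient.

ALLOWED_FLOWS = {
    ("web-frontend", "orders-api"),
    ("orders-api", "orders-db"),
}

# Identities established out of band (e.g. attestation at workload startup).
VERIFIED_IDENTITIES = {
    "10.0.1.5": "web-frontend",
    "10.0.2.9": "orders-api",
    "10.0.3.4": "orders-db",
}

def authorize_flow(src_ip: str, dst_ip: str) -> bool:
    """Allow a connection only between two verified identities covered by policy."""
    src_id = VERIFIED_IDENTITIES.get(src_ip)   # a rogue host has no identity
    dst_id = VERIFIED_IDENTITIES.get(dst_ip)
    if src_id is None or dst_id is None:
        return False                           # unverified endpoint: deny
    return (src_id, dst_id) in ALLOWED_FLOWS   # flow must match policy

assert authorize_flow("10.0.1.5", "10.0.2.9") is True    # verified, in policy
assert authorize_flow("10.0.1.99", "10.0.2.9") is False  # rogue source denied
assert authorize_flow("10.0.2.9", "10.0.1.5") is False   # verified, but no policy
```

Because authorization keys off verified identity rather than address, a rogue host on the same subnet, even speaking on an approved port, simply has no identity and no matching policy.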
Of course, the administrator should be alerted to the attempted suspicious or malicious communications. A zero trust environment, with least-privileged access, is a lonely place for a rogue host. To learn how Zscaler can help secure workload communications and segment cloud and data center environments, please see our secure Cloud Connectivity solution and register to attend our webinar: Why Enterprises Need a New Approach for Securing Cloud Workloads. Thu, 17 Feb 2022 10:00:01 -0800 Nagraj Seshadri The Zscaler Data Protection Tour: How to Secure Unmanaged Devices In this blog series, we are taking our readers on a tour of various challenges to enterprise data security. As we do so, we will detail the ins and outs of each subject, describe why they all matter when it comes to keeping sensitive information safe, and explain how your organization can thoroughly and easily address each use case with Zscaler technologies—like its cloud access security broker (CASB), data loss prevention (DLP), and more. In each installment of this series, a brief video will accomplish the above while presenting a succinct demonstration in the Zscaler user interface, concretely showing how you can protect your data. Prior topics include shadow IT, risky file sharing, SaaS misconfigurations, noncorporate SaaS tenants, sensitive data leakage, reducing DLP false positives, securing key documents, and notifying users of DLP policy violations. This blog post’s topic is: Securing the use of unmanaged devices From customer and vendor endpoints to employees’ personal phones and laptops, countless unmanaged devices that do not belong to the enterprise are used to access corporate cloud resources every day. While this access is necessary for enterprise productivity, such devices do constitute a risk to data protection. Unfortunately, go-to methods of securing them, like software agents and reverse proxies, typically create their own headaches and fall short.
Zscaler Cloud Browser Isolation is an innovative approach to securing unmanaged devices that forgoes the deployment, management, and breakage concerns of agents and reverse proxies. Without any software installations or code rewriting, user traffic coming from unmanaged devices is forwarded to the Zero Trust Exchange, where application sessions are isolated and only pixels are streamed to the user. This enables access so users can accomplish their work duties as needed, but prevents downloading, copying and pasting, and printing of data so that organizations can stop leakage on unmanaged devices. To learn more about how Zscaler secures the use of unmanaged devices via Cloud Browser Isolation, be sure to watch the below demo. Zscaler’s integrated, multi-mode CASB can address all of your cloud security use cases. To learn how, download the Top CASB Use Cases ebook. Tue, 15 Feb 2022 09:00:02 -0800 Jacob Serpa Five Reasons to Leave Firewalls Behind and Adopt Zero Trust Until recently, everything from data and applications to machines pretty much resided on-premises. Establishing a perimeter with firewalls and trusting everything inside that “zone of trust” met the needs of most businesses. But the world has changed. Employees are working from everywhere, and applications are no longer residing only in the data center. The perimeter has vanished, and there is no zone of trust anymore. This means organizations need a new approach to networking and security–an approach based on zero trust. Unfortunately, firewalls and VPNs weren’t designed for zero trust and put your organization at risk. Let’s dive deeper into the risks that perimeter firewalls can pose to your business. Increased attack surface The migration of applications into SaaS and public clouds and the volume of remote employees have dramatically expanded the risk exposure for an organization. Think of it this way: every connection represents a potential attack surface. 
As an increasing volume of users VPN into legacy architectures, the attack surface inevitably grows. Perimeter-based firewalls and their virtual counterparts only serve to further exacerbate the problem, as they expose IP addresses on the internet to make it easy for users to find them. This makes it easy for attackers as well. Decreased application performance Your users expect fast and unimpeded access to the applications they need to do their jobs, regardless of where they connect. But extending the flat network to branch offices and remote users and routing traffic back to centralized firewalls for security creates bottlenecks that leave users frustrated and unproductive. Even worse, they find ways to bypass VPNs and access applications directly, putting your organization at even greater risk. High operational costs and complexity It isn’t feasible to implement zero trust using perimeter firewalls, MPLS, and VPNs. It would be utterly unworkable and cost-prohibitive to deploy and manage perimeter firewalls in every branch location and home office while securing mobile users. The challenge lies in delivering the same level of security for all users and devices, regardless of location, without driving up costs of equipment, staffing, and resources. Organizations often find themselves cutting corners and compromising by deploying smaller firewalls or virtual machines. The unintended result is a mashup of security point products and policies that adds complexity while still failing to provide adequate security. Lateral threat movement One of the biggest risks organizations face from an IT perspective is the lateral movement of threats. Traditional firewalls and VPNs connect users to the corporate network for access to applications. Once on the network, users are considered trusted and given broad access to applications and data across the enterprise. 
In the event that a user or workload is compromised, malware can quickly spread across the organization and bring down the business in an instant. Data loss With more than 80 percent of attacks now happening over encrypted channels, inspecting encrypted traffic is more critical than ever. However, firewalls and their pass-through architectures are not designed to inspect encrypted traffic inline, making them incapable of identifying and controlling data in motion and data at rest. As a result, many businesses allow at least some encrypted traffic to go uninspected, thus increasing the risk of cyberthreats and data loss. Mitigating the risks with a true zero trust architecture Successfully implementing zero trust can be an arduous task, particularly if you are attempting to do so using firewalls, virtual machines, and VPNs. Overcoming the risks posed by these devices and securely enabling the modern workforce requires migrating to a single, cloud-based security platform designed for zero trust. Download our complimentary white paper, “Top Five Risks of Perimeter Firewalls and the One Way to Overcome Them All,” to further understand the risks of perimeter firewalls and how you can eliminate them with a modern zero trust architecture. Wed, 09 Feb 2022 08:00:01 -0800 Jen Toscano The Zscaler Data Protection Tour: Alerting Users of DLP Policy Violations In this blog series, we are taking our readers on a tour of various challenges to enterprise data security. As we do so, we will detail the ins and outs of each subject, describe why they all matter when it comes to keeping sensitive information safe, and explain how your organization can thoroughly and easily address each use case with Zscaler technologies—like its cloud access security broker (CASB), data loss prevention (DLP), and more. 
In each installment of this series, a brief video will accomplish the above while presenting a succinct demonstration in the Zscaler user interface, concretely showing how you can protect your data. Prior topics include shadow IT, risky file sharing, SaaS misconfigurations, noncorporate SaaS tenants, sensitive data leakage, reducing DLP false positives, and securing key documents. This blog post’s topic is: DLP user notifications DLP policies are typically configured and enforced without the knowledge of the end user. As a result, employees often have a frustrating user experience; for example, a file upload to the cloud may be blocked because sensitive information is detected, and the user does not understand why the action is being prevented. Not only does this impede employee productivity (and satisfaction), but it fails to include users as partners in the quest to protect data. Zscaler’s leading data loss prevention includes user notifications that keep employees informed about policy violations in real time. When a policy is enforced, users receive an alert via Slack or Microsoft Teams that explains why controls were imposed on their activities. It also provides users with an opportunity to give feedback that can help IT refine existing DLP policies. To learn more about DLP user notifications with Zscaler, watch the below demo. Zscaler’s integrated CASB can address all of your cloud security use cases. To learn how, download the Top CASB Use Cases ebook. Thu, 03 Feb 2022 09:54:19 -0800 Jacob Serpa Zscaler Extends Inline, AI-Powered Protection to Online Student Safety Today, we are excited to announce new innovations in the Zero Trust Exchange for K-12 schools to enhance student safety by preventing cyberbullying and online self-harm activity.
The rapid acceleration of kids using internet-connected devices for school and the resulting rise in harmful interactions online are alarming, so we stepped up and answered our K-12 customers’ ask for help: intertwine comprehensive cyber threat and data loss prevention with online student safety. In this blog, we’ll examine: A prevalent student safety concern, cyberbullying; How Zscaler is innovating to enhance student safety; and The benefits of a new partnership between Zscaler and Saasyan. Cyberbullying poses significant risk to student safety One of the many student safety-focused features we are releasing focuses on cyberbullying. Unlike traditional forms of bullying, cyberbullying allows kids to bully and be bullied outside of school hours, away from face-to-face environments, and over many different mediums. This has led to a rapid explosion of bullying online among kids, with 37% of all students reporting that they have experienced cyberbullying. That is more than double the 2007 rate, and for some perspective, in the United States, it represents about 20 million kids who are victims of cyberbullying every year. The impact of cyberbullying puts students’ mental and physical health at risk; somberly, students who are cyberbullied are 2.5x more likely to commit suicide, and 60% of cyberbullied students see a negative impact on their grades and standardized test scores compared to those who aren’t bullied. Cyberbullying takes shape in many ways, but the primary vectors of harm are: Social media Text messages Instant messaging Email Online forums Gaming communities In the case of cyberbullying, kids’ safety is clearly at risk. But the story doesn’t stop there; kids will continue to be exposed to new online spaces that may introduce new safety concerns outside of cyberbullying. As a result, K-12 schools across the globe are looking for an integrated, best-of-breed platform to help ensure the safety of their students online.
Enhance student safety with Zscaler In working closely with our K-12 customers, we uncovered the need to intertwine cyber threat and data loss prevention with online student safety. The thinking is simple, yet innovative: utilize the inline visibility and control provided by Zscaler Internet Access to properly deploy and get the most value from online student safety measures. We took that thinking and designed a new approach to how we can help keep students safe online: catching harmful behavior early, providing teachers and counselors the information they need to intervene, and blocking harm from happening in the first place. Zscaler has the most comprehensive inline student safety detection, prevention, and reporting capabilities in the industry. Let’s see how it works. As a cloud-native proxy, Zscaler Internet Access sits inline between students, the internet, and apps, enabling schools to gain unmatched visibility and control over harmful online interactions. We bolstered our AI-powered detection capabilities to include a comprehensive dictionary of keywords and phrases commonly used in cyberbullying and self-harm, with support for multiple languages and slang. Detection is key, but prevention is the ultimate goal. When harmful interactions or searches are identified, schools can prevent the content from being sent, loaded, and/or received. In addition, schools can select and control the applications and websites that students can access. When any harmful content is identified or blocked, contextual alerts can be sent directly to a custom-set list of IT/Security admins, teachers, or counselors. With these alerts, schools are able to react quickly, armed with the information they need to make decisions about how to best handle the situation. We realize that technology alone cannot keep students safe; it must be an enabler for those who have dedicated their lives to making school a safe and productive environment for students.
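For intuition, the dictionary-matching portion of this kind of detection can be sketched as follows. The phrase list and function names below are invented for the example; a production system pairs a far larger, multi-language dictionary with AI models rather than relying on substring rules alone.

```python
import re

# Tiny illustrative dictionary; a real deployment would use a much larger,
# expert-maintained, multi-language list including slang.
HARM_PHRASES = {"kill yourself", "kys", "nobody likes you"}

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so matching is layout-insensitive."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def flag_message(text: str) -> set[str]:
    """Return the harmful phrases found in a message, matched on word boundaries."""
    msg = normalize(text)
    return {
        phrase for phrase in HARM_PHRASES
        if re.search(rf"\b{re.escape(phrase)}\b", msg)
    }

assert flag_message("Nobody   LIKES you") == {"nobody likes you"}
assert flag_message("See you at practice!") == set()
```

When `flag_message` returns a non-empty set, a system like the one described above could block the message and send a contextual alert to the configured staff list.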
With these new features, we strive to keep students safe and provide educators what they need to be successful and make an impact. Zscaler + Saasyan: the world’s most powerful combination for online student safety As part of this announcement, we are excited to partner with Saasyan, a leading student-safety technology company, to bring K-12 schools the most powerful combination for online student safety available today. Saasyan has worked tirelessly to bring the most advanced AI/ML-based detection, reporting, and control to the online student safety market with their platform, Assure. Together, Zscaler Internet Access and Saasyan Assure provide K-12 schools: K-12 centric student safety detection. Using AI and dictionary-based matching techniques to determine if trigger words and phrases are being used in search engines, video titles, chat messages, and emails Self-serve Reporting. Designed for ease-of-use by teachers, well-being professionals, IT, and other school staff with the context they need (i.e. student(s), website, search, and application) Control and Classroom Management. Enables teachers and IT teams to quickly create rules that allow or deny specific web pages, web categories, or web applications for a class, a student, or several specific students Mon, 31 Jan 2022 09:00:02 -0800 Adam Roeckl Active Defense with MITRE Engage Background In the cybersecurity world, MITRE is perhaps best known for ATT&CK, a free knowledge base of adversary tactics and techniques that have been extracted from real-world observations. The framework has gained global adoption. Security teams around the world measure the efficacy of their threat detection programs by their ability to detect techniques documented in MITRE ATT&CK. However, ATT&CK has largely been a reactive knowledge base—detailing techniques that adversaries are likely to use at a given stage of the attack, and how to detect them. We don’t use the word ‘reactive’ here in a negative sense.
There’s no problem with using reactive strategies. If anything, MITRE ATT&CK has created the foundational framework for understanding attack techniques and creating a game plan to deal with them. However, adding active defense to your security playbook, in addition to reactive strategies, opens up new opportunities for security teams to take action more quickly, effectively, and with greater confidence in high-pressure scenarios such as a breach. Enter MITRE Shield MITRE had been using deception-based active defense to defend its network for over a decade. In August 2020, the organization consolidated its techniques into a new knowledge base focused on active defense and launched Shield. Much like ATT&CK, Shield was also a collection of techniques. But instead of taking an attacker’s view of how networks are penetrated and breached, Shield took the defender’s view of what can be done actively to derisk an environment by planting traps (decoys) and intercepting attacks instead of reacting to adversaries when they’re moving laterally. By virtue of how Shield was organized (a collection of techniques), it was heavily catered to practitioners. However, technical feedback from the community revealed that security teams needed something that could help them understand, strategize, and plan active defense operations before they could dive head-first into techniques. Evolving into MITRE Engage The MITRE team went back to the drawing board and streamlined Shield into a new framework that could help cyber practitioners, leaders, and vendors plan and implement adversary engagement, deception, and denial activities. The new framework is called Engage and was beta launched in Aug 2021. What has changed? 
While MITRE Shield was a technique-heavy and execution-focused framework, Engage adds the much-needed layers of planning and analysis by bookending deception techniques with activities that can help defenders define the scope of their active defense operations and use the threat intelligence gathered to inform threat models and refine deception operations. The framework is divided into three parts: Row 1 - Goals: What do you want to achieve? Do you want to expose adversaries, do you want to misdirect them once they are in your network, or do you want to elicit certain actions so that you can understand their motivations and goals? Row 2 - Approaches: What will you do to achieve the goals of your active defense/adversary engagement program? Do you want to detect adversaries or prevent them from moving any further? Row 3 onwards - Activities: These are the different options you have under each approach. You can use one or more or combine several to meet the strategic goals defined under the ‘Prepare’ column. What does this mean for defenders? You can learn more about how MITRE Engage differs from Shield here, but here’s an overview of how the changes help you: Provides the security community with a shared vocabulary that can help standardize the foundational thinking around active defense, deception, and adversary engagement. Provides a framework for running end-to-end active defense programs that encompass planning, operations, and analysis. Organizes activities under approaches to enable security teams to prioritize active defense techniques based on their maturity level and bandwidth. Operationalizing MITRE Engage Most security teams are heavily focused on prevention. More mature teams bend toward threat detection. While MITRE Engage will make it easier for teams to adopt active defense, it can be a little overwhelming at first. Defenders can pick and choose from the different activities based on their appetite and then grow from there. 
A great place to start is building detection capabilities. Threat detection is a difficult problem to solve because of the volume of alerts generated in a typical environment. Even after regular tuning, a quarter of all alerts are false positives. Taking an active defense approach by using decoys to detect threats solves two problems: Easy to get started: While traditional threat detection approaches take months to be fully operational and effective, deception-based threat detection can be operationalized in a matter of days. Low false positives: Decoy assets are deployed in a manner that makes them invisible to the legitimate user. Any interaction with a decoy, therefore, is a high confidence indicator of a breach. Security teams can prioritize deception alerts to begin investigation and trigger orchestration. Deception provides a variety of approaches for threat detection. Here are a few: Perimeter deception: Internet-facing decoys that heuristically detect pre-breach threats that are specifically targeting your organization. Application deception: Server system decoys that host services like SSH servers, databases, file shares, and more. Endpoint deception: A minefield for your endpoints. Includes decoy files, decoy credentials, decoy processes, etc. Active Directory deception: Fake users in Active Directory that detect enumeration activity and malicious access. Cloud deception: Decoy web servers, databases, file servers, etc. that detect lateral movement in your cloud environments. Email deception: Email decoys that intercept attackers attempting to mount social engineering or spear-phishing attacks. Zscaler Deception delivers 99% of the capabilities covered in MITRE Engage. If you want to get started with deception, augment your threat detection program, or fully operationalize MITRE Engage, download this white paper to learn how you can use Zscaler Deception to implement all the active defense activities without doing any manual work.
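The reason decoys generate so few false positives can be shown in a minimal sketch of an application decoy. This is a generic illustration, not Zscaler Deception itself; the port handling and alert structure are invented for the demo. Since no legitimate service runs on the decoy, any connection at all is alert-worthy.

```python
import socket
import threading

# Alerts raised by the decoy; in practice these would feed a SIEM/SOAR.
alerts = []

def run_decoy(sock: socket.socket) -> None:
    """Accept one connection and raise a high-confidence alert for it."""
    conn, addr = sock.accept()
    alerts.append({"src": addr[0], "reason": "connection to decoy service"})
    conn.close()

# Bind the decoy listener; an ephemeral localhost port keeps the demo safe.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=run_decoy, args=(listener,))
t.start()

# Simulate an attacker's lateral-movement probe touching the decoy:
probe = socket.create_connection(("127.0.0.1", port))
probe.close()
t.join()
listener.close()

assert alerts and alerts[0]["src"] == "127.0.0.1"
```

Real decoys additionally mimic service banners and protocols so attackers engage with them, but the detection principle is exactly this: any touch is suspect.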
We’re also hosting a webinar with Dr. Stanley Barr, MITRE’s capability area lead for cyber denial, deception, and adversary engagement, and Bill Hill, CISO of MITRE, where we’ll address the following questions: What are cyber deception and adversary engagement? What is MITRE Engage? How does deception fit into a zero-trust architecture? How do you integrate deception into your security toolkit? What does deception look like in action? If you want to learn more about active defense, deception, and adversary engagement from the folks who invented the framework, this webinar is a great place to do so. Register here. Tue, 18 Jan 2022 22:24:36 -0800 Amir Moin The Zscaler Data Protection Tour: How to Secure Key Documents In this blog series, we are taking our readers on a tour of various challenges to enterprise data security. As we do so, we will detail the ins and outs of each subject, describe why they all matter when it comes to keeping sensitive information safe, and explain how your organization can thoroughly and easily address each use case with Zscaler technologies—like its cloud access security broker (CASB), data loss prevention (DLP), and more. In each installment of this series, a brief video will accomplish the above while presenting a succinct demonstration in the Zscaler user interface, concretely showing how you can protect your data. Prior topics include shadow IT, risky file sharing, SaaS misconfigurations, noncorporate SaaS tenants, sensitive data leakage, and reducing DLP false positives. This blog post’s topic is: Securing sensitive documents Organizations handle a wealth of documents built upon various forms that regularly contain sensitive information. As an illustration, tax documents, finance forms, manufacturing specifications, and more all typically use preformatted forms.
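As an aside on mechanics, one generic way to recognize when a document is built on a known preformatted form is shingle-based fingerprinting. The sketch below is purely illustrative; the shingle size, hashing scheme, and threshold are assumptions, and commercial document-matching implementations use their own proprietary techniques.

```python
import hashlib

def fingerprint(text: str, k: int = 5) -> set[str]:
    """Hash every k-word shingle of the text into a set of fingerprints."""
    words = text.lower().split()
    return {
        hashlib.sha256(" ".join(words[i:i + k]).encode()).hexdigest()
        for i in range(max(1, len(words) - k + 1))
    }

def similarity(doc: str, indexed: set[str]) -> float:
    """Fraction of the indexed form's shingles present in the document."""
    if not indexed:
        return 0.0
    return len(fingerprint(doc) & indexed) / len(indexed)

# Index a known sensitive form once, then score candidate files against it.
form = "Employee tax withholding form: name SSN address filing status allowances"
index = fingerprint(form)

assert similarity(form, index) == 1.0                      # exact copy matches
assert similarity("completely unrelated text here", index) < 0.2
```

In practice, a similarity threshold (say, 0.8) would decide whether a scanned upload is treated as an instance of the indexed form and subjected to policy.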
As a result, organizations need a data protection solution capable of identifying sensitive forms wherever they may appear—both at upload to the web and at rest within SaaS applications. Zscaler Indexed Document Matching (IDM) is an advanced data classification technique that is a key part of Zscaler DLP. Admins simply create a fingerprint for the forms that they would like to detect with Zscaler’s Index Tool, and the forms’ fingerprints are uploaded to the Zscaler Zero Trust Exchange. Afterward, admins can easily configure automated DLP policies capable of identifying the forms and securing the documents that use them—wherever they go. To learn more about Zscaler IDM, watch the below demo. Zscaler’s integrated CASB can address all of your cloud security use cases. To learn how, download the Top CASB Use Cases ebook. Wed, 19 Jan 2022 08:00:01 -0800 Jacob Serpa How to Choose a Security Service Edge Platform This is the third installment of our security service edge (SSE) blog series. Our first blog explores what SSE is as a platform, and the second looks at the top use cases. In this blog, we’ll explore what features you should be looking for when selecting an SSE platform. SSE is all about delivering security from the cloud. Cloud apps and mobility have started a revolution, and security needs to be untethered from the network. While SASE explores the complete framework required both from a network and security perspective, SSE (the security half of SASE) is all about security services. In order to reap maximum benefits from SSE, you need to be uncompromising in your decision process. Why?
Unlike point products, a holistic, integrated approach like SSE will undoubtedly become a key centerpiece of your security strategy. It is crucial to make the right decisions. Let’s take a look at the capabilities an effective SSE platform should include. Inline and SSL inspection at scale One of the key values of SSE is its ability to deliver a unified approach to security inspection. Across web, internet, cloud apps, and data, inspection will be the most important thing your SSE platform will be doing. Because much of your SSE platform will be delivering inline inspection across business-critical traffic, you need to make sure it was built with that in mind. If there is a problem with your SSE platform, you’ll know immediately, as your business traffic will come grinding to a halt. When choosing an SSE platform, stress test extensively, and select platforms with a proven pedigree for performance and scale across large organizations. Additionally, SSE platforms should provide proxy inspection. This is the only way to deliver SSL inspection of the encrypted traffic where most threats now hide. The best SSE platforms need to be able to deliver ultimate scalability to accommodate a surge of users and traffic. Keep in mind that proof of value (POV) testing associated with pre-purchase often cannot simulate the demands and scalability of an ecosystem with 20,000+ users. Don’t overlook the importance of getting references from SSE vendors that prove they can deliver at scale across giant install bases. Purpose-built zero trust Many organizations have initiatives around zero trust. It’s a huge market driver, and one of the key reasons Gartner has defined SSE with strong zero trust overtones. Zero trust network access (ZTNA) is the concept that remote access should provide user-to-app connectivity without having to place the user on the network.
As organizations began swapping out their legacy VPNs for ZTNA in force around the beginning of the pandemic, it became clear that ZTNA was a much-needed security service within the SSE ecosystem. ZTNA is really no different from SWG and CASB, as they all focus on user-to-app connections. Forward-thinking companies are now shopping for zero trust solutions to complement the rest of their cloud security services. So what capability should your SSE have to enable zero trust? At its core, zero trust is identity-driven least-privilege access. View your SSE platform as an international airport—nothing gets through unless it passes all identity checks. An effective SSE platform should not only evaluate identity, but also uplevel security by checking for device posture, user risk score, location, and destination. Of course, all this only works if the SSE platform can scale SSL inspection and has the global presence to protect all users, both on and off the network. Optimization for enhanced user experience User experience is a critical aspect of SSE that should not be overlooked when selecting an SSE platform. We know that legacy data center security approaches can negatively impact user experience, and we’ve all heard the saying that security shouldn’t be an inhibitor—you shouldn't even know it’s there. Backhauling users and offices to a centralized egress point is sure to gain users’ attention, but for all the wrong reasons. Alternatively, creating a direct path to the internet and cloud applications increases speed and improves user experience, reinforcing the value of cloud platforms like SSE. Securing all these connections with a ubiquitous cloud platform immediately provides a faster experience, but there are a few things you need to consider in an SSE platform. First, as previously mentioned, look for vendors with the largest footprint of global points of presence (PoPs), especially if you have a distributed workforce.
Employees everywhere, from London and Japan to Australia, India, and the U.S., want a fast experience with a local SSE onramp. Also, purpose-built SSE vendors will deliver inspection down to the edge. Instead of concentrating compute in a few centralized locations, every SSE onramp should perform edge inspection, applying all security services to SSL traffic. This again ensures the fastest experience without security-induced latency. Lastly, strong peering is a must with SSE. Ensure your SSE vendor peers with as many other cloud providers as possible so that the connections between everything your business uses are also fast and local. Room to grow The last recommendation when picking an SSE platform is to think about the future. SSE is all about unifying cloud security services in a holistic way. As mentioned, it will be a centerpiece of your security strategy: once you embrace a unified platform, you will wonder how you operated without it. Look for vendors that value innovation and are preparing for the future of SSE based on customer requirements. Outside of security, think through other areas of your company that will require growth, including: Branch offices While remote work is still top of mind, your branch offices will soon come back to life, and they will need internet performance to match what users have become accustomed to at home going direct. The best SSE vendors will add significant value on the network side, which is core to the SSE parent framework, SASE. Direct internet access (DIA), SD-WAN, and other connectivity aspects of your organization can benefit from an SSE vendor with a strong network feature set that can maximize branch office performance and user experience. Digital experience monitoring The ability to monitor user experience, with in-depth visibility into choke points, can be an invaluable tool for maintaining user productivity and proving the worth of your SSE platform to the board.
Lastly, expanding the scope of SSE to cloud workloads is an important initiative. Like users, workloads connect to the internet, require inspection, and have extensive routing and connectivity requirements. Look for SSE vendors that deliver workload connectivity and protection. Enabling SSE to be at the heart of your IT ecosystem simplifies requirements while consolidating all your user and workload security across the same policy and controls. Because SSE is newly-defined, it can be difficult to identify the most important benefits, and even more difficult to select the platform that is best for your business. Don't be intimidated. Instead, consider the factors that matter most to your organization, from user experience and growth potential to zero trust and security requirements. Learn more: What is Security Service Edge? What You Need to Know About Gartner’s New Security Service Edge Gartner’s New Security Service Edge: Real-World Applications Thu, 13 Jan 2022 08:00:01 -0800 Steve Grossenbacher SASE Vs. SSE: The Ever-Growing Bowl of Alphabet Soup in Cybersecurity When it comes to the cybersecurity space, there is no shortage of acronyms. With DLP, CASB, SSL, IPS, ATP, CIEM, ZTNA, CSPM, ML, SWG, and a myriad of others, the alphabet soup can simply become too much to consume. However, each acronym typically corresponds to technologies or frameworks that address unique challenges that must be solved if an enterprise is to maintain a robust security posture. As such, when a new phrase is coined, IT teams need to understand what it refers to, why (or perhaps if) it matters, and whether they need to change the way they go about security as a result. SASE is one such acronym that recently took the world by storm and called existing IT paradigms into question. However, at the apex of its popularity, Gartner, its creator, coined yet another, similar-sounding term: SSE. Naturally, this has led to some confusion. 
So, why this addition to the cybersecurity dictionary, and how is SSE different from SASE? Read on to find out. SASE: The core framework SASE (pronounced “sassy”) stands for secure access service edge and refers to a framework suggested by Gartner rather than a specific technology. As opposed to legacy data center architectures wherein network services and security services are disjointed, SASE envisions a cloud-delivered ecosystem that unifies the two. With users, services, applications, and end user devices existing virtually everywhere, organizations need a means of connecting them both effectively and securely, ensuring a productive user experience while keeping data safe and threats like ransomware at bay. While the development of SASE offerings is still in its early stages, Gartner’s vision is that individual vendors will one day have complete suites of both network and security services (from SD-WAN and quality of service (QoS) to cloud firewall (FWaaS) and Cloud Browser Isolation) so that organizations can obtain a single, unified, secure access service edge. SSE: Unified security SSE stands for security service edge and is a subset of Gartner’s SASE. Specifically, it is the portion of SASE that is focused on the consolidation and delivery of security services (while the other half of SASE has to do with network services). In other words, SSE serves as a first step into the overarching SASE philosophy by suggesting that organizations adopt a single, cloud-delivered security platform that boasts a variety of integrated technologies and provides them at the edge—for any user anywhere. The above represents a significant departure from legacy security architectures that require backhauling traffic to a central location as well as a number of disjointed appliances that can’t scale to inspect SSL and are costly to purchase and maintain. Stated simply, security, user experience, and enterprise productivity suffer under the status quo. 
Additionally, even where true cloud security solutions are deployed as point products, the lack of integration and the duplication of (disparate) policies create inconsistent security and a significant burden for the IT teams tasked with managing them. SSE has emerged as a critical solution to the above challenges. SSE platforms provide comprehensive security by integrating three primary solution sets: cloud access security broker (CASB), secure web gateway (SWG), and zero trust network access (ZTNA); in this way, they can secure any cloud app, all web traffic, and private applications, respectively. This unified approach enhances security across the IT ecosystem while reducing complexity and saving time for administrators. Additionally, as the name security service edge implies, SSE offerings deliver their comprehensive security functionality through the cloud and as close to the end user as possible. Where do we go from here? While the volume of acronyms in cybersecurity can be overwhelming (and at times unnecessary), IT must work to separate the wheat from the chaff. In the case of SSE, Gartner has its finger on the pulse of what IT teams need in order to keep their organizations safe. That is to say, this is one bite of alphabet soup that is sure to be both delicious and nutritious. Want to see how Zscaler fits into the SSE picture? Check out our Zero Trust Exchange. In particular, take a look at how we help when it comes to Data Protection. Wed, 12 Jan 2022 11:42:06 -0800 Jacob Serpa The Zscaler Data Protection Tour: Enhancing DLP with Exact Data Match In this blog series, we’re taking our readers on a tour of the various challenges faced in enterprise data security today. 
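The division of labor described above, with CASB securing cloud apps, SWG securing web traffic, and ZTNA securing private applications, all under one policy, can be sketched as a simple dispatcher. This is an illustrative sketch only; the domain names are made up, and real SSE platforms classify traffic with far richer signals than a destination lookup.

```python
# Hypothetical example destinations; a real platform would use app
# catalogs, certificates, and identity context, not static sets.
SANCTIONED_SAAS = {"drive.example-saas.com", "crm.example-saas.com"}
PRIVATE_APPS = {"intranet.corp.internal", "erp.corp.internal"}


def classify(destination: str) -> str:
    """Route each connection to the SSE service responsible for securing it."""
    if destination in PRIVATE_APPS:
        return "ZTNA"  # private apps: zero trust network access
    if destination in SANCTIONED_SAAS:
        return "CASB"  # sanctioned cloud apps: cloud access security broker
    return "SWG"       # all other web/internet traffic: secure web gateway
```

The point of the unified model is that whichever service handles the connection, it enforces the same shared policy, eliminating the duplicated, disparate policies that point products require.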
As we do so, we will detail the ins and outs of each subject, describe why they all matter when it comes to keeping sensitive information safe, and explain how your organization can thoroughly and easily address each use case with Zscaler technologies like cloud access security broker (CASB), data loss prevention (DLP), and more. In each portion of this series, a brief video will accomplish the above while presenting a succinct demonstration in the Zscaler user interface, concretely showing how you can protect your data. Prior topics include shadow IT, risky file sharing, SaaS misconfigurations, noncorporate SaaS tenants, and sensitive data leakage. This blog post’s topic is: Enhancing DLP detection Organizations often want to secure specific data values rather than any information matching a given data pattern. For example, a company may want to secure customer credit card numbers but not care about employees using their personal credit cards to make personal purchases. In such scenarios, DLP solutions that can only scan for data patterns (like generic credit card numbers) will generate a flood of false positives. This translates to wasted time for admins, who have to comb through countless alerts to ensure that the right data is actually being protected. Zscaler Exact Data Match (EDM) addresses this use case and alleviates these headaches. By identifying the specific data values that need to be protected rather than generic data patterns, it enhances detection accuracy, reduces false positives, and saves administrators time. Because only hashes of the exact data are uploaded to Zscaler for EDM, sensitive data never leaves the customer’s purview. To see how EDM works in the Zscaler user interface, watch the demo below.
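The idea behind exact data matching, indexing hashes of the specific values to protect and flagging a pattern match only when its hash is in that index, can be sketched as follows. This is a conceptual illustration, not Zscaler's implementation; the card numbers are standard test values, and a production system would use a salted or keyed hash rather than plain SHA-256.

```python
import hashlib
import re


def fingerprint(value: str) -> str:
    """Hash a sensitive value; plain SHA-256 keeps this sketch simple."""
    return hashlib.sha256(value.encode()).hexdigest()

# Only hashes are indexed, so the raw sensitive values never need to
# leave the customer's environment.
PROTECTED = {fingerprint("4111111111111111"), fingerprint("5500005555555559")}

CARD_PATTERN = re.compile(r"\b\d{16}\b")


def edm_scan(text: str) -> list:
    """Flag only card numbers whose hash is in the protected index,
    instead of flagging every 16-digit match like a generic DLP rule."""
    return [m for m in CARD_PATTERN.findall(text) if fingerprint(m) in PROTECTED]
```

A generic pattern rule would flag both numbers in `"pay 4111111111111111 or 1234567812345678"`, while the hash lookup flags only the indexed one, which is how EDM cuts false positives.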
Wed, 05 Jan 2022 08:00:01 -0800 Jacob Serpa Building a Greener Security Cloud In November 2021, we announced that we now power our offices and data centers with 100% renewable energy, a critical milestone for us, our customers, and the environment as our market share continues to grow. In this blog post, I’d like to provide some detail on our approach to building a greener security cloud, which includes many initiatives to embed environmental sustainability into how we operate. When customers entrust their cybersecurity to our cloud, they can improve their environmental impact by removing on-premises hardware, which also reduces, or even eliminates, activities like shipping, handling, and business travel associated with installing and maintaining stacks of security appliances. But that’s not all. Our teams have taken the initiative to build a cloud that is environmentally conscious, and we can demonstrate this with concrete achievements. To start, the underlying architecture of our cloud platform was built from scratch and designed to scale efficiently, knowing we were going to support the entire network traffic of major enterprises, including companies in the Forbes Global 2000. “Zenith of Scalability” is our mantra, and our engineering team is laser-focused on optimizing processing efficiency. Our multi-tenant cloud architecture enables better utilization of resources. Innovations such as our patented “single-scan multiple action” drastically reduce the number of compute cycles compared to the service chaining approach of legacy platforms, which requires workloads to be passed along multiple servers for the same set of security actions. This is an absolute must since we process more than 200 billion transactions per day for our customers.
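The efficiency argument above, applying every security action during one pass over the traffic instead of chaining it through a separate server per engine, can be illustrated with a sketch. This is only a conceptual model of the single-scan idea, not the patented mechanism itself; the check functions and byte markers are hypothetical stand-ins for real inspection engines.

```python
# Hypothetical inspection engines; each examines a chunk of the payload.
CHECKS = {
    "malware": lambda chunk: b"EICAR" in chunk,
    "dlp": lambda chunk: b"SSN:" in chunk,
    "url_filter": lambda chunk: b"blocked.example" in chunk,
}


def single_scan(stream):
    """Run every registered engine during a single pass over the payload,
    instead of replaying the payload once per engine (service chaining)."""
    verdicts = {name: False for name in CHECKS}
    for chunk in stream:                    # one pass over the traffic
        for name, check in CHECKS.items():  # every engine sees each chunk
            if check(chunk):
                verdicts[name] = True
    return verdicts
```

With N engines, service chaining touches the payload N times (often on N different servers), while the single-scan model touches it once, which is where the compute-cycle savings come from.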
As part of the cloud operations team, we are responsible for building and running the global infrastructure, and we pride ourselves on scaling ahead of customer needs with a focus on quality and reliability. The Zscaler Zero Trust Exchange operates in more than 150 data centers globally to provide security and a great user experience to our customers around the world. In building our cloud, we select data center providers that are exceptionally well connected, highly secure, close to users, and operationally reliable. We are continuously innovating to push the boundaries of resource and energy efficiency. This includes partnering with data centers that prioritize renewable energy wherever possible and employ green practices such as motion-activated LED lights, automated controls, use of recycled water for cooling, and other PUE (power usage effectiveness) optimizations. We updated our data center procurement process to incorporate sustainability as a key factor in selection and renewal criteria. We push for improvements and monitor progress during regular business reviews with our data center providers, and we hold our partners accountable during renewal periods. Through this engagement approach and the actions of our data center providers, we were able to achieve more than 75 percent renewable energy in powering our cloud. We also engage in education initiatives in parts of the world where renewable energy is not yet top of mind. “Zscaler has built the largest and most scalable security cloud in the world,” said Misha Kuperman, SVP Cloud Operations at Zscaler. “I am proud that this was done with sustainability in mind!” In partnership with the Zscaler Environmental, Social, and Governance (ESG) team, we sought to understand how to improve the environmental impact of the balance of non-renewable energy usage of our cloud.
By calculating our contracted energy use and building on the information reported by our data center providers, we were able to quantify where there was room to improve. We looked at different approaches and determined that Renewable Energy Credits (RECs) were the best fit to source the balance of renewable energy for all the global locations of our cloud infrastructure and our offices. RECs are a way to track renewable electricity and are the globally recognized mechanism for owning the environmental benefits associated with the generation of renewable energy. Working with 3Degrees, we sourced high-quality RECs from solar and wind projects around the world, in countries where we have operations, to reach 100% renewable energy. “It is great to see our engineering and operations teams embrace the challenge of building a better security cloud that is less impactful on the environment,” said Victor Wong, Senior Director ESG, Zscaler. “We see this as another benefit to our customers as they work to achieve their own ESG goals, particularly as they seek to quantify and reduce their upstream emissions.” As we look forward to building infrastructure to support our company’s commercial growth, we will continue to keep our environmental impact in mind, as we know that this is what is right for our company, our customers, and the planet. To learn more about Zscaler ESG initiatives, read: Zscaler Environmental, Social, and Governance (ESG) Page Jay Chaudhry: Innovation to Protect the World Press Release: Zscaler Powers its Global Data Centers and Offices with 100% Renewable Energy Case Study: Bombardier Enhances Security with Zscaler Advanced Cloud Sandbox to Stop Patient-Zero Attacks Tue, 04 Jan 2022 09:00:01 -0800 Nicole Martinez