Microsegmentation can be a very effective cybersecurity strategy, helping to stop lateral threat movement, thereby minimizing the blast radius and damage caused by a cybersecurity incident. Despite its huge potential, several key challenges have kept microsegmentation from wider adoption.
Most high-profile attacks and security incidents involve an initial infiltration via a vulnerable system or person, followed by lateral movement across the organization’s network. It is this lateral movement that typically allows attackers to escalate the amount of damage they can do.
The attacker might initially infiltrate or compromise an application that is not considered mission-critical and, therefore, hasn’t been patched, only to use that system as a jumping-off point to other systems that do contain sensitive information. Or, the initial system might be the first infected machine from which a strain of ransomware starts to make its way across the network to other machines.
On an open, flat network with very few internal controls, this kind of lateral movement happens all too often. The unfortunate truth is that most security efforts have focused on building a strong perimeter defense, leaving internal controls and defenses lacking. Given the porous nature of today’s enterprise perimeter and the persistence of bad actors, most perimeter defenses will eventually be defeated. Internal controls built on microsegmentation can minimize the damage an attacker can cause once they get in.
Microsegmentation works by dividing large, open groups of applications or workloads into small segments based on the communication requirements of each application. Applications are then permitted to communicate within their segment but cannot make unauthorized connections to applications outside it. Microsegmentation is a core component of a zero trust security model for workloads in the cloud and data center. In short, microsegmentation picks up where perimeter security ends, enforcing policy throughout the organization’s internal network, not just at the perimeter.
For example, a customer relationship management (CRM) web application would be permitted to communicate with the database that stores the corresponding customer data, but the system coordinating physical building operations and security would not be able to connect to that same database.
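The CRM example above can be sketched as a simple segment-based allow-list. This is a minimal illustration, not any vendor's actual policy engine; the segment names and workload labels are hypothetical.

```python
# Minimal sketch of segment-based policy evaluation.
# Segment names and workload labels are hypothetical illustrations.
ALLOWED_FLOWS = {
    ("crm-web", "crm-db"),  # the CRM front end may query its database
}

def segment_of(workload: str) -> str:
    # In a real platform, segment membership would be derived from
    # discovered communication patterns; here it is a static map.
    segments = {
        "crm-frontend-vm": "crm-web",
        "crm-postgres-vm": "crm-db",
        "building-ops-vm": "facilities",
    }
    return segments[workload]

def is_allowed(src: str, dst: str) -> bool:
    # Permit only flows whose (source segment, destination segment)
    # pair appears on the allow-list; everything else is denied.
    return (segment_of(src), segment_of(dst)) in ALLOWED_FLOWS

print(is_allowed("crm-frontend-vm", "crm-postgres-vm"))  # True
print(is_allowed("building-ops-vm", "crm-postgres-vm"))  # False
```

The key property is default-deny: the building-operations system is blocked from the CRM database not because of an explicit deny rule, but because no allow rule covers that segment pair.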
Microsegmentation is most often applied at either the host level or the network level.
Host-based microsegmentation relies on agents installed on endpoint devices. The advantage of using an agent is that it provides much more granular control and visibility, and offers a path towards easier-to-manage identity-based microsegmentation. By using an agent, you can segment based on dynamic, human-understandable policies rather than on static network-level rules.
For example, an identity-based policy might state that a certain Python script, legitimate.py, can communicate on the network, but the more nefarious malicious.py running on the same VM cannot. Such granular control and such an easy-to-understand policy model are impossible with the network-based model.
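One way to sketch this idea is to key policy on a fingerprint of the communicating software rather than on the host's network address. The fingerprint scheme and file paths below are illustrative assumptions, not a description of any specific product's verification mechanism.

```python
import hashlib

def fingerprint(process_path: str, code: bytes) -> str:
    # Hypothetical software identity: a hash over the program's path
    # and contents. Real platforms use stronger cryptographic attestation.
    return hashlib.sha256(process_path.encode() + code).hexdigest()

# Allow-list of verified software identities (illustrative values).
legit_code = b"print('sync CRM records')"
ALLOWED_IDENTITIES = {fingerprint("/opt/app/legitimate.py", legit_code)}

def may_communicate(process_path: str, code: bytes) -> bool:
    # Two processes on the same VM share an IP address, but their
    # fingerprints differ, so policy can treat them differently.
    return fingerprint(process_path, code) in ALLOWED_IDENTITIES

print(may_communicate("/opt/app/legitimate.py", legit_code))  # True
print(may_communicate("/tmp/malicious.py", b"exfiltrate()"))  # False
```

Because the decision is made per software identity, not per address, malicious.py is denied even though it runs on the same machine, with the same IP, as legitimate.py.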
The downside of host-based microsegmentation is that not all workloads can have an agent installed on them, including legacy operating systems, serverless functions, PaaS services, and the like. For these workloads, most agent-based platforms either provide embeddable agent code or fall back to network-based segmentation.
There are two primary types of host-based microsegmentation: one orchestrates host-based firewalls, and the other leverages identity-based control. Host firewall-based microsegmentation is essentially a more dynamic version of a traditional network firewall, with similar limitations. Because all firewalls are blind to the true identity of communicating workloads and rely on network address-based controls, they can be circumvented by attackers who exploit this implicit trust in the network.
Workload-identity-based protection, on the other hand, allows only cryptographically verified applications to communicate over approved network paths. Every attempted communication is validated, ensuring that bad actors and malicious software have no ability to communicate on the network.
Network-based microsegmentation is exactly like it sounds—segmentation performed at the network level by modifying access control lists (ACLs) or firewall rules. Because it is performed at the network layer, there is no agent to deploy onto workloads.
There are several major downsides to network-based microsegmentation. First, it can only enforce policies per endpoint. This means that if a legitimate piece of software runs on the same endpoint as a malicious one, the firewalls typically used as enforcement devices cannot distinguish between the two. Both pieces of software will either be blocked or allowed together.
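The per-endpoint blind spot can be shown with a toy address-and-port rule. The addresses, port, and process names below are invented for illustration; the point is simply that the rule's match criteria contain nothing that identifies the process.

```python
# A network-level rule sees only addresses and ports, so two processes
# on the same endpoint are indistinguishable to it. Values are illustrative.
RULE = {"src_ip": "10.0.1.5", "dst_ip": "10.0.2.9", "dst_port": 5432}

def rule_matches(flow: dict) -> bool:
    # The rule inspects only the fields it knows about; any extra
    # context in the flow (like the process name) is ignored.
    return all(flow.get(key) == value for key, value in RULE.items())

legit_flow = {"src_ip": "10.0.1.5", "dst_ip": "10.0.2.9",
              "dst_port": 5432, "process": "legitimate.py"}
malicious_flow = {"src_ip": "10.0.1.5", "dst_ip": "10.0.2.9",
                  "dst_port": 5432, "process": "malicious.py"}

# Both flows match: the rule never sees the process identity.
print(rule_matches(legit_flow), rule_matches(malicious_flow))  # True True
```

Whatever verdict the rule renders, it renders for both processes, which is exactly the all-or-nothing behavior described above.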
Additionally, because these policies are based on network port and IP address, they are static by definition. In today’s cloud-centric environments, workloads are dynamic and ephemeral. Policies that aren’t equally dynamic slow down deployments and break as workloads move and change addresses.
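The brittleness of static rules against ephemeral workloads can be sketched in a few lines. The addresses are hypothetical; the scenario is a workload being rescheduled and receiving a new IP, as routinely happens in cloud environments.

```python
# A static address-based allow-list, keyed on (src_ip, dst_ip, dst_port).
ALLOWED = {("10.0.1.5", "10.0.2.9", 5432)}

def allowed(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    return (src_ip, dst_ip, dst_port) in ALLOWED

# Original placement of the workload: traffic is permitted.
print(allowed("10.0.1.5", "10.0.2.9", 5432))   # True
# The same workload is rescheduled and gets a new IP; the rule, which
# knows nothing about the workload itself, now blocks legitimate traffic.
print(allowed("10.0.3.42", "10.0.2.9", 5432))  # False
```

An identity-based policy sidesteps this failure mode because the workload's identity survives rescheduling even though its address does not.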
Finally, this approach can be complicated to manage, often leading to larger segments than originally anticipated, a tradeoff that results in lower operational overhead, but that undermines the very point of doing microsegmentation to begin with. Any attempt to do more granular segmentation (i.e., microsegmentation) leads to a massive increase in the number of firewall rules, which becomes unmanageably complex.
Despite its effectiveness in reducing the risk and impact of a breach, many microsegmentation projects fail in implementation. There are several key reasons for this, most of which can be traced back to operational complexity and too little automation requiring too much human involvement. Newer microsegmentation products have been designed to overcome these challenges.
One of the biggest challenges with microsegmentation is mapping out the appropriate communication paths for each piece of software, which provides the basis for policies. Doing this manually requires detailed knowledge of how every workload communicates with every other workload in your cloud and/or data center environment, a level of knowledge that doesn’t exist in most organizations. Without machine-learning-driven automation, this is an incredibly time-consuming process, and the findings are often outdated by the time the investigation is complete.
Additionally, if not done properly, implementing microsegmentation can break applications. This stumbling block can cause internal pushback on the entire project, and requires a system that accurately and quickly identifies required communications paths, adapting automatically as the dynamic environment changes.
Finally, the underlying network changes required to implement microsegmentation policies can be extensive, leading to downtime, costly mistakes, and difficult coordination across several groups in the organization. More recent advances in microsegmentation have been designed as overlays to the network itself, achieving the desired goal but without making any changes to the underlying network.
Zscaler Workload Segmentation (ZWS) was built from the ground up to automate and simplify microsegmentation in any cloud or data center environment. Built on easy-to-understand, identity-based policies, ZWS can reveal risk and apply protection to your workloads with a single click, without any changes to the network or applications. ZWS’s software identity-based technology provides gap-free protection with policies that automatically adapt to environmental changes. Eliminating your network attack surface has never been simpler, and you get true zero trust protection.