Zscaler Blog



Why You Can’t Achieve Zero Trust with a Legacy Sandbox

September 07, 2022 - 4 min read

Zero trust is no longer just a buzzword. In fact, it is often part of a larger digital strategy to mitigate risk, increase resiliency, and secure access in today’s distributed environments and workforce. As organizations make the seismic shift to a zero trust strategy and architecture, IT and security teams are re-evaluating their security stacks, especially appliances that sit in their data centers or branch locations. Sandboxes, as a critical player in malware prevention, are one type of security appliance being reassessed.


Do legacy sandboxes align with the tenets of zero trust laid out in NIST SP 800-207?

1. All data sources and computing services are considered resources.

2. All communication is secured regardless of network location.

The growing mobile and hybrid workforce wants fast, direct access to files, applications, and the internet. Unfortunately, traffic forwarders that send files and traffic back to sandboxes and security stacks in the data center only create frustrated users – and frustrated users find a way to bypass security measures. As the internet becomes the corporate network, perimeter-based security must be replaced or coupled with inline protection that secures users regardless of their location or device.

3. Access to individual enterprise resources is granted on a per-session basis.

Legacy sandboxes that rely on passthrough architectures and out-of-band deployments inherently cannot deliver zero trust, because the malware has already been granted access to the user and the network. File retrospective – applying protection only after a file has been deemed malicious – may come too late to prevent lateral movement and data loss. A modern sandbox that sits inline, by contrast, can terminate a connection or session, limit or block a malicious file’s actions, and trigger an adjustment of the user’s permissions if needed.
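The difference between the two deployment models can be sketched in a few lines. This is not Zscaler’s implementation – just a minimal Python illustration where `deliver_to_user`, `terminate_session`, and the `analyze` callback are hypothetical stand-ins for the sandbox pipeline:

```python
from enum import Enum

class Verdict(Enum):
    BENIGN = "benign"
    MALICIOUS = "malicious"

events = []  # audit trail of what happened, in order

def deliver_to_user(file):
    events.append(("delivered", file["name"]))

def terminate_session():
    events.append(("session_terminated", None))

def passthrough_delivery(file, analyze):
    # Passthrough / out-of-band: the file reaches the user first;
    # the verdict arrives only retrospectively.
    deliver_to_user(file)          # malware is already inside
    return analyze(file)           # too late to stop lateral movement

def inline_delivery(file, analyze):
    # Inline: the session is held until the sandbox returns a verdict,
    # so a malicious file is blocked before it ever reaches the user.
    verdict = analyze(file)
    if verdict is Verdict.MALICIOUS:
        terminate_session()
        return verdict
    deliver_to_user(file)
    return verdict

# Toy analyzer and sample file for demonstration only.
analyze = lambda f: Verdict.MALICIOUS if f["name"].endswith(".exe") else Verdict.BENIGN
sample = {"name": "invoice.exe"}
```

In the inline path a malicious file never generates a `"delivered"` event; in the passthrough path it always does, which is exactly the gap the tenet calls out.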

4. Access to resources is determined by dynamic policy, observable state of client identity, application/service, the requesting asset, and other behavioral and environmental attributes.

5. The enterprise monitors and measures the integrity and security posture of all owned and associated assets.

6. All resource authentication and authorization are dynamic and strictly enforced before access is allowed.

Legacy hardware sandboxes stack up: for most enterprises with remote or branch offices, it’s not uncommon to find four (count ‘em – 4!) sandboxes deployed per location. When sandboxes cannot distinguish between new and known files, rescanning inevitably strains hardware capacity limits and increases latency for users. As it turns out, a single sandbox from a legacy provider can only analyze 8,200 files per day – and that’s before the throttling caused by SSL/TLS traffic decryption and inspection.

Yet malware continues to slip through the cracks. With a 314% year-over-year rise in encrypted threats, modern sandboxes must be able to natively decrypt and inspect traffic across web and file transfer protocols, including SSL/TLS. Legacy sandboxes, unfortunately, require additional devices to support SSL inspection – and without them, encrypting traffic, a security best practice, becomes a successful technique for threat actors to evade detection.

An adaptable, AI-driven approach with unlimited capacity leaves no stone unturned. Known benign files receive instant verdicts and delivery to users, while unknown or suspicious files are quarantined for dynamic analysis and detonation, effectively blocking malware before it ever reaches the user.
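The routing logic described above – instant verdicts for known files, detonation for unknown ones – can be sketched with a hash-keyed verdict cache. The `detonate` stub below is a hypothetical stand-in for real dynamic analysis; it just flags a toy marker:

```python
import hashlib

known_verdicts = {}  # sha256 digest -> "benign" / "malicious" verdict cache

def detonate(data: bytes) -> str:
    # Stub: a real sandbox would execute the file in isolation and observe
    # its behavior. Here we simply flag a toy marker byte sequence.
    return "malicious" if b"EVIL" in data else "benign"

def route_file(data: bytes) -> str:
    """Known files get an instant verdict from the cache; unknown files
    are held and sent for dynamic analysis before delivery."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in known_verdicts:
        return "deliver" if known_verdicts[digest] == "benign" else "block"
    verdict = detonate(data)          # quarantine + dynamic analysis
    known_verdicts[digest] = verdict  # future copies get an instant verdict
    return "deliver" if verdict == "benign" else "block"
```

Because every verdict is cached by content hash, a file is only ever detonated once – which is what spares users the rescanning latency described in tenet 6.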

7. The enterprise collects as much information as possible about the current state of assets, network infrastructure and communications and uses it to improve its security posture.

The risk of misconfiguration from overly complex controls and policies increases with each additional appliance and location deployed. Since security involves both technology and people, even the slightest human error can unintentionally result in breaches. With policy management already a headache for most administrators, inconsistent rules and policies and patches applied months later can make for ineffective security.

A true zero trust sandbox must be able to quickly adapt to policy changes and further minimize attack surfaces by blocking threats across all users once they’ve been identified. The cloud effect ensures that every time a new threat is identified in any of the tens of billions of requests processed daily by the Zscaler security cloud, it gets blocked for all Zscaler users, everywhere.
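The cloud effect is, at its core, a shared indicator store: one detection anywhere updates the blocklist for everyone. A toy Python model, assuming a hypothetical `SecurityCloud` class (not Zscaler’s actual architecture):

```python
class SecurityCloud:
    """Toy model of the 'cloud effect': a verdict learned from any one
    customer's traffic immediately protects every other customer."""

    def __init__(self):
        self.blocklist = set()  # shared indicators of compromise

    def report_threat(self, indicator: str):
        self.blocklist.add(indicator)  # one detection, global update

    def is_blocked(self, indicator: str) -> bool:
        return indicator in self.blocklist

cloud = SecurityCloud()
# Customer A's sandbox detonates a file and finds it malicious...
cloud.report_threat("sha256:abc123")
# ...and customer B is now protected without ever seeing the sample.
```

Contrast this with appliance-based sandboxes, where each of the four boxes per branch office must receive the same signature update independently – often months later, as the policy-management paragraph above notes.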

These shortcomings leave legacy sandboxes misaligned with the tenets of zero trust. Like the playgrounds we’ve outgrown, sandboxes in data centers should be left in the past. Instead, choose a proven cloud-gen sandbox. Zscaler Advanced Cloud Sandbox leverages advanced AI/ML models to stop zero-day attacks and malware. By detecting, preventing, and intelligently quarantining unknown or polymorphic threats and malicious files, the cloud-gen sandbox prevents malware from reaching your users and spreading through your network. Built on a true cloud-native zero trust platform, the Zscaler Zero Trust Exchange brings security as close to the user as possible, making protection inline and ubiquitous.

Learn more about Zscaler Cloud Sandbox, or request a custom product demo today.
