The "Control" Trap: 3 Reasons Your Legacy Firewall Can’t Keep Up (And Why You Think It Can)
There is a specific kind of psychological comfort associated with on-premises firewall appliances.
The hum of the cooling fans, the perfectly dressed cables, and the rhythmic blinking of green LEDs create a reassuring illusion: if traffic crosses this box, it’s controlled.
I get why organizations hesitate to go all-in on a cloud-native proxy architecture. Letting go of the box feels like letting go of the wheel. But clinging to the appliance model is no longer the conservative choice; it is an active acceptance of security gaps.
Let’s dismantle the three persistent myths that keep organizations tethered to the appliance model.
Myth #1: An on-premises firewall gives me control
The Reality: Centralized enforcement only works when traffic reliably transits that choke point. Topology drift has rendered the physical perimeter porous. Users originate from diverse remote networks, and applications reside in SaaS and public cloud VPCs/VNETs rather than a single data center.
Consequently, the on-premises legacy firewall inspects a statistically shrinking slice of enterprise traffic. To maintain usability, operations teams are frequently forced to implement split tunneling and route exceptions for high-bandwidth applications - effectively removing policy enforcement from the highest-volume paths.
The illusion of control further collapses under the weight of modern protocols such as TLS 1.3, HTTP/3 over QUIC, and WebSockets, whose persistent, multiplexed flows demand sustained compute power, not burst capacity. Legacy firewalls face three performance challenges:
- TLS interception is expensive per flow: Session setup, key operations, decryption/re-encryption, and certificate validation/rewriting, plus full content scanning (IPS, malware, sandbox detonation, DLP, CASB controls), are CPU-intensive tasks. Fixed appliance hardware cannot scale with this per-flow cost as your organization’s encrypted traffic grows.
- Feature stacking compounds cost: Enabling SSL inspection, IPS, sandboxing, and DLP materially increases CPU cycles, memory pressure, and queue depth. As a legacy firewall hits CPU saturation, latency climbs and throughput collapses.
- Operational reality: When the appliances hit limits, your teams reduce coverage via category exclusions, app bypasses, and quick-fix exceptions. That creates predictable blind spots - exactly where attackers concentrate.
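To see how feature stacking compounds, here is a back-of-the-envelope sketch. The throughput multipliers below are hypothetical assumptions chosen only to show the compounding shape, not vendor benchmarks:

```python
# Illustrative model (not vendor data): each inspection feature keeps only
# a fraction of the remaining appliance throughput, so the costs compound.
BASELINE_GBPS = 10.0  # raw firewalling throughput, hypothetical

# Fraction of remaining throughput kept when a feature is enabled (assumed).
FEATURE_COST = {
    "tls_inspection": 0.40,  # decrypt/re-encrypt keeps ~40% of throughput
    "ips": 0.70,
    "sandboxing": 0.80,
    "dlp": 0.75,
}

def effective_throughput(enabled_features):
    """Effective throughput (Gbps) with the given features enabled."""
    gbps = BASELINE_GBPS
    for feature in enabled_features:
        gbps *= FEATURE_COST[feature]
    return gbps

if __name__ == "__main__":
    stack = []
    for feature in FEATURE_COST:
        stack.append(feature)
        print(f"{' + '.join(stack):<55} {effective_throughput(stack):5.2f} Gbps")
```

Under these assumed multipliers, a 10 Gbps appliance delivers under 2 Gbps with the full stack enabled - which is exactly when teams start carving out exclusions.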
The on-premises appliance carries inherent security risks.
- These firewalls are exposed assets: their public IP addresses are routable and continuously scanned from the internet.
- Management-plane and data-plane vulnerabilities are repeatedly weaponized in the wild, and your teams spend significant time patching software to stay ahead of those threats.
- If an appliance is compromised, the impact on your organization is high: it often sits adjacent to broad network segments and becomes a pivot point.
What “better control” looks like now
A cloud-delivered Zero Trust architecture removes the inbound attack surface entirely. Users establish outbound sessions to the service where policy is enforced, and private applications are accessed via outbound connectors without public exposure.
True control today is defined by policy consistency and inspection depth, not by the ownership of the box processing the packets.
Myth #2: Virtualizing the firewall in the cloud solves the problem
The Reality: If the problem is architectural (distributed egress + encrypted traffic + fixed capacity), running the same appliance as a VM in a public or private cloud environment doesn’t change the physics - it just changes the hosting location.
You still inherit the full appliance lifecycle: VM firewalls still require OS/image hardening, vulnerability management, emergency patching, upgrade testing, rollback plans, and maintenance windows. High Availability remains stateful and fragile in public cloud environments.
At cloud scale, this pattern also breeds image sprawl and configuration drift across regions and accounts.
Scaling is still engineering work, not elasticity: When traffic grows or when the magnitude of inspection increases, you still hit performance ceilings. “Scale” with VMs means instance sizing, provisioning new nodes, tuning load balancers, and rewriting routes to preserve symmetry. When CPU cycles are saturated in individual VMs or across a cluster of VMs, you see latency, session drops, and selective inspection bypass, not a clean autoscale outcome.
The architecture stays network-centric, so lateral movement persists: Appliance models enforce network boundaries. If users and workloads retain subnet- and port-level reachability, a single compromised host can reach far beyond itself. In the classic kill chain, once the network has been breached, lateral movement follows. Micro-segmentation can reduce blast radius, but in appliance-centric designs, your security often devolves into distributed access control lists, policy sprawl, and region-by-region duplication.
What changes with cloud-native security
A cloud-native enforcement fabric is delivered as a managed, multi-tenant service: the provider owns patching, scaling, and High Availability. Policy decisions are identity/device/context-driven and enforced consistently for internet, SaaS, and private apps. Critically, access is app-specific. There are no network-routable apps. Apps are not discoverable and lateral movement paths do not exist.
Myth #3: A cloud proxy adds latency
The Reality: In a distributed world, the opposite is true. Your legacy architecture is the bottleneck.
In hub-and-spoke designs, users often tunnel to a central data center for inspection, then exit to the internet - regardless of where the destination actually is. That creates the classic hairpin path: a user in London routes to a firewall in New York, then back to a SaaS front door that might be in London.
You’ve added distance, congestion points, and failure domains before you even start the application session.
The penalty compounds because latency isn’t one number - it is how many times you pay the round-trip tax:
- TCP handshake
- TLS handshake (often multiple RTTs, plus cert validation)
- App negotiation (HTTP/2/3, auth redirects, token exchanges)
- Long-lived flows (WebSockets, streaming, GenAI responses) that magnify jitter and loss
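The round-trip tax above can be sketched in a few lines. The round-trip counts and RTT values are illustrative assumptions (TLS 1.3 can cut the TLS handshake to one round trip; app negotiation varies widely):

```python
# Hedged sketch of the "round-trip tax": setup latency is roughly
# (number of round trips) x (RTT to the first enforcement point).
# All round-trip counts and RTT values below are illustrative assumptions.
ROUND_TRIPS = {
    "tcp_handshake": 1,   # SYN / SYN-ACK / ACK
    "tls_handshake": 2,   # TLS 1.2-style; TLS 1.3 can reduce this to 1
    "auth_redirects": 2,  # app negotiation / token exchange, assumed
}

def setup_latency_ms(rtt_ms):
    """Total connection-setup latency for one session, in milliseconds."""
    return sum(ROUND_TRIPS.values()) * rtt_ms

# Hairpin path: London user -> New York firewall (~70 ms RTT, assumed).
# Local edge:   London user -> nearby PoP (~5 ms RTT, assumed).
hairpin_ms = setup_latency_ms(70)     # 5 round trips x 70 ms = 350 ms
local_edge_ms = setup_latency_ms(5)   # 5 round trips x 5 ms  =  25 ms
```

The point is not the exact numbers but the multiplier: every extra millisecond of distance to the enforcement point is paid once per round trip, before the application session even starts.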
So the real question isn’t “proxy or not.” It is: where is the first security decision made relative to the user?
The Cloud Advantage
A properly built cloud edge model makes the first enforcement point local.
- Users connect to the nearest PoP, so the “security hop” is a few milliseconds away.
- Policy is enforced at that edge, then traffic rides optimized peering paths to the destination (SaaS/IaaS).
Net result: you typically remove the backhaul hop rather than add a new one - fewer transits, fewer choke points, better p95 experience for SaaS.
Caveat (the part people confuse): If your “proxy” is just a VM cluster in one region, it will behave like the old model and be slow. That’s a failure of the architecture, not an inherent property of proxying.
The Bottom Line: Redefining Control
Moving to SSE isn’t surrendering control. It’s shifting control from infrastructure ownership to policy enforcement.
You can continue to operate legacy firewall appliances, with or without hypervisors, managing images, HA pairs, route tables, patch cycles, and capacity events.
Or you can operate based on intent: who can access what, under which conditions, with inspection and logging applied the same way everywhere.
One model scales your team’s problems. The other scales security outcomes.
How to evaluate your legacy firewall appliances
Run three tests. They’ll tell you more than any vendor deck:
- Encrypted reality test: Increase TLS decryption/inspection coverage. Track p95 latency, breakage rates, and the number of forced exclusions needed to stay stable.
- Operations truth test: Inventory what you still own: OS/image patching, HA design, scaling events, routing symmetry, policy replication, and troubleshooting paths across regions.
- Path and experience test: Trace flows by geography and app. Measure RTT and p95 to your top SaaS/private apps with security on/off, and confirm where the first enforcement decision is made (local edge vs. centralized backhaul).
The real question is not “cloud vs. on-prem.” It is whether your architecture can inspect encrypted traffic at scale, minimize exposed attack surface, and enforce policy close to users without turning security into an infrastructure maintenance job.