New Report from Gartner Research: The Future of Network Security Is in the Cloud
The IT networking world is evolving rapidly: The new universe of cloud and mobility can neither be built nor scaled on the network architectures of the past. Gartner Research VP analyst Lawrence Orans and distinguished VP analysts Joe Skorupa and Neil MacDonald have authored a new report, The Future of Network Security Is in the Cloud. (Skorupa and MacDonald had also authored the earlier report Market Trends: How to Win as WAN Edge and Security Converge Into the Secure Access Service Edge.) From my perspective, Secure Access Service Edge (SASE) represents a new, visionary market paradigm.
(Download the full The Future of Network Security Is in the Cloud report here.)
These works are seminal. I believe Orans, Skorupa, and MacDonald have defined a new cloud service architectural model and technology market that will resonate with most Fortune 2000 CTOs and CIOs. There just wasn’t a clear definition of this massive shift to edge computing until SASE.
SASE goes well beyond disrupting MPLS with SD-WAN, replacing hardware appliances with cloud services, or applying zero-trust principles. Orans, Skorupa, and MacDonald write of IT leaders protecting their enterprise environments with “software-defined secure access”:
Complexity, latency and the need to decrypt and inspect encrypted traffic once will increase demand for consolidation of networking and security-as-a-service capabilities into a cloud-delivered secure access service edge (SASE, pronounced “sassy”). [Source: Gartner, The Future of Network Security Is in the Cloud; 30 August 2019; Lawrence Orans, Joe Skorupa, Neil MacDonald]
As I interpret it, SASE (and specifically, its impact on enterprise cloud and mobility):
- Renders existing corporate network and security models obsolete.
- Requires organizations to adopt a cloud-based, as-a-service model of a “secure edge” that is simple, scalable, and flexible, with low latency and high security.
- Demands that service-edge providers offer compute power at the edge of a widely distributed network, as close as possible to each endpoint.
On the edge: architecting a low-latency cloud service
Large enterprises are struggling to create low-latency connectivity between endpoints and cloud applications. Low latency dictates that security services be close to the endpoint.
When we architected the Zscaler cloud, we had to decide whether to A) create a big NFV platform to enable every remote location to have every service (call it the “fat-branch” model) or B) put a router in every remote location and run heavy functions in a widely distributed cloud (“thin branch”). Zscaler chose approach B early on.
I like to quote GE CIO Chris Drumgoole from his Zenith Live keynote last year: The more code you put in the branch, the more issues you will have to remedy. As he explicitly stated, the best approach is to have as light a footprint as possible in the branch, and push most functions to the cloud.
SASE formalizes this approach. SASE prescribes that the branch should be light and that all the complex functions, from firewall to DLP, should be delivered “as-a-service” in the cloud.
The expensive virtualized-hardware shortcut
Faced with demand for cloud services, some traditional security-appliance vendors virtualize their hardware -- effectively replicating bottleneck-prone, single-tenant network architectures in the cloud. It’s an “as-a-service model” shortcut: host a chain of virtual machines and send traffic through each one in sequence to perform each function. The model can be built very quickly in a cloud like AWS. But SASE specifically decries this approach, as it results in inconsistent policies, missed context, and high latency.
SASE recommends a “Single-Pass” approach: The service discovers the context once, and then all functions can be performed with respect to that context.
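The single-pass idea can be illustrated with a minimal sketch: the service decodes a transaction into a shared context exactly once, and every policy engine reads that same context instead of re-parsing the traffic in its own VM. All names below (the context fields, the engine functions) are hypothetical illustrations, not Zscaler’s actual API or SSMA internals.

```python
from dataclasses import dataclass

@dataclass
class TransactionContext:
    """Context discovered once: user, destination, and decrypted payload."""
    user: str
    url: str
    content_type: str
    body: bytes  # decrypted and decoded exactly once

def url_filter(ctx: TransactionContext) -> str:
    # Illustrative category check against the already-parsed URL.
    return "block" if "gambling" in ctx.url else "allow"

def dlp(ctx: TransactionContext) -> str:
    # Illustrative data-loss check against the already-decrypted body.
    return "block" if b"CONFIDENTIAL" in ctx.body else "allow"

def threat_inspection(ctx: TransactionContext) -> str:
    # Illustrative malware check on the same shared payload.
    return "block" if b"EICAR" in ctx.body else "allow"

POLICY_ENGINES = [url_filter, dlp, threat_inspection]

def single_pass(ctx: TransactionContext) -> str:
    # One decode, N verdicts: every engine sees the identical context,
    # so policies stay consistent and the payload is never re-parsed.
    verdicts = [engine(ctx) for engine in POLICY_ENGINES]
    return "block" if "block" in verdicts else "allow"
```

Contrast this with the service-chaining model above, where each VM in the sequence would decrypt, parse, and re-encrypt the traffic independently -- paying the latency cost per function and risking a different view of the context at each hop.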
This approach is fundamental to the Zscaler cloud service: Zscaler's SSMA™ technology was built to achieve this. (SSMA’s development was based on every customer telling us that user experience and policy consistency were paramount.)
How many POPs does your cloud vendor have?
In architecting the Zscaler cloud, we evaluated points of presence (POPs) around the world, and considered two options:
- Build compute in ten or twelve large data centers, create POPs, and establish our own network backbone; or
- Build compute in every POP so all services would be performed at the secure edge of the cloud.
While the first option would have been cheaper, faster, and easier to build, it misses the low-latency objective. It simply replaces a corporate MPLS backhaul model with a provider backhaul (from POP to compute region).
Placing compute at the edge (in every POP) minimizes the latency added to the path between the endpoint and the application. This approach is certainly more costly and time-consuming to architect, but once built, it lets users get from any device on any network to any application on any network with least-path latency. As Zscaler’s adoption grew, we established direct peering relationships with service providers including Orange Business Services (OBS), BT, AT&T, TATA, and more. On the content/cloud-app provider side, Zscaler peers with Microsoft, Google, Akamai, and many more. (Partners value volume: a vendor’s claim of 100-Gbps peering capacity is meaningless if its traffic is zero. At 700+ Gbps sustained to many cloud applications, most cloud providers and service providers peer with Zscaler.)
Consider an OBS customer in Singapore connecting to Office 365 via the Zscaler cloud. When an endpoint generates a packet of data to O365, the path is deterministic: OBS and partners provide the last mile. OBS peers with Zscaler in Singapore, so when the packet leaves OBS fiber, it connects to Zscaler’s edge via a router/switch. Zscaler also peers with Microsoft, which means that beyond the Zscaler inspection stack, that packet gets to Microsoft’s fiber in a single hop.
Contrast that with an edge service (say, a firewall) hosted in Google Cloud. The data packet would travel from OBS to Google over a peering network, then over the Google backbone to one of the regions where Google has compute. The packet then passes through the security stack, routes back over the Google network to a peering location, and (finally) crosses to the Microsoft backbone. This long-distance, multi-hop, latency-introducing routing is exactly what SASE repudiates.
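The two paths above can be compared with back-of-the-envelope arithmetic. Every per-leg latency below is an illustrative placeholder, not a measurement -- the point is the structure of the sum, not the numbers.

```python
# Edge path: last mile -> edge POP -> direct peering to the app provider.
edge_path_ms = {
    "OBS last mile -> Zscaler Singapore POP": 2.0,
    "inspection at the edge POP": 1.0,
    "Zscaler -> Microsoft (direct peering, one hop)": 1.0,
}

# Backhaul path: traffic detours through a distant compute region and back.
backhaul_path_ms = {
    "OBS -> cloud peering point": 2.0,
    "cloud backbone -> compute region": 15.0,
    "security stack in region": 1.0,
    "compute region -> peering point": 15.0,
    "peering point -> Microsoft backbone": 2.0,
}

def total(path: dict) -> float:
    """Sum the per-leg latencies of one path."""
    return sum(path.values())

print(f"edge path:     {total(edge_path_ms):.1f} ms")
print(f"backhaul path: {total(backhaul_path_ms):.1f} ms")
```

With these placeholder figures, the backhaul model pays the backbone transit twice (out to the compute region and back), which is why pushing compute to every POP removes the dominant terms from the sum.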
The SASE model is an ideal vision of cloud architecture -- but fulfilling it as a service requires significant infrastructure investment. You can’t just drop a service in AWS and call it SASE. SASE explicitly recommends that edge-computing services not be run in clouds designed for application delivery. As yet, no IaaS or PaaS provider offers a platform distributed enough for pure edge compute as a service. (AWS Lambda probably comes closest.)
SASE: the new “digital business enabler”
Ultimately, SASE’s great promise lies in the opportunity it provides IT leaders to transform their organizations. Gartner’s Orans, Skorupa, and MacDonald recommend those IT leaders “Position the adoption of SASE as a digital business enabler in the name of speed and agility.” Among other recommendations, they also advise security and risk-management leaders to consolidate services. Further, Gartner’s analysts recommend enterprises “[r]educe complexity now on the network security side by moving to ideally one vendor for secure web gateway (SWG), cloud access security broker (CASB), DNS, zero trust network access (ZTNA), and remote browser isolation capabilities.”
Zscaler customers know that the future of cloud-computing service delivery is highly distributed, secure, and powered at the edge, close to every user. To learn more, download the Gartner report here.
Gartner, The Future of Network Security Is in the Cloud; 30 August 2019; Lawrence Orans, Joe Skorupa, Neil MacDonald
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Manoj Apte is the Zscaler Chief Strategy Officer