Exploring the history of enterprise app connectivity to prepare for what’s next

Brett James

Contributor

Zscaler

Dec 7, 2022

In which Zscaler Director of Transformation Strategy Brett James examines the history of enterprise applications to better prepare peers for next-generation zero trust architecture.

Enterprise architects shoulder the heavy burden of looking three, five, and even ten years into the future to match business needs with technology capabilities. They’re expected to do this while accounting for factors like cost, complexity, reliability, and performance. Only since the emergence of the cloud, though, has security ascended to become a primary concern.

Malware, ransomware, and state-sponsored attacks

Security has challenged an infrastructure status quo whose network perimeter roots trace back more than 30 years. Yesterday's strategies told us to trust any user on the network unless assorted reactive mitigations happened to detect bad actions from a compromised device, stolen credentials, or leaked API keys.

Advanced persistent threats (APTs) and ever-advancing attack techniques mean these strategies are no longer sustainable. Today, it falls to enterprise architects to design an application infrastructure that tips the security scales back in favor of the enterprise, while maintaining a reasonable user experience and preserving access to cloud-based resources.

Zero trust architecture (ZTA) promises to do just that, but unlike previous IT infrastructure evolutions, it is not simply a new product you slip into the data center or a fresh agent to install on all of your endpoints. Instead, ZTA is a design philosophy that enterprise architects must master to turn security from a business barrier into an enabler. 

Let’s examine the history of enterprise applications so we can be better prepared for next-generation zero trust architecture.

Direct access

Direct access refers to application access based not on security posture or ownership of the device, but rather on a user’s presence on the "same network." If the device can connect (via the same routable network), the application accepts the connection. This includes remote access via VPN, since the border device simply puts the device on the routable network after authentication.

Application types using this model depended on the era, environment, and use case. They may include terminal-to-mainframe apps, locally hosted client-server applications, or simply unstructured data like file shares. As the internet emerged and browsers matured, web applications became the norm for new releases or for use as storage repositories.

Application security in this model is mostly restricted to verifying one's identity: by virtue of being on the network, one is assumed to be authorized to access the app.

With direct access, infrastructure security is simply a matter of restricting access to the network, whether through physical means or the use of the 802.1X protocol, such as authentication to the Wi-Fi network or network access control (NAC). After authentication, the applications themselves are relied on directly for security (vulnerability management, patching, etc.), potentially supported by associated infosec initiatives like SIEM, SOAR, and others to detect and respond to an incident after the fact.
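
To make the model concrete, here is a minimal sketch in Python of the only control direct access really enforces: whether the client's address sits on the routable corporate network. The subnet and function names are illustrative assumptions, not taken from any real product.

```python
from ipaddress import ip_address, ip_network

# Illustrative only: in the direct-access model, "security" reduces to
# network reachability. Any device on the routable corporate subnet is
# implicitly trusted, regardless of posture or ownership.
CORPORATE_SUBNET = ip_network("10.0.0.0/8")  # assumed internal address space

def allow_connection(client_ip: str) -> bool:
    """Admit any client that is 'on the network'; the application itself
    performs no device-posture or per-user authorization check."""
    return ip_address(client_ip) in CORPORATE_SUBNET

print(allow_connection("10.12.34.56"))  # True: on-net, trusted by default
print(allow_connection("203.0.113.7"))  # False: off-net, until a VPN places it on-net
```

Note that no identity, posture, or per-app check exists at this layer; that gap is exactly what the models below begin to close.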

Proxied access, or access via a gateway service

These services began life providing remote access to internal resources without the need for VPN-type connectivity.

Four critical categories of access include:

  1. Web applications – Presented directly, served from behind a load balancer or a web application firewall (WAF), or hidden behind an authentication boundary.
      
  2. Virtualized applications (e.g., Citrix or VMware) – Desktop and server applications are streamed to the user. App virtualization is used for several reasons: initially to centralize application management for operational efficiency or to prioritize security, but later also for remote access. The concept of streaming only the pixels to remote users, instead of delivering the full application, was seen as a boon to security while also unifying access methods. For a time, this promised to enable secure BYOD, but the emergence of cloud applications limited this approach due to performance problems, complexity, and expense.
      
  3. Specialized, legacy application portals – These are used to give remote access to legacy and/or non-web applications, like file-share access via a web portal.
      
  4. APIs – API-type application access was popularized by mobile apps, but it's now common for web apps consumed via PCs to employ this method. API gateways are now routinely employed, often fronted by a WAF, to protect the APIs and provide security services. Though classified as a border device, the API gateway is often accessed the same way whether the consumer is remote or within the corporate network perimeter, as the sketch after this list illustrates.
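
Common to all four categories is that a border device, not the application itself, evaluates each request. The following Python sketch shows a gateway-style credential check under that assumption; the key store, status codes, and names are illustrative, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    path: str
    api_key: Optional[str]  # credential presented by the consumer

VALID_API_KEYS = {"k-123"}  # illustrative; real gateways typically validate OAuth/JWT tokens

def gateway_handle(req: Request) -> int:
    """Return an HTTP-style status code. The source network is never
    consulted: the same border check applies on-net and off-net."""
    if req.api_key not in VALID_API_KEYS:
        return 401  # rejected at the border; the request never reaches the backend
    return 200      # authenticated; forwarded to the backend API

print(gateway_handle(Request("/orders", "k-123")))  # 200
print(gateway_handle(Request("/orders", None)))     # 401
```

The design point is that network location never enters the decision, which is what distinguishes proxied access from the direct-access model above.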

Over time, vendors collapsed many of these functions into application delivery controllers (ADCs), or fronted them with ADCs, to incorporate advanced features like load balancing, SSL offload, and traffic rewriting, along with security capabilities like threat detection.

Though ADCs grew incredibly powerful, they were always limited from a security perspective by web browser privacy protections. With little knowledge of the consumer, much of the security work was performed reactively, inline, in an effort to protect enterprise resources, and doing it adequately required large-scale processing capability. Cloud and virtual versions appeared, but expense and management overhead generally restricted these to public/private, border-type use cases.

While ADCs are typically reserved for remote use cases, they are also occasionally employed to protect sensitive resources for on-premises users.

Universally brokered access

Zero trust architecture began gaining traction in 2016, then cemented its spot as the new infrastructure design standard after COVID-19 necessitated remote work and various industry bodies, including NIST, promoted its advantages over the traditional network perimeter security model.

The objective of zero trust is simple: all entities are untrusted by default, least-privilege access is enforced, and comprehensive security monitoring is implemented. This cannot be accomplished by allowing consumers direct access to resources or applications. From an infrastructure perspective, zero trust takes the proxied, or gateway service, concept a step further. It demands that a border device, or broker, be placed between all resources and consumers, regardless of the network, effectively treating every network as hostile. This may sound like a dramatic shift from what we’ve traditionally done (it is), but there are many benefits. These include establishing a default, high-bar security posture for all resources, encouraging rapid innovation, and offering seamless, unified application access for all users regardless of their location.
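
As a thought experiment, a broker's per-request decision might look like the following Python sketch. Every name and signal here is an illustrative assumption rather than any vendor's actual policy engine; what matters is the shape of the logic: default deny, identity plus device posture, and entitlements to individual applications rather than to networks.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_managed: bool  # posture signal, e.g., reported by an endpoint agent
    app: str

# Least privilege: users are entitled to specific applications, never to networks.
ENTITLEMENTS = {"alice": {"payroll"}, "bob": {"wiki"}}  # illustrative data

def broker_decision(req: AccessRequest) -> bool:
    """Default deny: grant access only when posture and per-app entitlement
    both check out. The caller's network location is never consulted."""
    if not req.device_managed:
        return False  # untrusted device, regardless of who is using it
    return req.app in ENTITLEMENTS.get(req.user, set())

print(broker_decision(AccessRequest("alice", True, "payroll")))  # True
print(broker_decision(AccessRequest("alice", True, "wiki")))     # False: least privilege
print(broker_decision(AccessRequest("bob", False, "wiki")))      # False: posture fails
```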

Ultimately, security appliances are an operational burden, and the thought of managing dozens, or even hundreds, of brokers would terrify most enterprise architects at large organizations. Hence the preference for managed, cloud-based brokering services when implementing zero trust. This approach marries nicely with most enterprises’ cloud-first strategies, where the broker sits effectively next to the workload. Of course, there are other scenarios to consider, but this is typically where most begin.

As I hope this short history has shown, application access has trended toward far greater control than simply being on the correct network. Brokered access is the model that, up to now, provides the most granularity. Given the complexities involved, though, this will likely become an increasingly outsourced capability for enterprise customers. Luckily, zero trust network architecture anticipates these needs nicely.
