Zero Trust

You could do zero trust the old-fashioned way, but why?

Aug 03, 2022

Many organizations are trapped in an outdated way of operating and simply do not realize it. People, businesses, and agencies regularly rely on antiquated processes or technology, never anticipating serious problems until it is too late. Worse, people like to adapt tried-and-true solutions to new problems, even when doing so is inefficient and ineffective. Consider how much of the cybersecurity industry continues pushing old frameworks onto new challenges.

Some of you may remember the early days of AV, when computers would run active virus scans to detect malware. For its time, the process made sense. There was a limited number of malware binaries. Keeping a list of their file signatures and searching a computer’s hard drive for matching files was an effective, if not particularly speedy, solution. Today, millions of new malware variants are released every month. Obviously, trying to keep a list of each one’s signature would be exhausting, and expecting a single PC to download such a list, then check it against all local files, would be an exercise in futility. This is an example of a problem outpacing a solution.
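
To make the old approach concrete, here is a minimal sketch of signature-based scanning, assuming a simple hash lookup. It is purely illustrative: real AV engines used proprietary byte-pattern signatures, and the signature value below is a placeholder rather than a real indicator.

    import hashlib
    from pathlib import Path

    # Hypothetical signature database: hashes of known-bad files.
    # A placeholder value stands in for real signatures.
    KNOWN_MALWARE_SHA256 = {
        "<sha256-of-a-known-bad-binary>",
    }

    def scan_directory(root: str) -> list[Path]:
        """Hash every file under `root` and flag anything matching the signature list."""
        hits = []
        for path in Path(root).rglob("*"):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                if digest in KNOWN_MALWARE_SHA256:
                    hits.append(path)
        return hits

    print(scan_directory("/tmp"))

The weakness is plain: this only works while the signature list stays small. At millions of new variants per month, both the list and the scan time become unmanageable.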

Of course, security analysts recognized they were facing a situation that would soon overwhelm them. What did they do? They adapted their existing solution to be a little more effective. Instead of searching for file signatures, they focused on malware “families”. They began looking at the activity of malicious files (behavioral analysis) and flagging certain behaviors as malware-like and others as safe. Then, just as before, they compiled lists of these malware indicators into libraries and scanned endpoints for matching patterns. 
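
As a rough illustration of that shift, behavioral analysis boils down to scoring what a file does rather than what it is. The sketch below is hypothetical; the behavior names and the simple threshold are invented for clarity, and real engines weigh hundreds of runtime signals.

    # Hypothetical behavioral indicators shared by known malware families.
    SUSPICIOUS_BEHAVIORS = {
        "modifies_boot_sector",
        "disables_security_tools",
        "encrypts_user_files_in_bulk",
        "injects_into_other_processes",
    }

    def looks_like_malware(observed: set[str], threshold: int = 2) -> bool:
        """Flag a process that exhibits enough behaviors associated with malware families."""
        return len(observed & SUSPICIOUS_BEHAVIORS) >= threshold

    # A process that disables security tooling and then mass-encrypts files gets flagged.
    print(looks_like_malware({"disables_security_tools", "encrypts_user_files_in_bulk"}))  # True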

As technology moved forward, some AV vendors moved their libraries of threat indicators into the cloud. Their customers now upload enterprise endpoint data to the cloud, where it is scanned for signs of malware infection. Another popular approach is to have cloud services push abbreviated threat libraries down to each customer endpoint, in the form of the familiar local AV software update. It has been 34 years since the first virus scanners were released, and yet many in the industry are still approaching cybersecurity as if we live in 1988.

The evolution of AV is only one example of many. The same general story applies to firewalls. Once upon a time, when all workplace endpoints were located in a single building and used a single network, firewalls were great. Then employees began accessing the enterprise through unmanaged networks, on mobile devices, outside the office. Again, the problem outpaced the solution, and once more the industry responded by adapting its antiquated models. Many security vendors attempted to extend the reach of the old firewall-based system by deploying more firewalls, physical and virtual. As I mentioned earlier, people tend to fall back on tried-and-true solutions even when they prove ineffective at addressing new problems.

Remember, every attempt to adapt and expand these outdated solutions requires additional skilled personnel to install, operate, and maintain each deployment. Vendors have played catch-up for years, trying to overtake emerging and evolving threats. Where has that path led? With a global shortage of 2.7 million cybersecurity workers, we cannot support this legacy approach of scattering firewalls across the internet and cloud. Each new firewall ostensibly secures some network segment, yet it also adds to the global attack surface. These examples show that the tools of yesterday cannot fight the security battles of today and tomorrow.

The security issues arising from de-perimeterization troubled many international CISOs and analysts, who in 2004 formed an organization called the Jericho Forum. This group saw the direction things were headed and began working on solutions, including publishing papers on collaborative architecture and cloud computing. Much of their work and research would later be reflected in the zero trust framework ("zero trust" itself being a term coined by John Kindervag).

The new solution is zero trust

John Kindervag first described his zero trust security model in 2010, which may lead many to believe it is not “new”. However, it is important to distinguish between “zero trust” as a marketing gimmick and an actual, functioning zero trust framework. With true zero trust, we are not dragging legacy equipment and cybersecurity practices into the modern world; we consider security problems in an entirely new way. Instead of scrambling to identify every bad actor or suspicious file, we begin from the assumption that “everything inside and outside of the enterprise is a threat”.

Who is granted implicit trust or access? Nobody. At this point, no one can access anything or talk to anyone. Now, obviously people need to reach organizational resources and communicate with others, so how do we go about opening the environment? We award access to known actors (users, services, apps, networks, etc.) only after performing rigorous identity verification and checking other criteria. In other words, to interact with a zero trust environment, it must know precisely who you are and what you should be able to access. Through this simple step alone, we have already eliminated a significant portion of the attack surface.
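
A toy policy-decision function helps show what default deny means in practice. Everything here is illustrative, not any vendor's API: the request fields, the allow-list, and the checks are assumptions chosen to mirror the verification steps described above.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_id: str
        identity_verified: bool   # e.g. strong authentication succeeded
        device_compliant: bool    # e.g. managed, patched device
        requested_app: str

    # Explicit allow-list mapping known users to the specific applications they need.
    ALLOWED_APPS = {
        "alice@example.com": {"payroll", "crm"},
    }

    def authorize(req: AccessRequest) -> bool:
        """Default deny: access is granted only when every check passes explicitly."""
        if not (req.identity_verified and req.device_compliant):
            return False
        return req.requested_app in ALLOWED_APPS.get(req.user_id, set())

    print(authorize(AccessRequest("alice@example.com", True, True, "payroll")))    # True
    print(authorize(AccessRequest("mallory@example.com", True, True, "payroll")))  # False: unknown actor

Nothing is reachable unless a rule explicitly allows it, which is the inverse of the old model of blocking only what appears on a threat list.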

There are many other aspects of zero trust that further strengthen an organization’s security posture. For example, resources are hidden from public view, encrypted traffic is monitored, users receive access at the application level, and so on. This is not meant to be a crash course on zero trust, but a wake-up call to those unknowingly trapped in outdated security processes. If your organization is exhausting itself trying to keep up with cyberthreats by adding security layers and firewalls, it is time for a change.

Often, organizations start adopting zero trust hoping to find an answer to a simple question: who is accessing what in our network? For some, this is where the business case stops. Many misunderstand zero trust (or zero trust network access, in Gartner's terms) as something that only works for remote access and therefore merely replaces VPNs. I had exactly that experience in my former life as a practitioner. I tried to bring in zero trust but did not see the bigger picture. We didn’t connect the dots between a new approach like zero trust and typical network security, based on network segmentation and network access control. In business-case terms, seeing this connection made a huge difference. VPN is a cheap technology, while initiatives like network segmentation and NAC can run into the multi-millions. The world around us has changed dramatically, and so have network topology and security. Why drop millions into a network model that is outdated or rapidly becoming so?

Cyber threats move at a pace that cannot be stopped by firewalls or curated lists of IoCs. When threat actors leverage systems that exponentially increase their effectiveness, defenders need responses capable of scaling to the same degree. To this end, organizations simply must adopt zero trust, automation, and other modern approaches to remain secure. Those seeking to harm your organization have mastered advanced processes for engineering, distributing, and obfuscating threats. They have armed themselves with the latest technology and catalogued vulnerabilities across devices, systems, and networks. Battling them with antiquated firewalls, yesterday’s layered defenses, virus signatures, and the like is a recipe for disaster.

It is important to remember that zero trust is a framework, not a product. Organizations need to stop thinking of zero trust as an updated form of network security. Minimize the attack surface first, then apply zero trust principles to what remains. Consider hiding infrastructure and resources from public view by using a private access broker, then connecting known individuals to trusted applications. Threats cannot move laterally through an environment when they can only reach specific applications. In short, achieving zero trust takes more than simply adopting multi-factor authentication and tightening existing security protocols. It requires a thoughtful approach that begins with shrinking the attack surface, then focuses on managing identities, devices, policies, and access, with the overall aim of reducing risk!
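
To illustrate the application-level access idea, and why it blocks lateral movement, here is a hypothetical sketch of a brokered connection. The broker function, policy table, and session token format are all invented for illustration; the point is that an authorized user receives a path to one named application, never a route onto a network segment.

    # Hypothetical per-application entitlements enforced by a private access broker.
    APP_POLICIES = {
        "payroll": {"alice@example.com"},
        "crm": {"alice@example.com", "bob@example.com"},
    }

    def broker_connection(user_id: str, app: str) -> str:
        """Return an app-scoped session, or refuse; there is no network-wide access to grant."""
        if user_id in APP_POLICIES.get(app, set()):
            return f"session:{user_id}->{app}"
        raise PermissionError(f"{user_id} is not entitled to {app}")

    print(broker_connection("bob@example.com", "crm"))  # allowed
    # broker_connection("bob@example.com", "payroll")   # raises PermissionError: no lateral hop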
