

The Advantage of Implicit DIStrust in Traffic Inspection


Zero trust comes up in almost every conversation I have. It's definitely THE hot topic across the industry right now. 

And yet, I see confusion in terms of how, when, and where to get started with it.

The National Institute of Standards and Technology (NIST) and other organizations have published guidance on it, and clear definitions of zero trust are starting to take shape. The good news is that you don’t have to address everything all at once; you can choose one focus area at a time.

Based on my career as a security professional, I think a good starting point is the concept of zero trust in the category of traffic inspection.

 

How encrypted traffic is giving bad actors opportunity

At Zscaler, we approach zero trust as a mindset centered around having an implicit distrust of all traffic. Bad actors excel at exploiting vulnerabilities, and by inherently trusting no one, organizations have better control and can avoid cyberthreats and subsequent data compromise.

In terms of traffic inspection, the share of internet traffic that is encrypted (HTTPS over port 443) has crept up steadily over a long period of time. Users and IT managers have come to implicitly trust SSL encryption, and organizations are looking at that encrypted traffic and saying, “I'm going to grant implicit trust to that SSL traffic, and I'm going to allow it to pass through.”

The problem is that encrypted traffic cannot be trusted, and last year, 80 percent of all cloud traffic was SSL encrypted.

In the last six months of 2018 alone, Zscaler blocked 1.7 billion SSL threats. We see malicious actors taking advantage of the implicit trust people give SSL, using it as a threat vector to individual environments. 

I don’t think any IT security professional is granting this implicit trust to SSL because they believe all that traffic is trustworthy. Rather, it's the fact that inspecting all of it is a monumental challenge for many organizations. Purchasing and managing a number of appliances to break and inspect all this traffic is a heavy lift. 

 

Why implicit distrust matters

The oldest security game in the book is adversary and defender. It predates the digital age, going back to the days of cops and robbers. The bad guy identifies and exploits a vulnerability and then the security professionals, the good guys, patch and defend against that individual exploit. 

And then the bad guys find a new vulnerability because attackers are always going to excel at figuring out where vulnerability exists. 

Implicit trust of encrypted internet traffic is a vulnerability malicious actors use today to great effect, and they will continue to do so into the future. 

Zero implicit trust (or implicit distrust) is a solid counter to that strategy. 

For example, in an internal security program, there are URLs deemed safe or unsafe for internet users. The addresses deemed safe are usually given implicit trust because the site is “safe.” Bad actors can target one of those sites specifically: commit a watering hole attack, plant malicious content, or create a redirect from a good node to a bad one.

The bad actor places this traffic inside SSL over port 443. Because it's HTTPS, it arrives encrypted, and bad actors understand that encrypted traffic rides in a position of trust into many environments and won't receive the same level of inspection as other traffic. The traffic is allowed through because it's coming from a known good node.

And now the vulnerability has been exploited. The exploit infects the user in this individual case. 
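
To make that implicit-trust gap concrete, here is a minimal sketch (hypothetical names and made-up data, not any real product's policy engine) of an allow-list gateway that forwards encrypted traffic from "known good" sites without ever inspecting it:

```python
# Hypothetical sketch of an implicit-trust policy: traffic to a "known good"
# site is forwarded without ever being decrypted, so a compromised trusted
# site becomes an uninspected delivery path.

TRUSTED_SITES = {"intranet.example.com", "news.example.com"}  # deemed "safe"


def decrypt(payload: bytes) -> bytes:
    return payload  # stand-in for TLS interception at the proxy


def looks_malicious(cleartext: bytes) -> bool:
    return b"EVIL" in cleartext  # stand-in for real content inspection


def handle(host: str, encrypted_payload: bytes) -> str:
    if host in TRUSTED_SITES:
        # Implicit trust: encrypted traffic from a known good node passes
        # through with no decryption and no inspection at all.
        return "forwarded uninspected"
    cleartext = decrypt(encrypted_payload)
    return "blocked" if looks_malicious(cleartext) else "forwarded after inspection"


# A payload planted on a compromised "trusted" site sails straight through:
print(handle("news.example.com", b"EVIL payload"))     # forwarded uninspected
print(handle("unknown.example.net", b"EVIL payload"))  # blocked
```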

Even if the traffic does go into a sandbox, that doesn't always prevent the attack. Due to the cost and time involved, I often see traffic passed to the user at the same time it's being detonated in the sandbox. That means the payload is hitting the target at the same moment it's being flagged as malicious.
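
Here's a small sketch of that timing problem, using hypothetical helper names: in a pass-and-detonate model the payload reaches the user before the sandbox verdict arrives, while an inline model holds the file until the verdict is in.

```python
import time


def detonate_in_sandbox(payload: bytes) -> bool:
    time.sleep(0.1)              # stand-in for minutes of dynamic analysis
    return b"EVIL" in payload    # True means the verdict is "malicious"


def deliver_to_user(payload: bytes) -> None:
    print("payload delivered to the user")


def pass_and_detonate(payload: bytes) -> None:
    deliver_to_user(payload)               # the user already has the file...
    if detonate_in_sandbox(payload):       # ...by the time the verdict arrives
        print("verdict: malicious (too late, the host is already infected)")


def hold_until_verdict(payload: bytes) -> None:
    if detonate_in_sandbox(payload):       # block first, deliver only if clean
        print("verdict: malicious, payload never reaches the host")
    else:
        deliver_to_user(payload)


pass_and_detonate(b"EVIL dropper")
hold_until_verdict(b"EVIL dropper")
```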

At this point, the payload is on the target and malicious actors are taking malicious action.

This can be anything from internal denial of service to destruction or modification of data to ransoming an internal owner’s data back to them. 

Does this mean that current security models and methods are bad? No, not at all. 

An antivirus engine, backed by a threat database and file-type static malware analysis, is a good, fast, and efficient way to go. However, I think many of these methods are being used inefficiently because they can only be as good as they are current.

The fluid nature of the environment in which we all operate means our depth of protection is only as good as our definitions. If we're not updating them consistently, that's another vulnerability that malicious actors can use.

Cyberattackers are rapidly changing their attacks and methods, coming at environments with different MD5 hashes, because if they can route around signature-based detection, they can eventually get the payload to its target.
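
A quick illustration of why a static hash list alone can't keep up (the payloads and definitions here are made up): changing a single byte of a payload produces an entirely new MD5, so the old definition no longer matches even though the behavior is unchanged.

```python
import hashlib

# One known-bad sample, stored by its MD5 hash (made-up definition data).
known_bad_md5 = {hashlib.md5(b"EVIL dropper v1").hexdigest()}


def is_known_bad(payload: bytes) -> bool:
    return hashlib.md5(payload).hexdigest() in known_bad_md5


print(is_known_bad(b"EVIL dropper v1"))   # True  - exact signature match
print(is_known_bad(b"EVIL dropper v1 "))  # False - one extra byte, brand-new hash
```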

 

How security works in a perfect world

Dynamic malware analysis ends with a layer of heuristics, a “good/bad” assessment, that assigns a level of risk to an individual piece of traffic. If the traffic is deemed suspicious, it goes to a sandbox. In a perfect world, that sandbox holds the file or piece of traffic back from the user until it's determined whether that individual piece of traffic is good or bad.

If it's good, it's passed to the user. If it's bad, the user never has the opportunity to interact with it. That payload is never delivered to the host.

As soon as the traffic is determined to be malware and should be blocked, the definitions are updated.
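
Sketched as code, with hypothetical names standing in for the real engines, that perfect-world flow looks something like this: a fast definition check, a heuristic risk score, an inline sandbox, and an immediate definition update on a malicious verdict.

```python
import hashlib

known_bad_md5: set[str] = set()   # the "definitions"


def md5(payload: bytes) -> str:
    return hashlib.md5(payload).hexdigest()


def heuristic_score(payload: bytes) -> float:
    return 0.9 if b"macro" in payload else 0.1   # stand-in for real heuristics


def sandbox_says_malicious(payload: bytes) -> bool:
    return b"EVIL" in payload                    # stand-in for detonation


def inspect(payload: bytes) -> str:
    if md5(payload) in known_bad_md5:
        return "blocked by existing definition"          # fast path, no sandbox
    if heuristic_score(payload) > 0.5 and sandbox_says_malicious(payload):
        known_bad_md5.add(md5(payload))                  # update definitions now
        return "blocked by sandbox, definitions updated"
    return "delivered to the user"


sample = b"EVIL macro document"
print(inspect(sample))   # blocked by sandbox, definitions updated
print(inspect(sample))   # blocked by existing definition (no second detonation)
```

The second call never touches the sandbox: once the hash is in the definitions, the block happens at the top of the scan.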

 

How this works in an even more perfect world

If I've discovered a zero-day file hash in that sandbox, I immediately want protection against it, right?

Now, I could do that inside my own agency and my peers could do it inside their agencies. 

But what if I had access to a service where I could not only stop that malicious payload from landing on my own host, but stop it for an entire group of users? And those users could do the same for me.

Can I update definitions on my site based on the group's collaboration and discovery of malicious content? Can I draw on what the group learns to create clean and clear definitions across my enterprise and base of users?

In a perfect world, yes. 

This is how Zscaler works. Our definitions are updated constantly. We are always pulling from our own sources of data. There is no data known to be bad or malicious that we don’t include in our scan process. 

We have a method in place when analyzing traffic that allows us to say, “We don't have a definition specifically against this. There's no MD5 hash to categorize the traffic, so we'll take a look at it in the sandbox and see what happens.”

When we determine an individual piece of traffic is bad, we go back and update those definitions. When that MD5 hash is seen again, we don't have to spend the time, cycles, and cost of sandboxing that individual piece of traffic again, which can negatively impact the user experience.

For example, we had a customer that had a piece of ransomware and an info-stealer Trojan come through unflagged as either good or bad because there were no definitions against that individual piece of traffic.

However, based upon some analysis and heuristics we did, we determined it was suspicious. It went to the sandbox and was flagged as a problem. The malicious traffic was noted and repopulated back to the scan engines at the top. The next time this traffic came through to this customer and other customers subscribed to the service, the MD5 hash flagged it as known bad and immediately blocked it. 
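
Conceptually, that cloud effect looks something like the sketch below (hypothetical names, made-up data): the first customer's sandbox verdict publishes a definition to a shared blocklist that instantly protects every other customer.

```python
import hashlib

shared_known_bad: set[str] = set()   # definitions shared by every customer


def scan(customer: str, payload: bytes) -> str:
    digest = hashlib.md5(payload).hexdigest()
    if digest in shared_known_bad:
        return f"{customer}: blocked instantly by shared definition"
    if b"EVIL" in payload:            # stand-in for a sandbox detonation
        shared_known_bad.add(digest)  # publish the verdict for all customers
        return f"{customer}: blocked by sandbox, definition published"
    return f"{customer}: delivered"


print(scan("customer-A", b"EVIL info-stealer"))  # sandboxed once, definition published
print(scan("customer-B", b"EVIL info-stealer"))  # blocked instantly, no sandbox needed
```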

We decrypt and inspect all SSL-encrypted traffic against a long list of definitions that are as current as they can be. This level of implicit distrust is possible because, as a cloud service, we benefit from the cloud effect and those databases are kept up to date. I'm able to have a detonation outside of the agency's data center and outside of the internal network, make a determination on the file, and that determination immediately strengthens my scanning process at the top.

And that is zero implicit trust (or the advantage of implicit distrust). We can live in a perfect world with a mindset around applying zero trust to the traffic inspection model.
