This article was originally posted on Dark Reading in June 2022.
Artificial intelligence has advanced greatly in the past decade. On my phone, I'm reading Apple and Google news that is well-tailored to me, thanks to AI recommendation models. Self-driving cars are already picking up passengers for rides in downtown San Francisco.
The same transformation is happening in the cybersecurity world. However, questions remain: Will AI replace security professionals? And will AI still be useful in the zero trust era, when access is already tightened to the minimum?
AI took center stage in the cybersecurity industry a few years ago, originally to tackle malware detection and anomaly detection use cases. We have since come a long way in understanding both the usefulness and the limitations of applying AI to cybersecurity, especially in the zero trust era.
First, a zero trust architecture doesn't remove the need for AI. Although zero trust shrinks the attack surface and reduces the chance of anomalies, it actually demands more from AI.
Today, an enterprise user's security policy is typically that of the person's department. Whether users are in a big or a small department, they all follow very similar, if not identical, security policies, including access control policies.
In the zero trust era, we need a personalized, contextual, dynamic, and granular security policy — which is exactly what zero trust is about. Access control, for instance, is no longer based on simple rules but on a set of complex policies drawing on your identity, your device, your posture, your intention, your risks, your content, and many other rich data points.
However, generating such complex, granular, and personalized policies at scale can be very time-consuming if it relies on human-written rules and heuristics. Different employees use different applications, and that usage can change quickly. AI is a critical technology for making intelligent, personalized security policy recommendations at scale.
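To make the contrast with a single department-wide rule concrete, here is a minimal sketch of a per-request, contextual access decision. All names and thresholds (`AccessRequest`, `risk_score`, the role and resource strings) are hypothetical illustrations, not any vendor's actual policy engine; a real zero trust deployment would draw on far more signals.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str
    device_compliant: bool   # device posture check
    risk_score: float        # 0.0 (low) to 1.0 (high), e.g. from an AI model
    resource: str

def decide(req: AccessRequest) -> str:
    """Contextual, per-request decision instead of one static department rule."""
    if not req.device_compliant:
        return "deny"
    if req.risk_score > 0.8:
        return "deny"
    if req.risk_score > 0.5:
        return "allow-with-mfa"  # step-up authentication for elevated risk
    if req.user_role == "engineer" and req.resource == "source-repo":
        return "allow"
    return "deny"  # default-deny, consistent with zero trust

print(decide(AccessRequest("engineer", True, 0.2, "source-repo")))  # allow
print(decide(AccessRequest("engineer", True, 0.6, "source-repo")))  # allow-with-mfa
```

The point of the sketch is the shape of the decision: identity, device posture, and a dynamic risk signal are all evaluated per request, which is exactly the kind of granular policy that is impractical to author by hand for every user.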
At the same time, no AI can capture or comprehend all the nuances and contexts of a complex environment, so it may make recommendations that look suboptimal to experts. With ongoing human feedback, we can improve the AI model and its effectiveness.
Second, zero trust gives the enterprise much tighter protection than it has had in the past, but no matter how tight the controls are, there is always a weak link somewhere. Therefore, we want AI to assist with evasive and unknown threat detection and prevention.
Some evasive threats go undetected by conventional signature-based or sandbox technology until it is too late. The SolarWinds supply chain attack is a good example. This global attack turned the SolarWinds Orion software into a weapon, subsequently gaining access to several government systems and thousands of private systems around the world. No malware in the traditional sense was involved, and no single layer of conventional technology could reliably detect such an attack ahead of time.
AI has great potential to do a better job with unknown threat detection because it can "predict" threats that have never been seen before.
Practically, we will want to layer multiple security technologies along with AI. For instance, in the case of malware, the tried-and-true approach of signature matching and sandboxing will continue to play a key role. AI will greatly complement, but not displace, conventional technology.
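The layering idea can be sketched as a simple decision chain: the conventional layers rule first, and the AI score only weighs in on samples those layers could not classify. The function name, signal names, and thresholds here are hypothetical, chosen only to illustrate the ordering.

```python
def layered_verdict(sig_hit: bool, sandbox_malicious: bool, ai_score: float) -> str:
    """Combine conventional detection layers with an AI model's score."""
    # Conventional layers first: a signature or sandbox hit is decisive.
    if sig_hit or sandbox_malicious:
        return "block"
    # AI complements the layers above, catching unknown or evasive threats.
    if ai_score >= 0.9:
        return "block"
    if ai_score >= 0.6:
        return "quarantine-for-review"  # uncertain: route to a human analyst
    return "allow"
```

Because the AI layer sits behind the conventional ones rather than replacing them, a model misfire degrades gracefully: known threats are still caught by signatures and sandboxing, and borderline AI scores go to human review instead of an automatic block.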
Third, enterprise customers want to utilize AI in a way that is easily understood and digestible by security professionals. Explainable AI may not improve model efficacy on the surface, but it will significantly increase the adoption of AI.
For instance, AI may be able to detect an unknown threat, but SecOps teams may want to see which family the threat belongs to before taking action. Likewise, AI may be able to generate intelligent and relevant security policy recommendations, but SecOps teams may still want to know why certain recommendations were made before accepting them.
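What an explainable verdict might look like in practice is simply the raw score paired with the context an analyst asks for: a likely family and the top contributing signals. The function and field names below are hypothetical illustrations, not a real product's schema.

```python
def explainable_verdict(score: float, family: str, top_signals: list) -> dict:
    """Pair the model's raw score with context a SecOps analyst can act on."""
    if score >= 0.8:
        verdict = "malicious"
    elif score >= 0.5:
        verdict = "suspicious"
    else:
        verdict = "benign"
    return {
        "verdict": verdict,
        "confidence": round(score, 2),
        "likely_family": family,   # e.g. mapped by similarity to known families
        "why": top_signals,        # top contributing signals from the model
    }

print(explainable_verdict(0.91, "Emotet-like",
                          ["packed binary", "beaconing network pattern"]))
```

A bare score of 0.91 tells an analyst little; the same score attached to a family label and the signals that drove it is something a SecOps team can verify and act on, which is the adoption argument for explainability.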
The cybersecurity industry needs AI to help reduce unknown attacks at scale and come up with granular and contextual security policies at scale to reduce the attack surface. We want the result to be explainable too.
AI-powered security tools and products make an amazing digital assistant for SecOps professionals, and professionals help AI technology advance, too: humans verify many of the outputs and provide feedback so the AI model can improve.
AI is useful for scaling enterprise security functions, such as the more intelligent policies and threat detection discussed above. AI works best when it and security professionals complement each other. In the end, AI is an assistant to security professionals and will not replace human effort for a long time to come.