Future-proof IT

CISOs should brace for a massive context, coverage, and communication upgrade, says CSA AI leader

May 14, 2024
Generative AI will make us more secure, and cybersecurity professionals can prepare now to take advantage of that capability.

During his keynote at the CSA AI Summit (part of RSA Conference) last week in San Francisco, Caleb Sima, Chair of the Cloud Security Alliance AI Safety Initiative, offered an explanation for why, despite a market saturated with vendors and worth billions of dollars, the top security challenges CISOs face remain the same year over year.

"Caleb Sami speaking at CSA AI Summit 2024"

 

“I think there's a lower-layer fundamental problem underneath things like vulnerability management and detection response…that products just cannot touch today because you can't get to it. And it's not because the security vendors are building products that are bad," Sima said.

In his presentation, “How AI Will Help Us Be More Secure,” Sima made his case and explained how we should be optimistic because we have the technology to address the problem today. 

According to Sima, generative AI, and LLMs in particular, is good at what we long thought would be out of AI's reach: creativity, reasoning, logic, communication, and data synthesis. These powerful traits are amplified by rapid advancements such as expanded context windows, automated fine-tuning and specialization, and localization. “When LLMs can start processing things like real-time data, that is when things get very, very interesting,” he said.

By understanding how AI’s evolution translates to the enterprise and software engineering domains, cybersecurity leaders can prepare now and seize these capabilities to solve fundamental problems that remain unaddressed. Sima detailed what he meant with examples of security workflow gaps, exceptions, and blind spots, and said we need a "real disruptor": GenAI applied to coverage, context, and communication.

Context

Context is the who, what, where, why, and how. Sima illustrated this with a barrage of questions and if-then statements for a hypothetical high-severity vulnerability associated with a public AWS S3 bucket:

  1. How do you know the finding is true? You may have a lot of public S3 buckets.
  2. Is this one supposed to be public?
  3. Who keeps the list of buckets that are supposed to be public, and is this bucket on it?
  4. What's in the bucket? If you go look and there's nothing in it, is this still a high-risk vulnerability?
  5. There's nothing in the bucket now, but what if there's something in it later?

With context, Sima explained, you would be aware of an email thread stating that the bucket shares customer information with an external partner. So, while the bucket is empty today, next month it may contain customer data, which would then be public. "This is the kind of thing that happens in security teams that is under the radar. This is all the work that needs to go into it to say whether this vulnerability should actually be rated high or not," he said.
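To make that triage concrete, here is a minimal sketch, in Python with boto3, of the manual checks Sima describes. The approved-bucket list and the return messages are illustrative assumptions, not anything from the talk:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Illustrative assumption: in practice the "approved public buckets" list
# often lives in a wiki, a spreadsheet, or someone's head, not in code.
APPROVED_PUBLIC_BUCKETS = {"public-assets-prod"}

def triage_public_bucket(bucket: str) -> str:
    """Walk through the questions a human analyst would ask about the finding."""
    # 1. Is the bucket actually public?
    try:
        block = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        if all(block.values()):
            return "not public: all public access is blocked"
    except ClientError:
        pass  # no public access block configured; keep checking

    # 2. Is this one supposed to be public?
    if bucket in APPROVED_PUBLIC_BUCKETS:
        return "public by design: on the approved list"

    # 3. What's in the bucket right now?
    if s3.list_objects_v2(Bucket=bucket, MaxKeys=1).get("KeyCount", 0) == 0:
        # 4. Empty today -- but without context (like that email thread about
        # sharing customer data), "empty" is not the same as "safe".
        return "public and empty: severity depends on context this code can't see"

    return "public, not approved, and holding data: treat as high severity"
```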

Coverage 

Sima believes most breaches occur because of a lack of coverage, and he shared examples of lapses in both breadth and depth. Take account takeovers, for instance. With two-factor authentication (2FA) mandated for all employees, it may come as a surprise to a security team when a breach occurs on, say, a SaaS marketing system. But if the takeover was attributable to contractors who had been granted an exception to the 2FA rule that the security team was not privy to, then you have a blind spot.
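As a toy illustration of that breadth gap, the check itself is trivial once the data is visible; the hard part is knowing the exception exists at all. The account records and names below are hypothetical, standing in for an identity provider's export:

```python
# Hypothetical account records, shaped like an identity-provider export.
accounts = [
    {"user": "alice@corp.example", "type": "employee",   "mfa_enrolled": True},
    {"user": "bob@vendor.example", "type": "contractor", "mfa_enrolled": False},
]

# What the security team believes: no exceptions were ever granted.
known_2fa_exceptions: set[str] = set()

# The blind spot: accounts without 2FA that aren't on the known-exceptions list.
blind_spots = [
    a["user"] for a in accounts
    if not a["mfa_enrolled"] and a["user"] not in known_2fa_exceptions
]
print(blind_spots)  # ['bob@vendor.example'] -- the contractor nobody told them about
```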

Elsewhere, missing logs pose coverage problems that surface during incident postmortems. And thousands of vulnerability issues across all severity ratings are often left untouched because of resource constraints, a classic case of the depth problem.

Communication

Sima said a lot of communication is “the biggest waste of time.” He outlined the pain many feel when responding to requests to report on what they and their teams are working on: you write a report, your manager writes a report, and so on up the chain until it reaches the board as summarized points, which the board looks at and says, “okay, this looks good.” Another example he shared is how a simple question, like the risk profile for a certain asset, results in a lengthy, manual data exercise.

“…there's not one product that will give that answer. I have to go to my CSPM (cloud security posture management), get that answer. I have to go to the DSPM (data security posture management). I have to go to the ASPM (application security posture management). I have to go to the IAM (identity and access management). I don't even know what those letters mean. All I know is they're all these letters in different products that all have to be merged into one report for the CISO to look at to say, okay, it looks like we're doing well here.”
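Read as code, the “lengthy, manual data exercise” is a join across products that don't share a schema. A hypothetical sketch, with every URL and response shape invented for illustration:

```python
import requests

# Every URL and field below is invented for illustration; real CSPM/DSPM/
# ASPM/IAM products each expose their own API and schema.
SOURCES = {
    "CSPM": "https://cspm.example.com/api/findings",
    "DSPM": "https://dspm.example.com/api/findings",
    "ASPM": "https://aspm.example.com/api/findings",
    "IAM":  "https://iam.example.com/api/findings",
}

def asset_risk_report(asset_id: str) -> dict:
    """Pull one asset's findings from each product into a single view."""
    report = {}
    for product, url in SOURCES.items():
        resp = requests.get(url, params={"asset": asset_id}, timeout=10)
        resp.raise_for_status()
        report[product] = resp.json()  # four different schemas, merged by hand today
    return report
```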

How AI can help

“I actually think these three problems are the three things AI actually excels in. Take, for example, coverage. If I can have 10,000 smart junior security engineers, what would I have them do? I would definitely have an engineer looking at every single [access] right or object going into an S3 bucket,” said Sima.

“I would have them monitoring the engineering slack groups to say, oh, they're talking about this. ‘Is this security related? Is it not? Should I do something about it? Should I not?’ I would absolutely have them looking at every log file going to every system to say, ‘is this something suspicious?’” 
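A minimal sketch of one of those “junior engineers”: an LLM asked Sima's triage question about a single message or log line. It uses the OpenAI Python SDK; the model name and prompt are assumptions, and a production version would need batching, rate limits, and output validation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def looks_security_relevant(text: str) -> bool:
    """Ask an LLM the junior-engineer question: 'is this something suspicious?'"""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model would do
        messages=[
            {"role": "system",
             "content": "You triage Slack messages and log lines for a security "
                        "team. Answer with exactly YES or NO: is this security relevant?"},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

# e.g. looks_security_relevant("50 failed root logins from 203.0.113.7 in 2 minutes")
```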

AI, in other words, will soon be that army of security engineers.

Sima ended his talk by showcasing similar before-and-after examples across common cybersecurity workflows, where AI-based, hyper-scaled, automated coverage, context, and communication bring a sea change for professionals.

If the future is as Caleb Sima envisions it, imagine what it would mean for cyber teams to have total situational awareness, precise decision-making, and little or no manual drudgery. No anomaly, alert, intrusion, notification, or vulnerability would ever go unnoticed.

What to read next 

CSA AI Summit keynote: The art of the possible with zero trust 
Adding a twist to the epic of vulnerability management
Fighting Fire with Fire: Generative AI vs AI-Powered DLP