
The CISO's guide to AI: Embracing innovation while mitigating risk

Feb 15, 2024

The CISO role has evolved over the last 20+ years. And, believe it or not, it’s not because of the latest SEC regulations. The role has always been about balancing progress with protection, even if we’ve not historically been amazing at it. 

We're always being asked to do more with less: advancing business objectives while staying within budget. Managing risk in this type of environment is difficult, to say the least. Yet there may be hope on the horizon from an unlikely source. In fact, it comes in a form that some CISOs see as one of the riskiest they've encountered in their careers. 

Of course, I am talking about AI. We have entered the AI age. From this point on, the delicate dance between innovation and risk mitigation will be even more complex. While AI promises groundbreaking solutions and increased efficiency, its nascent nature raises security, ethical, and moral concerns.

Fear, however, cannot keep us from taking appropriate action. Blocking AI outright hinders innovation and puts companies at a competitive disadvantage in terms of both protection from adversaries and competition in business. The key lies in proactive, informed leadership. As a CISO, understanding AI and its risks is crucial to effectively managing its implementation and reaping its benefits, without putting the organization at risk.

As a colleague of mine at Zscaler, Sean Cordero, has reminded me many times: AI is just one more technological advancement we must accommodate. CISOs had the same uncertainty about bring-your-own-device (BYOD) and cloud adoption, both of which are now common. It's likely AI will become ubiquitous, which means CISOs must know how to manage, guide, and lead AI's adoption. 

Navigating the Landscape: Key Considerations for CISOs


1. Demystifying AI

I tell my team not to work in fear, uncertainty, or doubt (FUD) about the new or unknown. When it comes to a new technological advancement like AI, we need to stop, learn, and absorb. We must gain a grasp of the basics. AI encompasses various techniques like machine learning, natural language processing, and computer vision. Each has its own risks. Familiarize yourself with the specific AI applications your company is exploring and their potential security implications.

Whenever I'm evaluating solutions, I think to myself, "How can I get to yes?" What guardrails must be in place for me to be confident the initiative is proceeding in a managed way, not running in the background as shadow IT without the confidence of the business? 

While building our technical knowledge, we also must research associated risks.

2. Understanding common risks

I like to start with industry best practices when identifying risks. For AI, I've found the OWASP AI/ML Top 10 to be a good starting point. The framework identifies the ten most critical risks associated with AI systems. It's a valuable resource for understanding the attack surface and prioritizing mitigation strategies. Key areas of concern include:

  • Data poisoning: Malicious actors injecting manipulated data to sway the AI's output.
  • Model inversion: Extracting sensitive information from the AI model itself.
  • Privacy violations: Unintentional or deliberate misuse of personal data used to train the AI.
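To make the first of these concrete, here is a toy sketch of data poisoning: a few mislabeled training samples injected by an attacker can flip a model's decision on a target input. The nearest-centroid classifier and all data points are invented for illustration, not taken from any real system.

```python
# Toy data poisoning demo: injecting mislabeled "benign" samples drags the
# benign class centroid toward the target, flipping the classification.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def predict(train, x):
    """Nearest-centroid classifier: pick the class whose mean is closest to x."""
    by_class = {}
    for point, label in train:
        by_class.setdefault(label, []).append(point)
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(by_class, key=lambda lbl: dist2(centroid(by_class[lbl]), x))

clean = [((0.0, 0.0), "benign"), ((1.0, 1.0), "benign"),
         ((8.0, 8.0), "malicious"), ((9.0, 9.0), "malicious")]
target = (5.0, 5.0)

# Attacker injects points near the target, mislabeled as "benign".
poisoned = clean + [((6.0, 6.0), "benign"), ((7.0, 7.0), "benign")]

print(predict(clean, target))     # malicious
print(predict(poisoned, target))  # benign
```

The same principle scales to real ML pipelines: if training data is ingested from sources an attacker can influence, the model's outputs can be steered without ever touching the model itself.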

Thankfully, we're not starting from scratch. Just as when starting a security program, we can build around a framework that acts as a blueprint for what we're building. It provides something to show (or "sell") to executive staff and helps measure progress along the way. 

We can also look to see what regulations and guidance already exist for AI. After all, AI isn't new. OpenAI (of ChatGPT fame) was founded back in December of 2015. 

3. Understanding AI regulations

The regulatory landscape surrounding AI is rapidly evolving. Stay informed about relevant regulations in your region, like the EU AI Act and the U.S. Executive Order on AI. These regulations impose specific requirements on data collection, transparency, and accountability when using AI.

Organizations like the National Institute of Standards and Technology (NIST) offer valuable guidance on developing and deploying trustworthy AI. Familiarize yourself with the NIST AI Risk Management Framework (RMF) and consider implementing its recommendations within your organization.

Security shouldn't be an afterthought. Integrate AI risk management into your overall security strategy. Foster a culture of security awareness within your organization by educating employees on potential AI risks and their role in mitigating them.

Mitigating AI risks: Proactive strategies

While artificial general intelligence (AGI) – AI capable of understanding and learning any intellectual task – might seem distant, it's important to be prepared. Here's some advice on how to approach both AI and eventually AGI risks.

CISOs should insist upon:

1. Transparency and explainability – Demand transparency from AI developers and vendors. Understand how the AI works and the rationale behind its decisions. This helps identify potential biases and vulnerabilities. 

2. Human oversight and control – AI systems shouldn't operate autonomously. Implement robust human oversight mechanisms to monitor and intervene when necessary. I've also seen this being referred to as HITL, or human in the loop, which isn't new or unique to AI. 

And, by all means, we have to review anything coming out of a system. It's the old adage "trust but verify." Treat any generated data as if an intern assembled it. Before you pass it along, review it for accuracy and completeness. This is where you'll identify things like bias (which could be inferred from the prompt), as well as AI "hallucinations," where it makes things up when responding to a user prompt. 
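A human-in-the-loop gate can be sketched in a few lines: AI output is held in a pending state and cannot be published until a reviewer approves (or edits) it. The `Draft`, `review`, and `publish` names below are illustrative, not from any particular product.

```python
# Minimal human-in-the-loop (HITL) gate: generated content never ships
# until a human has reviewed it, and the reviewer may correct it first.

from dataclasses import dataclass

@dataclass
class Draft:
    prompt: str
    ai_output: str
    status: str = "pending"   # pending -> approved / rejected
    final_text: str = ""

def review(draft, approved, edited_text=None):
    """Record the human decision; only approved drafts carry final text."""
    if approved:
        draft.status = "approved"
        draft.final_text = edited_text or draft.ai_output
    else:
        draft.status = "rejected"
        draft.final_text = ""
    return draft

def publish(draft):
    """Enforce the guardrail: unreviewed or rejected content never ships."""
    if draft.status != "approved":
        raise PermissionError("human review required before publishing")
    return draft.final_text

d = Draft(prompt="Summarize Q3 incidents", ai_output="Three incidents occurred...")
review(d, approved=True, edited_text="Three incidents occurred (verified).")
print(publish(d))  # Three incidents occurred (verified).
```

The key design choice is that `publish` refuses anything not explicitly approved, so skipping the review step fails loudly instead of silently shipping unverified AI output.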

3. Continuous threat monitoring – Stay vigilant. Regularly assess your AI systems for vulnerabilities and adapt your security measures as the threat landscape evolves. As developers continue to test, stay engaged, open, approachable, and coachable. 

4. Collaboration and communication – As mentioned, engage in open dialogue with stakeholders like developers, legal teams, and business leaders. Share your concerns and work together to develop comprehensive risk management strategies.

Form partnerships with other organizations. Understand what they are doing with AI. How they are adopting it and what guardrails they are putting in place. None of us is alone. We have a community. Use it. This also means being willing to share what you're doing. Community and relationships ought to be two-way to be real. Find people you can trust (internally and externally) and share. 

5. Invest in security research – Support research into robust security solutions for AI. Contribute to industry initiatives and collaborate with academic institutions to accelerate the development of effective safeguards.

Remember, it's not AI vs. security. It's AI with security. Imagine how much better prepared we will be with AI-enabled security analysts who are empowered to combat AI-enabled cyber-criminals. 

By taking a proactive, informed approach, CISOs can navigate the exciting world of AI while still effectively managing its risks. Embrace AI as a powerful tool but remember your responsibility to ensure its safe and ethical implementation. Through knowledge-building, collaboration, and a commitment to continuous improvement, CISOs can (and should) lead the way towards a future where AI empowers progress without compromising security.
