Takeaways from a discussion on GenAI security and governance

Rob Sloan

Contributor

Zscaler

Jun 21, 2024

Executives eye strategic approaches to GenAI rather than succumbing to “whack-a-mole.”

At Zenith Live 2024 in Las Vegas, a group of CXOs discussed the risks and threats stemming from the use of GenAI in enterprises. The conversation, led by Zscaler’s senior director of data protection, Venkat Krishnamoorthi, and VP of product development, Moinul Khan, also demonstrated how Zscaler solutions can help reduce those risks.

This article captures the four key points raised and the context behind them.

1. You cannot simply block use of GenAI. Having visibility into usage is crucial.

Use of the various GenAI applications is so widespread that trying to block it is akin to playing whack-a-mole. Even when specific destinations can be blocked, doing so does not stop users from finding workarounds, such as using mobile devices or secondary machines and transferring outputs back to the work environment via email or other communications applications.

Blocking might even increase risky behavior by employees. While a corporate-endorsed solution might give users guidance on proper information handling, exporting data and documents to other machines for processing creates additional copies of potentially sensitive data and does nothing to restrict which GenAI applications are used.

Rather than blocking use outright, understanding which users interact with GenAI, which applications they use, and what prompts they create will inform more constructive policies. Holding back the tsunami is no longer an option.
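
To make the visibility point concrete, here is a minimal sketch of tallying GenAI destinations per user from a generic web-proxy log. The CSV layout, column names, and domain list are hypothetical stand-ins, not a specific Zscaler export or application catalog.

```python
import csv
from collections import Counter, defaultdict

# Hypothetical watchlist; a real deployment would rely on a maintained
# application catalog rather than a static set of domains.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def summarize_genai_usage(log_path):
    """Count GenAI destinations per user from a generic proxy log.

    Assumes a CSV with 'user' and 'host' columns -- a stand-in for
    whatever export a secure web gateway actually provides.
    """
    usage = defaultdict(Counter)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if host in GENAI_DOMAINS:
                usage[row["user"]][host] += 1
    return usage

if __name__ == "__main__":
    for user, apps in summarize_genai_usage("proxy_log.csv").items():
        print(user, dict(apps))
```

A summary like this answers the first-order questions (who, what, how often) before any policy decisions are made.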

2. Data leakage by employees inputting sensitive information into public generative AI apps is a major risk that needs to be mitigated.

Employees seek ways to perform their duties more efficiently and use AI to take care of some of the more mundane aspects of their roles. In other cases, employees are asked to undertake tasks they cannot complete manually, leaving them little choice but to use AI if they are to deliver against their objectives. In both cases, users are acting with the best of intentions, but with little or no thought for data security and privacy.

Many companies are wholly unprepared for the risk of regulated or sensitive corporate data being uploaded to GenAI platforms, where it could be used for training purposes. Though it is unlikely such data would later be made public, regulatory disclosures may still be necessary. Venkat Krishnamoorthi said companies are particularly concerned about attracting unwanted attention from regulators: “It’s so new that no one wants to be that use case or white paper that ends up defining what the regulatory stance is.”

3. Granular controls like browser isolation, DLP policies, and customized allow/block lists can enable secure adoption of GenAI.

Zscaler products have features that CXOs can use to better understand and control the use of AI within their organizations. Moinul Khan said that Zscaler Internet Access (ZIA) customers can use ‘the sledgehammer solution’ and block all traffic to GenAI sites, or block on a more selective basis: site by site or user by user.

“You can also caution the user,” Khan added. “The moment you see [an AI application] on the wire, you can send a notification saying, ‘Hey man, you're using ChatGPT, please make sure that you don't post any of our company information.’” He likened this approach to a simple form of user coaching.
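
A minimal sketch of this tiered approach is below: block outright, caution with a coaching message, or allow. The rule set, group names, and message text are hypothetical; Zscaler’s actual policies are configured through its admin console, not written as code like this.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    CAUTION = "caution"  # allow, but show a coaching notification

# Hypothetical rules: a default action per app, with per-group overrides.
POLICY = {
    "chat.openai.com": {"default": Action.CAUTION, "finance": Action.BLOCK},
    "unvetted-ai.example.com": {"default": Action.BLOCK},
}

COACHING_MESSAGE = ("You're using a GenAI app. Please make sure you don't "
                    "post any company information.")

def decide(host, user_group):
    """Return the policy action for a request to host from user_group."""
    rules = POLICY.get(host.lower())
    if rules is None:
        return Action.ALLOW  # not a tracked GenAI destination
    return rules.get(user_group, rules["default"])

if __name__ == "__main__":
    action = decide("chat.openai.com", "engineering")
    if action is Action.CAUTION:
        print(COACHING_MESSAGE)  # coach rather than block
```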

Browser isolation can also be used to control how users interact with websites, and Zscaler’s inline data loss prevention (DLP) can restrict certain types of data from being submitted to certain sites. If organizations are sufficiently concerned, a further, more invasive option is available: recording the prompts users enter to understand exactly what they are asking of the technology.
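
Conceptually, inline prompt inspection comes down to checking outbound text against sensitive-data detectors before it leaves the network. The sketch below uses simple regular expressions as stand-ins; production DLP engines use far richer detection (dictionaries, exact data matching, ML classifiers) than anything shown here.

```python
import re

# Illustrative patterns only; real DLP relies on curated detectors.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def inspect_prompt(prompt):
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items()
            if rx.search(prompt)]

hits = inspect_prompt("Summarize this CONFIDENTIAL report for me")
if hits:
    print(f"Blocked: prompt matched {hits}")  # block or redact before upload
else:
    print("Prompt allowed")
```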

4. Concerns around safety, security, bias, hallucination, liability, and intellectual property need to be addressed for enterprise adoption.

There are numerous examples of companies rolling out GenAI applications to customers without thorough testing beforehand. For example, an AI-powered bot at a Chevy dealership in California offered to sell a truck for one dollar ($58,000 below the asking price) after a user issued prompts that manipulated its responses. The same bot also recommended a Tesla to another user instead of a Chevy. GenAI bots should be tested internally until such problems are addressed. Though the impact here was reputational, wrong answers could leave a business open to legal action.

One CXO representing a fintech organization described a request from the development team for access to GitHub Copilot for AI-powered code generation to improve efficiency. The organization completed a security review of GitHub, then looked at its own internal processes to ensure the AI-generated code was appropriately identified and scanned for vulnerabilities, thereby strengthening security.
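
As a rough illustration of the “identify and scan” step, the sketch below assumes a team convention (entirely hypothetical) of tagging AI-assisted Python files with a header comment, then hands the tagged files to the open-source Bandit static analyzer.

```python
import pathlib
import subprocess

AI_MARKER = "# ai-assisted"  # hypothetical tagging convention

def find_ai_assisted_files(repo_root):
    """Collect Python files whose first line carries the AI-assisted marker."""
    tagged = []
    for path in pathlib.Path(repo_root).rglob("*.py"):
        with open(path, encoding="utf-8", errors="ignore") as f:
            if f.readline().strip().lower().startswith(AI_MARKER):
                tagged.append(str(path))
    return tagged

if __name__ == "__main__":
    files = find_ai_assisted_files("src")
    if files:
        # Bandit scans Python source for common security issues.
        subprocess.run(["bandit", *files], check=False)
```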

Another CXO, this time from a manufacturer, raised the issue of innovation and ownership. The legal advice they had received was that anything created by AI would not necessarily belong to the company: “If somebody goes in and creates a product with an AI tool, there's no getting patents on that.”

A developing topic

The discussion among the participants highlighted that we are still in the very early stages of understanding the use cases, risks, and mitigations around GenAI. For businesses that are uncertain about the risks, blocking everything may be the best short-term measure, but it is not a permanent solution.

As Moinul Khan put it: “Cars have brakes so they can go faster.” While blocking is in place, IT and security teams must work to develop guardrails that allow users to quickly start benefiting from what GenAI can offer. Zscaler’s solutions can help organizations achieve this.
