
CXO Insights

The Director's Cut: When AI Agents Go Off-Script

ROB SLOAN
May 11, 2026 - 5 min read

Board-level cyber risks requiring oversight: autonomous AI agents acting beyond their instructions, AI governance creating new D&O liability exposure, stolen travel data weaponized within days, and a Russian-linked attack on European energy.

 

AI Agents Are Making Decisions They Were Never Asked to Make

Companies deploying autonomous AI agents across business systems are seeing real productivity benefits, but a pattern is emerging: these agents are acting beyond their instructions in ways no one anticipated.

An experimental AI agent built on Alibaba’s technology independently began mining cryptocurrency and opened covert network tunnels to external servers during training. No one told it to. Last month at Meta, an AI agent responding to an engineering question instructed an employee to take actions that exposed sensitive user and company data internally for two hours, triggering a major security alert. In another recent case, an AI coding agent reportedly wiped a company database despite being explicitly told not to.

These are not edge cases. They illustrate “agent drift,” the gap between what an agent is told to do and what it actually does when it encounters obstacles or optimization opportunities. The emerging best practice is to apply zero trust principles to AI agents: rather than granting broad permissions, explicitly define which systems, data, and peer agents each one can reach. Just as users should only have permissions to access certain applications and data, AI agents should be treated as untrusted entities requiring continuous verification, meaning they are permissioned only for their specific task and nothing more.

What Directors Should Ask Management:

  • How many AI agents are operating within our environment, and do we have a complete inventory of what systems and data each one can access?
  • Are AI agents governed by zero trust principles, with access limited to the specific applications, data, and actions required for each task, or are they operating with broader privileges than any individual employee would be granted?
  • Have any of our AI agents acted beyond their intended scope, and how would we know if they had?

 

AI Governance Gaps Are Creating New D&O Liability Exposure

A new report from Aon warns that AI is accelerating governance expectations for boards. Courts, regulators, and insurers increasingly expect directors to understand how AI is used in their organizations and to show that risks, including model failure, data misuse, and third-party dependency, are being addressed. The most material exposures include oversight failures that lead to financial or regulatory harm, disclosure risk when AI is material to performance or strategy, and shareholder litigation tied to weak board oversight. 

The insurance market is responding. Aon reports that more than 90% of insurance decision-makers now view AI-driven incidents as a material concern, and D&O underwriters are increasingly evaluating governance maturity, disclosure discipline, and controls around data leakage and third-party AI exposure. Organizations that can demonstrate documented model testing, robust oversight, and safeguards against data leakage will secure more favorable terms and greater capacity.

What Directors Should Ask Management: 

  • Can we demonstrate to our D&O insurer that we have a documented AI governance framework, and has that framework been reviewed by the board within the last twelve months?

 

Booking.com Breach Weaponized Within Days

Booking.com confirmed on April 13 that hackers gained unauthorized access to customer reservation data including names, email addresses, phone numbers, and booking details. The company reset reservation PINs and began notifying affected customers. Within days, customers reported receiving convincing fake emails and WhatsApp messages impersonating the travel platform, using the stolen booking details to trick people into sharing payment card information.

Attackers have a narrow window to exploit stolen data before the affected company can notify customers and prompt them to take protective action. AI is compressing that window further, enabling hackers to rapidly generate convincing, personalized phishing messages at scale within hours of a breach. For directors, this highlights the importance of understanding not just how quickly a breach can be detected, but how fast the organization can notify affected parties and disrupt downstream fraud before it scales.

What Directors Should Ask Management: 

  • In the event of a breach involving customer data, how quickly can we execute notification and what processes exist to disrupt follow-on phishing or fraud campaigns that use the stolen information?

 

Russian-Linked Hackers Attempted Destructive Attack on Swedish Power Plant

Sweden’s government disclosed on April 15 that pro-Russian hackers linked to Russian intelligence attempted a destructive cyberattack against a thermal power plant. The attack, which took place in mid-2025 but was only revealed this month, was blocked by a built-in protection mechanism at the facility. Sweden’s defense minister described the incident as evidence of “riskier and more reckless behavior,” signaling a shift from espionage toward attempted disruption of critical infrastructure.

For directors, the lesson is broader than the energy sector. Nation-state cyber activity is becoming more aggressive, with operational disruption—not just data theft or ransomware—an increasingly plausible risk. Organizations connected to critical services, industrial environments, or essential supply chains should ensure resilience plans account for destructive attacks and that those plans have been tested.

What Directors Should Ask Management: 

  • Does our business continuity planning account for destructive cyberattacks on operational infrastructure, not just data theft and ransomware, and has it been tested within the last twelve months?

 

*****

Zscaler is a proud partner of NACD's Northern California chapter. We are here as a resource for directors to answer questions about cybersecurity or AI risks, and are happy to arrange dedicated board briefings. Please email rsloan[@]zscaler.com to learn more.
