Anthropic AI-Orchestrated Attack: The Detection Shift CISOs Can’t Ignore
For the longest time, we’ve resisted the urge to talk about why Deception is a great way to counter end-to-end AI-orchestrated cyber attacks. Why? Because discussing how to fight back against a threat that is still hypothetical only spreads FUD (fear, uncertainty, and doubt) and confusion.
Until yesterday, there was no credible report of AI being used to pull off a sophisticated autonomous attack. That changed when Anthropic published a report detailing the first reported AI-orchestrated cyber espionage campaign. If you work in security, chances are you’ve already read about the attack. If you haven’t, you should – it’s a fascinating read.
What happened?
Anthropic reported a Chinese state-sponsored campaign that automated 80–90% of tactical operations using agentic AI – recon, exploit validation, credential harvesting, lateral movement, data analysis, and exfiltration. Human oversight was limited to escalation steps. Approximately 30 organizations were targeted, including major tech companies and government agencies. None of the victims came forward and Anthropic didn’t disclose names, but the detailed account verified a handful of intrusions that resulted in credential harvesting, lateral movement, and data collection.
What does this attack mean for threat detection?
The good news is that this attack followed an old playbook. The bad news is that the old playbook can now be executed at machine speed and with significant parallelism. That change has massive implications for threat detection.
Here’s a breakdown of the impact.
| Human adversary | AI-agent adversary |
| --- | --- |
| Episodic, bursty activity | Constant, tireless, and adaptive, breaking timing-based detection rules |
| Limited parallelism (1–3 threads) | Massive parallelism, breaking timeline reconstruction capabilities |
| Minutes to hours to rethink attack strategy or adjust to new information | Millisecond feedback loops, with the ability to try all options simultaneously, affecting prioritization |
| Cognitive limits on thoroughness, leading to mistakes | No cognitive exhaustion, and the ability to creatively engage in permutation generation, path traversal, and OPSEC discipline well beyond human capacity |
| Hesitation and caution | Relentless pursuit of maximum reward |
How Deception can combat AI-based attacks
For years, people have questioned the efficacy of Deception technology, and the question resurfaces with AI-based attacks. The most common objection has been: “What if the attacker does not engage with a decoy?”
The attack that Anthropic detailed shows how AI agents engaged in massive parallelism, chasing all viable options simultaneously. So the question of engagement with a decoy has turned from probability into certainty.
Consider the following assumptions about autonomous hacking agents. They:
- Don’t hesitate
- Don’t weigh caution to the same degree as a human would
- Are designed to maximize reward signals
- Will follow the next most promising step
- Treat everything as a possible vector
- Correlate context beyond human adversary capabilities
A comprehensive Deception strategy means deploying:
- Decoy login pages for attractive targets
- Honey-token accounts that seem to have high privileges
- Decoy machines like databases and file shares
- Decoy files with passwords and decoy credentials
It follows that an AI agent is extremely likely to aggressively pursue these opportunities, generating early warning signals for the security team.
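To make this concrete, here’s a minimal sketch of how a honey-token credential lure could be seeded by hand. The file path, key format, and registry file are hypothetical illustrations – a platform like Zscaler Deception automates this lifecycle, including alerting when the lure is later read or used.

```python
# Minimal sketch: generate a decoy AWS-style credentials file whose values are
# unique, so any later use of them can only come from something scraping the lure.
# Names, paths, and the registry hook are hypothetical illustrations.
import secrets
import json
from pathlib import Path

def make_decoy_credentials(lure_id: str) -> dict:
    """Build a fake-but-plausible credential pair tagged with a unique lure ID."""
    return {
        "aws_access_key_id": "AKIA" + secrets.token_hex(8).upper(),
        "aws_secret_access_key": secrets.token_urlsafe(30),
        "lure_id": lure_id,  # lets the SOC map an alert back to the planted file
    }

def plant_decoy(path: Path, registry: Path) -> None:
    creds = make_decoy_credentials(lure_id=secrets.token_hex(4))
    # Write the lure where an attacker (or agent) harvesting secrets would look.
    path.write_text(
        "[default]\n"
        f"aws_access_key_id = {creds['aws_access_key_id']}\n"
        f"aws_secret_access_key = {creds['aws_secret_access_key']}\n"
    )
    # Record the lure so the detection pipeline can alert on any use of these values.
    with registry.open("a") as f:
        f.write(json.dumps(creds) + "\n")

if __name__ == "__main__":
    plant_decoy(Path("/tmp/decoy_aws_credentials"), Path("/tmp/lure_registry.jsonl"))
```

Because the planted values exist nowhere else, any subsequent read of the file or use of the key can only originate from secret harvesting, which is what makes the resulting alert so high-confidence.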
Deception combats agentic decision-making with:
- High-signal, intent-based detection: Only adversaries trip decoy credentials, decoy services, or decoy files. These high-fidelity signals lead to early detection and fast triage (see the sketch after this list).
- Low cost and low operational overhead: Decoys don’t sit inline. They don’t break apps.
- Defense-in-depth multiplier: Deception amplifies your identity, endpoint, network, and data-layer controls, catching early pivots that other tools either miss or see later and with more noise.
- Adversary cost asymmetry: Decoys introduce unpredictability into your environment. Attackers, whether human or AI, must probe assets and artifacts to exploit them. Probing leads to detection, slowing AI-driven campaigns and buying your SOC time.
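To illustrate the first point, here’s a minimal sketch of what intent-based detection looks like in practice: any authentication event that references a honey-token account is malicious by construction. The log schema and account names below are hypothetical placeholders.

```python
# Minimal sketch of intent-based detection: a login with a honey-token account has
# no benign explanation, so it can be alerted on directly with no scoring step.
# The JSON-lines log schema and account names are hypothetical placeholders.
import json

HONEY_ACCOUNTS = {"svc-backup-admin", "sqladmin-legacy", "vpn-breakglass"}

def decoy_hits(auth_log_path: str):
    """Yield alert records for events where a honey-token account was used."""
    with open(auth_log_path) as f:
        for line in f:
            event = json.loads(line)
            if event.get("username") in HONEY_ACCOUNTS:
                yield {
                    "severity": "critical",  # no benign reason to use these accounts
                    "username": event["username"],
                    "source_ip": event.get("source_ip"),
                    "timestamp": event.get("timestamp"),
                }

if __name__ == "__main__":
    for alert in decoy_hits("auth_events.jsonl"):
        print(json.dumps(alert))
```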
Maybe you're wondering ...
- “Anthropic is adding guardrails – won’t that stop this?”
Guardrails reduce abuse, but they can't eliminate it. The threat actor bypassed safeguards via role-play and orchestration. Plan for autonomy on offense; invest in autonomy on defense.
- “We already have EDR/XDR, ITDR, NDR, DLP.”
Good – Deception makes them better. It supplies early, high-confidence signals that route investigations to real pivots. Pairing decoy hits with identity containment, endpoint isolation, and access controls cuts dwell time.
Breaking an AI-orchestrated kill chain with Deception
We pulled the attack stages from the Anthropic report and mapped each stage to decoys you can deploy for early detection, why they’re effective, and how to pair them with Zscaler and other security controls for faster triage, investigation, and containment.
| Attack stage | Attacker actions | Deception strategy | Why it’s effective | Pair with Zscaler + ecosystem controls |
| --- | --- | --- | --- | --- |
| Pre-breach recon | Role-play “defensive testing,” seed targets, kick off parallel campaigns | Threat Intelligence Decoys: decoy VPNs, bug-bounty portals, test bed applications with vulnerabilities like Log4j | Converts pretext into tripwires and forces early interaction with controlled artifacts | Auto-enrich (VirusTotal/GreyNoise/AbuseIPDB) and block via Zscaler Internet Access or your firewall |
| Discovery and attack surface mapping | Enumerate apps, auth, services, and topology via scanners/MCP | Network Decoys with personalities and services (FTP, SSH, HTTP/S, custom banners); Vulnerable Application decoys; Zero Trust Network Decoys for private apps | High fidelity because only attackers probe these assets; realistic banners and CVE datasets bait autonomous tooling | NDR/IDS via SIEM integrations (Splunk, Sentinel, QRadar); CSPM posture; ZPA App Connector telemetry; orchestrated auto-isolation of source subnets |
| Vulnerability discovery and validation | Generate payloads (e.g., SSRF/RCE), validate callbacks, establish a foothold | High-Interaction Containers (web app, custom protocol); decoy files | Captures payloads, IOCs, and callback fingerprints at low noise while sandboxing the exploit flow | CrowdStrike / Microsoft Defender / Palo Alto / Fortinet containment integrations; Zscaler Sandbox, Joe Sandbox/WildFire for evidence detonation |
| Initial access and persistence | Establish access, drop configs/implants | Active Directory Decoys (decoy users/computers, SPNs, constrained delegation); Cloud Decoys (GCP/Azure/AWS IAM roles, Managed Identities, Service Principals) | Touching identity paths triggers deterministic alerts and enables immediate containment | ITDR with Okta AI containment; Amazon GuardDuty containment; ZPA policy blocks; orchestration rules to revoke tokens and kill sessions automatically |
| Credential harvesting | Query services, scrape secrets, test creds across systems | Landmine Decoys: password, defense evasion, privilege escalation, and browser/session/file lure modules; decoy files; Key Vault/S3 decoys | Converts credential testing and secret scraping into precise detections across endpoint and cloud | CrowdStrike / Carbon Black containment; Conditional Access step-up; SIEM correlation via event logs/evidence export |
| Lateral movement | Authenticate to APIs/DBs/registries/logging; build access maps | Decoy DBs (RDS/DynamoDB); object storage decoys (S3/Azure Storage); ECR/container registry decoys; app decoys; admin console personalities | Forces pivots through monitored chokepoints and fingerprints role discovery | Microsegmentation via ZPA; service allow-lists; ZIA/ZPA containment integrations; orchestration rules to isolate VPC/subnet on decoy touch |
| Data collection and analysis | Map schemas, pull accounts/hashes, create backdoor users, bulk downloads | Dark-data beacons via file datasets (static/dynamic); “crown jewel” datasets; backdoor-user traps in AD/Azure | Intent-based detections at query/exfiltration time with high attribution quality | Zscaler DLP; hard egress allow-lists; database activity monitoring via SIEM |
| Exfiltration staging and transfer | Finalize targets, stage and exfiltrate data | Egress deception: fake S3 buckets/URLs, SaaS connectors via Cloud Decoys | Catches staging and test transfers and blocks real channels while preserving evidence | TLS inspection in ZIA; Amazon GuardDuty; ZIA/Palo Alto/Check Point firewall containment; orchestrated auto-block |
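As one example of the “auto-enrich, then block” pattern referenced in the table, here’s a minimal sketch that enriches the source IP of a perimeter-decoy hit via the public VirusTotal v3 API and hands it to a placeholder blocklist function. The decoy-hit input and the block_ip() handoff are hypothetical; in practice this runs as an orchestration rule or SOAR playbook.

```python
# Minimal sketch of "auto-enrich, then block": look up the source IP of a
# perimeter-decoy hit on VirusTotal (v3 IP address endpoint), then hand it to
# whatever edge blocklist mechanism you use. block_ip() is a placeholder.
import os
import requests

VT_URL = "https://www.virustotal.com/api/v3/ip_addresses/{ip}"

def enrich_ip(ip: str) -> dict:
    resp = requests.get(
        VT_URL.format(ip=ip),
        headers={"x-apikey": os.environ["VT_API_KEY"]},
        timeout=10,
    )
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return {"ip": ip, "malicious_votes": stats.get("malicious", 0)}

def block_ip(ip: str) -> None:
    # Placeholder: push to a ZIA / firewall blocklist via your orchestration tooling.
    print(f"[action] add {ip} to the edge blocklist")

def handle_decoy_hit(ip: str) -> None:
    # Block regardless of reputation: the decoy touch is itself the signal.
    # Enrichment just adds context for the analyst.
    context = enrich_ip(ip)
    block_ip(ip)
    print(f"[context] {context}")

if __name__ == "__main__":
    handle_decoy_hit("198.51.100.7")  # TEST-NET example address
```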
Implementation blueprint
Deploy the following types of decoys and automated responses to find and contain bad actors quickly across your full environment:
- Perimeter deception: Plant decoys of VPNs, firewalls, email servers, and test bed applications running vulnerable software. With Zscaler Deception, the decoys aren’t even hosted in your environment and require only a DNS A record pointing to the decoy hostname. These decoys provide early, pre-breach private threat intel and can be blocked by ZIA or by your firewall.
- Identity-first deception: Seed honey users, SPNs, roles, and keys that match your real RBAC and group structures. Wire first touch to automatic revocation and stepped-up authentication.
- Deception realism: Stand up decoy web services, databases, registries, and admin consoles through templates matching your stack (versions, banners, auth flows). Rotate banners and minor misconfigs periodically.
- Dark-data canaries: Embed decoy files that act as beacons in sensitive-looking datasets and configuration fields; monitor at query/exfiltration time with deterministic alerts.
- Response automation: Treat a decoy hit as a containment and orchestration trigger that revokes tokens, isolates hosts/sessions, blocks egress, and then routes enriched context to SOC playbooks and IR (see the sketch after this list).
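Here’s a minimal sketch of that last point – a webhook receiver that treats any decoy event as a deterministic containment trigger. The event schema and the three containment functions are hypothetical stand-ins for whatever your IdP, EDR, and edge controls (e.g., ZIA/ZPA) expose via API or SOAR playbook.

```python
# Minimal sketch of "decoy hit as containment trigger": a webhook receiver that
# turns any decoy event into revoke / isolate / block actions. The event schema
# and the three containment functions are hypothetical placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)

def revoke_tokens(user: str) -> None:
    print(f"[action] revoke sessions and tokens for {user}")  # IdP / ITDR call goes here

def isolate_host(host: str) -> None:
    print(f"[action] network-isolate {host}")                 # EDR containment call goes here

def block_egress(ip: str) -> None:
    print(f"[action] block egress from {ip}")                 # edge / firewall policy call goes here

@app.route("/decoy-hit", methods=["POST"])
def decoy_hit():
    event = request.get_json(force=True)
    # A decoy touch is deterministic: no scoring step, go straight to containment.
    if event.get("user"):
        revoke_tokens(event["user"])
    if event.get("host"):
        isolate_host(event["host"])
    if event.get("source_ip"):
        block_egress(event["source_ip"])
    return jsonify({"status": "contained", "decoy": event.get("decoy_name")}), 200

if __name__ == "__main__":
    app.run(port=8443)
```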
Operating guidance for CISOs, security architects, and SOC leaders
Consider the following metrics and operational considerations for your deception strategy:
- Measure dwell-time reduction: Track how decoy hits correlate with actual malicious investigations and how quickly they trigger containment. Expect fewer false positives and faster triage (a sketch for this metric follows the list).
- Budget and staffing: Deception is not a staffing burden – it’s a design task. With Zscaler Deception, a single person can launch a deception campaign from pre-built templates, and SOC automation handles the response.
- Cloud and SaaS: Don’t stop at networks and endpoints. Plant deception in AD, cloud workloads, and data stores where AI-driven campaigns naturally pivot.
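For the dwell-time metric above, here’s a minimal sketch that computes the median minutes from the first decoy touch in an incident to its containment. The field names and sample timestamps are hypothetical; feed it whatever your SIEM or case-management system exports.

```python
# Minimal sketch of a dwell-time metric: median minutes from first decoy touch to
# containment. Field names and the sample records are hypothetical illustrations.
from datetime import datetime
from statistics import median

def minutes_to_containment(incidents: list[dict]) -> float:
    """incidents: [{'first_decoy_touch': ISO 8601 str, 'contained_at': ISO 8601 str}, ...]"""
    deltas = [
        (datetime.fromisoformat(i["contained_at"]) -
         datetime.fromisoformat(i["first_decoy_touch"])).total_seconds() / 60
        for i in incidents
    ]
    return median(deltas)

if __name__ == "__main__":
    sample = [
        {"first_decoy_touch": "2025-11-14T09:02:00", "contained_at": "2025-11-14T09:06:30"},
        {"first_decoy_touch": "2025-11-15T22:41:00", "contained_at": "2025-11-15T22:49:10"},
    ]
    print(f"median decoy-to-containment: {minutes_to_containment(sample):.1f} minutes")
```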
Closing thoughts
Deception converts AI-orchestrated cyber attacks into decisive signals at minimal cost and operational weight. It won’t replace other tooling in your security stack, but it will make that tooling meaningfully better by identifying autonomous attackers at the moments their intent is visible. The right move is not to wait for AI guardrails to solve misuse. Pair adaptive, coherent deception with identity, endpoint, network, and data controls to catch attacks early, contain them quickly, and shrink dwell time.
Request a demo to learn more about how Zscaler Deception can break the kill chain for AI-orchestrated attacks.