AI security is the practice of protecting artificial intelligence systems, models, and data from manipulation, unauthorized access, and data leakage across the AI lifecycle—from training to deployment and user interaction.
• Protect AI systems: AI security protects data, models, and AI infrastructure—reducing breach risk while maintaining integrity, safety, and trust.
• GenAI expands the attack surface: GenAI boosts productivity but widens the attack surface via prompts, plugins, and APIs—plus shadow AI, leakage, and weak governance.
• Secure the AI lifecycle: Secure the full AI lifecycle with layered controls across data, training, deployment, and user interaction—then monitor drift and abuse.
• Apply governance frameworks: Apply NIST AI RMF and zero trust; use Zscaler to find shadow AI, control GenAI use, stop data loss, and support audits.
Why Is AI Security Important?
AI security is becoming a foundational pillar of modern cybersecurity strategies. As organizations accelerate AI initiatives, four security pressure points are emerging that can quickly undermine data integrity, confidentiality, and trust:
GenAI adoption risks: Rapid deployment can outpace security controls, expanding the attack surface across prompts, plugins, integrations, and model workflows.
Shadow AI: Employees may use unapproved AI tools for speed and convenience, bypassing IT oversight and creating unmanaged data and compliance exposure.
Data leakage via LLMs: Sensitive information can be inadvertently shared through prompts, training inputs, logs, or model outputs, risking IP loss and regulatory violations.
AI governance challenges: Without clear policies, ownership, and guardrails, it becomes difficult to enforce consistent standards for data handling, model usage, auditing, and accountability.
How AI Security Works
AI security works by applying layered controls across the full AI pipeline—not just at the perimeter—so risks are addressed where they originate and where they manifest. This end-to-end approach improves resilience, supports AI security best practices, and reduces operational surprises during scale-up.
Data collection: Classify and minimize data, enforce consent and retention rules, and protect sources to strengthen AI data security from the first mile.
Model training: Validate datasets, secure training infrastructure, and test for abuse cases (e.g., leakage, harmful behaviors) to reinforce AI model security.
Deployment: Harden runtime environments, isolate workloads, control tool/plugin access, and implement safe defaults for production-grade enterprise AI security.
User interaction: Apply prompt and content controls, redact sensitive inputs/outputs, and prevent unsafe tool execution to support safer generative AI security.
Monitoring: Continuously detect anomalies, abuse patterns, drift, and data leakage indicators—then respond with playbooks and corrective actions.
What are the Key Components of AI Security?
Maintaining a solid AI security posture typically revolves around several core pillars. These components aim to establish a framework that incorporates data protection, technical safeguards, and organizational readiness:
Data protection: Ensures that all training data remains accurate and shielded from exposure.
Continuous monitoring: Enables proactive risk management to recognize and respond to vulnerabilities in real time.
Risk assessment: Identifies and prioritizes AI-specific threats to guide targeted risk mitigation efforts.
Incident response: Establishing procedures for detecting, responding to, and recovering from security breaches and incidents involving AI applications.
AI governance and compliance: Adhering to relevant laws and regulations like the GDPR, CCPA, and AI Act, which are essential for navigating the complex landscape of AI applications.
AI Security vs. Traditional Cybersecurity vs. Cloud Security
AI security extends traditional cybersecurity—and overlaps with cloud security—by protecting not only systems and networks, but also cloud-hosted AI services, identities, configurations, and APIs, alongside model behavior, training data, and AI-driven decision pathways. In practice, organizations that prioritize AI security gain stronger governance, faster detection of AI threats, and better alignment with secure AI adoption in modern, cloud-first environments.
Aspect
AI Security
Traditional Security
Cloud Security
Primary assets protected
Models, prompts, training data, outputs, AI pipelines—plus the traditional assets
Networks, endpoints, applications, identities
Cloud services and workloads, identities, configurations, APIs, and data—plus the traditional assets
Typical attack surface
Prompting, plugins/tools, model APIs, training pipelines, model artifacts, AI-integrated workflows
Stronger need for formal AI governance and security due to rapid change and novel failure modes
All traditional signals plus prompt/response patterns, model drift, abuse indicators, data-flow anomalies
Strong need for shared-responsibility governance, policy-as-code, and continuous configuration monitoring across accounts and services
What Risks Does AI Security Address?
AI security risks span far beyond standard malware and network intrusion—they also target how models behave, what data they learn from, and how people interact with AI-enabled workflows. For enterprise AI security teams, understanding these risk categories is a practical starting point for secure AI adoption and consistent AI governance and security.
AI Model Risks
AI model security focuses on threats that alter, exploit, or manipulate model behavior—especially in generative AI security, where inputs and outputs can be weaponized at scale. These risks can degrade reliability, expose sensitive information, or create downstream business harm even when traditional controls appear “green.”
Model poisoning: Attackers manipulate training data or fine-tuning inputs so the model learns harmful patterns, hidden triggers, or biased behaviors.
Prompt injection: Malicious instructions are embedded in prompts or external content (e.g., documents, web pages, tools) to override system rules and coerce unsafe actions or disclosures.
Adversarial attacks: Carefully crafted inputs are designed to cause incorrect classifications or unsafe outputs, exploiting model weaknesses rather than software vulnerabilities.
Data Risks
AI data security addresses how sensitive information can be exposed through collection, storage, training pipelines, logs, and outputs. Because AI systems often aggregate and transform large datasets, a single weak point can create disproportionate confidentiality and compliance impact.
Training data leakage: Proprietary or personal data may be recoverable through model outputs, inference techniques, or insecure storage and sharing of training artifacts.
Sensitive data exposure: Users or systems may inadvertently submit confidential data via prompts, uploads, telemetry, or integrations—creating leakage and regulatory risk.
User Risks
Users are often the fastest path to compromise, and AI changes the scale and credibility of social engineering. In enterprise AI security, user-facing risks frequently blend technical exploitation with trust manipulation.
AI phishing: AI-generated messages can be highly targeted, polished, and context-aware, increasing click-through rates and credential theft success.
Deepfake attacks: Synthetic audio/video can convincingly impersonate executives or vendors, enabling fraud, account takeover, or payment redirection schemes.
Zscaler ThreatLabz 2026 AI Security Report
Get insights into the latest enterprise AI adoption trends, risks, and security strategies.
An AI security framework aligns people, process, and technology so organizations can manage AI security risks consistently across teams and vendors. It also provides a structured way to operationalize AI governance and security—from policy and design to deployment and continuous monitoring.
NIST AI RMF: A risk management approach that helps organizations map, measure, and manage AI risks across the AI lifecycle with governance and accountability.
Zero trust for AI: Extends zero trust principles to AI systems by continuously verifying identity, device, data access, and tool/model permissions—never assuming prompts, plugins, or integrations are safe by default.
Secure AI lifecycle: Embeds security controls into each stage (design, data handling, training, evaluation, deployment, and retirement) to reduce systemic risk rather than relying on after-the-fact fixes.
Model governance: Establishes ownership, approval workflows, auditing, and documentation so models are used as intended, changed safely, and monitored for drift and misuse.
Regulatory and Ethical Considerations in AI Security
As AI weaves into everyday operations, businesses operating under strict data protection regulations must address growing demands around privacy, fairness, and compliance. Several factors emerge when exploring regulation, ethics, and technology.
Data Protection Laws
Compliance with data protection regulations, including the California Consumer Privacy Act (CCPA), defines how AI solutions collect and handle personal information. Companies must disclose their data practices and safeguard user data to avoid legal liabilities. In many regions, consumers also have the right to understand and control how their data is utilized.
Ethical Transparency
As artificial intelligence security grows more pivotal, transparency fosters trust by revealing some of the logic behind AI-driven decisions. Clear documentation of algorithms and training processes promotes accountability and helps minimize biases. This approach also supports security teams in better understanding the factors influencing system behavior.
Protecting Intellectual Property
Organizations face significant challenges keeping proprietary models secure, especially in large language model (LLM) deployments. Ensuring the confidentiality of specialized code, unique model architectures, and advanced solutions preserves a competitive edge. Strong protective practices reduce the likelihood of leaks or thefts that could harm a company’s reputation.
Enterprise AI Security Best Practices
Enterprise AI security succeeds when controls are practical enough for daily use and strict enough to meaningfully reduce AI security risks. The following AI security best practices are designed to strengthen AI data security and AI model security while enabling innovation without uncontrolled exposure.
Secure GenAI usage: Define approved tools and use cases, require safe prompting patterns, and restrict risky features (e.g., unrestricted web/tool access) to support secure generative AI security at scale.
Implement AI access control: Apply least privilege to models, datasets, and tools; enforce strong authentication and role-based policies for who can use which models and capabilities.
Prevent AI data loss: Classify and redact sensitive data, restrict what can be pasted/uploaded, and apply DLP-like controls tailored to prompts, logs, and model outputs for stronger AI data security.
Inspect AI traffic: Monitor and analyze model/API calls, prompt/response telemetry, and plugin/tool execution to detect prompt injection, exfiltration attempts, and policy violations.
Enforce AI governance: Establish ownership, documentation, evaluation standards, audit trails, and ongoing reviews so AI governance and security remain consistent across teams and vendors.
How Zscaler Secures AI
Zscaler enables AI security by extending zero trust principles across the full AI pipeline—so organizations can discover AI use (including shadow AI), govern access, and protect sensitive data where it’s most likely to leak: in prompts, responses, and AI-connected workflows. It pairs inline controls for generative AI and AI SaaS usage with continuous, automated testing and runtime guardrails, reducing the gap between “AI innovation” and “AI-ready security” as deployments scale.
And, because governance is now a moving target, Zscaler helps operationalize AI governance and compliance by mapping discovered risks to evolving frameworks and policies so teams can prioritize remediation and stay audit-ready:
Control AI access at the user level by discovering AI app usage and applying flexible policies to allow, block, warn/coach, or isolate interactions across popular GenAI apps, embedded AI in SaaS, and developer tooling.
Prevent data leakage through prompts and responses with inline inspection and DLP that can detect and block sensitive content (e.g., source code, PII/PHI/PCI) while supporting acceptable-use guardrails and safer day-to-day GenAI productivity.
Continuously test AI systems before and after launch using automated AI red teaming with prebuilt and customizable probes, multi-modal testing (text/voice/images/documents), and workflows to track and remediate issues in tools like Jira and ServiceNow.
Simplify AI governance and compliance by automatically mapping AI risks to major frameworks (e.g., EU AI Act, NIST AI RMF, OWASP LLM Top 10, MITRE ATLAS, ISO/IEC 42001) and supporting custom policy creation/import to align AI deployments with internal standards.
Request a demo
See how Zscaler can help you secure AI adoption—without sacrificing speed, usability, or compliance.
AI model security is the practice of protecting an AI model’s training process, artifacts, and runtime behavior from compromise. It includes controlling access to datasets and checkpoints, hardening training and hosting infrastructure, validating data and code, versioning and signing releases, and monitoring for poisoning, theft, jailbreaks, and anomalous queries.
Securing generative AI use means governing who can access models and what data or tools they can reach. Typical controls include SSO/MFA, approved tool lists, policy enforcement on prompts and outputs, data classification and redaction, least privilege permissions for plugins and connectors, logging, and monitoring for exfiltration or policy violations.
Prompt injection is an attack where crafted input causes an AI system to ignore intended instructions or reveal restricted information. It often exploits content from documents, web pages, or tool outputs that the model treats as authoritative. Mitigations include separating data from instructions, constraining tools, and filtering or validating inputs.
Sensitive data leakage in AI occurs when confidential information is submitted to models, stored in transcripts or telemetry, incorporated into training, or echoed back in outputs. Leakage can be accidental or triggered through adversarial prompts and inference techniques. Controls include data minimization, strong access policies, redaction/DLP, retention limits, and monitoring of interactions.
Enterprise AI governance is the set of policies, roles, and review processes that define how AI is selected, built, deployed, and monitored. It covers acceptable use, data handling, model and vendor risk, documentation and testing requirements, audit trails, and accountability for changes and outcomes. Governance helps standardize decisions as AI scal
Zero trust for AI applies “never trust, always verify” to model access, data connections, and tool execution. Each request is authenticated, authorized, and evaluated with context such as user role, device posture, and data sensitivity. Least privilege permissions and continuous monitoring reduce blast radius if an account or workflow is abused.
What Is AI Security?
<p dir="ltr"><strong>AI security</strong><span> is the practice of protecting artificial intelligence systems, models, training data, prompts, and AI-driven applications from cyber threats, misuse, manipulation, and data exposure. </span><a href="https://www.zscaler.com/es/zpedia/what-is-ai-security"><span>Read more</span></a><p> </p></p>
Why AI Security Matters Now?
<p> <ul><li><strong>GenAI adoption risks:</strong> Rapid deployment can outpace security controls, expanding the attack surface across prompts, plugins, integrations, and model workflows.</li><li><strong>Shadow AI:</strong> Employees may use unapproved AI tools for speed and convenience, bypassing IT oversight and creating unmanaged data and compliance exposure.</li><li><strong>Data leakage via LLMs:</strong> Sensitive information can be inadvertently shared through prompts, training inputs, logs, or model outputs, risking IP loss and regulatory violations.</li><li><strong>AI governance challenges: </strong>Without clear policies, ownership, and guardrails, it becomes difficult to enforce consistent standards for data handling, model usage, auditing, and accountability.</li></ul><p><a href="https://www.zscaler.com/es/zpedia/what-is-ai-security"><span>Read more</span></a></p></p>
How AI Security Works?
<p><a href="https://www.zscaler.com/es/zpedia/what-is-artificial-intelligence-ai-in-cybersecurity"><span><u>AI security</u></span></a> works by applying layered controls across the full AI pipeline—not just at the perimeter—so risks are addressed where they originate and where they manifest. <ul><li><strong>Data collection:</strong> Classify and minimize data, enforce consent and retention rules, and protect sources to strengthen AI data security from the first mile.</li><li><strong>Model training:</strong> Validate datasets, secure training infrastructure, and test for abuse cases (e.g., leakage, harmful behaviors) to reinforce AI model security.</li><li><strong>Deployment:</strong> Harden runtime environments, isolate workloads, control tool/plugin access, and implement safe defaults for production-grade enterprise AI security.</li><li><strong>User interaction:</strong> Apply prompt and content controls, redact sensitive inputs/outputs, and prevent unsafe tool execution to support safer generative AI security.</li><li><strong>Monitoring:</strong> Continuously detect anomalies, abuse patterns, drift, and <a href="https://www.zscaler.com/es/zpedia/what-is-data-leakage"><span><u>data leakage</u></span></a> indicators—then respond with playbooks and corrective actions.</li></ul><p>This end-to-end approach improves resilience, supports AI security best practices, and reduces operational surprises during scale-up. <a href="https://www.zscaler.com/es/zpedia/what-is-ai-security"><span>Read more</span></a></p></p>
What are the Key Components of AI Security?
<p>Maintaining a solid AI security posture typically revolves around several core pillars. These components aim to establish a framework that incorporates <a href="https://www.zscaler.com/es/resources/security-terms-glossary/what-is-data-protection"><span><u>data protection</u></span></a>, technical safeguards, and organizational readiness:<ul><li><strong>Data protection:</strong> Ensures that all training data remains accurate and shielded from exposure.</li><li><strong>Continuous monitoring:</strong> Enables proactive <a href="https://www.zscaler.com/es/zpedia/what-is-risk-management"><span><u>risk management</u></span></a> to recognize and respond to vulnerabilities in real time.</li><li><a href="https://www.zscaler.com/es/zpedia/what-is-ai-security-posture-management-aispm"><strong><u>AI security posture management:</u></strong></a><strong> </strong>Maintains AI security practices and readiness through ongoing assessment and improvement.</li><li><strong>Risk assessment: </strong>Identifies and prioritizes AI-specific threats to guide targeted risk mitigation efforts.</li><li><strong>Incident response: </strong>Establishing procedures for detecting, responding to, and recovering from security breaches and incidents involving AI applications.</li><li><strong>AI governance and compliance: </strong>Adhering to relevant laws and regulations like the GDPR, CCPA, and AI Act, which are essential for navigating the complex landscape of AI applications.</li></ul><p><a href="https://www.zscaler.com/es/zpedia/what-is-ai-security"><span>Read more</span></a></p></p>
AI Security vs Cybersecurity
<div><div><div><div><div><div><div><div><div><div><div><div><div><p dir="ltr"><strong>AI security focuses on protecting AI systems themselves, while AI in cybersecurity refers to using AI to defend against cyber threats.</strong></p><table class="table"><thead><tr><th><strong>Aspect</strong></th><th><strong>AI Security</strong></th><th><strong>Traditional Security</strong></th></tr></thead><tbody><tr><td><strong>Primary assets protected</strong></td><td>Models, prompts, training data, outputs, AI pipelines—plus the traditional assets</td><td>Networks, endpoints, applications, identities</td></tr><tr><td><strong>Typical attack surface</strong></td><td>Prompting, plugins/tools, model APIs, training pipelines, model artifacts, AI-integrated workflows</td><td>Ports, credentials, vulnerable services, misconfigurations</td></tr><tr><td><strong>Core risk</strong></td><td>Unauthorized access and manipulated behavior, unsafe outputs, leakage through model interaction</td><td>Unauthorized access and data theft</td></tr><tr><td><strong>Detection signals</strong></td><td>All traditional signals plus prompt/response patterns, model drift, abuse indicators, data-flow anomalies</td><td>Logs, EDR, SIEM, network telemetry</td></tr><tr><td><strong>Governance maturity</strong></td><td>Stronger need for formal AI governance and security due to rapid change and novel failure modes</td><td>Often well-established controls and standards</td></tr></tbody></table></div></div></div></div><div><div><a href="https://www.zscaler.com/es/zpedia/ai-vs-traditional-cybersecurity"><span>Read more</span></a></div></div></div></div></div></div></div><div> </div></div></div></div><div><div><div><div><div> </div></div></div></div></div><p><br> </p></div>
What are the Types of AI Security Risks?
<div><div><div><div><div><div><div class="text-darkBlue"><p>AI security risks span far beyond standard malware and network intrusion—they also target how models behave, what data they learn from, and how people interact with AI-enabled workflows. For enterprise AI security teams, understanding these risk categories is a practical starting point for secure AI adoption and consistent AI governance and security.</p><h1>AI Model Risks</h1><p>AI model security focuses on threats that alter, exploit, or manipulate model behavior—especially in generative AI security, where inputs and outputs can be weaponized at scale. These risks can degrade reliability, expose sensitive information, or create downstream business harm even when traditional controls appear “green.”</p><ul><li><strong>Model poisoning: </strong>Attackers manipulate training data or fine-tuning inputs so the model learns harmful patterns, hidden triggers, or biased behaviors.</li><li><strong>Prompt injection:</strong> Malicious instructions are embedded in prompts or external content (e.g., documents, web pages, tools) to override system rules and coerce unsafe actions or disclosures.</li><li><strong>Adversarial attacks:</strong> Carefully crafted inputs are designed to cause incorrect classifications or unsafe outputs, exploiting model weaknesses rather than software vulnerabilities.</li></ul><h1>Data Risks</h1><p>AI data security addresses how sensitive information can be exposed through collection, storage, training pipelines, logs, and outputs. Because AI systems often aggregate and transform large datasets, a single weak point can create disproportionate confidentiality and compliance impact.</p><ul><li><strong>Training data leakage:</strong> Proprietary or personal data may be recoverable through model outputs, inference techniques, or insecure storage and sharing of training artifacts.</li><li><strong>Sensitive data exposure:</strong> Users or systems may inadvertently submit confidential data via prompts, uploads, telemetry, or integrations—creating leakage and regulatory risk.</li></ul><h1>User Risks</h1><p>Users are often the fastest path to compromise, and AI changes the scale and credibility of social engineering. In enterprise AI security, user-facing risks frequently blend technical exploitation with trust manipulation.</p><ul><li><strong>AI phishing:</strong> AI-generated messages can be highly targeted, polished, and context-aware, increasing click-through rates and credential theft success.</li><li><strong>Deepfake attacks:</strong> Synthetic audio/video can convincingly impersonate executives or vendors, enabling fraud, account takeover, or payment redirection schemes.</li></ul></div></div></div></div></div></div><div><div><div><div><div><div><div><div> </div></div></div></div></div></div></div></div><p><a href="https://www.zscaler.com/es/zpedia/what-is-ai-security"><span>Read more</span></a><br> </p></div>
Core Components of a Robust AI Security Framework
<p>An AI security framework aligns people, process, and technology so organizations can manage AI security risks consistently across teams and vendors. It also provides a structured way to operationalize AI governance and security—from policy and design to deployment and continuous monitoring.<ul><li><a href="https://www.nist.gov/itl/ai-risk-management-framework"><span><strong>NIST AI RMF:</strong></span></a> A risk management approach that helps organizations map, measure, and manage AI risks across the AI lifecycle with governance and accountability.</li><li><a href="https://www.zscaler.com/es/resources/security-terms-glossary/what-is-zero-trust"><span><strong><u>Zero trust</u></strong></span></a><strong> for AI:</strong> Extends zero trust principles to AI systems by continuously verifying identity, device, data access, and tool/model permissions—never assuming prompts, plugins, or integrations are safe by default.</li><li><strong>Secure AI lifecycle:</strong> Embeds security controls into each stage (design, data handling, training, evaluation, deployment, and retirement) to reduce systemic risk rather than relying on after-the-fact fixes.</li><li><strong>Model governance:</strong> Establishes ownership, approval workflows, auditing, and documentation so models are used as intended, changed safely, and monitored for drift and misuse.</li></ul><p><a href="https://www.zscaler.com/es/zpedia/what-is-ai-security"><span>Read more</span></a></p></p>
What are 3 regulatory and ethical considerations in AI use?
<div><div><div><div><div><div><div class="text-darkBlue"><p>As AI weaves into everyday operations, businesses operating under strict data protection regulations must address growing demands around privacy, fairness, and compliance. Several factors emerge when exploring regulation, ethics, and technology.</p><ol><li><h3>Data Protection Laws</h3><p>Compliance with data protection regulations, including the <a href="https://www.zscaler.com/es/privacy-compliance/ccpa"><span><u>California Consumer Privacy Act (CCPA)</u></span></a>, defines how AI solutions collect and handle personal information. Companies must disclose their data practices and safeguard user data to avoid legal liabilities. In many regions, consumers also have the right to understand and control how their data is utilized.</p></li><li><h3>Ethical Transparency</h3><p>As artificial intelligence security grows more pivotal, transparency fosters trust by revealing some of the logic behind AI-driven decisions. Clear documentation of algorithms and training processes promotes accountability and helps minimize biases. This approach also supports security teams in better understanding the factors influencing system behavior.</p></li><li><h3>Protecting Intellectual Property</h3><p>Organizations face significant challenges keeping proprietary models secure, especially in large language model (LLM) deployments. Ensuring the confidentiality of specialized code, unique model architectures, and advanced solutions preserves a competitive edge. Strong protective practices reduce the likelihood of leaks or thefts that could harm a company’s reputation.</p></li></ol></div></div></div></div></div></div><div><div><div><div><div><div><a href="https://www.zscaler.com/es/zpedia/what-is-ai-security"><span>Read more</span></a></div></div></div></div></div></div><p><br> </p></div>
AI Security Best Practices
<p dir="ltr"> <ul><li dir="ltr"><strong>Secure GenAI usage:</strong><span> Define approved tools and use cases, require safe prompting patterns, and restrict risky features (e.g., unrestricted web/tool access) to support secure generative AI security at scale.</span></li><li dir="ltr"><strong>Implement AI access control:</strong><span> Apply least privilege to models, datasets, and tools; enforce strong authentication and role-based policies for who can use which models and capabilities.</span></li><li dir="ltr"><strong>Prevent AI data loss:</strong><span> Classify and redact sensitive data, restrict what can be pasted/uploaded, and apply DLP-like controls tailored to prompts, logs, and model outputs for stronger AI data security.</span></li><li dir="ltr"><strong>Inspect AI traffic:</strong><span> Monitor and analyze model/API calls, prompt/response telemetry, and plugin/tool execution to detect prompt injection, exfiltration attempts, and policy violations.</span></li><li dir="ltr"><strong>Enforce AI governance:</strong><span> Establish ownership, documentation, evaluation standards, audit trails, and ongoing reviews so AI governance and security remain consistent across teams and vendors.</span></li></ul><p><a href="https://www.zscaler.com/es/zpedia/what-is-ai-security"><span>Read more</span></a></p></p>
What attacks target AI models?
<p><span>Prompt injection/jailbreaks, data poisoning, model inversion/extraction, adversarial examples, and supply‑chain/backdoor attacks.</span></p>
What is prompt injection?
<p><span>Tricking an AI via crafted instructions (often embedded in user input/content) to ignore rules and leak data or take unintended actions.</span></p>
How is AI different from traditional security?
<p><span>AI security must defend both </span><em>models/data/prompts</em><span> and </span><em>probabilistic behavior</em><span> (plus agent/tool misuse), not just deterministic code and known vulnerabilities.</span></p>