Zscaler Blog
EU AI Act: What security leaders need to know and how D(AI)-SPM can help
A guide to help enterprises understand the EU AI Act, its impact, and best practices to avoid penalties of up to €35 million or 7% of annual turnover for prohibited AI practices.
Understanding the EU AI Act
For years, tech companies have developed AI systems with minimal oversight. While artificial intelligence itself isn’t inherently harmful, the lack of clarity around how these systems make decisions has left many stakeholders uncertain, making it difficult to fully trust the outcomes generated by AI.
The European Union Artificial Intelligence Act (EU AI Act) is a pioneering regulatory framework designed to oversee the development, deployment, and use of artificial intelligence (AI) technologies across the EU. Proposed in 2021 and designed to ensure ethical and safe AI usage, the Act categorizes AI systems into risk levels—minimal, limited, high, or unacceptable. The Act entered into force in August 2024 and its implementation follows a phased approach:
- February 2, 2025: Prohibitions on unacceptable risk AI systems
- August 2, 2025: GPAI model requirements
- August 2, 2026: High-risk AI systems in Annex III
- August 2, 2027: Remaining high-risk AI systems
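The phased timeline above can be checked programmatically. The sketch below is illustrative only (the milestone descriptions are paraphrased from the list above, and the function name is an assumption): given a date, it returns which obligations are already in force.

```python
from datetime import date

# Phased EU AI Act milestones, per the Act's published timeline.
MILESTONES = {
    date(2025, 2, 2): "Prohibitions on unacceptable-risk AI systems",
    date(2025, 8, 2): "GPAI model requirements",
    date(2026, 8, 2): "High-risk AI systems in Annex III",
    date(2027, 8, 2): "Remaining high-risk AI systems",
}

def obligations_in_force(today: date) -> list[str]:
    """Return the milestones already in force on a given date, in order."""
    return [desc for d, desc in sorted(MILESTONES.items()) if d <= today]

# Example: on 1 March 2025, only the prohibitions phase applies.
print(obligations_in_force(date(2025, 3, 1)))
```

A compliance team could run a check like this in CI to surface upcoming deadlines before they bite.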
Goals of the EU AI Act
The primary goal of the Act is to ensure that high-risk applications, such as those impacting critical infrastructure, safety, or fundamental rights, are subject to stringent requirements like transparency, accountability, and safety measures. The Act aims to balance innovation with accountability, considering the rapid evolution of AI and its expanded capabilities, like generative models and autonomous agents.
Moreover, as AI increasingly influences critical sectors like healthcare, transportation, and law enforcement, the EU AI Act provides guidelines to secure sensitive data and ensure systems operate fairly and responsibly, avoiding discrimination or bias. The Act helps harmonize AI governance across EU member states and positions the bloc as a leader in ethical AI adoption globally.
The Act also prohibits practices deemed harmful, such as AI manipulation or systems exploiting vulnerabilities of specific groups. Organisations deploying AI in the EU face compliance mandates, including risk assessments, security controls, and user data protections, while penalties for non-compliance can be significant. The landmark legislation sets the tone for global AI governance, aiming to balance fostering innovation with safeguarding public interests and fundamental rights in the age of AI.
What are the key requirements of the EU AI Act?
The EU AI Act takes a risk-based approach to regulating the development and deployment of AI. The AI Act has classified different risk categories based on the field of application and defined measures to be implemented by organizations that develop or sell AI systems. These include:
- Risk Categorization: The EU AI Act classifies AI systems into four categories: minimal, limited, high, and unacceptable risk. High-risk applications, such as those impacting healthcare, transportation, or employment, require stringent safeguards, including documentation, transparency, and security measures.
- Transparency and Disclosure: Developers must ensure users of their AI systems understand the system’s functioning and limitations. This includes labeling AI-generated content and enabling auditability of AI processes.
- Prohibition of Harmful Practices: The Act bans certain AI applications altogether, such as systems manipulating subconscious behavior or exploiting vulnerabilities of specific individuals or groups.
- Data Governance Standards: High-risk AI systems must adhere to strict data management protocols, including privacy-by-design principles and compliance with GDPR for personal data protection.
- Accountability and Monitoring: Organizations deploying AI systems must carry out risk assessments, maintain logs for audits, and continuously monitor AI performance to mitigate risks and ensure compliance.
For businesses engaging with EU citizens, it's a good idea to understand where AI applications fall along this AI risk spectrum.
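As a first pass at mapping applications along that spectrum, a simple lookup can document your working classification. This is purely illustrative (the example use cases and their tiers are assumptions for demonstration); real classification requires legal analysis of the Act and Annex III, not a dictionary.

```python
# The Act's four risk tiers.
RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

# Hypothetical internal register of use cases -> assessed tier.
EXAMPLE_CLASSIFICATION = {
    "spam_filtering": "minimal",
    "customer_chatbot": "limited",              # transparency obligations
    "cv_screening_for_hiring": "high",          # employment falls under Annex III
    "subliminal_manipulation": "unacceptable",  # prohibited outright
}

def risk_tier(use_case: str) -> str:
    """Look up a use case's assessed tier; unknown cases need review."""
    return EXAMPLE_CLASSIFICATION.get(use_case, "unclassified")

print(risk_tier("cv_screening_for_hiring"))  # high
```

Keeping such a register versioned alongside your AI inventory makes the documentation requirements for high-risk systems easier to evidence.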
Challenges in implementing EU AI Act workflows
The regulatory landscape for AI is complex and changing at a rapid rate, making it difficult to know what risk management framework to implement. Different jurisdictions may have varying, and sometimes conflicting, requirements for AI systems, presenting compliance challenges to organizations operating globally.
Interpreting and applying regulations to specific AI use cases can also be challenging. Many current regulations weren’t designed with AI in mind, leading to ambiguities in their application to AI systems. Below are the top challenges in implementing EU AI Act workflows:
Evolving and complex nature of AI: One of the most significant hurdles in implementing an AI risk management framework lies in the rapidly evolving and complex nature of AI technologies, which must continually be weighed against the potential benefits AI delivers.
Speed and scale: The scale and speed at which AI systems can operate make it challenging to identify and address risks in real time. AI models can process vast amounts of data and make decisions at speeds far beyond human capability, potentially allowing risks to propagate at speed before they can be detected and mitigated.
Transparency and reliability: Transparency and explainability of AI systems present another challenge. While these are often cited as key principles in AI ethics, organizations may find it difficult to navigate situations where full transparency compromises personal privacy or corporate intellectual property. Balancing these opposing imperatives requires careful consideration and often involves trade-offs. The unpredictable nature of AI responses makes it challenging to ensure the reliability of AI systems across all possible scenarios they might encounter.
Internal resources: Cross-functional collaboration and resource allocation in AI risk management can be challenging to achieve. AI development often occurs in specialized teams, and bringing together technical experts and other stakeholders like legal, ethics, and business teams can prove difficult. Silos often result, leading to a fragmented understanding of AI risks, as well as inconsistent management practices.
Implications of the EU AI Act
The EU AI Act introduces stringent penalties to ensure compliance with its regulatory framework, aiming to safeguard ethical AI practices and mitigate risks associated with AI technologies. The penalty structure emphasizes the importance of proactive compliance measures, including risk assessment, effective governance, and secure data practices. Penalties are primarily targeted at organizations that fail to adhere to transparency, accountability, and safety requirements for AI systems. This penalty structure mirrors or surpasses the GDPR, underlining the EU’s commitment to strong enforcement.
Example: For breaches involving unacceptable-risk AI systems—those banned outright due to their potential societal harm—such as AI-based manipulation or exploitation, organizations can face fines of up to €35 million or 7% of their annual global turnover, whichever is higher.
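The exposure for prohibited practices is the greater of the fixed cap and the turnover percentage (€35 million or 7%, per the Act's penalty structure for prohibited practices). A quick sketch of that arithmetic (function name and defaults are illustrative):

```python
def max_fine_eur(annual_turnover_eur: float,
                 cap_eur: float = 35_000_000,
                 cap_pct: float = 0.07) -> float:
    """Upper bound of a prohibited-practice fine: the greater of the
    fixed cap or the percentage of worldwide annual turnover."""
    return max(cap_eur, cap_pct * annual_turnover_eur)

# A company with €1 billion turnover: 7% (€70M) exceeds the €35M floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0

# A company with €100 million turnover: the €35M floor dominates.
print(max_fine_eur(100_000_000))  # 35000000
```

For large enterprises the percentage term dominates, which is why exposure scales with revenue rather than being bounded by the headline figure.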
Organizations operating AI systems within the EU must ensure thorough checks and balances to prevent regulatory infractions. The penalties reflect the EU’s intent to prioritize ethical AI use while fostering innovation within clear legal boundaries.
Best practices to comply with the EU AI Act
To meet EU AI Act compliance, organizations can adopt the following proactive best practices:
- Continuous Monitoring: One of the key challenges of the EU AI Act is continuously identifying and categorizing which data falls under high-risk categories. Data discovery and classification help categorize data according to its sensitivity, compliance requirements, or the risks associated with its use in AI systems. Organizations can establish real-time monitoring of data and AI systems to drive discovery and classification and to detect associated vulnerabilities, unsafe behaviors, and incidents promptly. Regular audits and lifecycle tracking of data and AI systems are essential to maintain trust and reliability.
- Establish Robust Risk Management: Organizations need to conduct a thorough risk assessment of all AI applications to map data and AI systems to the Act's risk categories (minimal, limited, high, or unacceptable), and maintain comprehensive documentation for high-risk AI systems.
- Enhance Transparency: Transparency is at the heart of the EU AI Act, requiring organizations to provide detailed information about how their AI models process data. Organizations need to implement mechanisms that clearly demonstrate how their AI functions and what data it sources and uses, in line with the Act's guidelines.
- Enforce Consistent Security Policies: Promote data security and privacy by design, ensure compliance with GDPR data protection requirements, and train AI models on datasets while retaining robust ownership and control of personal, sensitive, and operational data across the AI ecosystem. Effective data governance policies are required to ensure the proper storage, processing, and monitoring of sensitive information within AI systems.
These practices will empower organizations to navigate regulatory landscapes while leveraging AI responsibly and securely.
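The data discovery and classification step in the first practice above can be sketched in miniature. This is an illustrative toy, not how a DSPM platform actually works: real discovery uses far richer detection than two regular expressions, and the patterns and labels here are assumptions.

```python
import re

# Toy sensitivity rules: records containing personal identifiers
# are treated as high-risk under the Act's data governance lens.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def classify(record: str) -> str:
    """Return 'high-risk' if the record contains a personal identifier."""
    for label, pattern in PATTERNS.items():
        if pattern.search(record):
            return "high-risk"
    return "minimal"

print(classify("contact: jane.doe@example.com"))  # high-risk
print(classify("temperature reading: 21.5C"))     # minimal
```

Running such rules continuously over data stores, rather than once at onboarding, is what turns classification into the real-time monitoring the Act's high-risk obligations call for.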
DSPM’s emerging role in managing compliance with the EU AI Act
DSPM plays a pivotal role in helping organizations align with the stringent requirements of the EU AI Act, particularly when it comes to deploying AI securely and protecting sensitive data. Here’s an overview of how it facilitates compliance:
Visibility and control over the AI landscape: The AI landscape is diverse, comprising millions of closed- and open-source models, agents, AI data stores, tools, and more. It is often hard for organizations to understand the risk of adopting such a diverse set of components and the risk they pose to the AI ecosystem. Zscaler DSPM provides deep, centralized visibility into these diverse AI components. This unified visibility streamlines compliance efforts, identifying data vulnerabilities and ensuring regulatory adherence, especially in high-risk AI environments.
AI Data Security: Data is the lifeblood of AI, so any AI security solution must secure AI in the context of data security. Zscaler DSPM helps organizations comply with the EU AI Act by breaking down fragmented data silos and providing a centralized view of sensitive information across diverse data landscapes and AI systems. Zscaler DSPM secures AI data in two ways: first, it detects unwanted access to data by AI services; second, it ensures the data consumed by AI is identified and protected from malicious modification or poisoning.
Moreover, DSPM can effortlessly categorize data based on its sensitivity, legal requirements, and associated risks. This enables organizations to quickly identify high-risk data—whether it pertains to personal details or critical infrastructure—ensuring that key compliance standards under the EU AI Act are met. By continuously scanning and monitoring data flows, organizations can quickly detect anomalies, enforce access controls, and ensure AI models are leveraging clean, regulatory-compliant datasets.
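One piece of that continuous monitoring, detecting anomalous access by AI services to sensitive data, can be illustrated with a simple threshold check. The event format, service names, and threshold below are all assumptions for demonstration; production anomaly detection is considerably more sophisticated.

```python
from collections import Counter

def flag_anomalous(access_log: list[str], threshold: int = 3) -> set[str]:
    """Flag AI services whose count of sensitive-data accesses exceeds
    a baseline threshold. access_log holds one service name per access."""
    counts = Counter(access_log)
    return {svc for svc, n in counts.items() if n > threshold}

# Hypothetical log: 'rag-indexer' touched sensitive stores 5 times.
log = ["rag-indexer"] * 5 + ["report-bot"] * 2
print(flag_anomalous(log))  # {'rag-indexer'}
```

In practice the baseline would be learned per service rather than fixed, but the principle is the same: deviations from expected data-flow volume trigger review and access-control enforcement.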
AI Governance: Ensuring you have control over the vast AI supply chain is critical. For example, you might want to prevent users from pulling models from Hugging Face that have low download counts or a risky origin.
Zscaler DSPM helps you understand the efficacy of your AI guardrails and security controls, and ensures your AI implementations and deployments adhere to AI safety standards such as the NIST AI Risk Management Framework.
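A supply-chain policy gate of the kind described above might look like the following sketch. The metadata fields (downloads, origin), thresholds, and blocked-origin labels are all assumptions for illustration, not a real Hugging Face or Zscaler API.

```python
# Hypothetical admission policy for third-party models.
MIN_DOWNLOADS = 10_000
BLOCKED_ORIGINS = {"unverified"}

def allow_model(metadata: dict) -> bool:
    """Admit a model only if it clears popularity and origin checks."""
    if metadata.get("downloads", 0) < MIN_DOWNLOADS:
        return False  # too obscure to trust without further review
    if metadata.get("origin") in BLOCKED_ORIGINS:
        return False  # provenance does not meet policy
    return True

print(allow_model({"downloads": 250_000, "origin": "verified-publisher"}))  # True
print(allow_model({"downloads": 40, "origin": "unverified"}))               # False
```

Encoding such rules as code, rather than a written policy alone, lets the gate run automatically wherever models enter the environment.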
Responsible AI: The EU AI Act places a strong emphasis on promoting the development and use of responsible AI. DSPM plays a crucial role in achieving responsible AI by focusing on security and data privacy. Integrating AI-SPM and DSPM provides a comprehensive security strategy, protecting both data assets and AI systems, and minimizing risks. The convergence of AI posture management, sensitive data visibility, and data risk ensures key data and AI risks are highlighted, so organizations can adopt AI safely while adhering to responsible AI guidelines.
To learn more about Zscaler AI-powered DSPM and AI-SPM capabilities:
- Schedule a demo: Experience the power of the Zscaler AI-powered DSPM platform with a guided demo.
- Complimentary Risk Assessment: Our Data Risk Assessment is fast and easy. Get instant visibility of your compliance posture, and receive expert guidance on compliance management.
Disclaimer: This blog post has been created by Zscaler for informational purposes only and is provided "as is" without any guarantees of accuracy, completeness or reliability. Zscaler assumes no responsibility for any errors or omissions or for any actions taken based on the information provided. Any third-party websites or resources linked in this blog post are provided for convenience only, and Zscaler is not responsible for their content or practices. All content is subject to change without notice. By accessing this blog, you agree to these terms and acknowledge your sole responsibility to verify and use the information as appropriate for your needs.