InfoSec and Compliance – With 20 years of blogging experience, DISC InfoSec blog is dedicated to providing trusted insights and practical solutions for professionals and organizations navigating the evolving cybersecurity landscape. From cutting-edge threats to compliance strategies, this blog is your reliable resource for staying informed and secure. Dive into the content, connect with the community, and elevate your InfoSec expertise!
EU AI Act: Why Every Organization Using AI Must Pay Attention
The EU AI Act is the world’s first major regulation designed to govern how artificial intelligence is developed, deployed, and managed across industries. Approved in June 2024, it establishes harmonized rules for AI use across all EU member states — just as GDPR did for privacy.
Any organization that builds, integrates, or sells AI systems within the European Union must comply — even if they are headquartered outside the EU. That means U.S. and global companies using AI in European markets are officially in scope.
The Act introduces a risk-based regulatory model, categorizing AI systems into four risk tiers: unacceptable risk (banned outright), high risk (subject to strict controls), limited risk (subject to transparency requirements), and minimal risk (largely unregulated).
High-risk AI includes systems governing access to healthcare, finance, employment, critical infrastructure, law enforcement, and essential public services. Providers of these systems must implement rigorous risk management, governance, monitoring, and documentation processes across the entire lifecycle.
Certain AI uses are explicitly prohibited — such as social scoring, biometric emotion recognition in workplaces or schools, manipulative AI techniques, and untargeted scraping of facial images for surveillance.
Compliance obligations are rolling out in phases beginning February 2025, with core high-risk system requirements taking effect in August 2026 and final provisions extending through 2027. Organizations have limited time to assess their current systems and prepare for adherence.
This legislation is expected to shape global AI governance frameworks — much like GDPR influenced worldwide privacy laws. Companies that act early gain an advantage: reduced legal exposure, customer trust, and stronger market positioning.
How DISC InfoSec Helps You Stay Ahead
DISC InfoSec brings 20+ years of security and compliance excellence with a proven multi-framework approach. Whether you are preparing for the EU AI Act, ISO 42001, GDPR, SOC 2, or broader enterprise governance, we help organizations implement responsible AI controls without slowing innovation.
If your business touches the EU and uses AI — now is the time to get compliant.
📩 Let’s build your AI governance roadmap together. Reach out: Info@DeuraInfosec.com
The European Union’s Artificial Intelligence Act represents the world’s first comprehensive regulatory framework for artificial intelligence. As organizations worldwide prepare for compliance, one of the most critical first steps is understanding exactly where your AI system falls within the EU’s risk-based classification structure.
At DeuraInfoSec, we’ve developed a streamlined EU AI Act Risk Calculator to help organizations quickly assess their compliance obligations. But beyond the tool itself, understanding the framework is essential for any organization deploying AI systems that touch EU markets or citizens.
The EU AI Act takes a pragmatic, risk-based approach to regulation. Rather than treating all AI systems equally, it categorizes them into four distinct risk levels, each with different compliance requirements:
1. Unacceptable Risk (Prohibited Systems)
These AI systems pose such fundamental threats to human rights and safety that they are completely banned in the EU. This category includes:
Social scoring by public authorities that evaluates or classifies people based on behavior, socioeconomic status, or personal characteristics
Real-time remote biometric identification in publicly accessible spaces (with narrow exceptions for law enforcement in specific serious crimes)
Systems that manipulate human behavior to circumvent free will and cause harm
Systems that exploit vulnerabilities of specific groups due to age, disability, or socioeconomic circumstances
If your AI system falls into this category, deployment in the EU is simply not an option. Alternative approaches must be found.
2. High-Risk AI Systems
High-risk systems are those that could significantly impact health, safety, fundamental rights, or access to essential services. The EU AI Act identifies high-risk AI in two ways:
Safety Components: AI systems used as safety components in products covered by existing EU safety legislation (medical devices, aviation, automotive, etc.)
Specific Use Cases: AI systems used in eight critical domains:
Biometric identification and categorization
Critical infrastructure management
Education and vocational training
Employment, worker management, and self-employment access
Access to essential private and public services
Law enforcement
Migration, asylum, and border control management
Administration of justice and democratic processes
High-risk AI systems face the most stringent compliance requirements, including conformity assessments, risk management systems, data governance, technical documentation, transparency measures, human oversight, and ongoing monitoring.
3. Limited Risk (Transparency Obligations)
Limited-risk AI systems must meet specific transparency requirements to ensure users know they’re interacting with AI:
Chatbots and conversational AI must clearly inform users they’re communicating with a machine
Emotion recognition systems require disclosure to users
Biometric categorization systems must inform individuals
Deepfakes and synthetic content must be labeled as AI-generated
While these requirements are less burdensome than high-risk obligations, they’re still legally binding and require thoughtful implementation.
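As a simple illustration of what these transparency duties can look like in practice, here is a minimal sketch (the disclosure wording and function names are our own, not text prescribed by the Act) of disclosing a chatbot and labeling synthetic content:

```python
# Illustrative only: disclosure wording and labels are our own, not prescribed text.
def chatbot_reply(answer: str) -> str:
    """Prepend a clear AI disclosure to every chatbot response."""
    disclosure = "You are chatting with an AI assistant."
    return f"{disclosure}\n\n{answer}"

def label_synthetic_media(metadata: dict) -> dict:
    """Mark generated content as AI-generated in its metadata."""
    labeled = dict(metadata)
    labeled["ai_generated"] = True
    labeled["label"] = "This content was generated by AI."
    return labeled

print(chatbot_reply("Your order has shipped."))
print(label_synthetic_media({"type": "image", "source": "text-to-image model"}))
```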
4. Minimal Risk
The vast majority of AI systems fall into this category: spam filters, AI-enabled video games, inventory management systems, and recommendation engines. These systems face no specific obligations under the EU AI Act, though voluntary codes of conduct are encouraged, and other regulations like GDPR still apply.
Why Classification Matters Now
Many organizations are adopting a “wait and see” approach to EU AI Act compliance, assuming they have time before enforcement begins. This is a costly mistake for several reasons:
Timeline Is Shorter Than You Think: While full enforcement doesn’t begin until 2026, organizations with high-risk AI systems need to begin compliance work immediately to meet conformity assessment requirements. Building robust AI governance frameworks takes time.
Competitive Advantage: Early movers who achieve compliance will have significant advantages in EU markets. Organizations that can demonstrate EU AI Act compliance will win contracts, partnerships, and customer trust.
Foundation for Global Compliance: The EU AI Act is setting the standard that other jurisdictions are likely to follow. Building compliance infrastructure now prepares you for a global regulatory landscape.
Risk Mitigation: Even if your AI system isn’t currently deployed in the EU, supply chain exposure, data processing locations, or future market expansion could bring you into scope.
Using the Risk Calculator Effectively
Our EU AI Act Risk Calculator is designed to give you a rapid initial assessment, but it’s important to understand what it can and cannot do.
What It Does:
Provides a preliminary risk classification based on key regulatory criteria
Identifies your primary compliance obligations
Helps you understand the scope of work ahead
Serves as a conversation starter for more detailed compliance planning
What It Doesn’t Replace:
Detailed legal analysis of your specific use case
Comprehensive gap assessments against all requirements
Technical conformity assessments
Ongoing compliance monitoring
Think of the calculator as your starting point, not your destination. If your system classifies as high-risk or even limited-risk, the next step should be a comprehensive compliance assessment.
Common Classification Challenges
In our work helping organizations navigate EU AI Act compliance, we’ve encountered several common classification challenges:
Boundary Cases: Some systems straddle multiple categories. A chatbot used in customer service might seem like limited risk, but if it makes decisions about loan approvals or insurance claims, it becomes high-risk.
Component vs. System: An AI component embedded in a larger system may inherit the risk classification of that system. Understanding these relationships is critical.
Intended Purpose vs. Actual Use: The EU AI Act evaluates AI systems based on their intended purpose, but organizations must also consider reasonably foreseeable misuse.
Evolution Over Time: AI systems evolve. A minimal-risk system today might become high-risk tomorrow if its use case changes or new features are added.
The Path Forward
Whether your AI system is high-risk or minimal-risk, the EU AI Act represents a fundamental shift in how organizations must think about AI governance. The most successful organizations will be those who view compliance not as a checkbox exercise but as an opportunity to build more trustworthy, robust, and valuable AI systems.
At DeuraInfoSec, we specialize in helping organizations navigate this complexity. Our approach combines deep technical expertise with practical implementation experience. As both practitioners (implementing ISO 42001 for our own AI systems at ShareVault) and consultants (helping organizations across industries achieve compliance), we understand both the regulatory requirements and the operational realities of compliance.
Take Action Today
Start with our free EU AI Act Risk Calculator to understand your baseline risk classification. Then, regardless of your risk level, consider these next steps:
Conduct a comprehensive AI inventory across your organization (a minimal inventory record is sketched after this list)
Perform detailed risk assessments for each AI system
Develop AI governance frameworks aligned with ISO 42001
Implement technical and organizational measures appropriate to your risk level
Establish ongoing monitoring and documentation processes
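To support the inventory step above, a lightweight record per system is enough to get started. The sketch below is illustrative; the fields are our own suggestions, not mandated by the Act:

```python
from dataclasses import dataclass

@dataclass
class AIInventoryItem:
    """One entry in an organization-wide AI inventory (illustrative fields)."""
    name: str
    owner: str
    purpose: str
    deployed_in_eu: bool
    personal_data_processed: bool
    preliminary_risk_tier: str  # "unacceptable" | "high" | "limited" | "minimal"

inventory = [
    AIInventoryItem("Resume screener", "HR", "Shortlist job applicants", True, True, "high"),
    AIInventoryItem("Support chatbot", "Customer Success", "Answer product questions", True, True, "limited"),
    AIInventoryItem("Spam filter", "IT", "Filter inbound email", True, True, "minimal"),
]

# Systems that warrant a detailed risk assessment first
high_risk = [item.name for item in inventory if item.preliminary_risk_tier == "high"]
print("Prioritize detailed assessments for:", high_risk)
```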
The EU AI Act isn’t just another compliance burden. It’s an opportunity to build AI systems that are more transparent, more reliable, and more aligned with fundamental human values. Organizations that embrace this challenge will be better positioned for success in an increasingly regulated AI landscape.
Ready to assess your AI system’s risk level? Try our free EU AI Act Risk Calculator now.
Need expert guidance on compliance? Contact DeuraInfoSec.com today for a comprehensive assessment.
DeuraInfoSec specializes in AI governance, ISO 42001 implementation, and EU AI Act compliance for B2B SaaS and financial services organizations. We’re not just consultants—we’re practitioners who have implemented these frameworks in production environments.
The Fundamental Rights Impact Assessment (FRIA) under Article 27 of the EU AI Act is a powerful tool for identifying and protecting the rights of individuals affected by high-risk AI systems. Here’s how it works and what rights it safeguards:
🛡️ Key Rights Protected by the EU AI Act via FRIA
When conducting a FRIA, deployers must assess how an AI system could impact the following fundamental rights:
Right to human dignity: Ensures AI systems do not dehumanize or degrade individuals.
Right to non-discrimination: Protects against algorithmic bias based on race, gender, age, disability, etc.
Right to privacy and data protection: Evaluates how personal data is used, stored, and protected.
Freedom of expression and information: Ensures AI does not suppress speech or manipulate access to information.
Right to good administration: Guarantees fair, transparent, and accountable decision-making by public bodies using AI.
Access to justice and remedies: Individuals must be able to challenge decisions made by AI systems and seek redress.
🧾 What a FRIA Must Include
Deployers of high-risk AI systems (especially public bodies or private entities providing public services) must document:
Purpose and context of AI use
Groups likely to be affected
Specific risks of harm to those groups
Human oversight measures
Mitigation steps if risks materialize
Governance and complaint mechanisms
This assessment must be completed before first use and updated as needed. Results are reported to the market surveillance authority, and the EU AI Office will provide a standardized template.
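A minimal way to structure this documentation internally is sketched below; the field names are illustrative only, since the EU AI Office’s standardized template will define the official format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FRIARecord:
    """Sketch of the elements a deployer documents for a FRIA (Article 27).

    Field names are illustrative; the EU AI Office template will define
    the official structure.
    """
    system_name: str
    purpose_and_context: str
    affected_groups: List[str]
    specific_risks: List[str]
    human_oversight_measures: List[str]
    mitigation_steps: List[str]
    complaint_mechanism: str
    completed_before_first_use: bool = False
    reported_to_market_surveillance_authority: bool = False

fria = FRIARecord(
    system_name="Automated hiring screener",
    purpose_and_context="Ranks job applicants for interview shortlisting",
    affected_groups=["job applicants", "applicants with disabilities"],
    specific_risks=["algorithmic bias in ranking", "opaque rejection decisions"],
    human_oversight_measures=["recruiter reviews every automated rejection"],
    mitigation_steps=["periodic bias audits", "appeal channel for applicants"],
    complaint_mechanism="hr-appeals@example.com",
    completed_before_first_use=True,
)
print(f"{fria.system_name}: {len(fria.specific_risks)} documented risks")
```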
🧭 Why It Matters
The FRIA isn’t just paperwork—it’s a safeguard against invisible harms. It forces organizations to think critically about how their AI systems might infringe on rights and to build in protections from the start. It’s a shift from reactive to proactive governance.
A useful exercise is to walk through a mock FRIA for a specific AI use case, such as facial recognition in public spaces or an automated hiring tool, before your first real assessment.
The EU AI Act is the European Union’s landmark regulation designed to create a legal framework for the development, deployment, and use of artificial intelligence across the EU. Its primary objectives can be summed up as follows:
Protect Fundamental Rights and Safety
Ensure AI systems do not undermine fundamental rights guaranteed by the EU Charter (privacy, non-discrimination, dignity, etc.) or compromise the health and safety of individuals.
Promote Trustworthy AI
Establish standards so AI systems are transparent, explainable, and accountable, which is key to building public trust in AI adoption.
Risk-Based Regulation
Introduce a tiered approach:
Unacceptable risk: Prohibit AI uses that pose clear threats (e.g., social scoring by governments, manipulative systems).
High risk: Strict obligations for AI in sensitive areas like healthcare, finance, employment, and law enforcement.
Limited/minimal risk: Light or no regulatory requirements.
Harmonize AI Rules Across the EU
Create a uniform framework that avoids fragmented national laws, ensuring legal certainty for businesses operating in multiple EU countries.
Foster Innovation and Competitiveness
Encourage AI innovation by providing clear rules and setting up “regulatory sandboxes” where businesses can test AI in a supervised, low-risk environment.
Ensure Transparency for Users
Require disclosure when people interact with AI (e.g., chatbots, deepfakes) so users know they are dealing with a machine.
Strengthen Governance and Oversight
Establish national supervisory authorities and an EU-level AI Office to monitor compliance, enforce rules, and coordinate among Member States.
Address Bias and Discrimination
Mandate quality datasets, documentation, and testing to reduce harmful bias in AI systems, particularly in areas affecting citizens’ rights and opportunities.
Guarantee Robustness and Cybersecurity
Require that AI systems are secure, resilient against attacks or misuse, and perform reliably across their lifecycle.
Global Standard Setting
Position the EU as a leader in setting international norms for AI regulation, influencing global markets the way GDPR did for privacy.
Understanding the Scope of the EU AI Act
To understand the scope of the EU AI Act, it helps to break it down into who and what it applies to, and how risk determines obligations. Here’s a clear guide:
1. Who it Applies To
Providers: Anyone (companies, developers, public bodies) placing AI systems on the EU market, regardless of where they are based.
Deployers/Users: Organizations or individuals using AI within the EU.
Importers & Distributors: Those selling or distributing AI systems in the EU.
➡️ Even if a company is outside the EU, the Act applies if their AI systems are used in the EU.
2. What Counts as AI
The Act uses a broad definition of AI (based on OECD/Commission standards).
Includes machine learning, rule-based, statistical, and generative AI models.
3. Risk-Based Approach
The scope is defined by categorizing AI uses into risk levels:
Unacceptable Risk (Prohibited)
Social scoring, manipulative techniques, real-time biometric surveillance in public (with limited exceptions).
High Risk (Strictly Regulated)
AI in sensitive areas like:
healthcare (diagnostics, medical devices),
employment (CV screening),
education (exam scoring),
law enforcement and migration,
critical infrastructure (transport, energy).
Limited Risk (Transparency Requirements)
Chatbots, deepfakes, emotion recognition—users must be informed they are interacting with AI.
Minimal Risk (Largely Unregulated)
AI in spam filters, video games, recommendation engines—free to operate with voluntary best practices.
4. Exemptions
AI used for military and national security is outside the Act’s scope.
Systems used solely for research and prototyping are exempt until they are placed on the market.
5. Key Takeaway on Scope
The EU AI Act is horizontal (applies across sectors) but graduated (the rules depend on risk).
If you are a provider, you need to check whether your system falls into a prohibited, high, limited, or minimal category.
If you are a user, you need to know what obligations apply when deploying AI (especially if it’s high-risk).
👉 In short: The scope of the EU AI Act is broad, extraterritorial, and risk-based. It applies to almost anyone building, selling, or using AI in the EU, but the depth of obligations depends on how risky the AI application is considered.
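As a quick illustration of that scope test, the sketch below encodes the who/where questions as a simple pre-screen (the field and function names are our own, and the result is an indication, not legal advice):

```python
from dataclasses import dataclass

@dataclass
class AISystemContext:
    """Hypothetical description of an organization's relationship to an AI system."""
    role: str                            # "provider", "deployer", "importer", or "distributor"
    placed_on_eu_market: bool            # system is offered or sold in the EU
    output_used_in_eu: bool              # system output is used by people in the EU
    military_or_national_security: bool  # excluded from the Act's scope
    research_prototype_only: bool        # exempt until placed on the market

def likely_in_scope(ctx: AISystemContext) -> bool:
    """Rough pre-screen of whether the EU AI Act is likely to apply (not legal advice)."""
    if ctx.military_or_national_security or ctx.research_prototype_only:
        return False
    if ctx.role not in {"provider", "deployer", "importer", "distributor"}:
        return False
    return ctx.placed_on_eu_market or ctx.output_used_in_eu

# Example: a US SaaS provider whose system is used by EU customers
print(likely_in_scope(AISystemContext(
    role="provider",
    placed_on_eu_market=True,
    output_used_in_eu=True,
    military_or_national_security=False,
    research_prototype_only=False,
)))  # True
```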
🔹 1. Unacceptable Risk (Prohibited)
These are AI practices banned outright because they pose a clear threat to safety, rights, or democracy. Examples:
Social scoring by governments (like assigning citizens a “trust score”).
Real-time biometric identification in public spaces for mass surveillance (with narrow exceptions like serious crime).
Manipulative AI that exploits vulnerabilities (e.g., toys with voice assistants that encourage dangerous behavior in kids).
👉 If your system falls here → cannot be marketed or used in the EU.
🔹 2. High Risk
These are AI systems with significant impact on people’s rights, safety, or livelihoods. They are allowed but subject to strict compliance (risk management, testing, transparency, human oversight, etc.). Examples:
AI in recruitment (CV screening, job interview analysis).
Credit scoring or AI used for approving loans.
Medical AI (diagnosis, treatment recommendations).
AI in critical infrastructure (electricity grid management, transport safety systems).
AI in education (grading, admissions decisions).
👉 If your system is high-risk → must undergo conformity assessment and registration before use.
🔹 3. Limited Risk
These require transparency obligations, but not full compliance like high-risk systems. Examples:
Chatbots (users must know they’re talking to AI, not a human).
AI systems generating deepfakes (must disclose synthetic nature unless for law enforcement/artistic/expressive purposes).
Emotion recognition systems in non-high-risk contexts.
👉 If limited risk → inform users clearly, but lighter obligations.
🔹 4. Minimal or No Risk
The majority of AI applications fall here. They’re largely unregulated beyond general EU laws. Examples:
Spam filters.
AI-powered video games.
Recommendation systems for e-commerce or music streaming.
AI-driven email autocomplete.
👉 If minimal/no risk → free use with no extra requirements.
⚖️ Rule of Thumb for Classification:
If it manipulates or surveils → often unacceptable risk.
If it affects health, jobs, education, finance, safety, or fundamental rights → high risk.
If it interacts with humans but without major consequences → limited risk.
If it’s just convenience or productivity-related → minimal/no risk.
A decision tree you can use to classify any AI system under the EU AI Act risk framework:
🧭 EU AI Act AI System Risk Classification Decision Tree
Step 1: Check for Prohibited Practices
👉 Does the AI system do any of the following?
Social scoring of individuals by governments or large-scale ranking of citizens?
Manipulative AI that exploits vulnerable groups (e.g., children, disabled, addicted)?
Real-time biometric identification in public spaces (mass surveillance), except for narrow law enforcement use?
Subliminal manipulation that harms people?
✅ Yes → UNACCEPTABLE RISK (Prohibited, not allowed in EU). ❌ No → go to Step 2.
Step 2: Check for High-Risk Use Cases
👉 Does the AI system significantly affect people’s safety, rights, or livelihoods, such as:
Healthcare (diagnostics, medical devices)?
Employment (CV screening, hiring, worker management)?
Education (exam scoring, admissions)?
Credit scoring, loans, or access to essential public or private services?
Law enforcement, migration, asylum, or border control?
Critical infrastructure (transport, energy)?
✅ Yes → HIGH RISK (permitted, but subject to conformity assessment, risk management, documentation, human oversight, and registration). ❌ No → go to Step 3.
Step 3: Check for Transparency Obligations
👉 Does the AI system interact directly with people (chatbots), generate synthetic content (deepfakes), or perform emotion recognition or biometric categorization?
✅ Yes → LIMITED RISK (inform users clearly that AI is involved). ❌ No → go to Step 4.
Step 4: Everything Else
✅ MINIMAL RISK (no specific obligations; voluntary codes of conduct encouraged).
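The same tree can be expressed as a short pre-screening function. The sketch below is illustrative: the keyword lists and category mapping are simplifications of the Act’s categories, and its output is a starting point for assessment, not a legal classification:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "Unacceptable risk (prohibited)"
    HIGH = "High risk (strict obligations)"
    LIMITED = "Limited risk (transparency obligations)"
    MINIMAL = "Minimal risk (no specific obligations)"

# Illustrative keyword sets drawn from the categories discussed above
PROHIBITED_USES = {"social scoring", "subliminal manipulation",
                   "exploiting vulnerable groups", "real-time public biometric surveillance"}
HIGH_RISK_DOMAINS = {"healthcare", "employment", "education", "credit scoring",
                     "law enforcement", "migration", "critical infrastructure",
                     "essential services", "administration of justice"}
TRANSPARENCY_USES = {"chatbot", "deepfake", "emotion recognition", "biometric categorization"}

def classify(use_case: str, domain: str) -> RiskTier:
    """Walk the decision tree: prohibited -> high -> limited -> minimal."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("chatbot", "customer support").value)   # Limited risk
print(classify("cv screening", "employment").value)    # High risk
print(classify("spam filtering", "email").value)       # Minimal risk
```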
Article 15 of the EU AI Act: Accuracy, Robustness, and Cybersecurity
As AI adoption accelerates, especially in regulated or high-impact sectors, the European Union is setting the bar for responsible development. Article 15 of the EU AI Act lays out clear obligations for providers of high-risk AI systems—focusing on accuracy, robustness, and cybersecurity throughout the AI system’s lifecycle. Here’s what that means in practice—and why it matters now more than ever.
1. Security and Reliability From Day One
The AI Act demands that high-risk AI systems be designed with integrity and resilience from the ground up. That means integrating controls for accuracy, robustness, and cybersecurity not only at deployment but throughout the entire lifecycle. It’s a shift from reactive patching to proactive engineering.
2. Accuracy Is a Design Requirement
Gone are the days of vague performance promises. Under Article 15, providers must define and document expected accuracy levels and metrics in the user instructions. This transparency helps users and regulators understand how the system should perform—and flags any deviation from those expectations.
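For illustration, declared accuracy levels can be captured in machine-readable form and compared against observed performance; the metric names and thresholds below are hypothetical, not values from the Act:

```python
# Hypothetical declared accuracy metrics, as they might accompany the
# instructions for use of a high-risk system.
DECLARED_METRICS = {
    "accuracy": 0.94,      # minimum expected overall accuracy
    "sensitivity": 0.92,   # minimum expected true-positive rate
    "specificity": 0.90,   # minimum expected true-negative rate
}

def check_against_declaration(observed: dict) -> list:
    """Flag any observed metric that falls below its declared minimum."""
    return [
        f"{name}: observed {observed.get(name, 0.0):.3f} < declared {minimum:.3f}"
        for name, minimum in DECLARED_METRICS.items()
        if observed.get(name, 0.0) < minimum
    ]

# Example: results from a post-deployment evaluation run
deviations = check_against_declaration({"accuracy": 0.95, "sensitivity": 0.89, "specificity": 0.91})
for d in deviations:
    print("DEVIATION:", d)  # e.g. sensitivity below the documented level
```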
3. Guarding Against Exploitation
AI systems must also be robust against manipulation, whether it’s malicious input, adversarial attacks, or system misuse. This includes protecting against changes to the AI’s behavior, outputs, or performance caused by vulnerabilities or unauthorized interference.
4. Taming Feedback Loops in Learning Systems
Some AI systems continue learning even after deployment. That’s powerful—but dangerous if not governed. Article 15 requires providers to minimize or eliminate harmful feedback loops, which could reinforce bias or lead to performance degradation over time.
5. Compliance Isn’t Optional—It’s Auditable
The Act calls for documented procedures that demonstrate compliance with accuracy, robustness, and security standards. This includes verifying third-party contributions to system development. Providers must be ready to show their work to market surveillance authorities (MSAs) on request.
6. Leverage the Cyber Resilience Act
If your high-risk AI system also falls under the scope of the EU Cyber Resilience Act (CRA), good news: meeting the CRA’s essential cybersecurity requirements can also satisfy the AI Act’s demands. Providers should assess the overlap and streamline their compliance strategies.
7. Don’t Forget the GDPR
When personal data is involved, Article 15 interacts directly with the GDPR—especially Articles 5(1)(d), 5(1)(f), and 32, which address accuracy and security. If your organization is already GDPR-compliant, you’re on the right track, but Article 15 still demands additional technical and operational precision.
Final Thought:
Article 15 raises the bar for how we build, deploy, and monitor high-risk AI systems. It doesn’t just aim to prevent failures—it pushes providers to deliver trustworthy, resilient, and secure AI from the start. For organizations that embrace this proactively, it’s not just about avoiding fines—it’s about building AI systems that earn trust and deliver long-term value.
Lifecycle Risk Management Under the EU AI Act, providers of high-risk AI systems are obligated to establish a formal risk management system that spans the entire lifecycle of the AI system—from design and development to deployment and ongoing use.
Continuous Implementation This system must be established, implemented, documented, and maintained over time, ensuring that risks are continuously monitored and managed as the AI system evolves.
Risk Identification The first core step is to identify and analyze all reasonably foreseeable risks the AI system may pose. This includes threats to health, safety, and fundamental rights when used as intended.
Misuse Considerations Next, providers must assess the risks associated with misuse of the AI system—those that are not intended but are reasonably predictable in real-world contexts.
Post-Market Data Analysis The system must include regular evaluation of new risks identified through the post-market monitoring process, ensuring real-time adaptability to emerging concerns.
Targeted Risk Measures Following risk identification, providers must adopt targeted mitigation measures tailored to reduce or eliminate the risks revealed through prior assessments.
Residual Risk Management If certain risks cannot be fully eliminated, the system must ensure these residual risks are acceptable, using mitigation strategies that bring them to a tolerable level.
System Testing Requirements High-risk AI systems must undergo extensive testing to verify that the risk management measures are effective and that the system performs reliably and safely in all foreseeable scenarios.
Special Consideration for Vulnerable Groups The risk management system must account for potential impacts on vulnerable populations, particularly minors (under 18), ensuring their rights and safety are adequately protected.
Ongoing Review and Adjustment The entire risk management process should be dynamic, regularly reviewed and updated based on feedback from real-world use, incident reports, and changing societal or regulatory expectations.
🔐 Main Requirement Summary:
Providers of high-risk AI systems must implement a comprehensive, documented, and dynamic risk management system that addresses foreseeable and emerging risks throughout the AI lifecycle—ensuring safety, fundamental rights protection, and consideration for vulnerable groups.
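One way to operationalize these obligations is a lifecycle risk register. The sketch below is illustrative; the structure and field names are our own, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class RiskEntry:
    """One identified risk tracked across the AI system's lifecycle."""
    description: str
    affected_rights_or_safety: List[str]
    source: str                      # "intended use", "foreseeable misuse", "post-market monitoring"
    mitigation_measures: List[str]
    residual_risk_acceptable: bool
    affects_vulnerable_groups: bool
    last_reviewed: date

register: List[RiskEntry] = [
    RiskEntry(
        description="Lower diagnostic accuracy on under-represented demographic groups",
        affected_rights_or_safety=["health", "non-discrimination"],
        source="post-market monitoring",
        mitigation_measures=["augment training data", "per-group accuracy testing"],
        residual_risk_acceptable=False,
        affects_vulnerable_groups=False,
        last_reviewed=date(2025, 6, 1),
    ),
]

# Risks still needing attention: residual risk not yet at an acceptable level
open_items = [r for r in register if not r.residual_risk_acceptable]
print(f"{len(open_items)} risk(s) still above the acceptable threshold")
```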
EU AI Act: A Risk-Based Approach to Managing AI Compliance
1. Objective and Scope The EU AI Act aims to ensure that AI systems placed on the EU market are safe, respect fundamental rights, and encourage trustworthy innovation. It applies to both public and private actors who provide or use AI in the EU, regardless of whether they are based in the EU or not. The Act follows a risk-based approach, categorizing AI systems into four levels of risk: unacceptable, high, limited, and minimal.
2. Prohibited AI Practices Certain AI applications are completely banned because they violate fundamental rights. These include systems that manipulate human behavior, exploit vulnerabilities of specific groups, enable social scoring by governments, or use real-time remote biometric identification in public spaces (with narrow exceptions such as law enforcement).
3. High-Risk AI Systems AI systems used in critical sectors—like biometric identification, infrastructure, education, employment, access to public services, and law enforcement—are considered high-risk. These systems must undergo strict compliance procedures, including risk assessments, data governance checks, documentation, human oversight, and post-market monitoring.
4. Obligations for High-Risk AI Providers Providers of high-risk AI must implement and document a quality management system, ensure datasets are relevant and free from bias, establish transparency and traceability mechanisms, and maintain detailed technical documentation. They must also register their AI system in a publicly accessible EU database before placing it on the market.
5. Roles and Responsibilities The Act defines clear responsibilities for all actors in the AI supply chain—providers, importers, distributors, and deployers. Each has specific obligations based on their role. For instance, deployers of high-risk AI systems must ensure proper human oversight and inform individuals impacted by the system.
6. Limited and Minimal Risk AI For AI systems with limited risk (like chatbots), providers must meet transparency requirements, such as informing users that they are interacting with AI. Minimal-risk systems (e.g., spam filters or AI in video games) are largely unregulated, though developers are encouraged to voluntarily follow codes of conduct and ethical guidelines.
7. General Purpose AI Models General-purpose AI (GPAI) models, including foundation models like GPT, are subject to specific transparency obligations. Developers must provide technical documentation, summaries of training data, and usage instructions. Advanced GPAIs with systemic risks face additional requirements, including risk management and cybersecurity obligations.
8. Enforcement, Governance, and Sanctions Each Member State will designate a national supervisory authority, while the EU will establish a European AI Office to oversee coordination and enforcement. Non-compliance can result in fines of up to €35 million or 7% of annual global turnover, depending on the severity of the violation.
9. Timeline and Compliance Strategy The AI Act will come into effect in stages after formal adoption. Prohibited practices will be banned within six months; GPAI rules will apply after 12 months; and the core high-risk system obligations will become enforceable in 24 months. Businesses should begin gap assessments, build internal governance structures, and prepare for conformity assessments to ensure timely compliance.
For U.S. organizations operating in or targeting the EU market, preparation involves mapping AI use cases against the Act’s risk tiers, enhancing risk management practices, and implementing robust documentation and accountability frameworks. By aligning with the EU AI Act’s principles, U.S. firms can not only ensure compliance but also demonstrate leadership in trustworthy AI on a global scale.
A compliance readiness checklist for U.S. organizations preparing for the EU AI Act:
The AICM (AI Controls Matrix) is a cybersecurity and risk management framework developed by the Cloud Security Alliance (CSA) to help organizations manage AI-specific risks across the AI lifecycle.
AICM stands for AI Controls Matrix, and it is:
A risk and control framework tailored for Artificial Intelligence (AI) systems.
Built to address trustworthiness, safety, and compliance in the design, development, and deployment of AI.
Structured across 18 security domains with 243 control objectives.
Aligned with existing standards like:
ISO/IEC 42001 (AI Management Systems)
ISO/IEC 27001
NIST AI Risk Management Framework
BSI AIC4
EU AI Act
Artificial Intelligence Control Matrix (AICM): 243 control objectives across 18 security domains

Domain No. | Domain Name | Example Controls Count
1 | Governance & Leadership | 15
2 | Risk Management | 14
3 | Compliance & Legal | 13
4 | AI Ethics & Responsible AI | 18
5 | Data Governance | 16
6 | Model Lifecycle Management | 17
7 | Privacy & Data Protection | 15
8 | Security Architecture | 13
9 | Secure Development Practices | 15
10 | Threat Detection & Response | 12
11 | Monitoring & Logging | 12
12 | Access Control | 14
13 | Supply Chain Security | 13
14 | Business Continuity & Resilience | 12
15 | Human Factors & Awareness | 14
16 | Incident Management | 14
17 | Performance & Explainability | 13
18 | Third-Party Risk Management | 13

Total control objectives: 243
Legend: 📘 = Policy Control 🔧 = Technical Control 🧠 = Human/Process Control 🛡️ = Risk/Compliance Control
🧩 Key Features
Covers traditional cybersecurity and AI-specific threats (e.g., model poisoning, data leakage, prompt injection).
Applies across the entire AI lifecycle—from data ingestion and training to deployment and monitoring.
Includes a companion tool: the AI-CAIQ (Consensus Assessment Initiative Questionnaire for AI), enabling organizations to self-assess or vendor-assess against AICM controls.
🎯 Why It Matters
As AI becomes pervasive in business, compliance, and critical infrastructure, traditional frameworks (like ISO 27001 alone) are no longer enough. AICM helps organizations:
Implement responsible AI governance
Identify and mitigate AI-specific security risks
Align with upcoming global regulations (like the EU AI Act)
Demonstrate AI trustworthiness to customers, auditors, and regulators
The 18 security domains covered by the AICM framework include:
Audit and Assurance
Application and Interface Security
Business Continuity Management and Operational Resilience
Supply Chain Management, Transparency and Accountability
Threat & Vulnerability Management
Universal Endpoint Management
Gap Analysis Template based on AICM (Artificial Intelligence Control Matrix)
# | Domain | Control Objective | Current State (1-5) | Target State (1-5) | Gap | Responsible | Evidence/Notes | Remediation Action | Due Date
1 | Governance & Leadership | AI governance structure is formally defined. | 2 | 5 | 3 | John D. | No documented AI policy | Draft governance charter | 2025-08-01
2 | Risk Management | AI risk taxonomy is established and used. | 3 | 4 | 1 | Priya M. | Partial mapping | Align with ISO 23894 | 2025-07-25
3 | Privacy & Data Protection | AI models trained on PII have privacy controls. | 1 | 5 | 4 | Sarah W. | Privacy review not performed | Conduct DPIA | 2025-08-10
4 | AI Ethics & Responsible AI | AI systems are evaluated for bias and fairness. | 2 | 5 | 3 | Ethics Board | Informal process only | Implement AI fairness tools | 2025-08-15
… | … | … | … | … | … | … | … | … | …
🔢 Scoring Scale (Current & Target State)
1 – Not Implemented
2 – Partially Implemented
3 – Implemented but Not Reviewed
4 – Implemented and Reviewed
5 – Optimized and Continuously Improved
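Gap and coverage figures for such a template can be computed mechanically. The sketch below uses the example rows above; the coverage formula (current divided by target) is one reasonable convention, not something AICM prescribes:

```python
rows = [
    # (domain, control objective, current state 1-5, target state 1-5)
    ("Governance & Leadership", "AI governance structure is formally defined.", 2, 5),
    ("Risk Management", "AI risk taxonomy is established and used.", 3, 4),
    ("Privacy & Data Protection", "AI models trained on PII have privacy controls.", 1, 5),
    ("AI Ethics & Responsible AI", "AI systems are evaluated for bias and fairness.", 2, 5),
]

for domain, objective, current, target in rows:
    gap = target - current
    coverage = current / target * 100  # % of the target maturity already achieved
    print(f"{domain:30s} gap={gap}  coverage={coverage:4.0f}%")

overall = sum(c for *_, c, _ in rows) / sum(t for *_, t in rows) * 100
print(f"Overall coverage against target: {overall:.0f}%")
```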
The AICM contains 243 control objectives distributed across 18 security domains, analyzed through five critical pillars: Control Type, Control Applicability and Ownership, Architectural Relevance, LLM Lifecycle Relevance, and Threat Category.
It maps to leading standards, including NIST AI RMF 1.0 (via NIST AI 600-1) and BSI AIC4 (included today), with mappings to ISO 42001 and ISO 27001 to follow next month.
The AICM will serve as the framework for CSA’s STAR for AI organizational certification program. Any AI model provider, cloud service provider, or SaaS provider will want to go through this program, and CSA is leaving the door open for enterprises, which it believes will also find the certification worth considering. The release includes the Consensus Assessment Initiative Questionnaire for AI (AI-CAIQ), so CSA encourages organizations to start demonstrating their alignment with the AICM soon.
CSA will also adapt its Valid-AI-ted AI-based automated scoring tool to analyze AI-CAIQ submissions.
Mapping against ISO 42001:2023 and/or the EU Artificial Intelligence (AI) Act
The AI Act & ISO 42001 Gap Analysis Tool is a dual-purpose resource that helps organizations assess their current AI practices against both legal obligations under the EU AI Act and international standards like ISO/IEC 42001:2023. It allows users to perform a tailored gap analysis based on their specific needs, whether aligning with ISO 42001, the EU AI Act, or both. The tool facilitates early-stage project planning by identifying compliance gaps and setting actionable priorities.
With the EU AI Act now in force and enforcement of its prohibitions on certain AI practices beginning in February 2025, organizations face growing pressure to proactively manage AI risk. Implementing an AI management system (AIMS) aligned with ISO 42001 can reduce compliance risk and meet rising international expectations. As AI becomes more embedded in business operations, conducting a gap analysis has become essential for shaping a sound, legally compliant, and responsible AI strategy.
Feedback: This tool addresses a timely and critical need in the AI governance landscape. By combining legal and best-practice assessments into one streamlined solution, it helps reduce complexity for compliance teams. Highlighting the upcoming enforcement deadlines and the benefits of ISO 42001 certification reinforces urgency and practicality.
The AI Act & ISO 42001 Gap Analysis Tool is a user-friendly solution that helps organizations quickly and effectively assess their current AI practices against both the EU AI Act and the ISO/IEC 42001:2023 standard. With intuitive features, customizable inputs, and step-by-step guidance, the tool adapts to your organization’s specific needs—whether you’re looking to meet regulatory obligations, align with international best practices, or both. Its streamlined interface allows even non-technical users to conduct a thorough gap analysis with minimal training.
Designed to integrate seamlessly into your project planning process, the tool delivers clear, actionable insights into compliance gaps and priority areas. As enforcement of the EU AI Act begins in early 2025, and with increasing global focus on AI governance, this tool provides not only legal clarity but also practical, accessible support for developing a robust AI management system. By simplifying the complexity of AI compliance, it empowers teams to make informed, strategic decisions faster.
What does the tool provide?
Split into two sections, EU AI Act and ISO 42001, so you can perform analyses for both or an individual analysis.
The EU AI Act section is divided into six sets of questions: general requirements, entity requirements, assessment and registration, general-purpose AI, measures to support innovation, and post-market monitoring.
Identify which requirements and sections of the AI Act are applicable by completing the provided screening questions. The tool will automatically remove any non-applicable questions.
The ISO 42001 section is divided into two sets of questions: the six ISO 42001 clauses, and the ISO 42001 controls outlined in Annex A.
Executive summary pages for both analyses, including by section or clause/control, the number of requirements met and compliance percentage totals.
A clear indication of strong and weak areas through colour-coded analysis graphs and tables to highlight key areas of development and set project priorities.
The tool is designed to work in any Microsoft environment; it does not need to be installed like software and does not depend on complex databases, though it does rely on human input to complete the analysis.
Items that can support an ISO 42001 (AIMS) implementation project
Scenario: A healthcare startup in the EU develops an AI system to assist doctors in diagnosing skin cancer from images. The system uses machine learning to classify lesions as benign or malignant.
1. Risk-Based Classification
EU AI Act Requirement: Classify the AI system into one of four risk categories: unacceptable, high-risk, limited-risk, minimal-risk.
Interpretation in Scenario: The diagnostic system qualifies as a high-risk AI because it affects people’s health decisions, thus requiring strict compliance with specific obligations.
2. Data Governance & Quality
EU AI Act Requirement: High-risk AI systems must use high-quality datasets to avoid bias and ensure accuracy.
Interpretation in Scenario: The startup must ensure that training data are representative of all demographic groups (skin tones, age ranges, etc.) to reduce bias and avoid misdiagnosis.
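A first-pass representativeness check over the training set might look like the sketch below; the Fitzpatrick skin-type labels and the 10% minimum share are illustrative assumptions, not thresholds from the Act:

```python
from collections import Counter

# Hypothetical metadata for each training image: Fitzpatrick skin-type group
training_skin_types = ["I", "II", "II", "III", "III", "III", "IV", "V", "II", "I"]

MIN_SHARE = 0.10  # illustrative minimum share per group

counts = Counter(training_skin_types)
total = len(training_skin_types)
for group in ["I", "II", "III", "IV", "V", "VI"]:
    share = counts.get(group, 0) / total
    status = "OK" if share >= MIN_SHARE else "UNDER-REPRESENTED"
    print(f"Skin type {group}: {share:.0%} of training data -> {status}")
```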
3. Transparency & Human Oversight
EU AI Act Requirement: Users should be aware they are interacting with an AI system; meaningful human oversight is required.
Interpretation in Scenario: Doctors must be clearly informed that the diagnosis is AI-assisted and retain final decision-making authority. The system should offer explainability features (e.g., heatmaps on images to show reasoning).
4. Robustness, Accuracy, and Cybersecurity
EU AI Act Requirement: High-risk AI systems must be technically robust and secure.
Interpretation in Scenario: The AI tool must maintain high accuracy under diverse conditions and protect patient data from breaches. It should include fallback mechanisms if anomalies are detected.
5. Accountability and Documentation
EU AI Act Requirement: Maintain detailed technical documentation and logs to demonstrate compliance.
Interpretation in Scenario: The startup must document model architecture, training methodology, test results, and monitoring processes, and be ready to submit these to regulators if required.
6. Registration and CE Marking
EU AI Act Requirement: High-risk systems must be registered in an EU database and undergo conformity assessments.
Interpretation in Scenario: The startup must submit their system to a notified body, demonstrate compliance, and obtain CE marking before deployment.