Apr 23 2026

The 2026 AI Compliance Checklist: 60 Controls Across 10 Domains

Published by DISC InfoSec · AI Governance & Cybersecurity

If you run security, compliance, or AI at a B2B SaaS or financial services company, you have probably noticed something uncomfortable in the last six months: every framework you used to live by has grown an AI annex, every enterprise customer has added an AI section to their vendor questionnaire, and every regulator has decided 2026 is the year they stop asking nicely.

The EU AI Act’s high-risk obligations begin enforcement in August 2026. ISO/IEC 42001 has gone from “interesting standard” to “procurement requirement” inside eighteen months. The NIST AI RMF is quietly becoming the lingua franca of U.S. enterprise buyers. Article 22 of the GDPR is being dusted off and pointed at automated decisions that nobody bothered to call “AI” two years ago.

And most AI compliance programs we walk into are still a binder of policies and a hopeful Notion page.

We built the 2026 AI Compliance Checklist because the gap between having a policy and having a program an auditor will defend is where every consulting engagement we run actually lives. Sixty controls. Ten domains. Mapped to the four frameworks that matter (ISO/IEC 42001, the EU AI Act, NIST AI RMF, and ISO/IEC 27001), with cross-references to GDPR, HIPAA, and SOC 2 where they apply.

Open the checklist →


Why most AI compliance efforts stall

The pattern is consistent enough that we can name it. Companies start with enthusiasm: leadership signs an AI policy, someone is named “AI lead,” a vendor questionnaire gets updated. Six months later the same company cannot answer four questions:

  1. Which of our AI systems are high-risk under the EU AI Act, and who decided?
  2. What is our Statement of Applicability for ISO 42001, and is it defensible?
  3. If a customer asks for our AI sub-processor list tomorrow, can we produce it?
  4. If a regulator asks for our serious-incident reporting procedure, is it written down?

These are not exotic questions. They are the first four questions in any audit. The reason programs stall on them is not that the standards are unclear; the standards are perfectly clear. The reason they stall is that nobody owns the implementation work, and nobody on the team has done it before.

That’s the gap the checklist is built around.

The 10 domains

Each domain reflects something we have implemented in production for a real client. Not theory. Not what we read in a study guide.

1. AI Governance Foundation

The boring stuff that determines whether anything else matters. A board-approved AI policy. A named, accountable AI owner (CAIO, vCAIO, or equivalent) with the authority to halt deployments. A cross-functional AI council with a written charter. A live AI system inventory that includes the shadow IT your engineers haven’t told you about. An Acceptable Use Policy with annual acknowledgment. And, as of February 2025, an AI literacy program under EU AI Act Article 4 if you operate in the EU market.

If these six controls are not in place, the rest of your program is decorative.
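
A rough illustration of what a defensible inventory record captures follows; the field names and the audit check at the end are hypothetical, so adapt them to whatever GRC tooling you already run:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row in a live AI system inventory (illustrative fields only)."""
    system_id: str
    name: str
    owner: str                   # a named accountable individual, not a team alias
    vendor_or_internal: str      # e.g. "OpenAI API" or "internal fine-tune"
    business_purpose: str
    data_categories: list[str] = field(default_factory=list)  # e.g. ["PII", "PHI"]
    eu_ai_act_tier: str = "unclassified"   # prohibited / high / limited / minimal
    last_reviewed: date | None = None
    shadow_it: bool = False      # discovered outside the approval process

inventory = [
    AISystemRecord(
        system_id="ai-001",
        name="Support ticket triage",
        owner="jane.doe",
        vendor_or_internal="OpenAI API",
        business_purpose="Route inbound tickets by urgency",
        data_categories=["customer PII"],
        shadow_it=True,   # found in an engineer's side project, now registered
    ),
]

# A control test an internal auditor can actually run: nothing stays unclassified.
unclassified = [r.system_id for r in inventory if r.eu_ai_act_tier == "unclassified"]
print(f"Systems awaiting risk classification: {unclassified}")
```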

2. EU AI Act Risk Classification

The single most consequential decision in your entire program is how you classify each AI system. Get it wrong and the rest of your effort is misallocated: over-investing in low-risk systems, under-investing in the ones that will get you fined. The checklist walks you through prohibited use cases (Article 5), high-risk Annex III mappings, GPAI obligations under Article 53 if you deploy or fine-tune foundation models, and the post-market monitoring plan that everyone forgets until they need it.
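
To make "who decided?" answerable, record each classification decision and its rationale as data. A deliberately simplified sketch follows; the real Article 5 and Annex III analysis is a documented legal judgment, not a lookup table, and the category sets below are partial illustrations only:

```python
# Simplified triage sketch. Real EU AI Act classification is a documented legal
# judgment; this only shows capturing the decision, the decider, and the rationale.
PROHIBITED_USES = {"social_scoring", "emotion_inference_workplace"}   # Article 5, partial
ANNEX_III_AREAS = {"employment", "credit_scoring", "education", "essential_services"}

def classify(use_case: str, area: str, is_gpai: bool, decided_by: str) -> dict:
    if use_case in PROHIBITED_USES:
        tier = "prohibited"
    elif area in ANNEX_III_AREAS:
        tier = "high-risk"       # triggers Annex III obligations and post-market monitoring
    elif is_gpai:
        tier = "gpai"            # Article 53 obligations for the model provider
    else:
        tier = "minimal"
    return {"use_case": use_case, "tier": tier, "decided_by": decided_by,
            "rationale": f"area={area}, gpai={is_gpai}"}

print(classify("resume_screening", "employment", is_gpai=False,
               decided_by="AI council, 2026-03-12"))
# -> tier "high-risk": employment is an Annex III area
```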

3. ISO/IEC 42001 AIMS

The certifiable AI Management System scaffolding. Scope statement. Context analysis. Measurable objectives. Statement of Applicability covering all 38 Annex A controls. Internal audit cycle. Management review. Six controls, and the difference between a program that passes a Stage 2 audit and one that doesn’t.
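
One habit that survives Stage 2: treat the Statement of Applicability as structured data rather than a Word document, so gaps are machine-checkable. A minimal sketch, with placeholder control IDs and evidence names:

```python
import csv, io

# Sketch of a machine-checkable SoA: every Annex A control gets an explicit
# include/exclude decision with a justification. IDs and evidence are placeholders.
soa_rows = [
    {"control": "A.2.2", "included": "yes",
     "justification": "AI policy approved by board 2026-01", "evidence": "policy-v3.pdf"},
    {"control": "A.6.2", "included": "yes",
     "justification": "Impact assessment run per deployment", "evidence": "aia-register.xlsx"},
    {"control": "A.10.3", "included": "no",
     "justification": "No third-party model training in scope", "evidence": "scope-statement.md"},
]

# The audit-readiness check: no control may lack a justification, included or not.
gaps = [r["control"] for r in soa_rows if not r["justification"].strip()]
assert not gaps, f"SoA entries missing justification: {gaps}"

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["control", "included", "justification", "evidence"])
writer.writeheader()
writer.writerows(soa_rows)
print(buf.getvalue())
```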

We know this domain particularly well because we are currently deploying it at ShareVault, a virtual data room platform serving M&A and financial services clients. ShareVault achieved ISO 42001 certification with DISC InfoSec serving as internal auditor and SenSiba conducting the Stage 2 audit. The same playbook is in the checklist.

4. NIST AI RMF Alignment

The four functions (GOVERN, MAP, MEASURE, MANAGE) give you a vocabulary U.S. enterprise buyers already understand. Most of the GOVERN function maps cleanly onto your ISO 42001 work, so you can reuse artifacts. The GenAI Profile (NIST AI 600-1) lists twelve risks specific to generative AI; if you deploy LLM-based systems and you have not reviewed it, you are flying blind.
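
The reuse is easiest to see as a crosswalk from RMF subcategories to artifacts you already produced for ISO 42001. The mappings below are illustrative examples, not an authoritative alignment; build yours from the two source documents:

```python
# Illustrative crosswalk: which existing ISO 42001 artifacts can serve as
# evidence for NIST AI RMF GOVERN subcategories. Example mappings only.
crosswalk = {
    "GOVERN 1.1 (legal/regulatory requirements understood)": ["ISO 42001 context analysis (Clause 4)"],
    "GOVERN 1.2 (trustworthy-AI policies in place)":         ["Board-approved AI policy", "SoA"],
    "GOVERN 2.1 (roles and responsibilities documented)":    ["AI council charter", "CAIO appointment"],
    "GOVERN 4.1 (risk culture and incident reporting)":      ["AI incident response plan"],
}

for rmf_item, artifacts in crosswalk.items():
    reuse = ", ".join(artifacts) if artifacts else "GAP: new artifact needed"
    print(f"{rmf_item}\n    evidence: {reuse}")
```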

5. Data Governance for AI

Most AI failures are data failures wearing a model’s clothes. Training, validation, and test data lineage. Bias and representativeness assessment. Pre-training data quality controls. PII and PHI handling per GDPR or HIPAA. Retention and right-to-deletion procedures that actually cover model artifacts, because embeddings and fine-tuned weights derived from personal data are personal data, and a deletion request that doesn’t reach them is incomplete.
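
In practice, the deletion question reduces to whether a request traverses your lineage graph. A sketch, assuming a hypothetical parent-to-child lineage store populated from pipeline metadata:

```python
# Sketch: propagate a deletion request through a data-lineage graph so it reaches
# derived artifacts (embeddings, fine-tuned checkpoints), not just source rows.
# The lineage map is hypothetical; yours comes from your pipeline metadata.
lineage = {
    "crm_exports/2025Q3.parquet": ["embeddings/support-index-v2", "finetunes/triage-v4"],
    "embeddings/support-index-v2": [],
    "finetunes/triage-v4": ["finetunes/triage-v5"],   # v5 was trained from v4
    "finetunes/triage-v5": [],
}

def deletion_scope(source: str) -> set[str]:
    """Every artifact derived, transitively, from the source dataset."""
    seen, stack = set(), [source]
    while stack:
        node = stack.pop()
        for child in lineage.get(node, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

affected = deletion_scope("crm_exports/2025Q3.parquet")
print(f"Deletion request must also cover: {sorted(affected)}")
# A request that stops at the parquet file is incomplete.
```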

6. Third-Party & Vendor AI Risk

Most of your AI risk lives in someone else’s data center. A standard SIG questionnaire does not cover training-on-customer-data, model lineage, or sub-processor changes. Your DPAs probably need new clauses. Your sub-processor list almost certainly needs to include AI providers, and to track when they change. Model cards or system cards should be on file for each vendor model in use; if a vendor refuses to share one, that is itself a risk signal.
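
Sub-processor drift is cheap to detect if the disclosed list and the in-use list are both kept as data. A sketch with illustrative vendor entries:

```python
# Sketch: catch drift between what you disclose to customers and what is in use.
disclosed   = {"OpenAI (inference)", "Pinecone (vector store)"}
in_use      = {"OpenAI (inference)", "Pinecone (vector store)", "Anthropic (eval pipeline)"}
cards_filed = {"OpenAI (inference)"}   # model/system cards on file

undisclosed = in_use - disclosed
stale       = disclosed - in_use
no_card     = in_use - cards_filed

if undisclosed:
    print(f"NOTIFY CUSTOMERS: undisclosed AI sub-processors in use: {sorted(undisclosed)}")
if stale:
    print(f"CLEAN UP: disclosed but no longer in use: {sorted(stale)}")
if no_card:
    print(f"RISK SIGNAL: no model/system card on file for: {sorted(no_card)}")
```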

7. Transparency & Documentation

If you cannot explain a system to a regulator in writing, you do not actually understand it. System cards. User-facing AI disclosure where Article 50 of the EU AI Act requires it (chatbots must self-identify; synthetic media must be labeled). Watermarking or provenance signals for synthetic content. Decision logs for high-risk automated decisions. A public-facing trust center page, because procurement teams will look for it before they ask you for it.
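
For decision logs, the test is whether a single record lets you reconstruct the decision later. A sketch of one log entry; field names are illustrative, and production storage would be append-only:

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch of a decision-log entry for a high-risk automated decision: enough to
# reconstruct the decision for a regulator or an Article 22 contest.
def log_decision(system_id: str, model_version: str, inputs: dict,
                 output: str, human_reviewer: str | None) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "human_reviewer": human_reviewer,   # None means wholly automated
    }
    # In production this would append to write-once storage; here we just print it.
    print(json.dumps(entry, indent=2))
    return entry

log_decision("ai-007", "credit-scorer-v3.1",
             {"applicant_id": "a-991", "features": "..."},
             output="declined", human_reviewer=None)
```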

8. Human Oversight

“Human-in-the-loop” loses meaning when the human is rubber-stamping at scale. The checklist forces you to define oversight roles, document and rehearse override procedures, build unambiguous escalation paths, and train reviewers, including on automation bias, which is the number one failure mode of HITL systems. Where decisions are wholly automated, GDPR Article 22 rights to explanation and contest must be honored with documented procedures.
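
Rubber-stamping is measurable. A sketch of a simple detector over review-queue telemetry; the records and thresholds are arbitrary examples to calibrate against your own baseline:

```python
from collections import defaultdict

# Sketch: if a reviewer approves nearly everything in nearly no time, the
# oversight control exists on paper only. Illustrative data and thresholds.
reviews = [
    {"reviewer": "alice", "seconds": 4,  "overrode_model": False},
    {"reviewer": "alice", "seconds": 3,  "overrode_model": False},
    {"reviewer": "alice", "seconds": 5,  "overrode_model": False},
    {"reviewer": "bob",   "seconds": 75, "overrode_model": True},
    {"reviewer": "bob",   "seconds": 40, "overrode_model": False},
]

stats = defaultdict(lambda: {"n": 0, "time": 0, "overrides": 0})
for r in reviews:
    s = stats[r["reviewer"]]
    s["n"] += 1
    s["time"] += r["seconds"]
    s["overrides"] += int(r["overrode_model"])

for who, s in stats.items():
    avg_seconds = s["time"] / s["n"]
    override_rate = s["overrides"] / s["n"]
    if avg_seconds < 10 and override_rate == 0:
        print(f"{who}: {avg_seconds:.0f}s/review, 0% overrides -> likely automation bias")
```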

9. Security & Adversarial Testing

Your existing AppSec program does not cover prompt injection, model extraction, or training data poisoning. STRIDE does not cover evasion or membership inference attacks. You need a threat-modeling framework built for AI (MITRE ATLAS is the current best-of-breed) and you need red-teaming with current attack libraries, not last year’s. Output filtering and PII-leak detection at inference time are now essential, especially for any RAG pipeline pulling from internal data.
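
As a floor, inference-time output filtering can be as simple as pattern-based redaction before a response leaves the pipeline; production systems typically layer a classifier on top. A sketch:

```python
import re

# Sketch of an inference-time output filter for a RAG pipeline: redact PII
# patterns before the response reaches the user, and log the leak as an event.
# Regexes are a floor, not a ceiling.
PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def filter_output(text: str) -> tuple[str, list[str]]:
    hits = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, hits

safe, leaks = filter_output("Per our records, jane@example.com's SSN is 123-45-6789.")
print(safe)   # redacted response
if leaks:
    print(f"PII leak blocked at inference: {leaks} -> log as a security event")
```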

10. Incident Response & Monitoring

Drift is silent. Failure is loud. The checklist closes with the AI-specific incident response plan most companies don’t have, production drift monitoring with thresholds reviewed quarterly, the Article 73 serious-incident reporting criteria (15-day clock for high-risk systems), model change management with documented approvals, and a post-incident review process that actually feeds back into your AI risk register.
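
For drift thresholds, the Population Stability Index is a common starting signal. A sketch; the 0.10 and 0.25 cutoffs are conventional rules of thumb, not a substitute for your quarterly threshold review:

```python
import math

# Sketch: PSI as a drift signal on one model input, over pre-binned
# distributions (each list sums to 1). Thresholds are conventional defaults.
def psi(expected: list[float], actual: list[float]) -> float:
    eps = 1e-6
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.50, 0.25]   # feature distribution at deployment
current  = [0.10, 0.45, 0.45]   # distribution observed this week

score = psi(baseline, current)
if score > 0.25:
    print(f"PSI={score:.3f}: significant drift, open an incident and review the model")
elif score > 0.10:
    print(f"PSI={score:.3f}: moderate drift, flag for the quarterly review")
else:
    print(f"PSI={score:.3f}: stable")
```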

If your incidents don’t change anything, you are not learning. You are just absorbing.


Why DISC InfoSec

We are not a generalist firm with an AI practice grafted on. AI governance and cybersecurity are the practice. The principal consultant, backed by 16+ years across NASA, Dell, Lam Research, and O’Reilly Media, with CISSP, CISM, ISO 27001 Lead Implementer, and ISO 42001 certifications, is the person you actually work with. No partner-and-pyramid model. No junior consultants billing hours to learn ISO 42001 on your engagement.

This matters more than it sounds. AI governance is one of those domains where coordination overhead inside a consulting firm consumes most of the value the firm could deliver. Our vCAIO model is the structural answer: one expert, embedded, accountable.

And we are doing the work, not just teaching it. The ShareVault ISO 42001 deployment is live. The Annex A controls are operational. The Stage 2 audit is closed. Every control in the 2026 checklist is there because we have implemented it ourselves or watched someone else fail to implement it.

What to do this week

If you have not started: open the checklist, share it with your AI council (or convene one), and run through Section 1. Most companies discover their gap inside the first six controls.

If you are mid-program and stuck: Sections 2 and 3 are usually where we find the load-bearing problems. EU AI Act classification disagreements and ISO 42001 scope drift kill more programs than any other two issues combined.

If you want a second set of eyes from a senior practitioner who has done this end-to-end, that is exactly what the vCAIO engagement is built for.


→ Open the 2026 AI Compliance Checklist

DISC InfoSec · AI Governance & Cybersecurity for B2B SaaS and Financial Services
https://deurainfosec.com · info@deurainfosec.com · 707-998-5164


