Board-approved AI policy in force 42001 A.2.2
Signed, dated, and tied to a named executive sponsor. Not a Notion page.
Accountable AI owner appointed (CAIO, vCAIO, or equivalent) 42001 §5.3
One person whose calendar reflects this responsibility. Authority to halt deployments.
AI Council or governance committee with written charter
Cross-functional: Legal, Security, Engineering, Product, HR. Quorum and cadence defined.
Living AI system inventory (register) 42001 A.6.2.4
Every model, embedding, agent, copilot, and shadow-IT instance. Updated quarterly minimum.
AI Acceptable Use Policy distributed to all staff
Prohibited inputs, sanctioned tools, disclosure expectations. Acknowledged on hire and annually.
AI literacy program operational EU AI Act Art. 4
Required since 2 February 2025 for providers and deployers of AI systems in the EU market. Track completion.
Every AI system tiered: prohibited / high-risk / limited / minimal Art. 5–6
Document the rationale. Tiering disagreements are normal and need a paper trail.
Prohibited use cases identified and excluded from roadmap Art. 5
Social scoring, emotion inference at work/school, untargeted scraping to build facial-recognition databases, etc.
High-risk Annex III mappings completed
Employment, credit, biometrics, critical infrastructure, education, law enforcement.
GPAI obligations assessed if you deploy or modify foundation models Art. 53
Technical documentation, copyright policy, training data summary — even for fine-tunes.
Conformity assessment & CE-marking plan for high-risk systems
Internal control or notified body? Decide before the audit, not during it.
Post-market monitoring plan documented Art. 72
How you'll detect performance drift and serious incidents in production.
AIMS scope statement approved §4.3
Boundaries, exclusions, justifications. The scope question kills more audits than the controls do.
Context analysis & interested parties register §4.1–4.2
Internal/external issues. Stakeholder needs traced to controls.
Measurable AI objectives published §6.2
"Improve fairness" is not measurable. "Reduce demographic disparity in model X to <5%" is.
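A sketch of how such an objective becomes testable. Demographic parity difference is one common disparity metric; both the metric choice and the 5% threshold are illustrative here, not prescribed by the standard:

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Max difference in positive-outcome rates across groups.

    records: iterable of (group, outcome) pairs, outcome in {0, 1}.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# The written objective "gap < 5%" becomes a checkable number.
sample = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
gap = demographic_parity_gap(sample)  # |2/3 - 1/3| = 1/3
```

An objective phrased this way can be wired straight into a CI check or a monitoring dashboard, which is exactly what §6.2 asks measurability to buy you.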
Statement of Applicability covering all 38 Annex A controls
Each control marked applicable/not-applicable with justification. No skipping.
Internal audit cycle executed at least once §9.2
Auditor independent of audited area. Findings tracked to closure.
Management review held with documented inputs and outputs §9.3
Top management. Real decisions. Minutes preserved.
Practitioner Anchor
"We're currently deploying ISO 42001 at ShareVault — a virtual data room serving M&A and financial services clients. The same playbook we use to advise clients like yours."
DISC InfoSec served as internal auditor for ShareVault's certification audit, conducted by SenSiba.
GOVERN: policies, accountability, and risk culture documented
Maps cleanly to ISO 42001 §5. Reuse the same artifacts.
MAP: context, intended use, and identified risks per system
Categories per AI RMF 1.0 — bias, privacy, security, safety, environmental.
MEASURE: metrics defined and instrumented for each risk
Quantitative where possible. Qualitative is acceptable when justified.
MANAGE: response procedures for prioritized risks
Treat / transfer / tolerate / terminate — applied per risk, not per system.
GenAI Profile reviewed and applied to LLM-based systems
NIST AI 600-1 lists 12 risks specific to generative AI. Use it.
Training, validation, and test data lineage documented
Source, license, collection date, transformations applied. Per dataset.
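One lightweight way to make lineage auditable is a per-dataset record whose fields mirror the checklist line. The structure and field names below are an illustrative sketch, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetLineage:
    """One record per dataset: source, license, collection date, transforms."""
    name: str
    source: str             # URL or internal system of record
    license: str            # e.g. "CC-BY-4.0" or "proprietary"
    collected_on: str       # ISO date
    split: str              # "train" | "validation" | "test"
    transformations: list = field(default_factory=list)  # ordered, append-only

rec = DatasetLineage(
    name="reviews-v1",          # hypothetical dataset
    source="internal-dw",
    license="proprietary",
    collected_on="2025-01-15",
    split="train",
)
rec.transformations.append("lowercased")
```

Kept in version control next to the training code, these records answer the auditor's "where did this data come from?" without archaeology.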
Bias and representativeness assessment performed
Don't just check protected classes — check the classes that matter for your use case.
Data quality controls in place pre-training
Deduplication, PII scrubbing, label-quality checks. Logged outcomes.
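The "logged outcomes" requirement can be sketched as a pipeline step that deduplicates and redacts while counting what it did. The email regex is deliberately naive, a stand-in for a real scrubber:

```python
import hashlib
import re

# Naive illustrative pattern; production scrubbers need NER plus tuned rules.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def clean_corpus(texts):
    """Deduplicate by content hash, redact emails, and return an audit log."""
    seen, kept, log = set(), [], {"dupes": 0, "redactions": 0}
    for t in texts:
        h = hashlib.sha256(t.encode()).hexdigest()
        if h in seen:
            log["dupes"] += 1          # exact duplicate dropped
            continue
        seen.add(h)
        redacted, n = EMAIL.subn("[EMAIL]", t)
        log["redactions"] += n          # how many PII hits were scrubbed
        kept.append(redacted)
    return kept, log
```

The point is the returned `log`: persist it per training run so the control leaves evidence, not just clean data.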
PII / PHI handling aligned to GDPR, HIPAA, or applicable regime
Lawful basis identified. DPIA completed where required. Cross-border flows mapped.
Data retention and right-to-deletion procedures cover model artifacts
Embeddings and fine-tuned weights can remain personal data when derived from personal data. Plan for it.
Synthetic data use documented and validated
Generation method, intended substitution, and tests showing it didn't import bias.
AI-specific vendor assessment questionnaire deployed
Beyond the standard SIG. Training data sources, retention, model lineage, opt-outs.
DPAs and MSAs updated for AI processing
Explicit clauses on training-on-customer-data, sub-processors, model output ownership.
Sub-processor disclosure includes AI providers, with changes tracked
When OpenAI adds a region or Anthropic adds a sub-processor, your customers want to know.
Model card or system card collected for each vendor model in use
If they won't share one, that's a risk signal.
Contractual SLAs cover availability, accuracy, and incident notification
"Best efforts" is not an SLA. Numbers are.
System card or model documentation maintained for each deployed system
Purpose, capabilities, limitations, training data summary, performance metrics.
User-facing AI disclosure where required EU AI Act Art. 50
Chatbots must self-identify. Synthetic media must be labeled. Even outside the EU, this is becoming table stakes.
Watermarking or provenance signals for synthetic content
C2PA, SynthID, or equivalent. Document why if you've chosen not to.
Decision logs retained for high-risk automated decisions
Inputs, model version, output, human override (if any). Sufficient for after-the-fact review.
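A minimal sketch of such a log entry, capturing exactly the fields the checklist names (the function and field names are illustrative):

```python
import hashlib
import json
from datetime import datetime, timezone

def decision_record(inputs, model_version, output, override_by=None):
    """Build one audit-log entry for an automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_override": override_by,  # None = no human intervened
    }
    # A content hash over the canonical JSON makes later tampering detectable.
    record["sha256"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record
```

Append-only storage with retention matched to your legal review window is what turns this from logging into evidence.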
Public-facing transparency report or trust center page
Buyers will look for it. Procurement teams will require it.
Oversight role and authority defined per high-risk system
Who can intervene. What they can do. How fast they can act.
Override and rollback procedures documented and rehearsed
Tabletop at least annually. The first time you need it shouldn't be in production.
Escalation paths for adverse outcomes are unambiguous
Named individuals. Backup chain. Out-of-hours coverage if 24/7 system.
Reviewer training completed and refreshed annually
Including automation bias awareness. The number-one failure mode of HITL is the H trusting the AI too much.
Article 22 GDPR rights honored where decisions are wholly automated
Right to explanation, contest, and human review. Documented procedures.
Where most programs stall
Frameworks are clear. Implementation is where engagements die.
Most teams reach this point with a binder full of policies and no one who can stand behind them in front of an auditor. DISC InfoSec's vCAIO model is one expert, embedded — no coordination overhead, no junior consultants billing hours to learn on your dime. We've done it. We're doing it. We can do it for you.
Threat model produced for each AI system using ATLAS or equivalent
STRIDE doesn't cover model extraction, evasion, or poisoning. Use a framework that does.
Red-teaming completed for high-risk and customer-facing models
Internal team or external firm. Findings tracked to remediation.
Prompt injection and jailbreak defenses validated
Direct, indirect, and multi-turn. Tested with current attack libraries, not last year's.
Least-privilege access to models, weights, and inference endpoints
Including service accounts. Including the data scientists.
API keys and model secrets rotated and centrally managed
Vault. KMS. Secret manager. Not .env files in repos.
Output filtering and PII-leak detection at inference time
Especially for RAG pipelines pulling from internal data.
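An inference-time filter can be sketched as a last-pass redaction over the model response. The two patterns below are illustrative stand-ins; real deployments layer NER and context-aware detection on top:

```python
import re

# Illustrative patterns only; tune and extend for your data.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def filter_output(text):
    """Redact suspected PII in a model response and report what was caught."""
    hits = {}
    for label, pat in PATTERNS.items():
        text, n = pat.subn(f"[{label.upper()}]", text)
        if n:
            hits[label] = n     # feed these counts to your leak-rate metric
    return text, hits
```

The `hits` dict matters as much as the redacted text: nonzero counts on a RAG pipeline are the drift signal that your retrieval scope is too wide.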
AI-specific incident response plan in place
Hallucination at scale, model misuse, prompt-injection breach. Your standard IR playbook doesn't cover these.
Production drift monitoring on inputs, outputs, and performance
Alerts wired to humans who can act. Thresholds reviewed quarterly.
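One common input-drift signal is the Population Stability Index. This minimal version bins a baseline sample and compares production against it; the 0.2 alert threshold is a rule of thumb, not a requirement:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a production sample.

    Rule of thumb (tune per system): PSI > 0.2 warrants human review.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)  # clamp overflow to last bin
            counts[max(i, 0)] += 1
        # Smoothed proportions so empty bins don't blow up the log.
        return [(c + 1e-6) / (len(xs) + 1e-6 * bins) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Run it on a schedule against each monitored feature and model score, and wire the threshold breach to a human owner, per the item above.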
Reportable serious incident criteria documented EU AI Act Art. 73
15-day clock for high-risk systems. Know what triggers it before you trip it.
Model change management with documented approvals
A new model version is a change. Treat it like one.
Continuous control monitoring tied to your AIMS objectives
Dashboards reviewed at management review cadence. Not just before audits.
Post-incident review process feeds back into the AI risk register
If incidents don't change anything, you're not learning — you're just absorbing.