
What is ISO/IEC 42001:2023?
- ISO 42001 (published December 2023) is the first international standard dedicated to how organizations should govern and manage AI systems — whether they build AI, use it, or deploy it in services.
- It lays out what the authors call an Artificial Intelligence Management System (AIMS) — a structured governance and management framework that helps companies reduce AI-related risks, build trust, and ensure responsible AI use.
Who can use it, and is it mandatory?
- Any organization — profit or non-profit, large or small, in any industry — that develops or uses AI can implement ISO 42001.
- For now, ISO 42001 is not legally required. No country currently mandates it.
- But adopting it proactively can make future compliance with emerging AI laws and regulations easier.
What ISO 42001 requires / how it works
- The standard uses a “high-level structure” similar to other well-known frameworks (like ISO 27001), covering organizational context, leadership, planning, support, operations, performance evaluation, and continual improvement.
- To comply, organizations must: define the scope of the AIMS and an AI policy; identify stakeholders and their expectations; perform risk and impact assessments at the organizational, individual, and societal levels; implement controls to mitigate identified risks; maintain documentation and records; monitor, audit, and review AI systems regularly; and continually improve the management system.
- To support these requirements, the standard's Annex A provides 38 reference controls that organizations can select and apply to mitigate various AI-related risks.
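The standard itself prescribes no code, but the risk-assessment and control-mapping steps above can be pictured as a simple register. The sketch below is purely illustrative; the class names, the control identifier, and the risk entries are all hypothetical, not taken from the standard:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an AIMS risk register: each AI-related risk is
# assessed at one of the three levels the standard calls out and mapped
# to the Annex A controls chosen to mitigate it.
@dataclass
class AIRisk:
    description: str
    level: str                 # "organizational" | "individual" | "societal"
    severity: int              # 1 (low) .. 5 (high)
    annex_a_controls: list = field(default_factory=list)  # e.g. ["A.5.2"] (illustrative ID)

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def add(self, risk: AIRisk) -> None:
        self.risks.append(risk)

    def unmitigated(self) -> list:
        # Risks with no control assigned yet: natural input for the next
        # review cycle under the continual-improvement clause.
        return [r for r in self.risks if not r.annex_a_controls]

register = RiskRegister()
register.add(AIRisk("Biased output in hiring tool", "societal", 4, ["A.5.2"]))
register.add(AIRisk("Training-data privacy leak", "individual", 5))
print([r.description for r in register.unmitigated()])
```

In practice organizations track this in GRC tooling or spreadsheets; the point is only that every identified risk should trace to a control or to an explicit gap.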
Why it matters
- Because AI is powerful but also risky (wrong outputs, bias, privacy leaks, system failures, etc.), having a formal governance framework helps companies be more responsible and transparent when deploying AI.
- For organizations that want to build trust with customers, regulators, or partners — or anticipate future AI-related regulations — ISO 42001 can serve as a credible, standardized foundation for AI governance.
🔎 My opinion: a meaningful first step, but a foundation, not a silver bullet
I think ISO 42001 is a valuable and timely step toward bringing order and accountability to the rapidly evolving world of AI. Because AI is so flexible and is used in many contexts, some of them high-stakes, a standard framework helps organizations think proactively about risk, ethics, transparency, and responsibility rather than scrambling reactively. Its emphasis on governance, impact assessment, documentation, and continuous oversight is much needed in a world where AI adoption often outpaces regulation and best practice.
That said, because the standard is new and not mandatory, its real-world impact depends heavily on how widely it is adopted. For it to become meaningful beyond "nice to have," regulators, governments, or large enterprises will need to encourage or require it (or similar frameworks); until then, it will likely be taken up mostly by forward-thinking companies or those running high-impact AI systems. And even then I see it as necessary but not sufficient: a base layer for building internal discipline, preparing for regulatory demands, and signaling commitment. Addressing real-world ethical, social, and technical challenges will likely require additional safeguards on top of it, such as context-specific controls, ongoing audits, stakeholder engagement, domain-specific reviews, and perhaps bespoke governance frameworks tailored to the type of AI system and its use cases.
In short: ISO 42001 is a strong first step, but genuinely responsible AI requires going beyond standards to culture, context, and continuous vigilance.
✅ Real-world adopters of ISO 42001
IBM (Granite models)
- IBM became "the first major open-source AI model developer to earn ISO 42001 certification" for its Granite family of open-source language models.
- The certification covers the management system for development, deployment, and maintenance of Granite — meaning IBM formalized policies, governance, data practices, documentation, and risk controls under AIMS (AI Management System).
- According to IBM, the certification provides external assurance of transparency, security, and governance — helping enterprises confidently adopt Granite in sensitive contexts (e.g. regulated industries).
Infosys
- Infosys — a global IT services and consulting company — announced in May 2024 that it had received ISO 42001:2023 certification for its AI Management System.
- Their certified “AIMS framework” is part of a broader set of offerings (the “Topaz Responsible AI Suite”), which supports clients in building and deploying AI responsibly, with structured risk mitigations and accountability.
- This demonstrates that even big consulting companies, not just pure-AI labs, see value in adopting ISO 42001 to manage AI at scale within enterprise services.
JAGGAER (Source-to-Pay / procurement software)
- JAGGAER — a global player in procurement / “source-to-pay” software — announced that it achieved ISO 42001 certification for its AI Management System in June 2025.
- For JAGGAER, the certification reflects a commitment to ethical, transparent, secure deployment of AI within its procurement platform.
- This shows how ISO 42001 can be used not only by AI labs or consultancy firms, but by business-software companies integrating AI into domain-specific applications.
🧠 My take — promising first signals, but still early days
These early adopters make a strong case that ISO 42001 can work in practice across very different kinds of organizations — not just AI-native labs, but enterprises, service providers, even consulting firms. The variety and speed of adoption (multiple firms in 2024–2025) demonstrate real momentum.
At the same time, adoption appears selective, and for many companies the process may amount to box-ticking compliance rather than deep, ongoing governance. Because the standard and its ecosystem (auditors, best-practice references, peer case studies) are both still nascent, there is a real risk that ISO 42001 becomes more of a "badge" than a strong guardrail.
In short: I see current adoptions as proof-of-concepts — promising early examples showing how ISO 42001 could become an industry baseline. But for it to truly deliver on safe, ethical, responsible AI at scale, we’ll need: more widespread adoption across sectors; shared transparency about governance practices; public reporting on outcomes; and maybe supplementary audits or domain-specific guidelines (especially for high-risk AI uses).
Most organizations think they're ready for AI governance until ISO/IEC 42001 shines a light on the gaps. With its 38 AI-specific Annex A controls, the standard is quickly becoming the global expectation for responsible and compliant AI deployment. To help teams get ahead, we built a free ISO 42001 Compliance Checklist that gives you a readiness score in under 10 minutes, plus a downloadable gap report you can share internally. It's a fast way to validate where you stand today and what you'll need to align with upcoming regulatory and customer requirements. If improving AI trust, risk posture, and audit readiness is on your roadmap, this tool will save your team hours.
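Mechanically, a readiness score of this kind is just the share of checklist items met, with the unmet items forming the gap report. The sketch below assumes a flat yes/no checklist with made-up item names; it is not the actual logic behind the tool:

```python
# Hypothetical readiness scorer: the checklist maps a requirement name
# to whether it is currently met. Returns a 0-100 score and the gaps.
def readiness_score(checklist: dict) -> tuple:
    if not checklist:
        return 0.0, []
    gaps = [item for item, met in checklist.items() if not met]
    score = 100.0 * (len(checklist) - len(gaps)) / len(checklist)
    return score, gaps

# Example run with invented checklist items.
score, gaps = readiness_score({
    "AI policy defined": True,
    "Impact assessment performed": False,
    "Annex A controls mapped": True,
    "Internal audit scheduled": False,
})
print(f"{score:.0f}% ready; gaps: {gaps}")
```

Real checklists typically weight items by clause or risk, but a simple percentage already makes gaps visible and shareable.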


