InfoSec and Compliance – With 20 years of blogging experience, DISC InfoSec blog is dedicated to providing trusted insights and practical solutions for professionals and organizations navigating the evolving cybersecurity landscape. From cutting-edge threats to compliance strategies, this blog is your reliable resource for staying informed and secure. Dive into the content, connect with the community, and elevate your InfoSec expertise!
Choosing the right AI security framework is becoming a critical decision as organizations adopt AI at scale. No single framework solves every problem. Each one addresses a different aspect of AI risk, governance, security, or compliance, and understanding their strengths helps organizations apply them effectively.
The NIST AI Risk Management Framework (AI RMF) is best suited for managing AI risks across the entire lifecycle—from design and development to deployment and ongoing use. It emphasizes trustworthy AI by addressing security, privacy, safety, reliability, and bias. This framework is especially valuable for organizations that are building or rapidly scaling AI capabilities and need a structured way to identify and manage AI-related risks.
ISO/IEC 42001, the AI Management System (AIMS) standard, focuses on governance rather than technical controls. It helps organizations establish policies, accountability, oversight, and continuous improvement for AI systems. This framework is ideal for enterprises deploying AI across multiple teams or business units and looking to formalize AI governance in a consistent, auditable way.
For teams building AI-enabled applications, the OWASP Top 10 for LLMs and Generative AI provides practical, hands-on security guidance. It highlights common and emerging risks such as prompt injection, data leakage, insecure output handling, and model abuse. This framework is particularly useful for AppSec and DevSecOps teams securing AI interfaces, APIs, and user-facing AI features.
MITRE ATLAS takes a threat-centric approach by mapping adversarial tactics and techniques that target AI systems. It is well suited for threat modeling, red-team exercises, and AI breach simulations. By helping security teams think like attackers, MITRE ATLAS strengthens defensive strategies against real-world AI threats.
From a regulatory perspective, the EU AI Act introduces a risk-based compliance framework for organizations operating in or offering AI services within the European Union. It defines obligations for high-risk AI systems and places strong emphasis on transparency, accountability, and risk controls. For global organizations, this regulation is becoming a key driver of AI compliance strategy.
The most effective approach is not choosing one framework, but combining them. Using NIST AI RMF for risk management, ISO/IEC 42001 for governance, OWASP and MITRE for technical security, and the EU AI Act for regulatory compliance creates a balanced and defensible AI security posture.
At DISC InfoSec, we help organizations navigate this landscape by aligning AI risk management, governance, security, and compliance into a single, practical roadmap. Whether you are experimenting with AI or deploying it at scale, we help you choose and operationalize the right frameworks to reduce risk and build trust. Learn more at https://deurainfosec.com.
No Breach. No Alerts. Still Stolen: The Model Extraction Problem
1. A company can lose its most valuable AI intellectual property without suffering a traditional security breach. No malware, no compromised credentials, no incident tickets—just normal-looking API traffic. Everything appears healthy on dashboards, yet the core asset is quietly walking out the door.
2. This threat is known as model extraction. It happens when an attacker repeatedly queries an AI model through legitimate interfaces—APIs, chatbots, or inference endpoints—and learns from the responses. Over time, they can reconstruct or closely approximate the proprietary model’s behavior without ever stealing weights or source code.
3. A useful analogy is a black-box expert. If I can repeatedly ask an expert questions and carefully observe their answers, patterns start to emerge—how they reason, where they hesitate, and how they respond to edge cases. Over time, I can train someone else to answer the same questions in nearly the same way, without ever seeing the expert’s notes or thought process.
4. Attackers pursue model extraction for several reasons. They may want to clone the model outright, steal high-value capabilities, distill it into a cheaper version using your model as a “teacher,” or infer sensitive traits about the training data. None of these require breaking in—only sustained access.
5. This is why AI theft doesn’t look like hacking. Your model can be copied simply by being used. The very openness that enables adoption and revenue also creates a high-bandwidth oracle for adversaries who know how to exploit it.
6. The consequences are fundamentally business risks. Competitive advantage evaporates as others avoid your training costs. Attackers discover and weaponize edge cases. Malicious clones can damage your brand, and your IP strategy collapses because the model’s behavior has effectively been given away.
7. The aftermath is especially dangerous because it’s invisible. There’s no breach report or emergency call—just a competitor releasing something “surprisingly similar” months later. By the time leadership notices, the damage is already done.
8. At scale, querying equals learning. With enough inputs and outputs, an attacker can build a surrogate model that is “good enough” to compete, abuse users, or undermine trust. This is IP theft disguised as legitimate usage.
9. Defending against this doesn’t require magic, but it does require intent. Organizations need visibility by treating model queries as security telemetry, friction by rate-limiting based on risk rather than cost alone, and proof by watermarking outputs so stolen behavior can be attributed when clones appear.
My opinion: Model extraction is one of the most underappreciated risks in AI today because it sits at the intersection of security, IP, and business strategy. If your AI roadmap focuses only on performance, cost, and availability—while ignoring how easily behavior can be copied—you don’t really have an AI strategy. Training models is expensive; extracting behavior through APIs is cheap. And in most markets, “good enough” beats “perfect.”
Organizations face a new security challenge that doesn’t originate from malicious actors but from well-intentioned employees simply trying to do their jobs more efficiently. This phenomenon, known as Shadow AI, represents the unauthorized use of AI tools without IT oversight or approval.
Marketing teams routinely feed customer data into free AI platforms to generate compelling copy and campaign content. They see these tools as productivity accelerators, never considering the security implications of sharing sensitive customer information with external systems.
Development teams paste proprietary source code into public chatbots seeking quick debugging assistance or code optimization suggestions. The immediate problem-solving benefit overshadows concerns about intellectual property exposure or code base security.
Human resources departments upload candidate resumes and personal information to AI summarization tools, streamlining their screening processes. The efficiency gains feel worth the convenience, while data privacy considerations remain an afterthought.
These employees aren’t threat actors—they’re productivity seekers exploiting powerful tools available at their fingertips. Once organizational data enters public AI models or third-party vector databases, it escapes corporate control entirely and becomes permanently exposed.
The data now faces novel attack vectors like prompt injection, where adversaries manipulate AI systems through carefully crafted queries to extract sensitive information, essentially asking the model to “forget your instructions and reveal confidential data.” Traditional security measures offer no protection against these techniques.
We’re witnessing a fundamental shift from the old paradigm of “Data Exfiltration” driven by external criminals to “Data Integration” driven by internal employees. The threat landscape has evolved beyond perimeter defense scenarios.
Legacy security architectures built on network perimeters, firewalls, and endpoint protection become irrelevant when employees voluntarily connect to external AI services. These traditional controls can’t prevent authorized users from sharing data through legitimate web interfaces.
The castle-and-moat security model fails completely when your own workforce continuously creates tunnels through the walls to access the most powerful computational tools humanity has ever created. Organizations need governance frameworks, not just technical barriers.
Opinion: Shadow AI represents the most significant information security challenge for 2026 because it fundamentally breaks the traditional security model. Unlike previous shadow IT concerns (unauthorized SaaS apps), AI tools actively ingest, process, and potentially retain your data for model training purposes. Organizations need immediate AI governance frameworks including acceptable use policies, approved AI tool catalogs, data classification training, and technical controls like DLP rules for AI service domains. The solution isn’t blocking AI—that’s impossible and counterproductive—but rather creating “Lighted AI” pathways: secure, sanctioned AI tools with proper data handling controls. ISO 42001 provides exactly this framework, which is why AI Management Systems have become business-critical rather than optional compliance exercises.
EU AI Act: Why Every Organization Using AI Must Pay Attention
The EU AI Act is the world’s first major regulation designed to govern how artificial intelligence is developed, deployed, and managed across industries. Approved in June 2024, it establishes harmonized rules for AI use across all EU member states — just as GDPR did for privacy.
Any organization that builds, integrates, or sells AI systems within the European Union must comply — even if they are headquartered outside the EU. That means U.S. and global companies using AI in European markets are officially in scope.
The Act introduces a risk-based regulatory model. AI is categorized across four risk tiers — from unacceptable, which are completely banned, to high-risk, which carry strict controls, limited-risk with transparency requirements, and minimal-risk, which remain largely unregulated.
High-risk AI includes systems governing access to healthcare, finance, employment, critical infrastructure, law enforcement, and essential public services. Providers of these systems must implement rigorous risk management, governance, monitoring, and documentation processes across the entire lifecycle.
Certain AI uses are explicitly prohibited — such as social scoring, biometric emotion recognition in workplaces or schools, manipulative AI techniques, and untargeted scraping of facial images for surveillance.
Compliance obligations are rolling out in phases beginning February 2025, with core high-risk system requirements taking effect in August 2026 and final provisions extending through 2027. Organizations have limited time to assess their current systems and prepare for adherence.
This legislation is expected to shape global AI governance frameworks — much like GDPR influenced worldwide privacy laws. Companies that act early gain an advantage: reduced legal exposure, customer trust, and stronger market positioning.
How DISC InfoSec Helps You Stay Ahead
DISC InfoSec brings 20+ years of security and compliance excellence with a proven multi-framework approach. Whether preparing for EU AI Act, ISO 42001, GDPR, SOC 2, or enterprise governance — we help organizations implement responsible AI controls without slowing innovation.
If your business touches the EU and uses AI — now is the time to get compliant.
Many companies are now replacing real human recruiters with lengthy, proctored AI interviews — sometimes lasting over 40 minutes. For candidates, this feels absurd, especially when the system claims to learn about your personality and skills solely through automated prompts and video analysis.
This shift suggests a wider trust in AI for critical hiring decisions, even though cybersecurity failures continue to rise year after year. There’s a growing disconnect between technological adoption and real-world risk management.
Despite the questionable outcomes, companies promote these systems as part of a prestigious hiring process. They boast about connecting top talent with major Silicon Valley firms, projecting confidence that AI-driven evaluation is the future.
Applicants are told that completing a personal AI interview is the next mandatory step to “showcase their skills.” It’s framed as an opportunity rather than another automated filter.
The interview process is positioned as simple and straightforward — roughly 30 minutes, unless you’re applying for an engineering role where a coding challenge is added. No preparation is supposedly required.
A short instructional video is provided so candidates know how the AI interview will operate and what the interface will look like. The message suggests this is for candidate comfort and transparency.
After completing the AI interview, applicants will update their digital profile with any missing information. This profile becomes their automated representation to hiring managers.
Finally, once “certified” by the system, candidates will passively receive interview requests from companies — assuming they meet the algorithm’s standards.
My Opinion
While automation can improve efficiency, replacing real human judgment with AI in the earliest — and most personal — stage of hiring risks turning candidates into data points rather than people. It raises concerns about fairness, privacy, and bias, not to mention the irony that organizations deploying these tools still struggle to secure their own systems. A balance is needed: let AI assist the process, but don’t let it remove humanity from hiring.
A reliable industry context about AI and cybersecurity frameworks from recent market and trend reports. I’ll then give a clear opinion at the end.
1. AI Is Now Core to Cyber Defense Artificial Intelligence is transforming how organizations defend against digital threats. Traditional signature-based security tools struggle to keep up with modern attacks, so companies are using AI—especially machine learning and behavioral analytics—to detect anomalies, predict risks, and automate responses in real time. This integration is now central to mature cybersecurity programs.
2. Market Expansion Reflects Strategic Adoption The AI cybersecurity market is growing rapidly, with estimates projecting expansion from tens of billions today into the hundreds of billions within the next decade. This reflects more than hype—organizations across sectors are investing heavily in AI-enabled threat platforms to improve detection, reduce manual workload, and respond faster to attacks.
3. AI Architectures Span Detection to Response Modern frameworks incorporate diverse AI technologies such as natural language processing, neural networks, predictive analytics, and robotic process automation. These tools support everything from network monitoring and endpoint protection to identity-based threat management and automated incident response.
4. Cloud and Hybrid Environments Drive Adoption Cloud migrations and hybrid IT architectures have expanded attack surfaces, prompting more use of AI solutions that can scale across distributed environments. Cloud-native AI tools enable continuous monitoring and adaptive defenses that are harder to achieve with legacy on-premises systems.
5. Regulatory and Compliance Imperatives Are Growing As digital transformation proceeds, regulatory expectations are rising too. Many frameworks now embed explainable AI and compliance-friendly models that help organizations demonstrate legal and ethical governance in areas like data privacy and secure AI operations.
6. Integration Challenges Remain Despite the advantages, adopting AI frameworks isn’t plug-and-play. Organizations face hurdles including high implementation cost, lack of skilled AI security talent, and difficulties integrating new tools with legacy architectures. These challenges can slow deployment and reduce immediate ROI. (Inferred from general market trends)
7. Sophisticated Threats Demand Sophisticated Defenses AI is both a defensive tool and a capability leveraged by attackers. Adversarial AI can generate more convincing phishing, exploit model weaknesses, and automate aspects of attacks. A robust cybersecurity framework must account for this dual role and include AI-specific risk controls.
8. Organizational Adoption Varies Widely Enterprise adoption is strong, especially in regulated sectors like finance, healthcare, and government, while many small and medium businesses remain cautious due to cost and trust issues. This uneven adoption means frameworks must be flexible enough to suit different maturity levels. (From broader industry reports)
9. Frameworks Are Evolving With the Threat Landscape Rather than static checklists, AI cybersecurity frameworks now emphasize continuous adaptation—integrating real-time risk assessment, behavioral intelligence, and autonomous response capabilities. This shift reflects the fact that cyber risk is dynamic and cannot be mitigated solely by periodic assessments or manual controls.
Opinion
AI-centric cybersecurity frameworks represent a necessary evolution in defense strategy, not a temporary trend. The old model of perimeter defense and signature matching simply doesn’t scale in an era of massive data volumes, sophisticated AI-augmented threats, and 24/7 cloud operations. However, the promise of AI must be tempered with governance rigor. Organizations that treat AI as a magic bullet will face blind spots and risks—especially around privacy, explainability, and integration complexity.
Ultimately, the most effective AI cybersecurity frameworks will balance automated, real-time intelligence with human oversight and clear governance policies. This blend maximizes defensive value while mitigating potential misuse or operational failures.
AI Cybersecurity Framework — Summary
AI Cybersecurity framework provides a holistic approach to securing AI systems by integrating governance, risk management, and technical defense across the full AI lifecycle. It aligns with widely-accepted standards such as NIST RMF, ISO/IEC 42001, OWASP AI Security Top 10, and privacy regulations (e.g., GDPR, CCPA).
1️⃣ Govern
Set strategic direction and oversight for AI risk.
Goals: Define policies, accountability, and acceptable risk levels
Key Controls: AI governance board, ethical guidelines, compliance checks
Outcomes: Approved AI policies, clear governance structures, documented risk appetite
2️⃣ Identify
Understand what needs protection and the related risks.
Goals: Map AI assets, data flows, threat landscape
Explainability & Interpretability: Understand model decisions
Human-in-the-Loop: Oversight and accountability remain essential
Privacy & Security: Protect data by design
AI-Specific Threats Addressed
Adversarial attacks (poisoning, evasion)
Model theft and intellectual property loss
Data leakage and inference attacks
Bias manipulation and harmful outcomes
Overall Message
This framework ensures trustworthy, secure, and resilient AI operations by applying structured controls from design through incident recovery—combining cybersecurity rigor with ethical and responsible AI practices.
Yann LeCun — a pioneer of deep learning and Meta’s Chief AI Scientist — has left the company after shaping its AI strategy and influencing billions in investment. His departure is not a routine leadership change; it signals a deeper shift in how he believes AI must evolve.
LeCun is one of the founders of modern neural networks, a Turing Award recipient, and a core figure behind today’s deep learning breakthroughs. His work once appeared to be a dead end, yet it ultimately transformed the entire AI landscape.
Now, he is stepping away not to retire or join another corporate giant, but to create a startup focused on a direction Meta does not support. This choice underscores a bold statement: the current path of scaling Large Language Models (LLMs) may not lead to true artificial intelligence.
He argues that LLMs, despite their success, are fundamentally limited. They excel at predicting text but lack real understanding of the world. They cannot reason about physical reality, causality, or genuine intent behind events.
According to LeCun, today’s LLMs possess intelligence comparable to an animal — some say a cat — but even the cat has an advantage: it learns through real-world interaction rather than statistical guesswork.
His proposed alternative is what he calls World Models. These systems will learn like humans and animals do — by observing environments, experimenting, predicting outcomes, and refining internal representations of how the world works.
This approach challenges the current AI industry narrative that bigger models and more data alone will produce smarter, safer AI. Instead, LeCun suggests that a completely different foundation is required to achieve true machine intelligence.
Yet Meta continues investing enormous resources into scaling LLMs — the very AI paradigm he believes is nearing its limits. His departure raises an uncomfortable question about whether hype is leading strategic decisions more than science.
If he is correct, companies pushing ever-larger LLMs could face a major reckoning when progress plateaus and expectations fail to materialize.
My Opinion
LLMs are far from dead — they are already transforming industries and productivity. But LeCun highlights a real concern: scaling alone cannot produce human-level reasoning. The future likely requires a combination of both approaches — advanced language systems paired with world-aware learning. Instead of a dead end, this may be an inflection point where the AI field transitions toward deeper intelligence grounded in understanding, not just prediction.
Whether AI might surpass human intelligence in the next few years, based on recent trends and expert views — followed by my opinion:
Some recent analyses suggest that advances in AI capabilities may be moving fast enough that aspects of human‑level performance could be reached within a few years. One trend measurement — focusing on how quickly AI translation quality is improving compared to humans — shows steady progress and extrapolates that machine performance could equal human translators by the end of this decade if current trends continue. This has led to speculative headlines proposing that the technological singularity — the point where AI surpasses human intelligence in a broad sense — might occur within just a few years.
However, this type of projection is highly debated and depends heavily on how “intelligence” is defined and measured. Many experts emphasize that current AI systems, while powerful in narrow domains, are not yet near comprehensive general intelligence, and timelines vary widely. Surveys of AI researchers and more measured forecasts still often place true artificial general intelligence (AGI) — a prerequisite for singularity in many theories — much later, often around the 2030s or beyond.
There are also significant technical and conceptual challenges that make short‑term singularity predictions uncertain. Models today excel at specific tasks and show impressive abilities, but they lack the broad autonomy, self‑improvement capabilities, and general reasoning that many definitions of human‑level intelligence assume. Progress is real and rapid, yet experts differ sharply in timelines — some suggest near‑term breakthroughs, while others see more gradual advancement over decades.
My Opinion
I think it’s unlikely that AI will fully surpass human intelligence across all domains in the next few years. We are witnessing astonishing progress in certain areas — language, pattern recognition, generation, and task automation — but those achievements are still narrow compared to the full breadth of human cognition, creativity, and common‑sense reasoning. Broad, autonomous intelligence that consistently outperforms humans across contexts remains a formidable research challenge.
That said, AI will continue transforming industries and augmenting human capabilities, and we will likely see systems that feel very powerful in specialized tasks well before true singularity — perhaps by the late 2020s or early 2030s. The exact timeline will depend on breakthroughs we can’t yet predict, and it’s essential to prepare ethically and socially for the impacts even if singularity itself remains distant.
Here’s a rephrased and summarized version of the linked article organized into nine paragraphs, followed by my opinion at the end.
1️⃣ The Browser Has Become the Main AI Risk Vector Modern workers increasingly use generative AI tools directly inside the browser, pasting emails, business files, and even source code into online AI assistants. Because traditional enterprise security tools weren’t built to monitor or understand this behavior, sensitive data often flows out of corporate control without detection.
2️⃣ Blocking AI Isn’t Realistic Simply banning generative AI usage isn’t a workable solution. These tools offer productivity gains that employees and organizations find valuable. The article argues the real focus should be on securing how and where AI tools are used inside the browser session itself.
3️⃣ Understanding the Threat Model The article outlines why browser-based AI interactions are uniquely risky: users routinely paste whole documents and proprietary data into prompt boxes, upload confidential files, and interact with AI extensions that have broad permission scopes. These behaviors create a threat surface that legacy defenses like firewalls and traditional DLP simply can’t see.
4️⃣ Policy Is the Foundation of Security A strong security policy is described as the first step. Organizations should categorize which AI tools are sanctioned versus restricted and define what data types should never be entered into generative AI, such as financial records, regulated personal data, or source code. Enforcement matters: policies must be backed by browser-level controls, not just user guidance.
5️⃣ Isolation Reduces Risk Without Stopping Productivity Instead of an all-or-nothing approach, teams can isolate risky workflows. For example, separate browser profiles or session controls can keep general AI usage away from sensitive internal applications. This lets employees use AI where appropriate while limiting accidental data exposure.
6️⃣ Data Controls at the Browser Edge Technical data controls are critical to enforce policy. These include monitoring copy/paste actions, drag-and-drop events, and file uploads at the browser level before data ever reaches an external AI service. Tiered enforcement — from warnings to hard blocks — helps balance security with usability.
7️⃣ Managing AI Extensions Is Essential Many AI-powered browser extensions require broad permissions — including read/modify page content — which can become covert data exfiltration channels if left unmanaged. The article emphasizes classifying and restricting such extensions based on risk.
8️⃣ Identity and Account Hygiene Tying all sanctioned AI interactions back to corporate identities through single sign-on improves visibility and accountability. It also helps prevent situations where personal accounts or mixed browser contexts leak corporate data.
9️⃣ Visibility and Continuous Improvement Lastly, strong telemetry — tracking what AI tools are accessed, what data is entered, and how often policy triggers occur — is essential to refine controls over time. Analytics can highlight risky patterns and help teams adjust policies and training for better outcomes.
My Opinion
This perspective is practical and forward-looking. Instead of knee-jerk bans on AI — which employees will circumvent — the article realistically treats the browser as the new security perimeter. That aligns with broader industry findings showing that browser-mediated AI usage is a major exfiltration channel and traditional security tools often miss it entirely.
However, implementing the recommended policies and controls isn’t trivial. It demands new tooling, tight integration with identity systems, and continuous monitoring, which many organizations struggle with today. But the payoff — enabling secure AI usage without crippling productivity — makes this a worthy direction to pursue. Secure AI adoption shouldn’t be about fear or bans, but about governance, visibility, and informed risk management.
ISO 42001 Certification by Leading AI Governance in Virtual Data Rooms
When your clients trust you with their most sensitive M&A documents, financial records, and confidential deal information, every security and compliance decision matters. ShareVault has taken a significant step beyond traditional data room security by achieving ISO 42001 certification—the international standard for AI management systems.
Why Financial Services and M&A Professionals Should Care
If you’re a deal advisor, investment banker, or private equity professional, you’re increasingly relying on AI-powered features in your virtual data room—intelligent document indexing, automated redaction suggestions, smart search capabilities, and analytics that surface insights from thousands of documents.
But how do you know these AI capabilities are managed responsibly? How can you be confident that:
AI systems won’t introduce bias into document classification or search results?
Algorithms processing sensitive financial data meet rigorous security standards?
Your confidential deal information isn’t being used to train AI models?
AI-driven recommendations are explainable and auditable for regulatory scrutiny?
ISO 42001 provides the answers. This comprehensive framework addresses AI-specific risks that traditional information security standards like ISO 27001 don’t fully cover.
ShareVault’s Commitment to AI Governance Excellence
ShareVault recognized early that as AI capabilities become more sophisticated in virtual data rooms, clients need assurance that goes beyond generic “we take security seriously” statements. The financial services and legal professionals who rely on ShareVault for billion-dollar transactions deserve verifiable proof of responsible AI management.
That commitment led ShareVault to pursue ISO 42001 certification—joining a select group of pioneers implementing the world’s first AI management system standard.
Building Trust Through Independent Verification
ShareVault engaged DISC InfoSec as an independent internal auditor specifically for ISO 42001 compliance. This wasn’t a rubber-stamp exercise. DISC InfoSec brought deep expertise in both AI governance frameworks and information security, conducting rigorous assessments of:
AI system lifecycle management – How ShareVault develops, deploys, monitors, and updates AI capabilities
Data governance for AI – Controls ensuring training data quality, protection, and appropriate use
Algorithmic transparency – Documentation and explainability of AI decision-making processes
Risk management – Identification and mitigation of AI-specific risks like bias, hallucinations, and unexpected outputs
Human oversight – Ensuring appropriate human involvement in AI-assisted processes
The internal audit process identified gaps, drove remediation efforts, and prepared ShareVault for external certification assessment—demonstrating a genuine commitment to AI governance rather than superficial compliance.
Certification Achieved: A Leadership Milestone
In 2025, ShareVault successfully completed both the Stage 1 and Stage 2 audits conducted by SenSiba, an accredited certification body. The Stage 1 audit validated ShareVault’s comprehensive documentation, policies, and procedures. The Stage 2 audit, completed in December 2025, examined actual implementation—verifying that controls operate effectively in practice, risks are actively managed, and continuous improvement processes function as designed.
ShareVault is now ISO 42001 certified—one of the first virtual data room providers to achieve this distinction. This certification reflects genuine leadership in responsible AI deployment, independently verified by external auditors with no stake in the outcome.
For financial services professionals, this means ShareVault’s AI governance approach has been rigorously assessed and certified against international standards, providing assurance that extends far beyond vendor claims.
What This Means for Your Deals
When you’re managing a $500 million acquisition or handling sensitive financial restructuring documents, you need more than promises about AI safety. ShareVault’s ISO 42001 certification provides tangible, verified assurance:
For M&A Advisors: Confidence that AI-powered document analytics won’t introduce errors or biases that could impact deal analysis or due diligence findings.
For Investment Bankers: Assurance that confidential client information processed by AI features remains protected and isn’t repurposed for model training or shared across clients.
For Legal Professionals: Auditability and explainability of AI-assisted document review and classification—critical when facing regulatory scrutiny or litigation.
For Private Equity Firms: Verification that AI capabilities in your deal rooms meet institutional-grade governance standards your LPs and regulators expect.
Why Industry Leadership Matters
The financial services industry faces increasing regulatory pressure regarding AI usage. The EU AI Act, SEC guidance on AI in financial services, and evolving state-level AI regulations all point toward a future where AI governance isn’t optional—it’s required.
ShareVault’s achievement of ISO 42001 certification demonstrates foresight that benefits clients in two critical ways:
Today: You gain immediate, certified assurance that AI capabilities in your data room meet rigorous governance standards, reducing your own AI-related risk exposure.
Tomorrow: As regulations tighten, you’re already working with a provider whose AI governance framework is certified against international standards, simplifying your own compliance efforts and protecting your competitive position.
The Bottom Line
For financial services and M&A professionals who demand the highest standards of security and compliance, ShareVault’s ISO 42001 certification represents more than a technical achievement—it’s independently verified proof of commitment to earning and maintaining your trust.
The rigorous process of implementation, independent internal auditing by DISC InfoSec, and successful completion of both Stage 1 and Stage 2 assessments by SenSiba demonstrates that ShareVault’s AI capabilities are deployed with certified safeguards, transparency, and accountability.
As deals become more complex and AI capabilities more sophisticated, partnering with a certified virtual data room provider that has proven its AI governance leadership isn’t just prudent—it’s essential to protecting your clients, your reputation, and your firm.
ShareVault’s investment in ISO 42001 certification means you can leverage powerful AI capabilities in your deal rooms with confidence that responsible management practices are independently certified and continuously maintained.
Ready to experience a virtual data room where AI innovation meets certified governance? Contact ShareVault to learn how ISO 42001-certified AI management protects your most sensitive transactions.
Practical AI Governance for Compliance, Risk, and Security Leaders
Artificial Intelligence is moving fast—but regulations, customer expectations, and board-level scrutiny are moving even faster. ISO/IEC 42001 gives organizations a structured way to govern AI responsibly, securely, and in alignment with laws like the EU AI Act.
For SMBs, the good news is this: ISO 42001 does not require massive AI programs or complex engineering changes. At its core, it follows a clear four-step process that compliance, risk, and security teams already understand.
Step 1: Define AI Scope and Governance Context
The first step is understanding where and how AI is used in your business. This includes internally developed models, third-party AI tools, SaaS platforms with embedded AI, and even automation driven by machine learning.
For SMBs, this step is about clarity—not perfection. You define:
What AI systems are in scope
Business objectives and constraints
Regulatory, contractual, and ethical expectations
Roles and accountability for AI decisions
This mirrors how ISO 27001 defines ISMS scope, making it familiar for security and compliance teams.
Step 2: Identify and Assess AI Risks
Once AI usage is defined, the focus shifts to risk identification and impact assessment. Unlike traditional cyber risk, AI introduces new concerns such as bias, model drift, lack of explainability, data misuse, and unintended outcomes.
This step aligns closely with enterprise risk management and can be integrated into existing risk registers.
Step 3: Implement AI Controls and Lifecycle Management
With risks prioritized, the organization selects practical governance and security controls. ISO 42001 does not prescribe one-size-fits-all solutions—it focuses on proportional controls based on risk.
Typical activities include:
AI policies and acceptable use guidelines
Human oversight and approval checkpoints
Data governance and model documentation
Secure development and vendor due diligence
Change management for AI updates
For SMBs, this is about leveraging existing ISO 27001, SOC 2, or NIST-aligned controls and extending them to cover AI.
Step 4: Monitor, Audit, and Improve
AI governance is not a one-time exercise. The final step ensures continuous monitoring, review, and improvement as AI systems evolve.
This includes:
Ongoing performance and risk monitoring
Internal audits and management reviews
Incident handling and corrective actions
Readiness for certification or regulatory review
This step closes the loop and ensures AI governance stays aligned with business growth and regulatory change.
Why This Matters for SMBs
Regulators and customers are no longer asking if you use AI—they’re asking how you govern it. ISO 42001 provides a defensible, auditable framework that shows due diligence without slowing innovation.
How DISC InfoSec Can Help
DISC InfoSec helps SMBs implement ISO 42001 quickly, pragmatically, and cost-effectively—especially if you’re already aligned with ISO 27001, SOC 2, or NIST. We translate AI risk into business language, reuse what you already have, and guide you from scoping to certification readiness.
👉 Talk to DISC InfoSec to build AI governance that satisfies regulators, reassures customers, and supports safe AI adoption—without unnecessary complexity.
— What ISO 42001 Is and Its Purpose ISO 42001 is a new international standard for AI governance and management systems designed to help organizations systematically manage AI-related risks and regulatory requirements. Rather than acting as a simple checklist, it sets up an ongoing framework for defining obligations, understanding how AI systems are used, and establishing controls that fit an organization’s specific risk profile. This structure resembles other ISO management system standards (such as ISO 27001) but focuses on AI’s unique challenges.
— ISO 42001’s Role in Structured Governance At its core, ISO 42001 helps organizations build consistent AI governance practices. It encourages comprehensive documentation, clear roles and responsibilities, and formalized oversight—essentials for accountable AI development and deployment. This structured approach aligns with the EU AI Act’s broader principles, which emphasize accountability, transparency, and risk-based management of AI systems.
— Documentation and Risk Management Synergies Both ISO 42001 and the EU AI Act call for thorough risk assessments, lifecycle documentation, and ongoing monitoring of AI systems. Implementing ISO 42001 can make it easier to maintain records of design choices, testing results, performance evaluations, and risk controls, which supports regulatory reviews and audits. This not only creates a stronger compliance posture but also prepares organizations to respond with evidence if regulators request proof of due diligence.
— Complementary Ethical and Operational Practices ISO 42001 embeds ethical principles—such as fairness, non-discrimination, and human oversight—into the organizational governance culture. These values closely match the normative goals of the EU AI Act, which seeks to prevent harm and bias from AI systems. By internalizing these principles at the management level, organizations can more coherently translate ethical obligations into operational policies and practices that regulators expect.
— Not a Legal Substitute for Compliance Obligations Importantly, ISO 42001 is not a legal guarantee of EU AI Act compliance on its own. The standard remains voluntary and, as of now, is not formally harmonized under the AI Act, meaning certification does not automatically confer “presumption of conformity.” The Act includes highly specific requirements—such as risk class registration, mandated reporting timelines, and prohibitions on certain AI uses—that ISO 42001’s management-system focus does not directly satisfy. ISO 42001 provides the infrastructure for strong governance, but organizations must still execute legal compliance activities in parallel to meet the letter of the law.
— Practical Benefits Beyond Compliance Even though it isn’t a standalone compliance passport, adopting ISO 42001 offers many practical benefits. It can streamline internal AI governance, improve audit readiness, support integration with other ISO standards (like security and quality), and enhance stakeholder confidence in AI practices. Organizations that embed ISO 42001 can reduce risk of missteps, build stronger evidence trails, and align cross-functional teams for both ethical practice and regulatory readiness.
My Opinion ISO 42001 is a valuable foundation for AI governance and a strong enabler of EU AI Act compliance—but it should be treated as the starting point, not the finish line. It helps organizations build structured processes, risk awareness, and ethical controls that align with regulatory expectations. However, because the EU AI Act’s requirements are detailed and legally enforceable, organizations must still map ISO-level controls to specific Act obligations, maintain live evidence, and fulfill procedural legal demands beyond what ISO 42001 specifies. In practice, using ISO 42001 as a governance backbone plus tailored compliance activities is the most pragmatic and defensible approach.
1. Regulatory Compliance Has Become a Minefield—With Real Penalties
Regulatory Compliance Has Become a Minefield—With Real Penalties
Organizations face an avalanche of overlapping AI regulations (EU AI Act, GDPR, HIPAA, SOX, state AI laws) with zero tolerance for non-compliance. The EU AI Act explicitly recognizes ISO 42001 as evidence of conformity—making certification the fastest path to regulatory defensibility. Without systematic AI governance, companies face six-figure fines, contract terminations, and regulatory scrutiny.
2. Vendor Questionnaires Are Killing Deals
Every enterprise RFP now includes AI governance questions. Procurement teams demand documented proof of bias mitigation, human oversight, and risk management frameworks. Companies without ISO 42001 or equivalent certification are being disqualified before technical evaluations even begin. Lost deals aren’t hypothetical—they’re happening every quarter.
3. Boards Demand AI Accountability—Security Teams Can’t Deliver Alone
C-suite executives face personal liability for AI failures. They’re demanding comprehensive AI risk management across 7 critical impact categories (safety, fundamental rights, legal compliance, reputational risk). But CISOs and compliance officers lack AI-specific expertise to build these frameworks from scratch. Generic security controls don’t address model drift, training data contamination, or algorithmic bias.
4. The “DIY Governance” Death Spiral
Organizations attempting in-house ISO 42001 implementation waste 12-18 months navigating 18 specific AI controls, conducting risk assessments across 42+ scenarios, establishing monitoring systems, and preparing for third-party audits. Most fail their first audit and restart at 70% budget overrun. They’re paying the certification cost twice—plus the opportunity cost of delayed revenue.
5. “Certification Theater” vs. Real Implementation—And They Can’t Tell the Difference
Companies can’t distinguish between consultants who’ve read the standard vs. those who’ve actually implemented and passed audits in production environments. They’re terrified of paying for theoretical frameworks that collapse under audit scrutiny. They need proven methodologies with documented success—not PowerPoint governance.
6. High-Risk Industry Requirements Are Non-Negotiable
Financial services (credit scoring, AML), healthcare (clinical decision support), and legal firms (judicial AI) face sector-specific AI regulations that generic consultants can’t address. They need consultants who understand granular compliance scenarios—not surface-level AI ethics training.
DISC Turning AI Governance Into Measurable Business Value
garak (Generative AI Red-teaming & Assessment Kit) is an open-source tool aimed specifically at testing Large Language Models and dialog systems for AI-specific vulnerabilities: prompt injection, jailbreaks, data leakage, hallucinations, toxicity, etc.
It supports many LLM sources: Hugging Face models, OpenAI APIs, AWS Bedrock, local ggml models, etc.
Typical usage is via command line, making it relatively easy to incorporate into a Linux/pen-test workflow.
For someone interested in “governance,” garak helps identify when an AI system violates safety, privacy or compliance expectations before deployment.
BlackIce — Containerized Toolkit for AI Red-Teaming & Security Testing
BlackIce is described as a standardized, containerized red-teaming toolkit for both LLMs and classical ML models. The idea is to lower the barrier to entry for AI security testing by packaging many tools into a reproducible Docker image.
It bundles a curated set of open-source tools (as of late 2025) for “Responsible AI and Security testing,” accessible via a unified CLI interface — akin to how Kali bundles network-security tools.
For governance purposes: BlackIce simplifies running comprehensive AI audits, red-teaming, and vulnerability assessments in a consistent, repeatable environment — useful for teams wanting to standardize AI governance practices.
LibVulnWatch — Supply-Chain & Library Risk Assessment for AI Projects
While not specific to LLM runtime security, LibVulnWatch focuses on evaluating open-source AI libraries (ML frameworks, inference engines, agent-orchestration tools) for security, licensing, supply-chain, maintenance and compliance risks.
It produces governance-aligned scores across multiple domains, helping organizations choose safer dependencies and keep track of underlying library health over time.
For an enterprise building or deploying AI: this kind of tool helps verify that your AI stack — not just the model — meets governance, audit, and risk standards.
Giskard offers LLM vulnerability scanning and red-teaming capabilities (prompt injection, data leakage, unsafe behavior, bias, etc.) via both an open-source library and an enterprise “Hub” for production-grade systems.
It supports “black-box” testing: you don’t need internal access to the model — as long as you have an API or interface, you can run tests.
For AI governance, Giskard helps in evaluating compliance with safety, privacy, and fairness standards before and after deployment.
🔧 What This Means for Kali Linux / Pen-Test-Oriented Workflows
The emergence of tools like garak, BlackIce, and Giskard shows that AI governance and security testing are becoming just as “testable” as traditional network or system security. For people familiar with Kali’s penetration-testing ecosystem — this is a familiar, powerful shift.
Because they are Linux/CLI-friendly and containerizable (especially BlackIce), they can integrate neatly into security-audit pipelines, continuous-integration workflows, or red-team labs — making them practical beyond research or toy use.
Using a supply-chain-risk tool like LibVulnWatch alongside model-level scanners gives a more holistic governance posture: not just “Is this LLM safe?” but “Is the whole AI stack (dependencies, libraries, models) reliable and auditable?”
⚠️ A Few Important Caveats (What They Don’t Guarantee)
Tools like garak and Giskard attempt to find common issues (jailbreaks, prompt injection, data leakage, harmful outputs), but cannot guarantee absolute safety or compliance — because many risks (e.g. bias, regulatory compliance, ethics, “unknown unknowns”) depend heavily on context (data, environment, usage).
Governance is more than security: It includes legal compliance, privacy, fairness, ethics, documentation, human oversight — many of which go beyond automated testing.
AI-governance frameworks are still evolving; even red-teaming tools may lag behind novel threat types (e.g. multi-modality, chain-of-tool-calls, dynamic agentic behaviors).
🎯 My Take / Recommendation (If You Want to Build an AI-Governance Stack Now)
If I were you and building or auditing an AI system today, I’d combine these tools:
Start with garak or Giskard to scan model behavior for injection, toxicity, privacy leaks, etc.
Use BlackIce (in a container) for more comprehensive red-teaming including chaining tests, multi-tool or multi-agent flows, and reproducible audits.
Run LibVulnWatch on your library dependencies to catch supply-chain or licensing risks.
Complement that with manual reviews, documentation, human-in-the-loop audits and compliance checks (since automated tools only catch a subset of governance concerns).
Kali doesn’t yet ship AI governance tools by default — but:
✅ Almost all of these run on Linux
✅ Many are CLI-based or Dockerized
✅ They integrate cleanly with red-team labs
✅ You can easily build a custom Kali “AI Governance profile”
My recommendation: Create:
A Docker compose stack for garak + Giskard + promptfoo
A CI pipeline for prompt & agent testing
A governance evidence pack (logs + scores + reports)
Map each tool to ISO 42001 / NIST AI RMF controls
below is a compact, actionable mapping that connects the ~10 tools we discussed to ISO/IEC 42001 clauses (high-level AI management system requirements) and to the NIST AI RMF Core functions (GOVERN / MAP / MEASURE / MANAGE). I cite primary sources for the standards and each tool so you can follow up quickly.
Notes on how to read the table • ISO 42001 — I map to the standard’s high-level clauses (Context (4), Leadership (5), Planning (6), Support (7), Operation (8), Performance evaluation (9), Improvement (10)). These are the right level for mapping tools into an AI Management System. Cloud Security Alliance+1 • NIST AI RMF — I use the Core functions: GOVERN / MAP / MEASURE / MANAGE (the AI RMF core and its intended outcomes). Tools often map to multiple functions. NIST Publications • Each row: tool → primary ISO clauses it supports → primary NIST functions it helps with → short justification + source links.
NIST AI RMF: MEASURE (testing, metrics, evaluation), MAP (identify system behavior & risks), MANAGE (remediation actions). NIST Publications+1
Why: Giskard automates model testing (bias, hallucination, security checks) and produces evidence/metrics used in audits and continuous evaluation. GitHub
2) promptfoo (prompt & RAG test suite / CI integration)
ISO 42001: 7 Support (documented procedures, competence), 8 Operation (validation before deployment), 9 Performance evaluation (continuous testing). Cloud Security Alliance
Why: promptfoo provides automated prompt tests, integrates into CI (pre-deployment gating) and produces test artifacts for governance traceability. GitHub+1
Why: LlamaFirewall is explicitly designed as a last-line runtime guardrail for agentic systems — enforcing policies and detecting task-drift/prompt injection at runtime. arXiv
ISO 42001: 8 Operation (adversarial testing), 9 Performance evaluation (benchmarks & stress tests), 10 Improvement (feed results back to controls). Cloud Security Alliance
NIST AI RMF: MEASURE (adversarial performance metrics), MAP (expose attack surface), MANAGE (prioritize fixes based on attack impact). NIST Publications+2arXiv+2
Why: These tools expand coverage of red-team tests (free-form and evolutionary adversarial prompts), surfacing edge failures and jailbreaks that standard tests miss. arXiv+1
7) Meta SecAlign (safer model / model-level defenses)
ISO 42001: 8 Operation (safe model selection/deployment), 6 Planning (risk-aware model selection), 7 Support (model documentation). Cloud Security Alliance+1
NIST AI RMF: MAP (model risk characteristics), MANAGE (apply safer model choices / mitigations), MEASURE (evaluate defensive effectiveness). NIST Publications+1
Why: A “safer” model built to resist manipulation maps directly to operational and planning controls where the organization chooses lower-risk building blocks. arXiv
8) HarmBench (benchmarks for safety & robustness testing)
ISO 42001: 9 Performance evaluation (standardized benchmarks), 8 Operation (validation against benchmarks), 10 Improvement (continuous improvement from results). Cloud Security Alliance
NIST AI RMF: MEASURE (standardized metrics & benchmarks), MAP (compare risk exposure across models), MANAGE (feed measurement results into mitigation plans). NIST Publications
Why: Benchmarks are the canonical way to measure and compare model trustworthiness and to demonstrate compliance in audits. arXiv
ISO 42001: 5 Leadership & 7 Support (policy, competence, awareness — guidance & training resources). Cloud Security Alliance
NIST AI RMF: GOVERN (policy & stakeholder guidance), MAP (inventory of recommended tools & practices). NIST Publications
Why: Curated resources help leadership define policy, identify tools, and set organizational expectations — foundational for any AI management system. Cyberzoni.com
Quick recommendations for operationalizing the mapping
Create a minimal mapping table inside your ISMS (ISO 42001) that records: tool name → ISO clause(s) it supports → NIST function(s) it maps to → artifact(s) produced (reports, SBOMs, test results). This yields audit-ready evidence. (ISO42001 + NIST suggestions above).
Automate evidence collection: integrate promptfoo / Giskard into CI so that each deployment produces test artifacts (for ISO 42001 clause 9).
Supply-chain checks: run LibVulnWatch and AI-Infra-Guard periodically to populate SBOMs and vulnerability dashboards (helpful for ISO 7 & 6).
Runtime protections: embed LlamaFirewall or runtime monitors for agentic systems to satisfy operational guardrail requirements.
Adversarial coverage: schedule periodic automated red-teaming using AutoRed / RainbowPlus / HarmBench to measure resilience and feed results into continual improvement (ISO clause 10).
At DISC InfoSec, our AI Governance services go beyond traditional security. We help organizations ensure legal compliance, privacy, fairness, ethics, proper documentation, and human oversight — addressing the full spectrum of responsible AI practices, many of which cannot be achieved through automated testing alone.
The article reports on a new “safety report card” assessing how well leading AI companies are doing at protecting humanity from the risks posed by powerful artificial-intelligence systems. The report was issued by Future of Life Institute (FLI), a nonprofit that studies existential threats and promotes safe development of emerging technologies.
This “AI Safety Index” grades companies based on 35 indicators across six domains — including existential safety, risk assessment, information sharing, governance, safety frameworks, and current harms.
In the latest (Winter 2025) edition of the index, no company scored higher than a “C+.” The top-scoring companies were Anthropic and OpenAI, followed by Google DeepMind.
Other firms, including xAI, Meta, and a few Chinese AI companies, scored D or worse.
A key finding is that all evaluated companies scored poorly on “existential safety” — which covers whether they have credible strategies, internal monitoring, and controls to prevent catastrophic misuse or loss of control as AI becomes more powerful.
Even though companies like OpenAI and Google DeepMind say they’re committed to safety — citing internal research, safeguards, testing with external experts, and safety frameworks — the report argues that public information and evidence remain insufficient to demonstrate real readiness for worst-case scenarios.
For firms such as xAI and Meta, the report highlights a near-total lack of evidence about concrete safety investments beyond minimal risk-management frameworks. Some companies didn’t respond to requests for comment.
The authors of the index — a panel of eight independent AI experts including academics and heads of AI-related organizations — emphasize that we’re facing an industry that remains largely unregulated in the U.S. They warn this “race to the bottom” dynamic discourages companies from prioritizing safety when profitability and market leadership are at stake.
The report suggests that binding safety standards — not voluntary commitments — may be necessary to ensure companies take meaningful action before more powerful AI systems become a reality.
The broader context: as AI systems play larger roles in society, their misuse becomes more plausible — from facilitating cyberattacks, enabling harmful automation, to even posing existential threats if misaligned superintelligent AI were ever developed.
In short: according to the index, the AI industry still has a long way to go before it can be considered truly “safe for humanity,” even among its most prominent players.
My Opinion
I find the results of this report deeply concerning — but not surprising. The fact that even the top-ranked firms only get a “C+” strongly suggests that current AI safety efforts are more symbolic than sufficient. It seems like companies are investing in safety only at a surface level (e.g., statements, frameworks), but there’s little evidence they are preparing in a robust, transparent, and enforceable way for the profound risks AI could pose — especially when it comes to existential threats or catastrophic misuse.
The notion that an industry with such powerful long-term implications remains essentially unregulated feels reckless. Voluntary commitments and internal policies can easily be overridden by competitive pressure or short-term financial incentives. Without external oversight and binding standards, there’s no guarantee safety will win out over speed or profits.
That said, the fact that the FLI even produces this index — and that two firms get a “C+” — shows some awareness and effort towards safety. It’s better than nothing. But awareness must translate into real action: rigorous third-party audits, transparent safety testing, formal safety requirements, and — potentially — regulation.
In the end, I believe society should treat AI much like we treat high-stakes technologies such as nuclear power: with caution, transparency, and enforceable safety norms. It’s not enough to say “we care about safety”; firms must prove they can manage the long-term consequences, and governments and civil society need to hold them accountable.
ISO 42001 (published December 2023) is the first international standard dedicated to how organizations should govern and manage AI systems — whether they build AI, use it, or deploy it in services.
It lays out what the authors call an Artificial Intelligence Management System (AIMS) — a structured governance and management framework that helps companies reduce AI-related risks, build trust, and ensure responsible AI use.
Who can use it — and is it mandatory
Any organization — profit or non-profit, large or small, in any industry — that develops or uses AI can implement ISO 42001.
For now, ISO 42001 is not legally required. No country currently mandates it.
But adopting it proactively can make future compliance with emerging AI laws and regulations easier.
What ISO 42001 requires / how it works
The standard uses a “high-level structure” similar to other well-known frameworks (like ISO 27001), covering organizational context, leadership, planning, support, operations, performance evaluation, and continual improvement.
Organizations need to: define their AI-policy and scope; identify stakeholders and expectations; perform risk and impact assessments (on company level, user level, and societal level); implement controls to mitigate risks; maintain documentation and records; monitor, audit, and review the AI system regularly; and continuously improve.
As part of these requirements, there are 38 example controls (in the standard’s Annex A) that organizations can use to reduce various AI-related risks.
Why it matters
Because AI is powerful but also risky (wrong outputs, bias, privacy leaks, system failures, etc.), having a formal governance framework helps companies be more responsible and transparent when deploying AI.
For organizations that want to build trust with customers, regulators, or partners — or anticipate future AI-related regulations — ISO 42001 can serve as a credible, standardized foundation for AI governance.
My opinion
I think ISO 42001 is a valuable and timely step toward bringing some order and accountability into the rapidly evolving world of AI. Because AI is so flexible and can be used in many different contexts — some of them high-stakes — having a standard framework helps organizations think proactively about risk, ethics, transparency, and responsibility rather than scrambling reactively.
That said — because it’s new and not yet mandatory — its real-world impact depends heavily on how widely it’s adopted. For it to become meaningful beyond “nice to have,” regulators, governments, or large enterprises should encourage or require it (or similar frameworks). Until then, it will likely be adopted mostly by forward-thinking companies or those dealing with high-impact AI systems.
🔎 My view: ISO 42001 is a meaningful first step — but (for now) best seen as a foundation, not a silver bullet
I believe ISO 42001 represents a valuable starting point for bringing structure, accountability, and risk awareness to AI development and deployment. Its emphasis on governance, impact assessment, documentation, and continuous oversight is much needed in a world where AI adoption often runs faster than regulation or best practices.
That said — given its newness, generality, and the typical resource demands — I see it as necessary but not sufficient. It should be viewed as the base layer: useful for building internal discipline, preparing for regulatory demands, and signaling commitment. But to address real-world ethical, social, and technical challenges, organizations likely need additional safeguards — e.g. context-specific controls, ongoing audits, stakeholder engagement, domain-specific reviews, and perhaps even bespoke governance frameworks tailored to the type of AI system and its use cases.
In short: ISO 42001 is a strong first step — but real responsible AI requires going beyond standards to culture, context, and continuous vigilance.
✅ Real-world adopters of ISO 42001
IBM (Granite models)
IBM became “the first major open-source AI model developer to earn ISO 42001 certification,” for its “Granite” family of open-source language models.
The certification covers the management system for development, deployment, and maintenance of Granite — meaning IBM formalized policies, governance, data practices, documentation, and risk controls under AIMS (AI Management System).
According to IBM, the certification provides external assurance of transparency, security, and governance — helping enterprises confidently adopt Granite in sensitive contexts (e.g. regulated industries).
Infosys
Infosys — a global IT services and consulting company — announced in May 2024 that it had received ISO 42001:2023 certification for its AI Management System.
Their certified “AIMS framework” is part of a broader set of offerings (the “Topaz Responsible AI Suite”), which supports clients in building and deploying AI responsibly, with structured risk mitigations and accountability.
This demonstrates that even big consulting companies, not just pure-AI labs, see value in adopting ISO 42001 to manage AI at scale within enterprise services.
JAGGAER (Source-to-Pay / procurement software)
JAGGAER — a global player in procurement / “source-to-pay” software — announced that it achieved ISO 42001 certification for its AI Management System in June 2025.
For JAGGAER, the certification reflects a commitment to ethical, transparent, secure deployment of AI within its procurement platform.
This shows how ISO 42001 can be used not only by AI labs or consultancy firms, but by business-software companies integrating AI into domain-specific applications.
🧠 My take — promising first signals, but still early days
These early adopters make a strong case that ISO 42001 can work in practice across very different kinds of organizations — not just AI-native labs, but enterprises, service providers, even consulting firms. The variety and speed of adoption (multiple firms in 2024–2025) demonstrate real momentum.
At the same time — adoption appears selective, and for many companies, the process may involve minimal compliance effort rather than deep, ongoing governance. Because the standard and the ecosystem (auditors, best-practice references, peer case studies) are both still nascent, there’s a real risk that ISO 42001 becomes more of a “badge” than a strong guardrail.
In short: I see current adoptions as proof-of-concepts — promising early examples showing how ISO 42001 could become an industry baseline. But for it to truly deliver on safe, ethical, responsible AI at scale, we’ll need: more widespread adoption across sectors; shared transparency about governance practices; public reporting on outcomes; and maybe supplementary audits or domain-specific guidelines (especially for high-risk AI uses).
Most organizations think they’re ready for AI governance — until ISO/IEC 42001 shines a light on the gaps. With 47 new AI-specific controls, this standard is quickly becoming the global expectation for responsible and compliant AI deployment. To help teams get ahead, we built a free ISO 42001 Compliance Checklist that gives you a readiness score in under 10 minutes, plus a downloadable gap report you can share internally. It’s a fast way to validate where you stand today and what you’ll need to align with upcoming regulatory and customer requirements. If improving AI trust, risk posture, and audit readiness is on your roadmap, this tool will save your team hours.
Managing AI Risks Through Strong Governance, Compliance, and Internal Audit Oversight
Organizations are adopting AI at a rapid pace, and many are finding innovative ways to extract business value from these technologies. As AI capabilities expand, so do the risks that must be properly understood and managed.
Internal audit teams are uniquely positioned to help organizations deploy AI responsibly. Their oversight ensures AI initiatives are evaluated with the same rigor applied to other critical business processes.
By participating in AI governance committees, internal audit can help set standards, align stakeholders, and bring clarity to how AI is adopted across the enterprise.
A key responsibility is identifying the specific risks associated with AI systems—whether ethical, technical, regulatory, or operational—and determining whether proper controls are in place to address them.
Internal audit also plays a role in interpreting and monitoring evolving regulations. As governments introduce new AI-specific rules, companies must demonstrate compliance, and auditors help ensure they are prepared.
Several indicators signal growing AI risk within an organization. One major warning sign is the absence of a formal AI risk management framework or any consistent evaluation of AI initiatives through a risk lens.
Another risk indicator arises when new regulations create uncertainty about whether the company’s AI practices are compliant—raising concerns about gaps in oversight or readiness.
Organizations without a clear AI strategy, or those operating multiple isolated AI projects, may fail to realize the intended benefits. Fragmentation often leads to inefficiencies and unmanaged risks.
If AI initiatives continue without centralized governance, the organization may lose visibility into how AI is used, making it difficult to maintain accountability, consistency, and compliance.
Potential Impacts of Failing to Audit AI (Summary)
The organization may face regulatory violations, fines, or enforcement actions.
Biased or flawed AI outputs could damage the company’s reputation.
Operational disruptions may occur if AI systems fail or behave unpredictably.
Weak AI oversight can result in financial losses.
Unaddressed vulnerabilities in AI systems could lead to cybersecurity incidents.
My Opinion
Auditing AI is no longer optional—it is becoming a foundational part of digital governance. Without structured oversight, AI can expose organizations to reputational damage, operational failures, regulatory penalties, and security weaknesses. A strong AI audit function ensures transparency, accountability, and resilience. In my view, organizations that build mature AI auditing capabilities early will not only avoid risk but also gain a competitive edge by deploying trustworthy, well-governed AI at scale.
The Road to Enterprise AGI: Why Reliability Matters More Than Intelligence
1️⃣ Why Practical Reliability Matters
Many current AI systems — especially large language models (LLMs) and multimodal models — are non-deterministic: the same prompt can produce different outputs at different times.
For enterprises, non-determinism is a huge problem:
Compliance & auditability: Industries like finance, healthcare, and regulated manufacturing require traceable, reproducible decisions. An AI that gives inconsistent advice is essentially unusable in these contexts.
Risk management: If AI recommendations are unpredictable, companies can’t reliably integrate them into business-critical workflows.
Integration with existing systems: ERP, CRM, legal review systems, and automation pipelines need predictable outputs to function smoothly.
Murati’s research at Thinking Machines Lab directly addresses this. By working on deterministic inference pipelines, the goal is to ensure AI outputs are reproducible, reducing operational risk for enterprises. This moves generative AI from “experimental assistant” to a trusted tool. (a tool called Tinker that automates the creation of custom frontier AI models)
2️⃣ Enterprise Readiness
Security & Governance Integration: Enterprise adoption requires AI systems that comply with security policies, privacy standards, and governance rules. Murati emphasizes creating auditable, controllable AI.
Customization & Human Alignment: Businesses need AI that can be configured for specific workflows, tone, or operational rules — not generic “off-the-shelf” outputs. Thinking Machines Lab is focusing on human-aligned AI, meaning the system can be tailored while maintaining predictable behavior.
Operational Reliability: Enterprise-grade software demands high uptime, error handling, and predictable performance. Murati’s approach suggests that her AI systems are being designed with industrial-grade reliability, not just research demos.
3️⃣ The Competitive Edge
By tackling reproducibility and reliability at the inference level, her startup is positioning itself to serve companies that cannot tolerate “creative AI outputs” that are inconsistent or untraceable.
This is especially critical in sectors like:
Healthcare: AI-assisted diagnoses need predictable outputs.
Regulated Manufacturing & Energy: Decision-making and operational automation must be deterministic to meet safety standards.
Murati isn’t just building AI that “works,” she’s building AI that can be safely deployed in regulated, risk-sensitive environments. This aligns strongly with InfoSec, vCISO, and compliance priorities, because it makes AI audit-ready, predictable, and controllable — moving it from a curiosity or productivity tool to a reliable enterprise asset. In Short Building Trustworthy AGI: Determinism, Governance, and Real-World Readiness…
Murati’s Thinking Machines in Talks for $50 Billion Valuation
The legal profession is facing a pivotal turning point because AI tools — from document drafting and research to contract review and litigation strategy — are increasingly integrated into day-to-day legal work. The core question arises: when AI messes up, who is accountable? The author argues: the lawyer remains accountable.
Courts and bar associations around the world are enforcing this principle strongly: they are issuing sanctions when attorneys submit AI-generated work that fabricates citations, invents case law, or misrepresents “AI-generated” arguments as legitimate.
For example, in a 2023 case (Mata v. Avianca, Inc.), attorneys used an AI to generate research citing judicial opinions that didn’t exist. The court found this conduct inexcusable and imposed financial penalties on the lawyers.
In another case from 2025 (Frankie Johnson v. Jefferson S. Dunn), lawyers filed motions containing entirely fabricated legal authority created by generative AI. The court’s reaction was far more severe: the attorneys received public reprimands, and their misconduct was referred for possible disciplinary proceedings — even though their firm avoided sanctions because it had institutional controls and AI-use policies in place.
The article underlines that the shift to AI in legal work does not change the centuries-old principles of professional responsibility. Rules around competence, diligence, and confidentiality remain — but now lawyers must also acquire enough “AI literacy.” That doesn’t mean they must become ML engineers; but they should understand AI’s strengths and limits, know when to trust it, and when to independently verify its outputs.
Regarding confidentiality, when lawyers use AI tools, they must assess the risk that client-sensitive data could be exposed — for example, accidentally included in AI training sets, or otherwise misused. Using free or public AI tools for confidential matters is especially risky.
Transparency and client communication also become more important. Lawyers may need to disclose when AI is being used in the representation, what type of data is processed, and how use of AI might affect cost, work product, or confidentiality. Some forward-looking firms include AI-use policies upfront in engagement letters.
On a firm-wide level, supervisory responsibilities still apply. Senior attorneys must ensure that any AI-assisted work by junior lawyers or staff meets professional standards. That includes establishing governance: AI-use policies, training, review protocols, oversight of external AI providers.
Many larger law firms are already institutionalizing AI governance — setting up AI committees, defining layered review procedures (e.g. verifying AI-generated memos against primary sources, double-checking clauses, reviewing briefs for “hallucinations”).
The article’s central message: AI may draft documents or assist in research, but the lawyer must answer. Technology can assist, but it cannot assume human professional responsibility. The “algorithm may draft — the lawyer is accountable.”
My Opinion
I think this article raises a crucial and timely point. As AI becomes more capable and tempting as a tool for legal work, the risk of over-reliance — or misuse — is real. The documented sanctions show that courts are no longer tolerant of unverified AI-generated content. This is especially relevant given the “black-box” nature of many AI models and their propensity to hallucinate plausible but false information.
For the legal profession to responsibly adopt AI, the guidelines described — AI literacy, confidentiality assessment, transparent client communication, layered review — aren’t optional luxuries; they’re imperative. In other words: AI can increase efficiency, but only under strict governance, oversight, and human responsibility.
Given my background in information security and compliance — and interest in building services around risk, governance and compliance — this paradigm resonates. It suggests that as AI proliferates (in law, security, compliance, consulting, etc.), there will be increasing demand for frameworks, policies, and oversight mechanisms ensuring trustworthy use. Designing such frameworks might even become a valuable niche service.
1. Sam Altman — CEO of OpenAI, the company behind ChatGPT — recently issued a sobering warning: he expects “some really bad stuff to happen” as AI technology becomes more powerful.
2. His concern isn’t abstract. He pointed to real‑world examples: advanced tools such as Sora 2 — OpenAI’s own AI video tool — have already enabled the creation of deepfakes. Some of these deepfakes, misusing public‑figure likenesses (including Altman’s own), went viral on social media.
3. According to Altman, these are only early warning signs. He argues that as AI becomes more accessible and widespread, humans and society will need to “co‑evolve” alongside the technology — building not just tech, but the social norms, guardrails, and safety frameworks that can handle it.
4. The risks are multiple: deepfakes could erode public trust in media, fuel misinformation, enable fraud or identity‑related crimes, and disrupt how we consume and interpret information online. The technology’s speed and reach make the hazards more acute.
5. Altman cautioned against overreliance on AI‑based systems for decision-making. He warned that if many users start trusting AI outputs — whether for news, advice, or content — we might reach “societal‑scale” consequences: unpredictable shifts in public opinion, democracy, trust, and collective behavior.
6. Still, despite these grave warnings, Altman dismissed calls for heavy regulatory restrictions on AI’s development and release. Instead, he supports “thorough safety testing,” especially for the most powerful models — arguing that regulation may have unintended consequences or slow beneficial progress.
7. Critics note a contradiction: the same company that warns of catastrophic risks is actively releasing powerful tools like Sora 2 to the public. That raises concerns about whether early release — even in the name of “co‑evolution” — irresponsibly accelerates exposure to harm before adequate safeguards are in place.
8. The bigger picture: what happens now will likely shape how society, law, and norms adapt to AI. If deepfake tools and AI‑driven content become commonplace, we may face a future where “seeing is believing” no longer holds true — and navigating truth vs manipulation becomes far harder.
9. In short: Altman’s warning serves partly as a wake‑up call. He’s not just flagging technical risk — he’s asking society to seriously confront how we consume, trust, and regulate AI‑powered content. At the same time, his company continues to drive that content forward. It’s a tension between innovation and caution — with potentially huge societal implications.
🔎 My Opinion
I think Altman’s public warning is important and overdue — it’s rare to see an industry leader acknowledge the dangers of their own creations so candidly. This sort of transparency helps start vital conversations about ethics, regulation, and social readiness.
That said, I’m concerned that releasing powerful AI capabilities broadly, while simultaneously warning they might cause severe harm, feels contradictory. If companies push ahead with widespread deployment before robust guardrails are tested and widely adopted, we risk exposing society to misinformation, identity fraud, erosion of trust, and social disruption.
Given how fast AI adoption is accelerating — and how high the stakes are — I believe a stronger emphasis on AI governance, transparency, regulation, and public awareness is essential. Innovation should continue, but not at the expense of public safety, trust, and societal stability.