When people talk about “AI hallucinations,” they usually frame them as technical glitches that engineers will eventually fix. But a new research paper, “Why Language Models Hallucinate” (Kalai, Nachum, Vempala, and Zhang, 2025), makes a critical point: hallucinations aren’t just quirks of large language models. They are statistically inevitable.
Even if you train a model on flawless data, there will always be prompts where true and false statements are statistically indistinguishable. If a fact appears only once in the training data (a particular person’s birthday, say), the model has no basis for separating the true answer from other plausible ones. And like students facing hard exam questions, models are incentivized to “guess” rather than admit uncertainty. This guessing is what creates hallucinations.
Here’s the governance problem: most AI benchmarks reward accuracy over honesty. A model that answers every question — even with confident falsehoods — often scores better than one that admits “I don’t know.” That means many AI vendors are optimizing for sounding right, not being right.
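To make that incentive concrete, here is a minimal sketch of the arithmetic (the numbers are illustrative, not taken from the paper’s experiments). Under the binary grading most leaderboards use, a correct answer earns one point while a wrong answer and “I don’t know” both earn zero, so any nonzero chance of being right makes guessing the dominant strategy:

```python
# Illustrative arithmetic: expected score on one binary-graded question,
# where a correct answer earns 1 point and both a wrong answer and
# "I don't know" earn 0.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected points for one question.

    p_correct: the model's chance of being right if it guesses.
    abstain:   if True, the model says "I don't know" and scores 0.
    """
    if abstain:
        return 0.0        # honesty is never rewarded under binary grading
    return p_correct      # guessing earns p_correct points in expectation

# Even a model that is only 20% sure gains by guessing:
print(expected_score(0.20, abstain=False))  # 0.2
print(expected_score(0.20, abstain=True))   # 0.0
```

A benchmark that gave credit for abstaining, or penalized confident errors more heavily than admitted uncertainty, would remove that incentive.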
For regulated industries, that’s not a technical nuisance. It’s a compliance risk. Imagine a customer service AI falsely assuring a patient that their health records are encrypted, or an AI-generated financial disclosure that contains fabricated numbers. The fallout isn’t just reputational — it’s regulatory.
Organizations need to treat hallucinations the same way they treat phishing, insider threats, or any other persistent risk:
- Add AI hallucinations explicitly to the risk register.
- Define acceptable error thresholds by use case (what’s tolerable in marketing may be catastrophic in finance).
- Require vendors to disclose hallucination rates and abstention behavior, not just accuracy scores.
- Build governance processes where AI is allowed, even encouraged, to say, “I don’t know” (see the sketch after this list).
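What such a process can look like in practice: below is a minimal, hypothetical sketch assuming the system exposes a calibrated confidence score (real deployments usually have to approximate one). The use-case names and thresholds are illustrative, not a standard or a vendor API.

```python
# Hypothetical governance gate: route each AI answer through a per-use-case
# confidence threshold, and abstain when the model is not confident enough.
# Names and thresholds are illustrative only.

USE_CASE_THRESHOLDS = {
    "marketing_copy": 0.50,        # tolerable error budget
    "customer_support": 0.80,
    "financial_disclosure": 0.99,  # near-zero tolerance
}

def governed_answer(use_case: str, answer: str, confidence: float) -> str:
    """Return the model's answer only if it clears the use case's bar."""
    # Unknown use cases default to the strictest threshold.
    threshold = USE_CASE_THRESHOLDS.get(use_case, 0.99)
    if confidence < threshold:
        return "I don't know"      # abstention is an allowed, logged outcome
    return answer

print(governed_answer("financial_disclosure", "Q3 revenue was $4.2M", 0.90))
# -> "I don't know"  (0.90 < 0.99, so the system abstains)
```

The design point is that abstention becomes a first-class, auditable outcome rather than a failure mode, which is exactly what the threshold and disclosure items above require.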
AI hallucinations aren’t going away. The question is whether your governance framework is mature enough to manage them. In compliance, pretending the problem doesn’t exist is the real hallucination.
