Jul 19 2025

The AI Readiness Gap: High Usage, Low Security

Category: AI | disc7 @ 3:56 pm

1. AI Adoption Rates Are Sky‑High
According to F5’s mid-2025 report, based on input from 650 IT leaders and 150 AI strategists across large enterprises, a staggering 96% of organizations are deploying AI models in some form. Yet only 2% qualify as “highly ready” to scale AI securely throughout their operations.

2. Readiness Is Mostly Moderate or Low
While the majority (77%) fall into a “moderately ready” category, they often lack robust governance and security practices. Meanwhile, 21% are low-readiness, running AI in siloed or experimental contexts rather than at scale.

3. AI Usage vs. Saturation
Even in moderately ready firms, AI is actively used: around 70% already employ generative AI, and on average 25% of their applications incorporate AI. In low-readiness firms, AI remains under-utilized, typically appearing in less than one quarter of apps.

4. Model Diversity and Risks
Most organizations use a diverse mix of tools: 65% run two or more paid AI models alongside at least one open-source variant (e.g., GPT-4, Llama, Mistral, Gemma). However, this diversity heightens risk unless proper governance is in place.

5. Security Gaps Leave Firms Vulnerable
Only 18% of moderately ready firms have deployed an AI firewall, though 47% plan to do so within a year. Continuous data labeling, a key measure for transparency and adversarial resilience, is practiced by just 24%. Hybrid and multi-cloud environments exacerbate governance gaps and expand the attack surface.

6. Recommendations for Improvement
F5’s report urges companies to diversify models under tight governance; embed AI across workflows, analytics, and security; deploy AI-specific protections such as firewalls; and institutionalize formal data governance, including continuous labeling, to safely scale AI.

7. Strategic Alignment Is Essential
Leaders are clear: AI demands more than experimentation. To truly harness AI’s potential, organizations must align strategy, operations, and risk controls. Without mature governance and cross‑cloud security alignment, AI risks becoming a liability rather than a transformative asset.


AI adoption is widespread, but deep readiness is rare

This report paints a familiar picture: AI adoption is widespread, but deep readiness is rare. While nearly all organizations are deploying AI, very few (just 2%) are prepared to scale it securely and strategically. The gap between “AI explored” and “AI operationalized responsibly” is wide and risky.

The reliance on multiple models—particularly open‑source variants—without strong governance frameworks is especially concerning. AI firewalls and continuous data labeling, currently underutilized, should be treated as foundational controls—not optional add‑ons.

Ultimately, organizations that treat AI scaling as a strategic transformation—rather than just a technical experiment—will lead. This requires aligning technology investment, data culture, governance, and workforce skills. Firms that ignore these pillars may see short‑term gains in AI experimentation, but they’ll miss long‑term value—and may expose themselves to unnecessary risk.

Artificial Intelligence (AI) Governance and Cyber-Security: A beginner’s handbook on securing and governing AI systems

Databricks AI Security Framework (DASF) and the AI Controls Matrix (AICM) from CSA can both be used effectively for AI security readiness assessments, though they serve slightly different purposes and scopes.


How to Use DASF for AI Security Readiness Assessment

DASF focuses specifically on securing AI and ML systems throughout the model lifecycle. It is particularly suited to technical assessments in data- and model-centric environments like Databricks, but it can be adapted elsewhere.

Key steps:

  1. Map Your AI Lifecycle: Identify where your models are in the lifecycle—data ingestion, training, evaluation, deployment, monitoring.
  2. Assess Security Controls by Domain: DASF groups its controls into domains such as:
    • Data protection
    • Model integrity
    • Access controls
    • Incident response
  3. Score Maturity: Rate each domain (e.g., 0–5 scale) based on current security implementations.
  4. Gap Analysis: Highlight where controls are absent or underdeveloped.
  5. Prioritize Remediation: Use risk impact (data sensitivity, exposure risk) to prioritize control improvements; a minimal scoring sketch follows this list.
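
To make steps 3–5 concrete, here is a minimal sketch, in Python, of how domain maturity scores could be turned into a prioritized gap list. The domain names, ratings, 0–5 scale, and sensitivity weights are illustrative assumptions, not an official DASF artifact.

```python
# Minimal sketch: score DASF-style control domains and rank the gaps.
# Domain names, ratings, and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DomainScore:
    domain: str
    maturity: int          # 0 (absent) .. 5 (optimized)
    data_sensitivity: int  # 1 (low) .. 3 (high), used to weight risk

assessment = [
    DomainScore("Data protection", maturity=2, data_sensitivity=3),
    DomainScore("Model integrity", maturity=1, data_sensitivity=2),
    DomainScore("Access controls", maturity=4, data_sensitivity=3),
    DomainScore("Incident response", maturity=0, data_sensitivity=2),
]

TARGET = 3  # minimum acceptable maturity for this assessment cycle

def gap_report(scores, target=TARGET):
    """Return domains below target, worst weighted risk first."""
    gaps = [s for s in scores if s.maturity < target]
    # Weight the shortfall by data sensitivity so remediation order
    # reflects impact, not just how low the maturity score is.
    return sorted(gaps,
                  key=lambda s: (target - s.maturity) * s.data_sensitivity,
                  reverse=True)

for s in gap_report(assessment):
    risk = (TARGET - s.maturity) * s.data_sensitivity
    print(f"{s.domain}: maturity {s.maturity}/{TARGET}, weighted risk {risk}")
```

A spreadsheet does the same job; the point is that step 5 becomes mechanical once every gap carries an explicit risk weight.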

✅ Best for:

  • ML-heavy organizations
  • Data science and engineering teams
  • Deep-dive technical control validation


How to Use AICM (AI Controls Matrix by CSA)

AICM is a comprehensive, governance-first control matrix with 243 control objectives across 18 domains, aligned with industry standards and regulations such as ISO/IEC 42001, the NIST AI RMF, and the EU AI Act.

Key steps:

  1. Map Business and Risk Context: Understand how AI is used in business processes, risk categories, and critical assets.
  2. Select Relevant Controls: Use AICM to filter based on AI system types (foundational, open source, fine-tuned, etc.).
  3. Perform Readiness Assessment:
    • Mark controls as implemented, partially implemented, or not implemented.
    • Evaluate across governance, privacy, data security, lifecycle management, transparency, etc.
  4. Generate a Risk Scorecard: Assign weighted risk scores to each domain or control set (see the sketch after this list).
  5. Benchmark Against Frameworks: AICM allows alignment with ISO 42001, NIST AI RMF, etc., to help demonstrate compliance.
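
As a rough illustration of steps 3 and 4, the sketch below rolls per-control implementation statuses up into a weighted domain scorecard. The domains, weights, and statuses are hypothetical examples, not AICM content; the real matrix defines 243 control objectives across 18 domains.

```python
# Rough sketch: roll AICM-style control statuses up to a domain scorecard.
# Domains, weights, and statuses are hypothetical, not actual AICM content.
STATUS_CREDIT = {"implemented": 1.0, "partial": 0.5, "not_implemented": 0.0}

# (domain, weight) -> per-control statuses recorded during the assessment
controls = {
    ("Governance & accountability", 3): ["implemented", "partial", "not_implemented"],
    ("Data security & privacy", 3):     ["partial", "partial", "implemented"],
    ("Transparency", 2):                ["not_implemented", "partial"],
    ("Lifecycle management", 2):        ["implemented", "implemented"],
}

def scorecard(controls):
    """Yield (domain, readiness %, weighted residual risk) per domain."""
    for (domain, weight), statuses in controls.items():
        readiness = sum(STATUS_CREDIT[s] for s in statuses) / len(statuses)
        yield domain, round(100 * readiness), round((1 - readiness) * weight, 2)

for domain, pct, risk in sorted(scorecard(controls), key=lambda row: -row[2]):
    print(f"{domain}: {pct}% ready, weighted residual risk {risk}")
```

Sorting by residual risk rather than raw readiness keeps attention on the domains that matter most to the business, which is what the weighting in step 4 is for.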

✅ Best for:

  • Enterprise risk & compliance teams
  • vCISOs / AI governance leads
  • Cross-functional readiness scoring (governance + technical)


🔁 How to Combine DASF and AICM

You can layer both:

  • Use AICM for the top-down governance, risk, and control mapping, especially to align with regulatory requirements.
  • Use DASF for bottom-up, technical control assessments focused on securing actual AI/ML pipelines and systems.

For example (the DASF side of this pairing is sketched in code below):

  • AICM will ask “Do you have data lineage and model accountability policies?”
  • DASF will validate “Are you logging model inputs/outputs and tracking versions with access controls in place?”
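
To ground that DASF validation in something concrete, here is a minimal sketch, assuming a Python inference wrapper, of logging model inputs/outputs together with the model version. The function names, log format, and hashing choice are illustrative assumptions, not a DASF-mandated implementation.

```python
# Minimal sketch: an inference wrapper that ties each input/output pair
# to a model version in an audit log. Field names and the choice to hash
# rather than store raw text are illustrative assumptions.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("model_audit")

def audited_predict(model_fn, model_version: str, user_id: str, prompt: str) -> str:
    """Run inference and emit an audit record linking input, output, and version."""
    output = model_fn(prompt)
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "user_id": user_id,
        # Hash the payloads so the audit trail itself stays low-sensitivity.
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    audit_log.info(json.dumps(record))
    return output

# Usage with a stand-in model:
answer = audited_predict(lambda p: p.upper(), "demo-model-1.2.0", "u-042", "hello")
```

Evidence like this log stream is exactly what turns AICM’s policy question into a verifiable “yes.”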


🧠 Final Thought

Using DASF + AICM together gives you a holistic AI security readiness assessment—governance at the top, technical controls at the ground level. This combination is particularly powerful for AI risk audits, compliance readiness, or building an AI security roadmap.

⚙️ Service Name

AI Security Readiness Assessment (ASRA)
(Powered by CSA AICM + Databricks DASF)

📋 Scope of Work

Phase 1 – Discovery & Scoping

  • Business use cases of AI
  • Model types and deployment workflows
  • Applicable regulations and frameworks (e.g., EU AI Act, ISO 42001, NIST AI RMF)

Phase 2 – AICM-Based Governance Readiness

  • 18 domains / 243 controls (filtered by your AI system type)
  • Governance, accountability, transparency, bias, privacy, etc.
  • Scorecard: Implemented / Partial / Not Implemented
  • Regulatory alignment

Phase 3 – DASF-Based Technical Security Review

  • AI/ML pipeline review (data ingestion → model monitoring)
  • Model protection, access controls, audit logging
  • ML-specific threat modeling
  • Deployment maturity review (cloud, on-prem, hybrid)

Phase 4 – Gap Analysis & Risk Scorecard

  • Heat map by control domain (a toy rendering sketch follows this list)
  • Weighted risk scores and impact areas
  • Governance + technical risk exposure
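
As a toy illustration of the heat-map deliverable, the sketch below renders per-domain weighted risk scores as a text heat map. The domains, scores, and band thresholds are invented for illustration.

```python
# Toy sketch: render weighted domain risk scores as a text heat map.
# Domains, scores, and band thresholds are invented for illustration.
domain_risk = {
    "Governance": 2.4,
    "Data security": 1.5,
    "Transparency": 0.8,
    "Model integrity": 2.9,
    "Incident response": 1.1,
}

def band(score: float) -> str:
    if score >= 2.0:
        return "HIGH " + "#" * 10
    if score >= 1.0:
        return "MED  " + "#" * 5
    return "LOW  " + "#"

for domain, score in sorted(domain_risk.items(), key=lambda kv: -kv[1]):
    print(f"{domain:<18} {score:>4.1f}  {band(score)}")
```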

Phase 5 – Action Plan & Recommendations

  • Prioritized remediation roadmap
  • Suggested tooling or automation
  • Quick wins vs. strategic improvements
  • Optional: Continuous assessment model

📊 Deliverables

  • 10-page AI Security Risk Scorecard
  • 1-page Executive Summary with Risk Heatmap
  • Custom Governance & Security Gap Report
  • Actionable Roadmap aligned to business goals

Feel free to reach out with any questions. ✉ info@deurainfosec.com ☏ (707) 998-5164


Tags: AI Readiness Gap