1. AI Adoption Rates Are Sky-High
According to F5's mid-2025 report, based on input from 650 IT leaders and 150 AI strategists across large enterprises, a staggering 96% of organizations are deploying AI models in some form. Yet only 2% qualify as "highly ready" to scale AI securely throughout their operations.
2. Readiness Is Mostly Moderate or Low
While the majority (77%) fall into a "moderately ready" category, they often lack robust governance and security practices. Meanwhile, 21% are low-readiness, executing AI in siloed or experimental contexts rather than at scale.
3. AI Usage vs. Saturation
Even in moderately ready firms, AI is actively used: around 70% already employ generative AI, and 25% of applications on average incorporate AI. In low-readiness firms, AI remains under-utilized, typically appearing in less than one quarter of apps.
4. Model Diversity and Risks
Most organizations use a diverse mix of tools: 65% run two or more paid AI models alongside at least one open-source variant (e.g., GPT-4, Llama, Mistral, Gemma). However, this diversity heightens risk unless proper governance is in place.
5. Security Gaps Leave Firms Vulnerable
Only 18% of moderately ready firms have deployed an AI firewall, though 47% plan to within a year. Continuous data labeling, a key measure for transparency and adversarial resilience, is practiced by just 24%. Hybrid and multi-cloud environments exacerbate governance gaps and expand the attack surface.
6. Recommendations for Improvement
F5's report urges companies to: diversify models under tight governance; embed AI across workflows, analytics, and security; deploy AI-specific protections such as firewalls; and institutionalize formal data governance, including continuous labeling, to scale AI safely.
7. Strategic Alignment Is Essential
Leaders are clear: AI demands more than experimentation. To truly harness AI's potential, organizations must align strategy, operations, and risk controls. Without mature governance and cross-cloud security alignment, AI risks becoming a liability rather than a transformative asset.
AI adoption is widespread, but deep readiness is rare
This report paints a familiar picture: AI adoption is widespread, but deep readiness is rare. While nearly all organizations are deploying AI, very few (just 2%) are prepared to scale it securely and strategically. The gap between "AI explored" and "AI operationalized responsibly" is wide and risky.
The reliance on multiple models, particularly open-source variants, without strong governance frameworks is especially concerning. AI firewalls and continuous data labeling, currently underutilized, should be treated as foundational controls, not optional add-ons.
Ultimately, organizations that treat AI scaling as a strategic transformation rather than just a technical experiment will lead. This requires aligning technology investment, data culture, governance, and workforce skills. Firms that ignore these pillars may see short-term gains in AI experimentation, but they'll miss long-term value and may expose themselves to unnecessary risk.

Databricks AI Security Framework (DASF) and the AI Controls Matrix (AICM) from CSA can both be used effectively for AI security readiness assessments, though they serve slightly different purposes and scopes.
✅ How to Use DASF for AI Security Readiness Assessment
DASF focuses specifically on securing AI and ML systems throughout the model lifecycle. It's particularly suited for technical assessments in data- and model-centric environments like Databricks, but can be adapted elsewhere.
Key steps:
- Map Your AI Lifecycle: Identify where your models are in the lifecycle: data ingestion, training, evaluation, deployment, monitoring.
- Assess Security Controls by Domain: DASF groups controls into domains such as:
- Data protection
- Model integrity
- Access controls
- Incident response
- Score Maturity: Rate each domain (e.g., on a 0-5 scale) based on current security implementations.
- Gap Analysis: Highlight where controls are absent or underdeveloped.
- Prioritize Remediation: Use risk impact (data sensitivity, exposure risk) to prioritize control improvements.
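The scoring and gap-analysis steps above can be sketched in a few lines of Python. The domain names and the 0-5 maturity scale come from the steps in the text; the sample scores and the gap threshold are illustrative assumptions, not DASF requirements.

```python
# Minimal sketch of DASF-style maturity scoring and gap analysis.
# Domains follow the categories listed above; scores and the
# remediation threshold are illustrative assumptions.

def gap_analysis(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return domains whose maturity rating falls below the target threshold."""
    for domain, score in scores.items():
        if not 0 <= score <= 5:
            raise ValueError(f"{domain}: maturity must be on a 0-5 scale")
    return [d for d, s in scores.items() if s < threshold]

current = {
    "Data protection": 4,
    "Model integrity": 2,
    "Access controls": 3,
    "Incident response": 1,
}
print(gap_analysis(current))  # domains needing prioritized remediation
```

In practice you would weight the resulting gap list by risk impact (data sensitivity, exposure risk) before prioritizing, as the last step describes.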
✅ Best for:
- ML-heavy organizations
- Data science and engineering teams
- Deep-dive technical control validation
✅ How to Use AICM (AI Controls Matrix by CSA)
AICM is a comprehensive, governance-first matrix with 243 control objectives across 18 domains, aligned with industry standards like ISO 42001, NIST AI RMF, and EU AI Act.
Key steps:
- Map Business and Risk Context: Understand how AI is used in business processes, risk categories, and critical assets.
- Select Relevant Controls: Use AICM to filter based on AI system types (foundational, open source, fine-tuned, etc.).
- Perform Readiness Assessment:
- Mark controls as implemented, partially implemented, or not implemented.
- Evaluate across governance, privacy, data security, lifecycle management, transparency, etc.
- Generate a Risk Scorecard: Assign weighted risk scores to each domain or control set.
- Benchmark Against Frameworks: AICM allows alignment with ISO 42001, NIST AI RMF, etc., to help demonstrate compliance.
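The readiness-assessment and risk-scorecard steps above can be sketched as a small roll-up: each control is marked implemented, partially implemented, or not implemented, and a weighted score is computed per domain. The control IDs, weights, and the status-to-credit mapping are assumptions for illustration, not part of the AICM itself.

```python
# Illustrative sketch of an AICM-style weighted domain scorecard.
# Status values mirror the Implemented / Partial / Not Implemented
# markings described above; weights and credits are assumed.

CREDIT = {"implemented": 1.0, "partial": 0.5, "not_implemented": 0.0}

def domain_score(controls: list[tuple[str, str, float]]) -> float:
    """controls: (control_id, status, weight). Returns weighted readiness, 0-100."""
    total_weight = sum(w for _, _, w in controls)
    earned = sum(CREDIT[status] * w for _, status, w in controls)
    return round(100 * earned / total_weight, 1)

governance = [
    ("GOV-01", "implemented", 2.0),      # hypothetical control IDs
    ("GOV-02", "partial", 1.0),
    ("GOV-03", "not_implemented", 1.0),
]
print(domain_score(governance))
```

Repeating this per domain (governance, privacy, data security, lifecycle management, transparency, etc.) yields the risk scorecard you can then benchmark against ISO 42001 or NIST AI RMF mappings.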
✅ Best for:
- Enterprise risk & compliance teams
- vCISOs / AI governance leads
- Cross-functional readiness scoring (governance + technical)
🔁 How to Combine DASF and AICM
You can layer both:
- Use AICM for the top-down governance, risk, and control mapping, especially to align with regulatory requirements.
- Use DASF for bottom-up, technical control assessments focused on securing actual AI/ML pipelines and systems.
For example:
- AICM will ask “Do you have data lineage and model accountability policies?”
- DASF will validate “Are you logging model inputs/outputs and tracking versions with access controls in place?”
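One way to operationalize this layering is a simple mapping from each top-down AICM control to the bottom-up DASF technical checks that evidence it, as in the lineage/logging example above. The control IDs and check names here are hypothetical placeholders.

```python
# Sketch of layering AICM (governance) over DASF (technical evidence).
# A governance control counts as evidenced only if every mapped
# technical check passes. All identifiers are hypothetical.

AICM_TO_DASF = {
    "data-lineage-policy": ["log_model_inputs_outputs", "track_model_versions"],
    "model-accountability": ["enforce_access_controls", "audit_model_changes"],
}

def evidence_coverage(passed_checks: set[str]) -> dict[str, bool]:
    """Mark each governance control as evidenced (True) or not (False)."""
    return {
        control: all(check in passed_checks for check in checks)
        for control, checks in AICM_TO_DASF.items()
    }

print(evidence_coverage({"log_model_inputs_outputs", "track_model_versions"}))
```

The design choice here is deliberate: governance claims are only as strong as the technical evidence beneath them, so partial technical coverage leaves the control unevidenced.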
🧠 Final Thought
Using DASF + AICM together gives you a holistic AI security readiness assessment: governance at the top, technical controls at the ground level. This combination is particularly powerful for AI risk audits, compliance readiness, or building an AI security roadmap.
⚙️ Service Name
AI Security Readiness Assessment (ASRA)
(Powered by CSA AICM + Databricks DASF)
📋 Scope of Work
Phase 1 – Discovery & Scoping
- Business use cases of AI
- Model types and deployment workflows
- Regulatory obligations (e.g., ISO 42001, NIST AI RMF, EU AI Act)
Phase 2 – AICM-Based Governance Readiness
- 18 domains / 243 controls (filtered by your AI system type)
- Governance, accountability, transparency, bias, privacy, etc.
- Scorecard: Implemented / Partial / Not Implemented
- Regulatory alignment
Phase 3 – DASF-Based Technical Security Review
- AI/ML pipeline review (data ingestion → model monitoring)
- Model protection, access controls, audit logging
- ML-specific threat modeling
- Deployment maturity review (cloud, on-prem, hybrid)
Phase 4 – Gap Analysis & Risk Scorecard
- Heat map by control domain
- Weighted risk scores and impact areas
- Governance + technical risk exposure
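The Phase 4 heat map can be sketched as a mapping from each domain's weighted risk score to a severity band. The domains, scores, and band thresholds below are illustrative assumptions.

```python
# Minimal sketch of a heat map by control domain: weighted risk
# scores (0-100) bucketed into severity bands. Thresholds assumed.

def band(score: float) -> str:
    """Map a 0-100 risk score to a severity band."""
    if score >= 70:
        return "HIGH"
    if score >= 40:
        return "MEDIUM"
    return "LOW"

risks = {"Governance": 75.0, "Privacy": 45.0, "Data security": 20.0}
for domain, score in risks.items():
    # crude text heat map: one '#' per 10 points of risk
    print(f"{domain:<15} {band(score):<7} {'#' * int(score // 10)}")
```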
Phase 5 – Action Plan & Recommendations
- Prioritized remediation roadmap
- Suggested tooling or automation
- Quick wins vs strategic improvements
- Optional: Continuous assessment model
📊 Deliverables
- 10-page AI Security Risk Scorecard
- 1-page Executive Summary with Risk Heatmap
- Custom Governance & Security Gap Report
- Actionable Roadmap aligned to business goals
Feel free to reach out with any questions. ✉ info@deurainfosec.com | (707) 998-5164