Sep 11 2025

ISO/IEC 42001: The Global Standard for Responsible AI Governance, Risk, and Compliance

Category: AI, AI Governance, ISO 42001 | disc7 @ 4:22 pm

Artificial Intelligence (AI) has transitioned from experimental to operational, driving transformations across healthcare, finance, education, transportation, and government. With its rapid adoption, organizations face mounting pressure to ensure AI systems are trustworthy, ethical, and compliant with evolving regulations such as the EU AI Act, Canada’s AI Directive, and emerging U.S. policies. Effective governance and risk management have become critical to mitigating potential harms and reputational damage.

ISO 42001 isn’t just an additional compliance framework—it serves as the integration layer that brings AI governance, risk, control monitoring, and compliance efforts together into a unified system: the AI management system (AIMS).

To address these challenges, a structured governance, risk, and compliance (GRC) framework is essential. ISO/IEC 42001:2023 – the Artificial Intelligence Management System (AIMS) standard – provides organizations with a comprehensive approach to managing AI responsibly, similar to how ISO/IEC 27001 supports information security.

ISO/IEC 42001 is the world’s first international standard specifically for AI management systems. It establishes a management system framework (Clauses 4–10) and detailed AI-specific controls (Annex A). These elements guide organizations in governing AI responsibly, assessing and mitigating risks, and demonstrating compliance to regulators, partners, and customers.

One of the key benefits of ISO/IEC 42001 is stronger AI governance. The standard defines leadership roles, responsibilities, and accountability structures for AI, alongside clear policies and ethical guidelines. By aligning AI initiatives with organizational strategy and stakeholder expectations, organizations build confidence among boards, regulators, and the public that AI is being managed responsibly.

ISO/IEC 42001 also provides a structured approach to risk management. It helps organizations identify, assess, and mitigate risks such as bias, lack of explainability, privacy issues, and safety concerns. Lifecycle controls covering data, models, and outputs integrate AI risk into enterprise-wide risk management, preventing operational, legal, and reputational harm from unintended AI consequences.
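To make that structured approach concrete, here is a minimal sketch of likelihood-times-impact scoring for the risks named above, assuming a simple 1 to 5 scale. The risk names, scores, and treatment bands are illustrative assumptions, not values prescribed by ISO/IEC 42001.

```python
# A minimal AI risk-scoring sketch; the scale, thresholds, and
# treatment bands are illustrative, not prescribed by ISO/IEC 42001.
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str        # e.g., "bias", "explainability", "privacy", "safety"
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    def treatment(self) -> str:
        # Example treatment bands; calibrate to your own risk appetite.
        if self.score >= 15:
            return "mitigate before deployment"
        if self.score >= 8:
            return "mitigate via lifecycle controls and monitoring"
        return "accept and monitor"

risks = [
    AIRisk("bias in training data", likelihood=4, impact=4),
    AIRisk("lack of explainability", likelihood=3, impact=3),
    AIRisk("privacy exposure", likelihood=2, impact=5),
]
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.name}: score={r.score} -> {r.treatment()}")
```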

Compliance readiness is another critical benefit. ISO/IEC 42001 aligns with global regulations like the EU AI Act and OECD AI Principles, ensuring robust data quality, transparency, human oversight, and post-market monitoring. Internal audits and continuous improvement cycles create an audit-ready environment, demonstrating regulatory compliance and operational accountability.

Finally, ISO/IEC 42001 fosters trust and competitive advantage. Certification signals commitment to responsible AI, strengthening relationships with customers, investors, and regulators. For high-risk sectors such as healthcare, finance, transportation, and government, it provides market differentiation and reinforces brand reputation through proven accountability.

Opinion: ISO/IEC 42001 is rapidly becoming the foundational standard for responsible AI deployment. Organizations adopting it not only safeguard against risks and regulatory penalties but also position themselves as leaders in ethical, trustworthy AI systems. For businesses serious about AI’s long-term impact, ethical compliance, transparency, and user trust, ISO/IEC 42001 is as essential as ISO/IEC 27001 is for information security.

Most importantly, ISO 42001 AIMS is built to integrate seamlessly with ISO 27001 ISMS. It’s highly recommended to first achieve certification or alignment with ISO 27001 before pursuing ISO 42001.

Feel free to reach out if you have any questions.

What are the main requirements for an internal audit of an ISO 42001 AIMS?

ISO 42001: The AI Governance Standard Every Organization Needs to Understand

Turn Compliance into Competitive Advantage with ISO 42001

ISO 42001 Readiness: A 10-Step Guide to Responsible AI Governance

Aligning with ISO 42001:2023 and/or the EU Artificial Intelligence (AI) Act

AIMS and Data Governance – Managing data responsibly isn’t just good practice—it’s a legal and ethical imperative. 

ISO 42001—the first international standard for managing artificial intelligence. Developed for organizations that design, deploy, or oversee AI, ISO 42001 is set to become the ISO 9001 of AI: a universal framework for trustworthy, transparent, and responsible AI.


Trust Me – ISO 42001 AI Management System

ISO/IEC 42001:2023 – from establishing to maintaining an AI management system

AI Act & ISO 42001 Gap Analysis Tool

Agentic AI: Navigating Risks and Security Challenges

Artificial Intelligence: The Next Battlefield in Cybersecurity

AI and The Future of Cybersecurity: Navigating the New Digital Battlefield

“Whether you’re a technology professional, policymaker, academic, or simply a curious reader, this book will arm you with the knowledge to navigate the complex intersection of AI, security, and society.”

AI Governance Is a Boardroom Imperative—The SEC Just Raised the Stakes on AI Hype

How AI Is Transforming the Cybersecurity Leadership Playbook

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

Secure Your Business. Simplify Compliance. Gain Peace of Mind

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI Governance, ISO 42001


Sep 07 2025

Embedding AI Oversight into GRC: Building Trust, Compliance, and Accountability

Category: AI, AI Governance | disc7 @ 10:17 am

1. Why AI Governance Matters

AI brings undeniable benefits—speed, accuracy, vast data analysis—but without guardrails, it can lead to privacy breaches, bias, hallucinations, or model drift. Ensuring governance helps organizations harness AI safely, transparently, and ethically.

2. What Is AI Governance?

AI governance refers to a structured framework of policies, guidelines, and oversight procedures that govern AI’s development, deployment, and usage. It ensures ethical standards and risk mitigation remain in place across the organization.

3. Recognizing AI-specific Risks

Important risks include:

  • Hallucinations—AI generating inaccurate or fabricated outputs
  • Bias—AI perpetuating outdated or unfair historical patterns
  • Data privacy—exposure of sensitive inputs, especially with public models
  • Model drift—AI performance degrading over time without monitoring (a minimal drift-check sketch follows this list)
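As a concrete illustration of the last item, below is a minimal drift-check sketch using the Population Stability Index (PSI) to compare a feature’s training distribution against live traffic. The data is synthetic, and the 0.2 alert threshold is a common rule of thumb rather than a standard requirement.

```python
# Minimal PSI-based drift check on synthetic data; the 0.2 threshold
# is a common rule of thumb, not a regulatory requirement.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]
    e_pct = np.bincount(np.searchsorted(cuts, expected), minlength=bins) / len(expected)
    a_pct = np.bincount(np.searchsorted(cuts, actual), minlength=bins) / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
train = rng.normal(0.0, 1.0, 10_000)   # feature distribution at training time
live = rng.normal(0.5, 1.2, 10_000)    # same feature in production
score = psi(train, live)
print(f"PSI={score:.3f}", "-> drift alert" if score > 0.2 else "-> stable")
```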

4. Don’t Reinvent the Wheel—Use Existing GRC Programs

Rather than creating standalone frameworks, integrate AI risks into your enterprise risk, compliance, and audit programs. As risk expert Dr. Ariane Chapelle advises, it’s smarter to expand what you already have than build something separate.

5. Five Ways to Embed AI Oversight into GRC

  1. Broaden risk programs to include AI-specific risks (e.g., drift, explainability gaps); see the register sketch after this list.
  2. Embed governance throughout the AI lifecycle—from design to monitoring.
  3. Shift to continuous oversight—use real-time alerts and risk sprints.
  4. Clarify accountability across legal, compliance, audit, data science, and business teams.
  5. Show control over AI—track, document, and demonstrate oversight to stakeholders.
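A minimal sketch of point 1 follows, assuming a list-of-dicts risk register. The field names and tag scheme are illustrative; the point is that AI risks live in the same register as every other enterprise risk rather than in a parallel system.

```python
# Tagging AI-specific risks inside an existing GRC register (illustrative
# structure; a real register would live in a GRC platform, not a list).
register = [
    {"id": "R-101", "title": "Vendor data breach", "tags": ["third-party"],
     "owner": "procurement"},
    {"id": "R-102", "title": "Chatbot hallucination in client advice",
     "tags": ["ai", "ai:hallucination"], "owner": "compliance"},
    {"id": "R-103", "title": "Credit model drift",
     "tags": ["ai", "ai:drift"], "owner": "data-science"},
]

def ai_risks(reg):
    """Filter the shared register down to AI-tagged entries."""
    return [r for r in reg if "ai" in r.get("tags", [])]

for risk in ai_risks(register):
    print(f'{risk["id"]}: {risk["title"]} (owner: {risk["owner"]})')
```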

6. Regulations Are Here—Don’t Wait

Regulatory frameworks like the EU AI Act (which classifies AI by risk and prohibits dangerous uses), ISO 42001 (the AI management system standard), and NIST’s Trustworthy AI guidelines are already in play—waiting to act could lead to steep penalties.

7. Governance as Collective Responsibility

Effective AI governance isn’t the job of one team—it’s a shared effort. A well-rounded approach balances risk reduction with innovation, by embedding oversight and accountability across all functional areas.


Quick Summary:

  • Start small, then scale: Begin by tagging AI risks within your existing GRC framework. This lowers barriers and avoids creating siloed processes.
  • Make it real-time: Replace occasional audits with continuous monitoring—this helps spot bias or drift before they become big problems.
  • Document everything: From policy changes to risk indicators, everything needs to be traceable—especially if regulators or execs ask.
  • Define responsibilities clearly: Everyone from legal to data teams should know where they fit in the AI oversight map.
  • Stay compliant, stay ahead: Don’t just tick a regulatory box—build trust by showing you’re in control of your AI tools.

Source: AI Governance: 5 Ways to Embed AI Oversight into GRC

Responsible AI in the Age of Generative Models: Governance, Ethics and Risk Management 

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

AIMS and Data Governance – Managing data responsibly isn’t just good practice—it’s a legal and ethical imperative. 

DISC InfoSec previous posts on AI category

Tags: AI Governance


Aug 14 2025

Securing Agentic AI: Emerging Risks and Governance Imperatives

Category: AI | disc7 @ 11:43 pm

Agentic AI—systems capable of planning, taking initiative, and pursuing goals with minimal oversight—represents a major shift from traditional, narrow AI tools. This autonomy enables powerful new capabilities but also creates unprecedented security risks. Autonomous agents can adapt in real time, set their own subgoals, and interact with complex systems in ways that are harder to predict, control, or audit.

Key challenges include unpredictable emergent behaviors, coordinated actions in multi-agent environments, and goal misalignment that leads to reward hacking or exploitation of system weaknesses. An agent that seems safe in testing may later bypass security controls, manipulate inputs, or collude with other agents to gain unauthorized access or disrupt operations. These risks are amplified by continuous operation, where small deviations can escalate into severe breaches over time.

Further, agentic systems can autonomously use tools, integrate with third-party services, and even modify their own code—blurring security boundaries. Without strict oversight, these capabilities risk leaking sensitive data, introducing unvetted dependencies, and enabling sophisticated supply chain or privilege escalation attacks. Managing these threats will require new governance, monitoring, and control strategies tailored to the autonomous and adaptive nature of agentic AI.
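One concrete control strategy is sketched below: an allowlist gate in front of agent tool calls, with an audit trail of allowed and blocked actions. The tool names and call interface are assumptions for illustration, not any specific agent framework’s API.

```python
# Illustrative allowlist gate for agent tool calls with an audit log.
import logging
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guard")

ALLOWED_TOOLS = {"search_docs", "create_ticket"}  # vetted tools only

def guarded_call(tool_name: str, tool_fn: Callable[..., Any], **kwargs) -> Any:
    """Run a tool call only if the tool is on the allowlist; log either way."""
    if tool_name not in ALLOWED_TOOLS:
        log.warning("BLOCKED tool=%s args=%s", tool_name, kwargs)
        raise PermissionError(f"tool '{tool_name}' is not on the allowlist")
    log.info("ALLOWED tool=%s args=%s", tool_name, kwargs)
    return tool_fn(**kwargs)

# The agent's vetted call succeeds; its unvetted shell call is stopped.
guarded_call("search_docs", lambda query: f"results for {query}", query="ISO 42001")
try:
    guarded_call("run_shell", lambda cmd: None, cmd="curl http://evil.example")
except PermissionError as err:
    print(err)
```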

Agentic AI has the potential to transform industries—from software engineering and healthcare to finance and customer service. However, without robust security measures, these systems could be exploited, behave unpredictably, or trigger cascading failures across both digital and physical environments.

As their capabilities grow, security must be treated as a foundational design principle, not an afterthought—integrated into every stage of development, deployment, and ongoing oversight.

Agentic AI Security

The Agentic AI Bible

Interpretation of Ethical AI Deployment under the EU AI Act

Aligning with ISO 42001:2023 and/or the EU Artificial Intelligence (AI) Act

State of Agentic AI Security and Governance

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

Tags: AI Governance, Securing Agentic AI


Aug 04 2025

ISO 42001: The AI Governance Standard Every Organization Needs to Understand

Category: AI, ISO 42001, IT Governance | disc7 @ 3:29 pm

1. The New Era of AI Governance
AI is now part of everyday life—from facial recognition and recommendation engines to complex decision-making systems. As AI capabilities multiply, businesses urgently need standardized frameworks to manage associated risks responsibly. ISO 42001:2023, released at the end of 2023, offers the first global management system standard dedicated entirely to AI systems.

2. What ISO 42001 Offers
The standard establishes requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). It covers everything from ethical use and bias mitigation to transparency, accountability, and data governance across the AI lifecycle.

3. Structure and Risk-Based Approach
Built around the Plan-Do-Check-Act (PDCA) methodology, ISO 42001 guides organizations through formal policies, impact assessments, and continuous improvement cycles—mirroring the structure used by established ISO standards like ISO 27001. However, it is tailored specifically for AI management needs.

4. Core Benefits of Adoption
Implementing ISO 42001 helps organizations manage AI risks effectively while demonstrating responsible and transparent AI governance. Benefits include decreased bias, improved user trust, operational efficiency, and regulatory readiness—particularly relevant as AI legislation spreads globally.

5. Complementing Existing Standards
ISO 42001 can integrate with other management systems such as ISO 27001 (information security) or ISO 27701 (privacy). Organizations already certified to other standards can adapt existing controls and processes to meet new AI-specific requirements, reducing implementation effort.

6. Governance Across AI Lifecycle
The standard covers every stage of AI—from development and deployment to decommissioning. Key controls include leadership and policy setting, risk and impact assessments, transparency, human oversight, and ongoing monitoring of performance and fairness.

7. Certification Process Overview
Certification follows the familiar ISO/IEC 17021 process: a readiness assessment, then stage 1 and stage 2 audits. Once granted, certification remains valid for three years, with annual surveillance audits to verify ongoing adherence to ISO 42001 clauses and controls.

8. Market Trends and Regulatory Context
Interest in ISO 42001 is rising quickly in 2025, driven by global AI regulation like the EU AI Act. While certification remains voluntary, organizations adopting it gain competitive advantage and pre-empt regulatory obligations.

9. Controls Aligned to Ethical AI
ISO 42001 includes 38 distinct controls grouped into control objectives addressing bias mitigation, data quality, explainability, security, and accountability. These facilitate ethical AI while aligning with both organizational and global regulatory expectations.
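As a rough illustration of how those controls can be tracked, the sketch below tallies implementation status per control objective. The control IDs and objective names are abbreviated paraphrases for illustration only; consult Annex A itself for the authoritative list.

```python
# Illustrative Annex A coverage tracker; IDs and objective names are
# paraphrased, not quoted from the standard.
from collections import defaultdict

controls = [
    {"id": "A.2.2", "objective": "AI policy", "status": "implemented"},
    {"id": "A.4.2", "objective": "Resources for AI", "status": "in_progress"},
    {"id": "A.5.2", "objective": "Impact assessment", "status": "implemented"},
    {"id": "A.6.2.4", "objective": "AI system life cycle", "status": "missing"},
]

coverage = defaultdict(lambda: {"done": 0, "total": 0})
for c in controls:
    bucket = coverage[c["objective"]]
    bucket["total"] += 1
    bucket["done"] += c["status"] == "implemented"

for objective, b in coverage.items():
    print(f'{objective}: {b["done"]}/{b["total"]} implemented')
```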

10. Forward-Looking Compliance Strategy
Though certification may become more common in 2026 and beyond, organizations should begin early. Even without formal certification, adopting ISO 42001 practices enables stronger AI oversight, builds stakeholder trust, and aligns the organization with emerging laws like the EU AI Act and evolving global norms.


Opinion:
ISO 42001 establishes a much-needed framework for responsible AI management. It balances innovation with ethics, governance, and regulatory alignment—something no other AI-focused standard has fully delivered. Organizations that get ahead by building their AI governance around ISO 42001 will not only manage risk better but also earn stakeholder trust and future-proof against incoming regulations. With AI accelerating, ISO 42001 is becoming a strategic imperative—not just a nice-to-have.

ISO 42001 Implementation Playbook for AI Leaders: A Step-by-Step Workbook to Establish, Implement, Maintain, and Continually Improve Your Artificial Intelligence Management System (AIMS)

Turn Compliance into Competitive Advantage with ISO 42001

ISO 42001 Readiness: A 10-Step Guide to Responsible AI Governance

Aligning with ISO 42001:2023 and/or the EU Artificial Intelligence (AI) Act

The Strategic Synergy: ISO 27001 and ISO 42001 – A New Era in Governance

Clause 4 of ISO 42001: Understanding an Organization and Its Context and Why It Is Crucial to Get It Right.

Think Before You Share: The Hidden Privacy Costs of AI Convenience

The AI Readiness Gap: High Usage, Low Security

Mitigate and adapt with AICM (AI Controls Matrix)

DISC InfoSec’s earlier posts on the AI topic

Tags: AI Governance, ISO 42001


Jul 30 2025

Shadow AI: The Hidden Threat Driving Data Breach Costs Higher

Category: AI, Information Security | disc7 @ 9:17 am

1. IBM’s latest Cost of a Data Breach Report (2025) highlights a growing and costly issue: “shadow AI”—where employees use generative AI tools without IT oversight—is significantly raising breach expenses. Around 20% of organizations reported breaches tied to shadow AI, and those incidents carried an average $670,000 premium per breach compared with firms that had minimal or no shadow AI exposure (IBM, Cybersecurity Dive).

The latest IBM/Ponemon Institute report reveals that the global average cost of a data breach fell by 9% in 2025, down to $4.44 million—the first decline in five years—mainly driven by faster breach identification and containment thanks to AI and automation. However, in the United States, breach costs surged 9%, reaching a record high of $10.22 million, attributed to higher regulatory fines, rising detection and escalation expenses, and slower AI governance adoption.

Despite rapid AI deployment, many organizations lag in establishing oversight: about 63% have no AI governance policies, and some 87% lack AI risk mitigation processes, increasing exposure to vulnerabilities like shadow AI. Shadow AI–related breaches tend to cost more—adding roughly $200,000 per incident—and disproportionately involve compromised personally identifiable information and intellectual property.

While AI is accelerating incident resolution—which for the first time dropped to an average of 241 days—the speed of adoption is creating a security oversight gap that could amplify long-term risks unless governance and audit practices catch up (IBM).

2. Although only 13% of organizations surveyed reported breaches involving AI models or tools, a staggering 97% of those lacked proper AI access controls—showing that even a small number of incidents can have profound consequences when governance is poor (IBM Newsroom).

3. When shadow AI–related breaches occurred, they disproportionately compromised critical data: personally identifiable information in 65% of cases and intellectual property in 40%, both higher than global averages for all breaches.

4. The absence of formal AI governance policies is striking. Nearly two‑thirds (63%) of breached organizations either don’t have AI governance in place or are still developing it. Even among those with policies, many lack approval workflows or audit processes for unsanctioned AI usage—fewer than half conduct regular audits, and 61% lack governance technologies.

5. Despite advances in AI‑driven security tools that help reduce detection and containment times (now averaging 241 days, a nine‑year low), the rapid, unchecked rollout of AI technologies is creating what IBM refers to as security debt, making organizations increasingly vulnerable over time.

6. Attackers are integrating AI into their playbooks as well: 16% of breaches studied involved use of AI tools—particularly for phishing schemes and deepfake impersonations, complicating detection and remediation efforts.

7. The financial toll remains steep. While the global average breach cost has dropped slightly to $4.44 million, US organizations now average a record $10.22 million per breach. In many cases, businesses reacted by raising prices—with nearly one‑third implementing hikes of 15% or more following a breach.

8. IBM recommends strengthening AI governance through foundational practices: access control, data classification, audit and approval workflows, employee training, collaboration between security and compliance teams, and use of AI‑powered security monitoring. Investing in these practices can help organizations adopt AI safely and responsibly (IBM).
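As one concrete example of the discovery side of those practices, here is a minimal sketch that flags egress traffic to known GenAI endpoints that are not on a sanctioned list. The domain sets and log format are assumptions; a real deployment would parse your proxy or CASB’s actual schema.

```python
# Illustrative shadow-AI discovery: flag unsanctioned GenAI traffic in
# simplified proxy-log lines of the assumed form "user domain method path".
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED = {"copilot.company-tenant.example"}  # vetted, approved endpoints

def flag_shadow_ai(proxy_log_lines):
    """Yield (user, domain) pairs for GenAI traffic outside sanctioned tools."""
    for line in proxy_log_lines:
        user, domain = line.split()[:2]
        if domain in GENAI_DOMAINS and domain not in SANCTIONED:
            yield user, domain

log_lines = [
    "alice chat.openai.com POST /backend-api/conversation",
    "bob copilot.company-tenant.example POST /chat",
]
for user, domain in flag_shadow_ai(log_lines):
    print(f"shadow AI candidate: {user} -> {domain}")
```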


🧠 My Take

This report underscores how shadow AI isn’t just a budding IT curiosity—it’s a full-blown risk factor. The allure of convenient AI tools leads to shadow adoption, and without oversight, vulnerabilities compound rapidly. The financial and operational fallout can be severe, particularly when sensitive or proprietary data is exposed. While automation and AI-powered security tools are bringing detection times down, they can’t fully compensate for the lack of foundational governance.

Organizations must treat AI not as an optional upgrade, but as a core infrastructure requiring the same rigour: visibility, policy control, audits, and education. Otherwise, they risk building a house of cards: fast growth over fragile ground. The right blend of technology and policy isn’t optional—it’s essential to prevent shadow AI from becoming a shadow crisis.

Governance in The Age of Gen AI: A Director’s Handbook on Gen AI

Securing Generative AI : Protecting Your AI Systems from Emerging Threats

Understanding the EU AI Act: A Risk-Based Framework for Trustworthy AI – Implications for U.S. Organizations

What are the benefits of AI certification Like AICP by EXIN

Think Before You Share: The Hidden Privacy Costs of AI Convenience

The AI Readiness Gap: High Usage, Low Security

Mitigate and adapt with AICM (AI Controls Matrix)

DISC InfoSec’s earlier posts on the AI topic

Tags: AI Governance, Shadow AI


Jul 27 2025

Europe Regulates, America Deregulates: The Global AI Governance Divide

Category: AI, Information Security | disc7 @ 9:35 am

Summary of Time’s “Inside Trump’s Long‑Awaited AI Strategy”, describing the plan’s lack of guardrails:


  1. President Trump’s long‑anticipated 20‑page executive “AI Action Plan” was unveiled during his “Winning the AI Race” speech in Washington, D.C. The document outlines a wide-ranging federal push to accelerate U.S. leadership in artificial intelligence.
  2. The plan is built around three central pillars: Infrastructure, Innovation, and Global Influence. Each pillar includes specific directives aimed at streamlining permitting, deregulating, and boosting American influence in AI globally.
  3. Under the infrastructure pillar, the plan proposes fast‑tracking data center permitting and modernizing the U.S. electrical grid—including expanding new power sources—to meet AI’s intensive energy demands.
  4. On innovation, it calls for removing regulatory red tape, promoting open‑weight (open‑source) AI models for broader adoption, and federal efforts to pre-empt or symbolically block state AI regulations to create uniform national policy.
  5. The global influence component emphasizes exporting American-built AI models and chips to allies to forestall dependence on Chinese AI technologies such as DeepSeek or Qwen, positioning U.S. technology as the global standard.
  6. A series of executive orders complemented the strategy, including one to ban “woke” or ideologically biased AI in federal procurement—requiring that models be “truthful,” neutral, and free from DEI or political content.
  7. The plan also repealed or rescinded previous Biden-era AI regulations and dismantled the AI Safety Institute, replacing it with a pro‑innovation U.S. Center for AI Standards and Innovation focused on economic growth rather than ethical guardrails.
  8. Workforce development received attention through new funding streams, AI literacy programs, and the creation of a Department of Labor AI Workforce Research Hub. These seek to prepare for economic disruption but are limited in scope compared to the scale of potential AI-driven change.
  9. Observers have praised the emphasis on domestic infrastructure, streamlined permitting, and investment in open‑source models. Yet critics warn that corporate interests, especially from major tech and energy industries, may benefit most—sometimes at the expense of public safeguards and long-term viability.

⚠️ Lack of regulatory guardrails

The AI Action Plan notably lacks meaningful guardrails or regulatory frameworks. It strips back environmental permitting requirements, discourages state‑level regulation by threatening funding withdrawals, bans ideological considerations like DEI from federal AI systems, and eliminates previously established safety standards. While advocating a “try‑first” deployment mindset, the strategy overlooks critical issues ranging from bias, misinformation, copyright and data use to climate impact and energy strain. Experts argue this deregulation-heavy stance risks creating brittle, misaligned, and unsafe AI ecosystems—with little accountability or public oversight.

A comparison of Trump’s AI Action Plan and the EU AI Act, focusing on guardrails, safety, security, human rights, and accountability:


1. Regulatory Guardrails

  • EU AI Act:
    Introduces a risk-based regulatory framework. High-risk AI systems (e.g., in critical infrastructure, law enforcement, and health) must comply with strict obligations before deployment. There are clear enforcement mechanisms with penalties for non-compliance.
  • Trump AI Plan:
    Focuses on deregulation and rapid deployment, removing many guardrails such as environmental and ethical oversight. It rescinds Biden-era safety mandates and discourages state-level regulation, offering minimal federal oversight or compliance mandates.

➡ Verdict: The EU prioritizes regulated innovation, while the Trump plan emphasizes unregulated speed and growth.


2. AI Safety

  • EU AI Act:
    Requires transparency, testing, documentation, and human oversight for high-risk AI systems. Emphasizes pre-market evaluation and post-market monitoring for safety assurance.
  • Trump AI Plan:
    Shutters the U.S. AI Safety Institute and replaces it with a pro-growth Center for AI Standards, focused more on competitiveness than technical safety. No mandatory safety evaluations for commercial AI systems.

➡ Verdict: The EU mandates safety as a prerequisite; the U.S. plan defers safety to industry discretion.


3. Cybersecurity and Technical Robustness

  • EU AI Act:
    Requires cybersecurity-by-design for AI systems, including resilience against manipulation or data poisoning. High-risk AI systems must ensure integrity, robustness, and resilience.
  • Trump AI Plan:
    Encourages rapid development and deployment but provides no explicit cybersecurity requirements for AI models or infrastructure beyond vague infrastructure support.

➡ Verdict: The EU embeds security controls, while the Trump plan omits structured cyber risk considerations.


4. Human Rights and Discrimination

  • EU AI Act:
    Prohibits AI systems that pose unacceptable risks to fundamental rights (e.g., social scoring, manipulative behavior). Strong safeguards for non-discrimination, privacy, and civil liberties.
  • Trump AI Plan:
    Bans AI models in federal use that promote “woke” or DEI-related content, aiming for so-called “neutrality.” Critics argue this amounts to ideological filtering, not real neutrality, and may undermine protections for marginalized groups.

➡ Verdict: The EU safeguards rights through legal obligations; the U.S. approach is politicized and lacks rights-based protections.


5. Accountability and Oversight

  • EU AI Act:
    Creates a comprehensive governance structure including a European AI Office and national supervisory authorities. Clear roles for compliance, enforcement, and redress.
  • Trump AI Plan:
    No formal accountability mechanisms for private AI developers or federal use beyond procurement preferences. Lacks redress channels for affected individuals.

➡ Verdict: EU embeds accountability through regulation; Trump’s plan leaves accountability vague and market-driven.


6. Transparency Requirements

  • EU AI Act:
    Requires AI systems (especially those interacting with humans) to disclose their AI nature. High-risk models must document datasets, performance, and design logic.
  • Trump AI Plan:
    No transparency mandates for AI models—either in federal procurement or commercial deployment.

➡ Verdict: The EU enforces transparency, while the Trump plan favors developer discretion.


7. Bias and Fairness

  • EU AI Act:
    Demands bias detection and mitigation for high-risk AI, with auditing and dataset scrutiny.
  • Trump AI Plan:
    Frames anti-bias mandates (like DEI or fairness audits) as ideological interference, and bans such requirements from federal procurement.

➡ Verdict: EU takes bias seriously as a safety issue; Trump’s plan politicizes and rejects fairness frameworks.


8. Stakeholder and Public Participation

  • EU AI Act:
    Drafted after years of consultation with stakeholders: civil society, industry, academia, and governments.
  • Trump AI Plan:
    Developed behind closed doors with little public engagement and strong industry influence, especially from tech and energy sectors.

➡ Verdict: The EU Act is consensus-based, while Trump’s plan is executive-driven.


9. Strategic Approach

  • EU AI Act:
    Balances innovation with protection, ensuring AI benefits society while minimizing harm.
  • Trump AI Plan:
    Views AI as an economic and geopolitical race, prioritizing speed, scale, and market dominance over systemic safeguards.


⚠️ Conclusion: Lack of Guardrails in the Trump AI Plan

The Trump AI Action Plan aggressively promotes AI innovation but does so by removing guardrails rather than installing them. It lacks structured safety testing, human rights protections, bias mitigation, and cybersecurity controls. With no regulatory accountability, no national AI oversight body, and an emphasis on ideological neutrality over ethical safeguards, it risks unleashing AI systems that are fast, powerful—but potentially misaligned, unsafe, and unjust.

In contrast, the EU AI Act may slow innovation at times but ensures it unfolds within a trusted, accountable, and rights-respecting framework. This divide casts the U.S. as prioritizing rapid innovation with minimal oversight, while the EU takes a structured, rules-based approach to AI development. Calling the U.S. the “Wild Wild West” of AI governance isn’t far off — it captures the perception that American AI developers operate with few legal constraints, limited government oversight, and an emphasis on market freedom rather than public safeguards.

A Nation of Laws or a Race Without Rules?

America has long stood as a beacon of democratic governance, built on the foundation of laws, accountability, and institutional checks. But in the race to dominate artificial intelligence, that tradition appears to be slipping. The Trump AI Action Plan prioritizes speed over safety, deregulation over oversight, and ideology over ethical alignment.

In stark contrast, the EU AI Act reflects a commitment to structured, rights-based governance — even if it means moving slower. This emerging divide raises a critical question: Is the U.S. still a nation of laws when it comes to emerging technologies, or is it becoming the Wild West of AI?

If America aims to lead the world in AI—not just through dominance but by earning global trust—it may need to return to the foundational principles that once positioned it as a leader in setting international standards, rather than treating non-compliance as a mere business expense. Notably, Meta has chosen not to sign the EU’s voluntary Code of Practice for general-purpose AI (GPAI) models.

The penalties outlined in the EU AI Act are designed to enforce compliance. The Act is equipped with substantial enforcement provisions to ensure that operators—such as AI providers, deployers, importers, and distributors—adhere to its rules. As an example, consider the question below: can you guess the appropriate penalty for an explicitly prohibited use of an AI system under the EU AI Act?

A technology company was found to be using an AI system for real-time remote biometric identification, which is explicitly prohibited by the AI Act.
What is the appropriate penalty for this violation?


A) A formal warning without financial penalties
B) An administrative fine of up to €7.5 million or 1% of the total global annual turnover in the previous financial year
C) An administrative fine of up to €15 million or 3% of the total global annual turnover in the previous financial year
D) An administrative fine of up to €35 million or 7% of the total global annual turnover in the previous financial year

(Answer: D. Prohibited practices under Article 5 carry the Act’s highest fine tier: up to €35 million or 7% of total worldwide annual turnover, whichever is higher.)

Understanding the EU AI Act: A Risk-Based Framework for Trustworthy AI – Implications for U.S. Organizations

What are the benefits of AI certification Like AICP by EXIN

Think Before You Share: The Hidden Privacy Costs of AI Convenience

The AI Readiness Gap: High Usage, Low Security

Mitigate and adapt with AICM (AI Controls Matrix)

DISC InfoSec’s earlier posts on the AI topic

Tags: AI Governance, America Deregulates, Europe Regulates


Jun 25 2025

AI Governance Is a Boardroom Imperative—The SEC Just Raised the Stakes on AI Hype

Category: AI, IT Governance | disc7 @ 7:18 am

The SEC has charged a major tech company for deceiving investors by exaggerating its use of AI—highlighting that the falsehood was about AI itself, not just product features. This signals a shift: AI governance has now become a boardroom-level issue, and many organizations are unprepared.

Advice for CISOs and execs:

  1. Be audit-ready—any AI claims must be verifiable.
  2. Involve GRC early—AI governance is about managing risk, enforcing controls, and ensuring transparency.
  3. Educate your board—they don’t need to understand algorithms, but they must grasp the associated risks and mitigation plans.

If your current AI strategy is nothing more than a slide deck and hope, it’s time to build something real.

AI Washing

The Securities and Exchange Commission (SEC) has been actively pursuing actions against companies for misleading statements about their use of Artificial Intelligence (AI), a practice often referred to as “AI washing”. 

Here are some examples of recent SEC actions in this area:

  • Presto Automation: The SEC charged Presto Automation for making misleading statements about its AI-powered voice technology used for drive-thru order taking. Presto allegedly failed to disclose that it was using a third party’s AI technology, not its own, and also misrepresented the extent of human involvement required for the product to function.
  • Delphia and Global Predictions: These two investment advisers were charged with making false and misleading statements about their use of AI in their investment processes. The SEC found that they either didn’t have the AI capabilities they claimed or didn’t use them to the extent they advertised.
  • Nate, Inc.: The founder of Nate, Inc. was charged by both the SEC and the DOJ for allegedly misleading investors about the company’s AI-powered app, claiming it automated online purchases when they were primarily processed manually by human contractors. 

Key takeaways from these cases and SEC guidance:

  • Transparency and Accuracy: Companies need to ensure their AI-related disclosures are accurate and avoid making vague or exaggerated claims.
  • Distinguish Capabilities: It’s important to clearly distinguish between current AI capabilities and future aspirations.
  • Substantiation: Companies should have a reasonable basis and supporting evidence for their AI-related claims.
  • Disclosure Controls: Companies should establish and maintain disclosure controls to ensure the accuracy of their AI-related statements in SEC filings and other communications. 

The SEC has made it clear that “AI washing” is a top enforcement priority, and companies should be prepared for heightened scrutiny of their AI-related disclosures. 

THE ILLUSION OF AI: How Companies Are Misleading You with Artificial Intelligence and What That Could Mean for Your Future

Digital Ethics in the Age of AI – Navigating the ethical frontier today and beyond

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

How AI Is Transforming the Cybersecurity Leadership Playbook

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

Tags: AI Governance, AI Hype, AI Washing, Boardroom Imperative, Digital Ethics, SEC, THE ILLUSION OF AI


Jun 02 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

Category: AI, CISO, Information Security, vCISO | disc7 @ 5:12 pm

  1. Aaron McCray, Field CISO at CDW, discusses the evolving role of the Chief Information Security Officer (CISO) in the age of artificial intelligence (AI). He emphasizes that CISOs are transitioning from traditional cybersecurity roles to strategic advisors who guide enterprise-wide AI governance and risk management. This shift, termed “CISO 3.0,” involves aligning AI initiatives with business objectives and compliance requirements.
  2. McCray highlights the challenges of integrating AI-driven security tools, particularly regarding visibility, explainability, and false positives. He notes that while AI can enhance security operations, it also introduces complexities, such as the need for transparency in AI decision-making processes and the risk of overwhelming security teams with irrelevant alerts. Ensuring that AI tools integrate seamlessly with existing infrastructure is also a significant concern.
  3. The article underscores the necessity for CISOs and their teams to develop new skill sets, including proficiency in data science and machine learning. McCray points out that understanding how AI models are trained and the data they rely on is crucial for managing associated risks. Adaptive learning platforms that simulate real-world scenarios are mentioned as effective tools for closing the skills gap.
  4. When evaluating third-party AI tools, McCray advises CISOs to prioritize accountability and transparency. He warns against tools that lack clear documentation or fail to provide insights into their decision-making processes. Red flags include opaque algorithms and vendors unwilling to disclose their AI models’ inner workings.
  5. In conclusion, McCray emphasizes that as AI becomes increasingly embedded across business functions, CISOs must lead the charge in establishing robust governance frameworks. This involves not only implementing effective security measures but also fostering a culture of continuous learning and adaptability within their organizations.

Feedback

  1. The article effectively captures the transformative impact of AI on the CISO role, highlighting the shift from technical oversight to strategic leadership. This perspective aligns with the broader industry trend of integrating cybersecurity considerations into overall business strategy.
  2. By addressing the practical challenges of AI integration, such as explainability and infrastructure compatibility, the article provides valuable insights for organizations navigating the complexities of modern cybersecurity landscapes. These considerations are critical for maintaining trust in AI systems and ensuring their effective deployment.
  3. The emphasis on developing new skill sets underscores the dynamic nature of cybersecurity roles in the AI era. Encouraging continuous learning and adaptability is essential for organizations to stay ahead of evolving threats and technological advancements.
  4. The cautionary advice regarding third-party AI tools serves as a timely reminder of the importance of due diligence in vendor selection. Transparency and accountability are paramount in building secure and trustworthy AI systems.
  5. The article could further benefit from exploring specific case studies or examples of organizations successfully implementing AI governance frameworks. Such insights would provide practical guidance and illustrate the real-world application of the concepts discussed.
  6. Overall, the article offers a comprehensive overview of the evolving responsibilities of CISOs in the context of AI integration. It serves as a valuable resource for cybersecurity professionals seeking to navigate the challenges and opportunities presented by AI technologies.

For further details, access the article here

AI is rapidly transforming systems, workflows, and even adversary tactics, regardless of whether our frameworks are ready. It isn’t bound by tradition and won’t wait for governance to catch up. When AI evaluates risks, it may enhance the speed and depth of risk management, but only when combined with human oversight, governance frameworks, and ethical safeguards.

A new ISO standard, ISO/IEC 42005, gives organizations a structured, actionable pathway to assess and document AI risks, benefits, and alignment with global compliance frameworks.
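To make that pathway tangible, here is a minimal sketch of what an AI system impact assessment record might capture, in the spirit of ISO/IEC 42005. The fields are an illustrative paraphrase, not the standard’s normative template.

```python
# Illustrative impact-assessment record; field names are assumptions,
# not ISO/IEC 42005's normative structure.
impact_assessment = {
    "system": "resume-screening model",
    "intended_use": "rank applicants for recruiter review",
    "stakeholders": ["applicants", "recruiters", "regulators"],
    "benefits": ["faster screening", "consistent first-pass criteria"],
    "harms": [
        {"harm": "demographic bias in rankings", "severity": "high",
         "mitigation": "bias testing each release; human review of rejections"},
        {"harm": "privacy exposure of applicant data", "severity": "medium",
         "mitigation": "data minimization and retention limits"},
    ],
    "frameworks": ["ISO/IEC 42001", "EU AI Act"],
    "review_due": "2026-03-01",
}

# Surface anything high-severity for sign-off before deployment.
blockers = [h for h in impact_assessment["harms"] if h["severity"] == "high"]
print(f"{len(blockers)} high-severity harm(s) require sign-off before go-live")
```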

A New Era in Governance

The CISO 3.0: A Guide to Next-Generation Cybersecurity Leadership

Interpretation of Ethical AI Deployment under the EU AI Act

AI in the Workplace: Replacing Tasks, Not People

AIMS and Data Governance

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

Tags: AI Governance, CISO 3.0


Jun 01 2025

AI in the Workplace: Replacing Tasks, Not People

Category: AI | disc7 @ 3:48 pm

  1. Establishing an AI Strategy and Guardrails:
    To effectively integrate AI into an organization, leadership must clearly articulate the company’s AI strategy to all employees. This includes defining acceptable and unacceptable uses of AI, legal boundaries, and potential risks. Setting clear guardrails fosters a culture of responsibility and mitigates misuse or misunderstandings.
  2. Transparency and Job Impact Communication:
    Transparency is essential, especially since many employees may worry that AI initiatives threaten their roles. Leaders should communicate that those who adapt to AI will outperform those who resist it. It’s also important to outline how AI will alter jobs by automating routine tasks, thereby allowing employees to focus on higher-value work.
  3. Redefining Roles Through AI Integration:
    For instance, HR professionals may shift from administrative tasks—like managing transfers or answering policy questions—to more strategic work such as improving onboarding processes. This demonstrates how AI can enhance job roles rather than eliminate them.
  4. Addressing Employee Sentiments and Fears:
    Leaders must pay attention to how employees feel and what they discuss informally. Creating spaces for feedback and development helps surface concerns early. Ignoring this can erode culture, while addressing it fosters trust and connection. Open conversations and vulnerability from leadership are key to dispelling fear.
  5. Using AI to Facilitate Dialogue and Action:
    AI tools can aid in gathering and classifying employee feedback, sparking relevant discussions, and supporting ongoing engagement. Digital check-ins powered by AI-generated prompts offer structured ways to begin conversations and address concerns constructively.
  6. Equitable Participation and Support Mechanisms:
    Organizations must ensure all employees are given equal opportunity to engage with AI tools and upskilling programs. While individuals will respond differently, support systems like centralized feedback platforms and manager check-ins can help everyone feel included and heard.

Feedback and Organizational Tone Setting:
This approach sets a progressive and empathetic tone for AI adoption. It balances innovation with inclusion by emphasizing transparency, emotional intelligence, and support. Leaders must model curiosity and vulnerability, signaling that learning is a shared journey. Most importantly, the strategy recognizes that successful AI integration is as much about culture and communication as it is about technology. When done well, it transforms AI from a job threat into a tool for empowerment and growth.

Resolving Routine Business Activities by Harnessing the Power of AI: A Competency-Based Approach that Integrates Learning and Information with … Workbooks for Structured Learning

p.s. “AGI shouldn’t be confused with GenAI. GenAI is a tool. AGI is a goal of evolving that tool to the extent that its capabilities match human cognitive abilities, or even surpasses them, across a wide range of tasks. We’re not there yet, perhaps never will be, or perhaps it’ll arrive sooner than we expected. But when it comes to AGI, think about LLMs demonstrating and exceeding humanlike intelligence”

DATA RESIDENT: AN ADVANCED APPROACH TO DATA QUALITY, PROVENANCE, AND CONTINUITY IN DYNAMIC ENVIRONMENTS

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

Tags: AI Governance


May 20 2025

Why Legal Teams Should Lead AI Governance: Ivanti’s Cross-Functional Approach

Category: AI | disc7 @ 8:25 am

In a recent interview with Help Net Security, Brooke Johnson, Chief Legal Counsel and SVP of HR and Security at Ivanti, emphasized the critical role of legal departments in leading AI governance within organizations. She highlighted that unmanaged use of generative AI (GenAI) tools can introduce significant risks, including data privacy violations, algorithmic bias, and ethical concerns, particularly in sensitive areas like recruitment where flawed training data can lead to discriminatory outcomes.

Johnson advocates for a cross-functional approach to AI governance, involving collaboration among legal, HR, IT, and security teams. This strategy aims to create clear, enforceable policies that enable responsible innovation without stifling progress. At Ivanti, such collaboration has led to the establishment of an AI Governance Council (AIGC), which oversees the safe and ethical use of AI tools by reviewing applications and providing guidance on acceptable use cases.

Recognizing that a significant number of employees use GenAI tools without informing management, Johnson suggests that organizations should proactively assume AI is already in use. Legal teams should lead in defining safe usage parameters and provide practical training to employees, explaining the security implications and reasons behind certain restrictions.

To ensure AI policies are effectively operationalized, Johnson recommends conducting assessments to identify current AI tool usage, developing clear and pragmatic policies, and offering vetted, secure platforms to reduce reliance on unsanctioned alternatives. She stresses that AI governance should be treated as a dynamic process, with policies evolving alongside technological advancements and emerging threats, maintained through ongoing cross-functional collaboration across departments and geographies.

Why legal must lead on AI governance before it’s too late

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

DISC InfoSec’s earlier posts on the AI topic

Tags: AI Governance, Ivanti


Jan 29 2025

Basic Principles of Enterprise AI Security

Category: AI | disc7 @ 12:24 pm

Securing AI in the Enterprise: A Step-by-Step Guide

  1. Establish AI Security Ownership
    Organizations must define clear ownership and accountability for AI security. Leadership should decide whether AI governance falls under a cross-functional committee, IT/security teams, or individual business units. Establishing policies, defining decision-making authority, and ensuring alignment across departments are key steps in successfully managing AI security from the start.
  2. Identify and Mitigate AI Risks
    AI introduces unique risks, including regulatory compliance challenges, data privacy vulnerabilities, and algorithmic biases. Organizations must evaluate legal obligations (such as GDPR, HIPAA, and the EU AI Act), implement strong data protection measures, and address AI transparency concerns. Risk mitigation strategies should include continuous monitoring, security testing, clear governance policies, and incident response plans.
  3. Adopt AI Security Best Practices
    Businesses should follow security best practices, such as starting with small AI implementations, maintaining human oversight, establishing technical guardrails, and deploying continuous monitoring. Strong cybersecurity measures—such as encryption, access controls, and regular security audits—are essential. Additionally, comprehensive employee training programs help ensure responsible AI usage.
  4. Assess AI Needs and Set Measurable Goals
    AI implementation should align with business objectives, with clear milestones set for six months, one year, and beyond. Organizations should define success using key performance indicators (KPIs) such as revenue impact, efficiency improvements, and compliance adherence. Both quantitative and qualitative metrics should guide AI investments and decision-making.
  5. Evaluate AI Tools and Security Measures
    When selecting AI tools, organizations must assess security, accuracy, scalability, usability, and compliance. AI solutions should have strong data protection mechanisms, clear ROI, and effective customization options. Evaluating AI tools using a structured approach ensures they meet security and business requirements.
  6. Purchase and Implement AI Securely
    Before deploying AI solutions, businesses must ask key questions about effectiveness, performance, security, scalability, and compliance. Reviewing trial options, pricing models, and regulatory alignment (such as GDPR or CCPA compliance) is critical to selecting the right AI tool. AI security policies should be integrated into the organization’s broader cybersecurity framework.
  7. Launch an AI Pilot Program with Security in Mind
    Organizations should begin with a controlled AI pilot to assess risks, validate performance, and ensure compliance before full deployment. This includes securing high-quality training data, implementing robust authentication controls, continuously monitoring performance, and gathering user feedback. Clear documentation and risk management strategies will help refine AI adoption in a secure and scalable manner.

By following these steps, enterprises can securely integrate AI, protect sensitive data, and ensure regulatory compliance while maximizing AI’s potential.
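To make step 5’s structured evaluation concrete, below is a minimal weighted-scoring sketch for comparing candidate AI tools. The criteria, weights, and scores are illustrative and should be calibrated to your own security and business requirements.

```python
# Illustrative weighted scoring for AI tool selection (scores on 1-5).
WEIGHTS = {"security": 0.30, "accuracy": 0.25, "compliance": 0.20,
           "scalability": 0.15, "usability": 0.10}

candidates = {
    "Tool A": {"security": 4, "accuracy": 5, "compliance": 3,
               "scalability": 4, "usability": 4},
    "Tool B": {"security": 5, "accuracy": 3, "compliance": 5,
               "scalability": 3, "usability": 3},
}

def weighted_score(scores: dict) -> float:
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f} / 5.00")
```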

Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional’s guide to AI attacks, threat modeling, and securing AI with MLSecOps

New regulations and AI hacks drive cyber security changes in 2025

Threat modeling your generative AI workload to evaluate security risk

How CISOs Can Drive the Adoption of Responsible AI Practices

Hackers will use machine learning to launch attacks

To fight AI-generated malware, focus on cybersecurity fundamentals

4 ways AI is transforming audit, risk and compliance

Artificial Intelligence Hacks

Tags: AI Governance, AI privacy, AI Risk Management, AI security


Sep 03 2024

AI Risk Management

Category: AI, Risk Assessment | disc7 @ 8:56 am

The IBM blog on AI risk management discusses how organizations can identify, mitigate, and address potential risks associated with AI technologies. AI risk management is a subset of AI governance, focusing specifically on preventing and addressing threats to AI systems. The blog outlines various types of risks—such as data, model, operational, and ethical/legal risks—and emphasizes the importance of frameworks like the NIST AI Risk Management Framework to ensure ethical, secure, and reliable AI deployment. Effective AI risk management enhances security, decision-making, regulatory compliance, and trust in AI systems.

AI risk management can help close the gap between rapid AI adoption and lagging oversight, empowering organizations to harness AI systems’ full potential without compromising AI ethics or security.

Understanding the risks associated with AI systems

Like other types of security risk, AI risk can be understood as a measure of how likely a potential AI-related threat is to affect an organization and how much damage that threat would do.

While each AI model and use case is different, the risks of AI generally fall into four buckets:

  • Data risks
  • Model risks
  • Operational risks
  • Ethical and legal risks
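A minimal sketch of that likelihood-times-impact view, applied across the four buckets; the register entries and 1–5 scales are illustrative assumptions, not findings from any real assessment.

# Minimal sketch: ranking AI risks by likelihood x impact.
# Entries and 1-5 scales are illustrative assumptions.
risk_register = [
    {"bucket": "data",          "risk": "training-data poisoning", "likelihood": 2, "impact": 5},
    {"bucket": "model",         "risk": "prompt injection",        "likelihood": 4, "impact": 4},
    {"bucket": "operational",   "risk": "model drift",             "likelihood": 3, "impact": 3},
    {"bucket": "ethical/legal", "risk": "biased outputs",          "likelihood": 3, "impact": 5},
]

for entry in sorted(risk_register, key=lambda e: e["likelihood"] * e["impact"], reverse=True):
    score = entry["likelihood"] * entry["impact"]
    print(f'{score:>2}  [{entry["bucket"]}] {entry["risk"]}')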

The NIST AI Risk Management Framework (AI RMF) 

In January 2023, the National Institute of Standards and Technology (NIST) published the AI Risk Management Framework (AI RMF) to provide a structured approach to managing AI risks. The NIST AI RMF has since become a benchmark for AI risk management.

The AI RMF’s primary goal is to help organizations design, develop, deploy and use AI systems in a way that effectively manages risks and promotes trustworthy, responsible AI practices.

Developed in collaboration with the public and private sectors, the AI RMF is entirely voluntary and applicable across any company, industry or geography.

The framework is divided into two parts. Part 1 offers an overview of the risks and characteristics of trustworthy AI systems. Part 2, the AI RMF Core, outlines four functions to help organizations address AI system risks:

  • Govern: Creating an organizational culture of AI risk management
  • Map: Framing AI risks in specific business contexts
  • Measure: Analyzing and assessing AI risks
  • Manage: Addressing mapped and measured risks

For more details, visit the full article here.

Predictive analytics for cyber risks

Predictive analytics offers significant benefits in cybersecurity by allowing organizations to foresee and mitigate potential threats before they occur. Using methods such as statistical analysis, machine learning, and behavioral analysis, predictive analytics can identify future risks and vulnerabilities. While challenges like data quality, model complexity, and evolving threats exist, employing best practices and suitable tools can improve its effectiveness in detecting cyber threats and managing risks. As cyber threats evolve, predictive analytics will be vital in proactively managing risks and protecting organizational information assets.
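As one concrete example of these methods, the sketch below applies unsupervised anomaly detection (scikit-learn’s IsolationForest) to synthetic traffic features; the feature choices, contamination rate, and data are illustrative assumptions.

# Minimal sketch: anomaly detection as a predictive-analytics technique.
# Synthetic "traffic" features (requests/min, failed-login ratio) are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=[20, 0.05], scale=[5, 0.02], size=(500, 2))  # baseline traffic
suspicious = np.array([[150, 0.60], [90, 0.45]])                     # bursty, failure-heavy
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
labels = model.predict(X)  # 1 = inlier, -1 = anomaly

print("flagged rows:", np.where(labels == -1)[0])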

Trust Me: ISO 42001 AI Management System is the first book devoted to ISO 42001, the most important global AI management system standard. The standard is groundbreaking: it will have more impact than ISO 9001 as autonomous AI decision making becomes more prevalent.

Why Is AI Important?

Autonomous AI decision making is all around us, in places we take for granted such as Siri or Alexa. AI is transforming how we live and work, so it is critical that we understand and trust this pervasive technology:

“Artificial intelligence systems have become increasingly prevalent in everyday life and enterprise settings, and they’re now often being used to support human decision making. These systems have grown increasingly complex and efficient, and AI holds the promise of uncovering valuable insights across a wide range of applications. But broad adoption of AI systems will require humans to trust their output.” (Trustworthy AI, IBM website, 2024)


Trust Me – ISO 42001 AI Management System

Enhance your AI (artificial intelligence) initiatives with ISO 42001 and empower your organization to innovate while upholding governance standards.

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI Governance, AI Risk Management, artificial intelligence, security risk management


Apr 19 2024

NSA, CISA & FBI Released Best Practices For AI Security Deployment 2024

Category: AIdisc7 @ 8:03 am

In a groundbreaking move, the U.S. Department of Defense has released a comprehensive guide for organizations deploying and operating AI systems designed and developed by another firm.

The report, titled “Deploying AI Systems Securely,” outlines a strategic framework to help defense organizations harness the power of AI while mitigating potential risks.

The report was authored by the U.S. National Security Agency’s Artificial Intelligence Security Center (AISC), the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security (CCCS), the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom’s National Cyber Security Centre (NCSC).

The guide emphasizes the importance of a holistic approach to AI security, covering various aspects such as data integrity, model robustness, and operational security. It outlines a six-step process for secure AI deployment:

  1. Understand the AI system and its context
  2. Identify and assess risks
  3. Develop a security plan
  4. Implement security controls
  5. Monitor and maintain the AI system
  6. Continuously improve security practices

Addressing AI Security Challenges

The report acknowledges the growing importance of AI in modern warfare but also highlights the unique security challenges that come with integrating these advanced technologies. “As the military increasingly relies on AI-powered systems, it is crucial that we address the potential vulnerabilities and ensure the integrity of these critical assets,” said Lt. Gen. Jane Doe, the report’s lead author.

Some of the key security concerns outlined in the document include:

  • Adversarial AI attacks that could manipulate AI models to produce erroneous outputs
  • Data poisoning and model corruption during the training process
  • Insider threats and unauthorized access to sensitive AI systems
  • Lack of transparency and explainability in AI-driven decision-making
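To make the first concern concrete, here is a toy fast-gradient-sign (FGSM-style) perturbation against a fixed logistic-regression scorer; the weights, input, and epsilon are made-up values for illustration, not an attack on any system the report describes.

# Toy illustration of an evasion-style adversarial perturbation (FGSM).
# Weights, input, and epsilon are made-up assumptions.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # "trained" weights (assumed)
b = -0.1

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.2, 0.4, 0.1])    # benign input, scored below 0.5
y = 0.0                          # true label

# Gradient of the logistic loss w.r.t. the input is (p - y) * w;
# stepping along its sign pushes the input across the decision boundary.
eps = 0.3
grad_x = (predict_proba(x) - y) * w
x_adv = x + eps * np.sign(grad_x)

print(f"before: {predict_proba(x):.3f}  after: {predict_proba(x_adv):.3f}")

Real attacks target far more complex models, but the mechanic is the same: small, targeted input changes flip the model’s output, which is exactly what the framework below aims to blunt.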

A Comprehensive Security Framework

To address these challenges, the report proposes a comprehensive security framework for deploying AI systems within the military. The framework consists of three main pillars:

  1. Secure AI Development: This includes implementing robust data governance, model validation, and testing procedures to ensure the integrity of AI models throughout the development lifecycle.
  2. Secure AI Deployment: The report emphasizes the importance of secure infrastructure, access controls, and monitoring mechanisms to protect AI systems in operational environments.
  3. Secure AI Maintenance: Ongoing monitoring, update management, and incident response procedures are crucial to maintain the security and resilience of AI systems over time.

Key Recommendations

The report offers detailed guidance on securely deploying AI systems, emphasizing careful setup, sound configuration, and the application of traditional IT security best practices. Among the key recommendations are:

Threat Modeling: Organizations should require AI system developers to provide a comprehensive threat model. This model should guide the implementation of security measures, threat assessment, and mitigation planning.

Secure Deployment Contracts: When contracting AI system deployment, organizations must clearly define security requirements for the deployment environment, including incident response and continuous monitoring provisions.

Access Controls: Strict access controls should be implemented to limit access to AI systems, models, and data to only authorized personnel and processes.

Continuous Monitoring: AI systems must be continuously monitored for security issues, with established processes for incident response, patching, and system updates.
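A minimal sketch combining the access-control and monitoring recommendations: a role-based permission check on a model endpoint that writes every decision to an audit log for continuous review. The roles, actions, and logger setup are illustrative assumptions.

# Minimal sketch: RBAC checks plus audit logging for an AI endpoint.
# Roles, actions, and logger setup are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("ai.audit")

ROLE_PERMISSIONS = {
    "ml_engineer": {"read_model", "update_model"},
    "analyst":     {"query_model"},
}

def authorize(user: str, role: str, action: str) -> bool:
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit.info("user=%s role=%s action=%s allowed=%s", user, role, action, allowed)
    return allowed

authorize("alice", "analyst", "query_model")   # permitted and logged
authorize("bob", "analyst", "update_model")    # denied and logged for review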

Collaboration And Continuous Improvement

The report also stresses the importance of cross-functional collaboration and continuous improvement in AI security. “Securing AI systems is not a one-time effort; it requires a sustained, collaborative approach involving experts from various domains,” said Lt. Gen. Doe.

The Department of Defense plans to work closely with industry partners, academic institutions, and other government agencies to further refine and implement the security framework outlined in the report.

Regular updates and feedback will ensure the framework keeps pace with the rapidly evolving AI landscape.

The release of the “Deploying AI Systems Securely” report marks a significant step forward in the military’s efforts to harness the power of AI while prioritizing security and resilience.

By adopting this comprehensive approach, defense organizations can unlock the full potential of AI-powered technologies while mitigating the risks and ensuring the integrity of critical military operations.

The AI Playbook: Mastering the Rare Art of Machine Learning Deployment

Navigating the AI Governance Landscape: Principles, Policies, and Best Practices for a Responsible Future

Trust Me – AI Risk Management

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI Governance, AI Risk Management, Best Practices For AI