Oct 17 2025

Deploying Agentic AI Safely: A Strategic Playbook for Technology Leaders

Category: AI,AI Governance,AI Guardrails,Information Securitydisc7 @ 11:16 am

McKinsey’s playbook, “Deploying Agentic AI with Safety and Security,” outlines a strategic approach for technology leaders to harness the potential of autonomous AI agents while mitigating associated risks. These AI systems, capable of reasoning, planning, and acting without human oversight, offer transformative opportunities across various sectors, including customer service, software development, and supply chain optimization. However, their autonomy introduces novel vulnerabilities that require proactive management.

The playbook emphasizes the importance of understanding the emerging risks associated with agentic AI. Unlike traditional AI systems, these agents function as “digital insiders,” operating within organizational systems with varying levels of privilege and authority. This autonomy can lead to unintended consequences, such as improper data exposure or unauthorized access to systems, posing significant security challenges.

To address these risks, the playbook advocates for a comprehensive AI governance framework that integrates safety and security measures throughout the AI lifecycle. This includes embedding control mechanisms within workflows, such as compliance agents and guardrail agents, to monitor and enforce policies in real time. Additionally, human oversight remains crucial, with leaders focusing on defining policies, monitoring outliers, and adjusting the level of human involvement as necessary.

The playbook also highlights the necessity of reimagining organizational workflows to accommodate the integration of AI agents. This involves transitioning to AI-first workflows, where human roles are redefined to steer and validate AI-driven processes. Such an approach ensures that AI agents operate within the desired parameters, aligning with organizational goals and compliance requirements.

Furthermore, the playbook underscores the importance of embedding observability into AI systems. By implementing monitoring tools that provide insights into AI agent behaviors and decision-making processes, organizations can detect anomalies and address potential issues promptly. This transparency fosters trust and accountability, essential components in the responsible deployment of AI technologies.

In addition to internal measures, the playbook advises technology leaders to engage with external stakeholders, including regulators and industry peers, to establish shared standards and best practices for AI safety and security. Collaborative efforts can lead to the development of industry-wide frameworks that promote consistency and reliability in AI deployments.

The playbook concludes by reiterating the transformative potential of agentic AI when deployed responsibly. By adopting a proactive approach to risk management and integrating safety and security measures into every phase of AI deployment, organizations can unlock the full value of these technologies while safeguarding against potential threats.

My Opinion:

The McKinsey playbook provides a comprehensive and pragmatic approach to deploying agentic AI technologies. Its emphasis on proactive risk management, integrated governance, and organizational adaptation offers a roadmap for technology leaders aiming to leverage AI’s potential responsibly. In an era where AI’s capabilities are rapidly advancing, such frameworks are essential to ensure that innovation does not outpace the safeguards necessary to protect organizational integrity and public trust.

Agentic AI: Navigating Risks and Security Challenges: A Beginner’s Guide to Understanding the New Threat Landscape of AI Agents

 

“AI is already the single largest uncontrolled channel for corporate data exfiltration—bigger than shadow SaaS or unmanaged file sharing.”

Click the ISO 42001 Awareness Quiz — it will open in your browser in full-screen mode

iso42001_quizDownload

Protect your AI systems — make compliance predictable.
Expert ISO-42001 readiness for small & mid-size orgs. Get a AI Risk vCISO-grade program without the full-time cost.

Secure Your Business. Simplify Compliance. Gain Peace of Mind

Check out our earlier posts on AI-related topics: AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI Agents, AI Playbook, AI safty


Oct 16 2025

AI Infrastructure Debt: Cisco Report Highlights Risks and Readiness Gaps for Enterprise AI Adoption

Category: AI,AI Governance,AI Guardrails,Information Securitydisc7 @ 4:55 pm

A recent Cisco report highlights a critical issue in the rapid adoption of artificial intelligence (AI) technologies by enterprises: the growing phenomenon of “AI infrastructure debt.” This term refers to the accumulation of technical gaps and delays that arise when organizations attempt to deploy AI on systems not originally designed to support such advanced workloads. As companies rush to integrate AI, many are discovering that their existing infrastructure is ill-equipped to handle the increased demands, leading to friction, escalating costs, and heightened security vulnerabilities.

The study reveals that while a majority of organizations are accelerating their AI initiatives, a significant number lack the confidence that their systems can scale appropriately to meet the demands of AI workloads. Security concerns are particularly pronounced, with many companies admitting that their current systems are not adequately protected against potential AI-related threats. Weaknesses in data protection, access control, and monitoring tools are prevalent, and traditional security measures that once safeguarded applications and users may not extend to autonomous AI systems capable of making independent decisions and taking actions.

A notable aspect of the report is the emphasis on “agentic AI”—systems that can perform tasks, communicate with other software, and make operational decisions without constant human supervision. While these autonomous agents offer significant operational efficiencies, they also introduce new attack surfaces. If such agents are misconfigured or compromised, they can propagate issues across interconnected systems, amplifying the potential impact of security breaches. Alarmingly, many organizations have yet to establish comprehensive plans for controlling or monitoring these agents, and few have strategies for human oversight once these systems begin to manage critical business operations.

Even before the widespread deployment of agentic AI, companies are encountering foundational challenges. Rising computational costs, limited data integration capabilities, and network strain are common obstacles. Many organizations lack centralized data repositories or reliable infrastructure necessary for large-scale AI implementations. Furthermore, security measures such as encryption, access control, and tamper detection are inconsistently applied, often treated as separate add-ons rather than being integrated into the core infrastructure. This fragmented approach complicates the identification and resolution of issues, making it more difficult to detect and contain problems promptly.

The concept of AI infrastructure debt underscores the gradual accumulation of these technical deficiencies. Initially, what may appear as minor gaps in computing resources or data management can evolve into significant weaknesses that hinder growth and expose organizations to security risks. If left unaddressed, this debt can impede innovation and erode trust in AI systems. Each new AI model, dataset, and integration point introduces potential vulnerabilities, and without consistent investment in infrastructure, it becomes increasingly challenging to understand where sensitive information resides and how it is protected.

Conversely, organizations that proactively address these infrastructure gaps are better positioned to reap the benefits of AI. The report identifies a group of “Pacesetters”—companies that have integrated AI readiness into their long-term strategies by building robust infrastructures and embedding security measures from the outset. These organizations report measurable gains in profitability, productivity, and innovation. Their disciplined approach to modernizing infrastructure and maintaining strong governance frameworks provides them with the flexibility to scale operations and respond to emerging threats effectively.

In conclusion, the Cisco report emphasizes that the value derived from AI is heavily contingent upon the strength and preparedness of the underlying systems that support it. For most enterprises, the primary obstacle is not the technology itself but the readiness to manage it securely and at scale. As AI continues to evolve, organizations that plan, modernize, and embed security early will be better equipped to navigate the complexities of this transformative technology. Those that delay addressing infrastructure debt may find themselves facing escalating technical and financial challenges in the future.

Opinion: The concept of AI infrastructure debt serves as a crucial reminder for organizations to adopt a proactive approach in preparing their systems for AI integration. Neglecting to modernize infrastructure and implement comprehensive security measures can lead to significant vulnerabilities and hinder the potential benefits of AI. By prioritizing infrastructure readiness and security, companies can position themselves to leverage AI technologies effectively and sustainably.

Everyone wants AI, but few are ready to defend it

Data for AI: Data Infrastructure for Machine Intelligence

“AI is already the single largest uncontrolled channel for corporate data exfiltration—bigger than shadow SaaS or unmanaged file sharing.”

Click the ISO 42001 Awareness Quiz — it will open in your browser in full-screen mode

iso42001_quizDownload

Protect your AI systems — make compliance predictable.
Expert ISO-42001 readiness for small & mid-size orgs. Get a AI Risk vCISO-grade program without the full-time cost.

Secure Your Business. Simplify Compliance. Gain Peace of Mind

Check out our earlier posts on AI-related topics: AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI Infrastructure Debt


Oct 10 2025

Anthropic Expands AI Role in U.S. National Security Amid Rising Oversight Concerns

Category: AI,AI Governance,AI Guardrails,Information Securitydisc7 @ 1:09 pm

Anthropic is looking to expand how its AI models can be used by the government for national security purposes.

Anthropic, the AI company, is preparing to broaden how its technology is used in U.S. national security settings. The move comes as the Trump administration is pushing for more aggressive government use of artificial intelligence. While Anthropic has already begun offering restricted models for national security tasks, the planned expansion would stretch into more sensitive areas.


Currently, Anthropic’s Claude models are used by government agencies for tasks such as cyber threat analysis. Under the proposed plan, customers like the Department of Defense would be allowed to use Claude Gov models to carry out cyber operations, so long as a human remains “in the loop.” This is a shift from solely analytical applications to more operational roles.


In addition to cyber operations, Anthropic intends to allow the Claude models to advance from just analyzing foreign intelligence to recommending actions based on that intelligence. This step would position the AI in a more decision-support role rather than purely informational.


Another proposed change is to use Claude in military and intelligence training contexts. This would include generating materials for war games, simulations, or educational content for officers and analysts. The expansion would allow the models to more actively support scenario planning and instruction.


Anthropic also plans to make sandbox environments available to government customers, lowering previous restrictions on experimentation. These environments would be safe spaces for exploring new use cases of the AI models without fully deploying them in live systems. This flexibility marks a change from more cautious, controlled deployments so far.


These steps build on Anthropic’s June rollout of Claude Gov models made specifically for national security usage. The proposed enhancements would push those models into more central, operational, and generative roles across defense and intelligence domains.


But this expansion raises significant trade-offs. On the one hand, enabling more capable AI support for intelligence, cyber, and training functions may enhance the U.S. government’s ability to respond faster and more effectively to threats. On the other hand, it amplifies risks around the handling of sensitive or classified data, the potential for AI-driven misjudgments, and the need for strong AI governance, oversight, and safety protocols. The balance between innovation and caution becomes more delicate the deeper AI is embedded in national security work.


My opinion
I think Anthropic’s planned expansion into national security realms is bold and carries both promise and peril. On balance, the move makes sense: if properly constrained and supervised, AI could provide real value in analyzing threats, aiding decision-making, and simulating scenarios that humans alone struggle to keep pace with. But the stakes are extremely high. Even small errors or biases in recommendations could have serious consequences in defense or intelligence contexts. My hope is that as Anthropic and the government go forward, they do so with maximum transparency, rigorous auditing, strict human oversight, and clearly defined limits on how and when AI can act. The potential upside is large, but the oversight must match the magnitude of risk.

“AI is already the single largest uncontrolled channel for corporate data exfiltration—bigger than shadow SaaS or unmanaged file sharing.”

Click the ISO 42001 Awareness Quiz — it will open in your browser in full-screen mode

iso42001_quizDownload

Protect your AI systems — make compliance predictable.
Expert ISO-42001 readiness for small & mid-size orgs. Get a AI Risk vCISO-grade program without the full-time cost.

Secure Your Business. Simplify Compliance. Gain Peace of Mind

Check out our earlier posts on AI-related topics: AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: Anthropic, National security


Oct 09 2025

AI Boom or Bubble? Experts Warn of Overheating as Investments Outpace Real Returns

Category: AI,AI Governance,Information Securitydisc7 @ 10:43 am

‘I Believe It’s a Bubble’: What Some Smart People Are Saying About AI — Bloomberg Businessweek 

1. Rising Fears of an AI Bubble
A growing chorus of analysts and industry veterans is voicing concern that the current enthusiasm around artificial intelligence might be entering bubble territory. While AI is often cast as a transformative revolution, signs of overvaluation, speculative behavior, and capital misallocation are drawing comparisons to past tech bubbles.

2. Circular Deals and Valuation Spirals
One troubling pattern is “circular deals,” where AI hardware firms invest in cloud or infrastructure players that, in turn, buy their chips. This feedback loop inflates the appearance of demand, distorting fundamentals. Some analysts say it’s a symptom of speculative overreach, though others argue the effect remains modest.

3. Debt-Fueled Investment and Cash Burn
Many firms are funding their AI buildouts via debt, even as their revenue lags or remains uncertain. High interest rates and mounting liabilities raise the risk that some may not be able to sustain their spending, especially if returns don’t materialize quickly.

4. Disparity Between Vision and Consumption
The scale of infrastructure investment is being questioned relative to actual usage and monetization. Some data suggest that while corporate AI spending is soaring, the end-consumer market remains relatively modest. That gap raises skepticism about whether demand will catch up to hype.

5. Concentration and Winner-Takes-All Dynamics
The AI boom is increasingly dominated by a few giants—especially hardware, cloud, and model providers. Emerging firms, even with promising tech, struggle to compete for capital. This concentration increases systemic risk: if one of the dominants falters, ripple effects could be severe.

6. Skeptics, Warnings, and Dissenting Views
Institutions like the Bank of England and IMF are cautioning about financial instability from AI overvaluation. Meanwhile, leaders in tech (such as Sam Altman) acknowledge bubble risk even as they remain bullish on long-term potential. Some bull-side analysts (e.g. Goldman Sachs) contend that the rally still rests partly on solid fundamentals.

7. Warning Signals and Bubble Analogies
Observers point to classic bubble signals—exuberant speculation, weak linkage to earnings, use of SPVs or accounting tricks, and momentum-driven valuation detached from fundamentals. Some draw parallels to the dot-com bust, while others argue that today’s AI wave may be more structurally grounded.

8. Market Implications and Timing Uncertainty
If a correction happens, it could ripple across tech stocks and broader markets, particularly given how much AI now underpins valuations. But timing is uncertain: it may happen abruptly or gradually. Some suggest the downturn might begin in the next 1–2 years, especially if earnings don’t keep pace.


My View
I believe we are in a “frothy” phase of the AI boom—one with real technological foundations, but also inflated expectations and speculative excess. Some companies will deliver massive upside; many others may not survive the correction. Prudent investors should assume that a pullback is likely, and guard against concentration risk. But rather than avoiding AI entirely, I’d lean toward a selective, cautious exposure—backing companies with solid fundamentals, defensible moats, and manageable capital structures.

AI Investment → Return Flywheel (Near to Mid Term)

Here’s a simplified flywheel model showing how current investments in AI could generate returns (or conversely, stress) over the next few years:

StageInputs / InvestmentsMechanisms / LeverageOutputs / ReturnsRisks / Leakages
1. Infrastructure BuildoutCapital into GPUs, data centers, cloud platformsScale, network effects, lower marginal costAccelerated training, model capacity growthOvercapacity, underutilization, power constraints
2. Model & Algorithm DevelopmentInvestment in R&D, talent, datasetsImproved accuracy, specialization, speedNew products, APIs, licensingDiminishing returns, competitive replication
3. Integration & DeploymentCapital for embedding models into verticalsCustomization, process automation, SaaS modelsEfficiency gains, new services, revenue growthAdoption lag, integration challenges
4. Monetization & PricingCustomer acquisition, pricing modelsSubscription, usage fees, enterprise contractsRecurring revenue, higher marginsMarket resistance, commoditization, margin pressure
5. Reinvestment & ScalingProfits or further capitalExpand into adjacent markets, cross-sellingFlywheel effect, valuation re-ratingCash outflows, competitive erosion, regulation

In an ideal version:

  1. Each dollar invested into infrastructure leads to economies of scale and enables cheaper model training (stage 1 → 2).
  2. Better models enable more integration (stage 3).
  3. Integration leads to monetization and revenue (stage 4).
  4. Profits get partly reinvested, accelerating expansion and capturing more markets (stage 5).

However, the chain can break if any link fails: infrastructure overhang, weak demand, pricing pressure, or inability to scale commercial adoption. In such a case, returns erode, valuations contract, and parts of the flywheel slow or reverse.

If the boom plays out well, the flywheel could generate compounding value for top-tier AI operators and their ecosystem over the next 3–5 years. But if the hype overshadows fundamentals, the flywheel could seize.

Related Articles:

High stock valuations sparking investor worries about market bubble

Is there an AI bubble? Financial institutions sound a warning 

Sam Altman says ‘yes,’ AI is in a bubble

AI Bubble: How to Survive the Next Stock Market Crash (Trading and Artificial Intelligence (AI))

AIMS and Data Governance – Managing data responsibly isn’t just good practice—it’s a legal and ethical imperative. Ready to start? Scroll down and try our free ISO-42001 Awareness Quiz at the bottom of the page!

“AI is already the single largest uncontrolled channel for corporate data exfiltration—bigger than shadow SaaS or unmanaged file sharing.”

Click the ISO 42001 Awareness Quiz — it will open in your browser in full-screen mode

iso42001_quizDownload

Protect your AI systems — make compliance predictable.
Expert ISO-42001 readiness for small & mid-size orgs. Get a AI Risk vCISO-grade program without the full-time cost.

Secure Your Business. Simplify Compliance. Gain Peace of Mind

Check out our earlier posts on AI-related topics: AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI Bubble


Sep 26 2025

Aligning risk management policy with ISO 42001 requirements

AI risk management and governance, so aligning your risk management policy means integrating AI-specific considerations alongside your existing risk framework. Here’s a structured approach:


1. Understand ISO 42001 Scope and Requirements

  • ISO 42001 sets standards for AI governance, risk management, and compliance across the AI lifecycle.
  • Key areas include:
    • Risk identification and assessment for AI systems.
    • Mitigation strategies for bias, errors, security, and ethical concerns.
    • Transparency, explainability, and accountability of AI models.
    • Compliance with legal and regulatory requirements (GDPR, EU AI Act, etc.).


2. Map Your Current Risk Policy

  • Identify where your existing policy addresses:
    • Risk assessment methodology
    • Roles and responsibilities
    • Monitoring and reporting
    • Incident response and corrective actions
  • Note gaps related to AI-specific risks, such as algorithmic bias, model explainability, or data provenance.


3. Integrate AI-Specific Risk Controls

  • AI Risk Identification: Add controls for data quality, model performance, and potential bias.
  • Risk Assessment: Include likelihood, impact, and regulatory consequences of AI failures.
  • Mitigation Strategies: Document methods like model testing, monitoring, human-in-the-loop review, or bias audits.
  • Governance & Accountability: Assign clear ownership for AI system oversight and compliance reporting.


4. Ensure Regulatory and Ethical Alignment

  • Map your AI systems against applicable standards:
    • EU AI Act (high-risk AI systems)
    • GDPR or HIPAA for data privacy
    • ISO 31000 for general risk management principles
  • Document how your policy addresses ethical AI principles, including fairness, transparency, and accountability.


5. Update Policy Language and Procedures

  • Add a dedicated “AI Risk Management” section to your policy.
  • Include:
    • Scope of AI systems covered
    • Risk assessment processes
    • Monitoring and reporting requirements
    • Training and awareness for stakeholders
  • Ensure alignment with ISO 42001 clauses (risk identification, evaluation, mitigation, monitoring).


6. Implement Monitoring and Continuous Improvement

  • Establish KPIs and metrics for AI risk monitoring.
  • Include regular audits and reviews to ensure AI systems remain compliant.
  • Integrate lessons learned into updates of the policy and risk register.


7. Documentation and Evidence

  • Keep records of:
    • AI risk assessments
    • Mitigation plans
    • Compliance checks
    • Incident responses
  • This will support ISO 42001 certification or internal audits.

Mastering ISO 23894 – AI Risk Management: The AI Risk Management Blueprint | AI Lifecycle and Risk Management Demystified | AI Risk Mastery with ISO 23894 | Navigating the AI Lifecycle with Confidence

AI Compliance in M&A: Essential Due Diligence Checklist

DISC InfoSec’s earlier posts on the AI topic

AIMS ISO42001 Data governance

AI is Powerful—But Risky. ISO/IEC 42001 Can Help You Govern It

Secure Your Business. Simplify Compliance. Gain Peace of Mind

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI Risk Management, AIMS, ISO 42001


Sep 24 2025

When AI Hype Weakens Society: Lessons from Karen Hao

Category: AI,AI Governance,Information Security,ISO 42001disc7 @ 12:23 pm

Karen Hao’s Empire of AI provides a critical lens on the current AI landscape, questioning what intelligence truly means in these systems. Hao explores how AI is often framed as an extraordinary form of intelligence, yet in reality, it remains highly dependent on the data it is trained on and the design choices of its creators.

She highlights the ways companies encourage users to adopt AI tools, not purely for utility, but to collect massive amounts of data that can later be monetized. This approach, she argues, blurs the line between technological progress and corporate profit motives.

According to Hao, the AI industry often distorts reality. She describes AI as overhyped, framing the movement almost as a quasi-religious phenomenon. This hype, she suggests, fuels unrealistic expectations both among developers and the public.

Within the AI discourse, two camps emerge: the “boomers” and the “doomers.” Boomers herald AI as a new form of superior intelligence that can solve all problems, while doomers warn that this same intelligence could ultimately be catastrophic. Both, Hao argues, exaggerate what AI can actually do.

Prominent figures sometimes claim that AI possesses “PhD-level” intelligence, capable of performing complex, expert-level tasks. In practice, AI systems often succeed or fail depending on the quality of the data they consume—a vulnerability when that data includes errors or misinformation.

Hao emphasizes that the hype around AI is driven by money and venture capital, not by a transformation of the economy. According to her, Silicon Valley’s culture thrives on exaggeration: bigger models, more data, and larger data centers are marketed as revolutionary, but these features alone do not guarantee real-world impact.

She also notes that technology is not omnipotent. AI is not independently replacing jobs; company executives make staffing decisions. As people recognize the limits of AI, they can make more informed, “intelligent” choices themselves, countering some of the fears and promises surrounding automation.

OpenAI exemplifies these tensions. Founded as a nonprofit intended to counter Silicon Valley’s profit-driven AI development, it quickly pivoted toward a capitalistic model. Today, OpenAI is valued around $300–400 billion, and its focus is on data and computing power rather than purely public benefit, reflecting the broader financial incentives in the AI ecosystem.

Hao likens the AI industry to 18th-century colonialism: labor exploitation, monopolization of energy resources, and accumulation of knowledge and talent in wealthier nations echo historical imperial practices. This highlights that AI’s growth has social, economic, and ethical consequences far beyond mere technological achievement.

Hao’s analysis shows that AI, while powerful, is far from omnipotent. The overhype and marketing-driven narrative can weaken society by creating unrealistic expectations, concentrating wealth and power in the hands of a few corporations, and masking the social and ethical costs of these technologies. Instead of empowering people, it can distort labor markets, erode worker rights, and foster dependence on systems whose decision-making processes are opaque. A society that uncritically embraces AI risks being shaped more by financial incentives than by human-centered needs.

Today’s AI can perform impressive feats—from coding and creating images to diagnosing diseases and simulating human conversation. While these capabilities offer huge benefits, AI could be misused, from autonomous weapons to tools that spread misinformation and destabilize societies. Experts like Elon Musk and Geoffrey Hinton echo these concerns, advocating for regulations to keep AI safely under human control.

Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI

Letters and Politics Mitch Jeserich interview Karen Hao 09/24/25

Generative AI is a “remarkable con” and “the perfect nihilistic form of tech bubbles”Ed Zitron

AI Darwin Awards Show AI’s Biggest Problem Is Human

DISC InfoSec’s earlier posts on the AI topic

AIMS ISO42001 Data governance

AI is Powerful—But Risky. ISO/IEC 42001 Can Help You Govern It

Secure Your Business. Simplify Compliance. Gain Peace of Mind

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI Hype Weakens Society, Empire of AI, Karen Hao


Sep 22 2025

Qantas just showed us that cyber-attacks don’t just hit customers—they can hit the CEO’s bonus

Category: Cyber Attack,Information Securitydisc7 @ 10:15 am

Hackers breached a third-party contact center platform, stealing data from 6M customers. No credit cards or passwords were exposed, but the board still cut senior leader bonuses by 15%. The CEO alone lost A$250,000.

This isn’t just an airline problem. It’s a wake-up call: boards are now holding executives financially accountable for cyber failures.

Key lessons for leaders:
🔹 Harden your help desk – add multi-step verification, ban one-step resets.
🔹 Do a vendor “containment sweep” – limit what customer data sits in third-party tools.
🔹 Prep customer comms kits – be ready to notify with clarity and speed.
🔹 Minimize sensitive data – don’t let vendors store more than they need.
🔹 Enforce strong controls – MFA, device trust checks, and callback verification.
🔹 Report to the board – show vendor exposure, tabletop results, and timelines.

My take: Boards are done treating cybersecurity as “someone else’s problem.” Linking executive pay to cyber resilience is the fastest way to drive accountability. If you’re an executive, assume vendor platforms are your systems—because when they fail, you’re the one explaining it to customers and shareholders.

Qantas executives punished for major cyber attack with cut to bonuses as Alan Joyce pockets another $3.8m

Secure Your Business. Simplify Compliance. Gain Peace of Mind

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: CEO bonus, Quantas


Sep 13 2025

How AI system provider can build data center which are deeply decarbonized data center

Category: AI,Information Securitydisc7 @ 10:32 am

Here’s a layered diagram showing how an AI system provider can build a deeply decarbonized data center — starting from clean energy supply at the outer layer down to handling residual emissions at the core.

AI data centers are the backbone of modern artificial intelligence—but they come with a growing list of side effects that are raising eyebrows across environmental, health, and policy circles. Here’s a breakdown of the most pressing concerns:

⚡ Environmental & Energy Impacts

  • Massive energy consumption: AI workloads require high-performance computing, which dramatically increases electricity demand. This strains local grids and often leads to reliance on fossil fuels.
  • Water usage for cooling: Many data centers use evaporative cooling systems, consuming millions of gallons of water annually—especially problematic in drought-prone regions.
  • Carbon emissions: Unless powered by renewables, data centers contribute significantly to greenhouse gas emissions, undermining climate goals

An AI system provider can build a deeply decarbonized data center by designing it to minimize greenhouse gas emissions across its full lifecycle—construction, energy use, and operations. Here’s how:

  1. Power Supply (Clean Energy First)
    • Run entirely on renewable electricity (solar, wind, hydro, geothermal).
    • Use power purchase agreements (PPAs) or direct renewable energy sourcing.
    • Design for 24/7 carbon-free energy rather than annual offsets.
  2. Efficient Infrastructure
    • Deploy high-efficiency cooling systems (liquid cooling, free-air cooling, immersion).
    • Optimize server utilization (AI workload scheduling, virtualization, consolidation).
    • Use energy-efficient chips/accelerators designed for AI workloads.
  3. Sustainable Building Design
    • Construct facilities with low-carbon materials (green concrete, recycled steel).
    • Maximize modular and prefabricated components to cut waste.
    • Use circular economy practices for equipment reuse and recycling.
  4. Carbon Capture & Offsets (Residual Emissions)
    • Where emissions remain (backup generators, construction), apply carbon capture or credible carbon removal offsets.
  5. Water & Heat Management
    • Implement closed-loop water cooling to minimize freshwater use.
    • Recycle waste heat to warm nearby buildings or supply district heating.
  6. Smart Operations
    • Apply AI-driven energy optimization to reduce idle consumption.
    • Dynamically shift workloads to regions/times where renewable energy is abundant.
  7. Supply Chain Decarbonization
    • Work with hardware vendors committed to net-zero manufacturing.
    • Require carbon transparency in procurement.

👉 In short: A deeply decarbonized AI data center runs on clean energy, uses ultra-efficient infrastructure, minimizes embodied carbon, and intelligently manages workloads and resources.

Sustainable Content: How to Measure and Mitigate the Carbon Footprint of Digital Data

Energy Efficient Algorithms and Green Data Centers for Sustainable Computing

🏘️ Societal & Equity Concerns

  • Disproportionate impact on marginalized communities: Many data centers are built in areas with existing environmental burdens, compounding risks for vulnerable populations.
  • Land use and displacement: Large-scale facilities can disrupt ecosystems and push out local residents or businesses.
  • Transparency issues: Communities often lack access to information about the risks and benefits of hosting data centers, leading to mistrust and resistance.

🔋 Strategic & Policy Challenges

  • Energy grid strain: The rapid expansion of AI infrastructure is pushing governments to consider controversial solutions like small modular nuclear reactors.
  • Regulatory gaps: Current zoning and environmental regulations may not be equipped to handle the scale and speed of AI data center growth.

Secure Your Business. Simplify Compliance. Gain Peace of Mind

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI data center, sustainable


Sep 12 2025

SANS “Own AI Securely” Blueprint: A Strategic Framework for Secure AI Integration

Category: AI,AI Governance,Information Securitydisc7 @ 1:58 pm
SANS Institute

The SANS Institute has unveiled its “Own AI Securely” blueprint, a strategic framework designed to help organizations integrate artificial intelligence (AI) securely and responsibly. This initiative addresses the growing concerns among Chief Information Security Officers (CISOs) about the rapid adoption of AI technologies without corresponding security measures, which has created vulnerabilities that cyber adversaries are quick to exploit.

A significant challenge highlighted by SANS is the speed at which AI-driven attacks can occur. Research indicates that such attacks can unfold more than 40 times faster than traditional methods, making it difficult for defenders to respond promptly. Moreover, many Security Operations Centers (SOCs) are incorporating AI tools without customizing them to their specific needs, leading to gaps in threat detection and response capabilities.

To mitigate these risks, the blueprint proposes a three-part framework: Protect AI, Utilize AI, and Govern AI. The “Protect AI” component emphasizes securing models, data, and infrastructure through measures such as access controls, encryption, and continuous monitoring. It also addresses emerging threats like model poisoning and prompt injection attacks.

The “Utilize AI” aspect focuses on empowering defenders to leverage AI in enhancing their operations. This includes integrating AI into detection and response systems to keep pace with AI-driven threats. Automation is encouraged to reduce analyst workload and expedite decision-making, provided it is implemented carefully and monitored closely.

The “Govern AI” segment underscores the importance of establishing clear policies and guidelines for AI usage within organizations. This includes defining acceptable use, ensuring compliance with regulations, and maintaining transparency in AI operations.

Rob T. Lee, Chief of Research and Chief AI Officer at SANS Institute, advises that CISOs should prioritize investments that offer both security and operational efficiency. He recommends implementing an adoption-led control plane that enables employees to access approved AI tools within a protected environment, ensuring security teams maintain visibility into AI operations across all data domains.

In conclusion, the SANS AI security blueprint provides a comprehensive approach to integrating AI technologies securely within organizations. By focusing on protection, utilization, and governance, it offers a structured path to mitigate risks associated with AI adoption. However, the success of this framework hinges on proactive implementation and continuous monitoring to adapt to the evolving threat landscape.

Sorce: CISOs brace for a new kind of AI chaos

ISO/IEC 42001: The Global Standard for Responsible AI Governance, Risk, and Compliance

What are main requirements for Internal audit of ISO 42001 AIMS

ISO 42001: The AI Governance Standard Every Organization Needs to Understand

Turn Compliance into Competitive Advantage with ISO 42001

ISO 42001 Readiness: A 10-Step Guide to Responsible AI Governance

Aligning with ISO 42001:2023 and/or the EU Artificial Intelligence (AI) Act

AIMS and Data Governance – Managing data responsibly isn’t just good practice—it’s a legal and ethical imperative. 

ISO 42001—the first international standard for managing artificial intelligence. Developed for organizations that design, deploy, or oversee AI, ISO 42001 is set to become the ISO 9001 of AI: a universal framework for trustworthytransparent, and responsible AI.


Trust Me – ISO 42001 AI Management System

ISO/IEC 42001:2023 – from establishing to maintain an AI management system

AI Act & ISO 42001 Gap Analysis Tool

Agentic AI: Navigating Risks and Security Challenges

Artificial Intelligence: The Next Battlefield in Cybersecurity

AI and The Future of Cybersecurity: Navigating the New Digital Battlefield

“Whether you’re a technology professional, policymaker, academic, or simply a curious reader, this book will arm you with the knowledge to navigate the complex intersection of AI, security, and society.”

AI Act & ISO 42001 Gap Analysis Tool

AI Governance Is a Boardroom Imperative—The SEC Just Raised the Stakes on AI Hype

How AI Is Transforming the Cybersecurity Leadership Playbook

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

Secure Your Business. Simplify Compliance. Gain Peace of Mind

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: SANS AI security blueprint


Sep 09 2025

Exploring AI security, privacy, and the pressing regulatory gaps—especially relevant to today’s fast-paced AI landscape

Category: AI,AI Governance,Information Securitydisc7 @ 12:44 pm

Featured Read: Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity

  • Overview: This academic paper examines the growing ethical and regulatory challenges brought on by AI’s integration with cybersecurity. It traces the evolution of AI regulation, highlights pressing concerns—like bias, transparency, accountability, and data privacy—and emphasizes the tension between innovation and risk mitigation.
  • Key Insights:
    • AI systems raise unique privacy/security issues due to their opacity and lack of human oversight.
    • Current regulations are fragmented—varying by sector—with no unified global approach.
    • Bridging the regulatory gap requires improved AI literacy, public engagement, and cooperative policymaking to shape responsible frameworks.
  • Source: Authored by Vikram Kulothungan, published in January 2025, this paper cogently calls for a globally harmonized regulatory strategy and multi-stakeholder collaboration to ensure AI’s secure deployment.

Why This Post Stands Out

  • Comprehensive: Tackles both cybersecurity and privacy within the AI context—not just one or the other.
  • Forward-Looking: Addresses systemic concerns, laying the groundwork for future regulation rather than retrofitting rules around current technology.
  • Action-Oriented: Frames AI regulation as a collaborative challenge involving policymakers, technologists, and civil society.

Additional Noteworthy Commentary on AI Regulation

1. Anthropic CEO’s NYT Op-ed: A Call for Sensible Transparency

Anthropic CEO Dario Amodei criticized a proposed 10-year ban on state-level AI regulation as “too blunt.” He advocates a federal transparency standard requiring AI developers to disclose testing methods, risk mitigation, and pre-deployment safety measures.

2. California’s AI Policy Report: Guarding Against Irreversible Harms

A report commissioned by Governor Newsom warns of AI’s potential to facilitate biological and nuclear threats. It advocates “trust but verify” frameworks, increased transparency, whistleblower protections, and independent safety validation.

3. Mutually Assured Deregulation: The Risks of a Race Without Guardrails

Gilad Abiri argues that dismantling AI safety oversight in the name of competition is dangerous. Deregulation doesn’t give lasting advantages—it undermines long-term security, enabling proliferation of harmful AI capabilities like bioweapon creation or unstable AGI.


Broader Context & Insights

  • Fragmented Landscape: U.S. lacks unified privacy or AI laws; even executive orders remain limited in scope.
  • Data Risk: Many organizations suffer from unintended AI data exposure and poor governance despite having some policies in place.
  • Regulatory Innovation: Texas passed a law focusing only on government AI use, signaling a partial step toward regulation—but private sector oversight remains limited.
  • International Efforts: The Council of Europe’s AI Convention (2024) is a rare international treaty aligning AI development with human rights and democratic values.
  • Research Proposals: Techniques like blockchain-enabled AI governance are being explored as transparency-heavy, cross-border compliance tools.

Opinion

AI’s pace of innovation is extraordinary—and so are its risks. We’re at a crossroads where lack of regulation isn’t a neutral stance—it accelerates inequity, privacy violations, and even public safety threats.

What’s needed:

  1. Layered Regulation: From sector-specific rules to overarching international frameworks; we need both precision and stability.
  2. Transparency Mandates: Companies must be held to explicit standards—model testing practices, bias mitigation, data usage, and safety protocols.
  3. Public Engagement & Literacy: AI literacy shouldn’t be limited to technologists. Citizens, policymakers, and enforcement institutions must be equipped to participate meaningfully.
  4. Safety as Innovation Avenue: Strong regulation doesn’t kill innovation—it guides it. Clear rules create reliable markets, investor confidence, and socially acceptable products.

The paper “Securing the AI Frontier” sets the right tone—urging collaboration, ethics, and systemic governance. Pair that with state-level transparency measures (like Newsom’s report) and critiques of over-deregulation (like Abiri’s essay), and we get a multi-faceted strategy toward responsible AI.

Anthropic CEO says proposed 10-year ban on state AI regulation ‘too blunt’ in NYT op-ed

California AI Policy Report Warns of ‘Irreversible Harms’ 

Responsible AI in the Age of Generative Models: Governance, Ethics and Risk Management 

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

AIMS and Data Governance – Managing data responsibly isn’t just good practice—it’s a legal and ethical imperative. 

DISC InfoSec previous posts on AI category

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI privacy, AI Regulations, AI security, AI standards


Sep 09 2025

Connected Cars in Europe: Balancing Innovation with Cybersecurity and Privacy

Category: Connected Cars,Information Securitydisc7 @ 11:39 am

Connected vehicles have rapidly proliferated across Europe, brimming with sophisticated software, myriad sensors, and continuous connectivity. While these advancements deliver conveniences like remote control features and intelligent navigation, they simultaneously expand the vehicle’s digital attack surface, what enhances “smartness” inherently introduces fresh cybersecurity vulnerabilities.

A recent study — both technical and survey-based — questioned roughly 300 mostly European participants about their awareness and attitudes regarding smart-car security and privacy. The findings indicate that most people understand their vehicles share data with both manufacturers and third parties, particularly those driving newer models. Western Europeans showed greater awareness of these data flows than respondents from Eastern Europe.

Despite rising awareness, many drivers lack clarity about what precisely “smart car” entails. Consumers tend to emphasize visible functionalities — such as self-driving aids or entertainment systems — while overlooking the less visible but critical issue of how data is managed, stored, or potentially exploited.

The existing regulatory environment is striving to catch up. Frameworks like UN R155 and R156, already in effect, mandate systematic cybersecurity management and secure software update mechanisms for connected cars. Similarly, from July 2024, EU rules require that new vehicles cannot be registered unless they guarantee robust cybersecurity—pushing automakers toward ‘security by design.

Moreover, Europe is developing additional protective technologies. For example, the EU-funded SELFY project is building a toolkit to safeguard connected traffic systems, aiming to issue cybersecurity certificates and bolster defenses against cyber threats. The European Commission is also establishing protocols around testing, data recording, safety monitoring, and incident reporting for advanced automated and driverless vehicle systems.

Nevertheless, gaps remain—particularly between policy progress and public trust. Even as regulations evolve and technical tools mature, many vehicle users remain uncertain about the extent of data collection, storage, and sharing. Without stronger transparency, consumer trust is likely to lag behind technological and regulatory advancements.


Car Security and Privacy

Connected cars represent a defining shift in the mobility landscape—offering unprecedented convenience but accompanied by elevated risks. The central paradox is clear: as vehicles become more connected and intelligent, they become more exposed. This isn’t just a matter of potential remote hacking; it’s about data flow—where, how, and by whom vehicle data is used.

Europe is taking commendable steps by enforcing cybersecurity mandates (like R155/R156) and promoting proactive, security-by-design approaches. Projects like SELFY and structured regulatory initiatives around automated vehicles signal forward motion.

However, the real challenge lies in closing the trust gap. Many drivers still don’t have a clear understanding of data practices. Communicating complex cybersecurity architecture in accessible terms is essential. Automakers and regulators must both educate and reassure—perhaps through public dashboards, standardized labels on data practices, or periodic transparency reports that explain what data is collected, why, who has access, and how it’s protected.

For drivers, vigilance remains crucial. Prioritize vehicles that support secure over-the-air updates, enforce two-factor authentication for vehicle apps, and carefully review privacy settings. As consumers, push for clarity and accountability—our vehicles shouldn’t just be smart; they should also be secure and respectful of our privacy.

“Connected cars are racing ahead, but security is stuck in neutral”

Connected cars and cybercrime: A primer

How Virtualization Helps Secure Connected Cars

Modern cars: A growing bundle of security vulnerabilities

Car Hacking and its Countermeasures

Bug in Toyota, Honda, and Nissan Car App Let Hackers Unlock & Start The Car Remotely

Multiple Vulnerabilities in the Mazda In-Vehicle Infotainment (IVI) System

Integrating cybersecurity into vehicle design and manufacturing

Your car is probably harvesting your data. Here’s how you can wipe it

The Rise of Automotive Hacking: How to Secure Your Vehicles Against Hacking

Car companies massively exposed to web vulnerabilities

Million of vehicles can be attacked via MiCODUS MV720 GPS Trackers

Securing vehicles from potential cybersecurity threats

Building Secure Cars

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: Connected cars


Sep 08 2025

What are main requirements for Internal audit of ISO 42001 AIMS

Category: AI,Information Security,ISO 42001disc7 @ 2:23 pm

ISO 42001 is the upcoming standard for AI Management Systems (AIMS), similar in structure to ISO 27001 for information security. While the full standard is not yet widely published, the main requirements for an internal audit of an ISO 42001 AIMS can be outlined based on common audit principles and the expected clauses in the standard. Here’s a structured view:


1. Audit Scope and Objectives

  • Define what parts of the AI management system will be audited (processes, teams, AI models, AI governance, data handling, etc.).
  • Ensure the audit covers all ISO 42001 clauses relevant to your organization.
  • Determine audit objectives, e.g.,:
    • Compliance with ISO 42001.
    • Effectiveness of risk management for AI.
    • Alignment with organizational AI strategy and policies.


2. Compliance with AIMS Requirements

  • Check whether the organization’s AI management system meets ISO 42001 requirements, which likely include:
    • AI governance framework.
    • Risk management for AI (AI lifecycle, bias, safety, privacy).
    • Policies and procedures for AI development, deployment, and monitoring.
    • Data management and ethical AI principles.
    • Roles, responsibilities, and competency requirements for AI personnel.


3. Documentation and Records

  • Verify that documentation exists and is maintained, e.g.:
    • AI policies, procedures, and guidelines.
    • Risk assessments, impact assessments, and mitigation plans.
    • Training records and personnel competency evaluations.
    • Records of AI incidents, anomalies, or failures.
    • Audit logs of AI models and data handling activities.


4. Risk Management and Controls

  • Review whether risks related to AI (bias, safety, security, privacy) are identified, assessed, and mitigated.
  • Check implementation of controls:
    • Data quality and integrity controls.
    • Model validation and testing.
    • Human oversight and accountability mechanisms.
    • Compliance with relevant regulations and ethical standards.


5. Performance Monitoring and Improvement

  • Evaluate monitoring and measurement processes:
    • Metrics for AI model performance and compliance.
    • Monitoring of ethical and legal adherence.
    • Feedback loops for continuous improvement.
  • Assess whether corrective actions and improvements are identified and implemented.


6. Internal Audit Process Requirements

  • Audits should be planned, objective, and systematic.
  • Auditors must be independent of the area being audited.
  • Audit reports must include:
    • Findings (compliance, nonconformities, opportunities for improvement).
    • Recommendations.
  • Follow-up to verify closure of nonconformities.


7. Management Review Alignment

  • Internal audit results should feed into management reviews for:
    • AI risk mitigation effectiveness.
    • Resource allocation.
    • Policy updates and strategic AI decisions.


Key takeaway: An ISO 42001 internal audit is not just about checking boxes—it’s about verifying that AI systems are governed, ethical, and risk-managed throughout their lifecycle, with evidence, controls, and continuous improvement in place.

An Internal Audit agreement aligned with ISO 42001 should include the following key components, each described below to ensure clarity and operational relevance:

🧭 Scope of Services

The agreement should clearly define the consultant’s role in leading and advising the internal audit team. This includes directing the audit process, training team members on ISO 42001 methodologies, and overseeing all phases—from planning to reporting. It should also specify advisory responsibilities such as interpreting ISO 42001 requirements, identifying compliance gaps, and validating governance frameworks. The scope must emphasize the consultant’s authority to review and approve all audit work to ensure alignment with professional standards.

📄 Deliverables

A detailed list of expected outputs should be included, such as a comprehensive audit report with an executive summary, gap analysis, and risk assessment. The agreement should also cover a remediation plan with prioritized actions, implementation guidance, and success metrics. Supporting materials like policy templates, training recommendations, and compliance monitoring frameworks should be outlined. Finally, it should ensure the development of a capable internal audit team and documentation of audit procedures for future use.

⏳ Timeline

The agreement must specify key milestones, including project start and completion dates, training deadlines, audit phase completion, and approval checkpoints for draft and final reports. This timeline ensures accountability and helps coordinate internal resources effectively.

💰 Compensation

This section should detail the total project fee, payment terms, and a milestone-based payment schedule. It should also clarify reimbursable expenses (e.g., travel) and note that internal team costs and facilities are the client’s responsibility. Transparency in financial terms helps prevent disputes and ensures mutual understanding.

👥 Client Responsibilities

The client’s obligations should be clearly stated, including assigning qualified internal audit team members, ensuring their availability, designating a project coordinator, and providing access to necessary personnel, systems, and facilities. The agreement should also require timely feedback on deliverables and commitment from the internal team to complete audit tasks under the consultant’s guidance.

🎓 Consultant Responsibilities

The consultant’s duties should include providing expert leadership, training the internal team, reviewing and approving all work products, maintaining quality standards, and being available for ongoing consultation. This ensures the consultant remains accountable for the integrity and effectiveness of the audit process.

🔐 Confidentiality

A robust confidentiality clause should protect proprietary information shared during the engagement. It should specify the duration of confidentiality obligations post-engagement and ensure that internal audit team members are bound by equivalent terms. This builds trust and safeguards sensitive data.

💡 Intellectual Property

The agreement should clarify ownership of work products, stating that outputs created by the internal team under the consultant’s guidance belong to the client. It should also allow the consultant to retain general methodologies and templates for future use, while jointly owning training materials and audit frameworks.

⚖️ Limitation of Liability

This clause should cap the consultant’s liability to the total fee paid and exclude consequential or punitive damages. It should reinforce that ISO 42001 compliance is ultimately the client’s responsibility, with the consultant providing guidance and oversight—not execution.

🛑 Termination

The agreement should include provisions for termination with advance notice, payment for completed work, delivery of all completed outputs, and survival of confidentiality obligations. It should also ensure that any training and knowledge transfer remains with the client post-termination.

📜 General Terms

Standard legal provisions should be included, such as independent contractor status, governing law, severability, and a clause stating that the agreement represents the entire understanding between parties. These terms provide legal clarity and protect both sides.

Internal Auditing in Plain English: A Simple Guide to Super Effective ISO Audits

Responsible AI in the Age of Generative Models: Governance, Ethics and Risk Management 

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

AIMS and Data Governance – Managing data responsibly isn’t just good practice—it’s a legal and ethical imperative. 

DISC InfoSec previous posts on AI category

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AIMS, Internal audit of ISO 42001


Sep 03 2025

An AI-Powered Brute-Force Tool for Ethical Security Testing

Category: AI,Information Security,Security Toolsdisc7 @ 2:05 pm

Summary of the Help Net Security article.



BruteForceAI is a free, open-source penetration testing tool that enhances traditional brute-force attacks by integrating large language models (LLMs). It automates identification of login form elements—such as username and password fields—by analyzing HTML content and deducing the correct selectors.


After mapping out the login structure, the tool conducts multi-threaded brute-force or password-spraying attacks. It simulates human-like behavior by randomizing timing, introducing slight delays, and varying the user-agent—concealing its activity from conventional detection systems.


Intended for legitimate security use, BruteForceAI is geared toward authorized penetration testing, academic research, self-assessment of one’s applications, and participation in bug bounty programs—always within proper legal and ethical bounds. It is freely available on GitHub for practitioners to explore and deploy.


By combining intelligence-powered analysis and automated attack execution, BruteForceAI streamlines what used to be a tedious and manual process. It automates both discovery (login field detection) and exploitation (attack execution). This dual capability can significantly speed up testing workflows for security professionals.


BruteForceAI

BruteForceAI represents a meaningful leap in how penetration testers can validate and improve authentication safeguards. On the positive side, its automation and intelligent behavior modeling could expedite thorough and realistic attack simulations—especially useful for uncovering overlooked vulnerabilities hidden in login logic or form implementations.

That said, such power is a double-edged sword. There’s an inherent risk that malicious actors could repurpose the tool for unauthorized attacks, given its stealthy methods and automation. Its detection evasion tactics—mimicking human activity to avoid being flagged—could be exploited by bad actors to evade traditional defenses. For defenders, this heightens the importance of deploying robust controls like rate limiting, behavioral monitoring, anomaly detection, and multi-factor authentication.

In short, as a security tool it’s impressive and helpful—if used responsibly. Ensuring it remains in the hands of ethical professionals and not abused requires awareness, cautious deployment, and informed defense strategies.


Download

This tool is designed for responsible and ethical use, including authorized penetration testing, security research and education, testing your own applications, and participating in bug bounty programs within the proper scope.

BruteForceAI is available for free on GitHub.

Source: BruteForce AI

DISC InfoSec previous posts on AI category

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: Brute-Force Tool


Aug 21 2025

ISO/IEC 42001 Requirements Mapped to ShareVault

Category: AI,Information Securitydisc7 @ 2:55 pm

🏢 Strategic Benefits for ShareVault

  • Regulatory Alignment: ISO 42001 supports GDPR, HIPAA, and EU AI Act compliance.
  • Client Trust: Demonstrates responsible AI governance to enterprise clients.
  • Competitive Edge: Positions ShareVault as a forward-thinking, standards-compliant VDR provider.
  • Audit Readiness: Facilitates internal and external audits of AI systems and data handling.

If ShareVault were to pursue ISO 42001 certification, it would not only strengthen its AI governance but also reinforce its reputation in regulated industries like life sciences, finance, and legal services.

Here’s a tailored ISO/IEC 42001 implementation roadmap for a Virtual Data Room (VDR) provider like ShareVault, focusing on responsible AI governance, risk mitigation, and regulatory alignment.

🗺️ ISO/IEC 42001 Implementation Roadmap for ShareVault

Phase 1: Initiation & Scoping

🔹 Objective: Define the scope of AI use and align with business goals.

  • Identify AI-powered features (e.g., smart search, document tagging, access analytics).
  • Map stakeholders: internal teams, clients, regulators.
  • Define scope of the AI Management System (AIMS): which systems, processes, and data are covered.
  • Appoint an AI Governance Lead or Steering Committee.

Phase 2: Gap Analysis & Risk Assessment

🔹 Objective: Understand current state vs. ISO 42001 requirements.

  • Conduct a gap analysis against ISO 42001 clauses.
  • Evaluate risks related to:
    • Data privacy (e.g., GDPR, HIPAA)
    • Bias in AI-driven document classification
    • Misuse of access analytics
  • Review existing controls and identify vulnerabilities.

Phase 3: Policy & Governance Framework

🔹 Objective: Establish foundational policies and oversight mechanisms.

  • Draft an AI Policy aligned with ethical principles and legal obligations.
  • Define roles and responsibilities for AI oversight.
  • Create procedures for:
    • Human oversight and intervention
    • Incident reporting and escalation
    • Lifecycle management of AI models

Phase 4: Data & Model Governance

🔹 Objective: Ensure trustworthy data and model practices.

  • Implement controls for training and testing data quality.
  • Document data sources, preprocessing steps, and validation methods.
  • Establish model documentation standards (e.g., model cards, audit trails).
  • Define retention and retirement policies for outdated models.

Phase 5: Operational Controls & Monitoring

🔹 Objective: Embed AI governance into daily operations.

  • Integrate AI risk controls into DevOps and product workflows.
  • Set up performance monitoring dashboards for AI features.
  • Enable logging and traceability of AI decisions.
  • Conduct regular internal audits and reviews.

Phase 6: Stakeholder Engagement & Transparency

🔹 Objective: Build trust with users and clients.

  • Communicate AI capabilities and limitations clearly in the UI.
  • Provide opt-out or override options for AI-driven decisions.
  • Engage clients in defining acceptable AI behavior and use cases.
  • Train staff on ethical AI use and ISO 42001 principles.

Phase 7: Certification & Continuous Improvement

🔹 Objective: Achieve compliance and evolve responsibly.

  • Prepare documentation for ISO 42001 certification audit.
  • Conduct mock audits and address gaps.
  • Establish feedback loops for continuous improvement.
  • Monitor regulatory changes (e.g., EU AI Act, U.S. AI bills) and update policies accordingly.

🧠 Bonus Tip: Align with Other Standards

ShareVault can integrate ISO 42001 with:

  • ISO 27001 (Information Security)
  • ISO 9001 (Quality Management)
  • SOC 2 (Trust Services Criteria)
  • EU AI Act (for high-risk AI systems)

visual roadmap for implementing ISO/IEC 42001 tailored to a Virtual Data Room (VDR) provider like ShareVault:

🗂️ ISO 42001 Implementation Roadmap for VDR Providers

Each phase is mapped to a monthly milestone, showing how AI governance can be embedded step-by-step:

📌 Milestone Highlights

  • Month 1 – Initiation & Scoping Define AI use cases (e.g., smart search, access analytics), map stakeholders, appoint governance lead.
  • Month 2 – Gap Analysis & Risk Assessment Evaluate risks like bias in document tagging, privacy breaches, and misuse of analytics.
  • Month 3 – Policy & Governance Framework Draft AI policy, define oversight roles, and create procedures for human intervention and incident handling.
  • Month 4 – Data & Model Governance Implement controls for training data, document model behavior, and set retention policies.
  • Month 5 – Operational Controls & Monitoring Embed governance into workflows, monitor AI performance, and conduct internal audits.
  • Month 6 – Stakeholder Engagement & Transparency Communicate AI capabilities to users, engage clients in ethical discussions, and train staff.
  • Month 7 – Certification & Continuous Improvement Prepare for ISO audit, conduct mock assessments, and monitor evolving regulations like the EU AI Act.

Practical OWASP Security Testing: Hands-On Strategies for Detecting and Mitigating Web Vulnerabilities in the Age of AI

Building Trust with High-Risk AI: What Article 15 of the EU AI Act Means for Accuracy, Robustness & Cybersecurity

From Compliance to Confidence: How DISC LLC Delivers Strategic Cybersecurity Services That Scale

Secure Your Business. Simplify Compliance. Gain Peace of Mind

Managing Artificial Intelligence Threats with ISO 27001

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: ISO 42001, Sharevault


Aug 18 2025

AI-Driven Hacking: The New Frontier in Cybersecurity

Category: AI,Hacking,Information Securitydisc7 @ 10:02 am

The age of AI-assisted hacking is no longer looming—it’s here. Hackers of all stripes—from state actors to cybercriminals—are now integrating AI tools into their operations, while defenders are racing to catch up.

Key Developments

  • In mid‑2025, Russian intelligence reportedly sent phishing emails to Ukrainians containing AI-powered attachments that automatically scanned victims’ computers for sensitive files and transmitted them back to Russia. NBC Bay Area
  • AI models like ChatGPT have become highly adept at translating natural language into code, helping hackers automate their work and scale operations. NBC Bay Area
  • AI hasn’t ushered in a hacking revolution that enables novices to bring down power grids—but it is significantly enhancing the efficiency and reach of skilled hackers. NBC Bay Area

On the Defensive Side

  • Cybersecurity defenders are also turning to AI—Google’s “Gemini” model helped identify over 20 software vulnerabilities, speeding up bug detection and patching.
  • Alexei Bulazel of the White House’s National Security Council believes defenders currently hold a slight edge over attackers, thanks to America’s tech infrastructure, but that balance may shift as agentic (autonomous) AI tools proliferate.
  • A notable milestone: an AI called “Xbow” topped the HackerOne leaderboard, prompting the platform to create a separate category for AI-generated hacking tools.


My Take

This article paints a vivid picture of an escalating AI arms race in cybersecurity. My view? It’s a dramatic turning point:

  • AI is already tipping the scale—but not overwhelmingly. Hackers are more efficient, but full-scale automated digital threats haven’t arrived. Still, what used to require deep expertise is becoming accessible to more people.
  • Defenders aren’t standing idle. AI-assisted scanning and rapid vulnerability detection are powerful tools in the white-hat arsenal—and may remain decisive, especially when backed by robust tech ecosystems.
  • The real battleground is trust. As AI makes exploits more sophisticated and deception more believable (e.g., deepfakes or phishing), trust becomes the most vulnerable asset. This echoes broader reports showing attacks are increasingly AI‑powered, whether via deceptive audio/video or tailored phishing campaigns.
  • Vigilance must evolve. Automated defenses and rapid detection will be key. Organizations should also invest in digital literacy—training humans to recognize deception even as AI tools become ever more convincing.


Related Reading Highlights

Here are some recent news pieces that complement the NBC article, reinforcing the duality of AI’s role in cyber threats:

Further reading on AI and cybersecurity

Cybersecurity's dual AI reality: Hacks and defenses both turbocharged

Axios

Cybersecurity’s dual AI reality: Hacks and defenses both turbocharged

5 days ago

AI-powered phishing attacks are on the rise and getting smarter - here's how to stay safe

TechRadar

AI-powered phishing attacks are on the rise and getting smarter – here’s how to stay safe

4 days ago

Weaponized AI is making hackers faster, more aggressive, and more successful

TechRadar

Weaponized AI is making hackers faster, more aggressive, and more successful

14 days ago


In Summary

  • AI is enhancing both hacking and defense—but it’s not yet an apocalyptic breakthrough.
  • Skilled attackers can now move faster and more subtly.
  • Defenders have powerful AI tools in their corner—but must remain agile.
  • As deception scales, safeguarding trust and awareness is crucial.

Master AI Tools Like ChatGPT and MidJourney to Automate Tasks, Generate Content, and Stay Ahead in the Digital Age

Secure Your Business. Simplify Compliance. Gain Peace of Mind

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI-Driven Hacking, Generative AI Hacks


Aug 17 2025

The CISO 3.0: A Guide to Next-Generation Cybersecurity Leadership

Category: CISO,Information Security,vCISOdisc7 @ 2:31 pm

The CISO 3.0: A Guide to Next-Generation Cybersecurity Leadership – Security, Audit and Leadership Series is out by Walt Powell.

This book positions itself not just as a technical guide but as a strategic roadmap for the future of cybersecurity leadership. It emphasizes that in today’s complex threat environment, CISOs must evolve beyond technical mastery and step into the role of business leaders who weave cybersecurity into the very fabric of organizational strategy.

The core message challenges the outdated view of CISOs as purely technical experts. Instead, it calls for a strategic shift toward business alignment, measurable risk management, and adoption of emerging technologies like AI and machine learning. This evolution reflects growing expectations from boards, executives, and regulators—expectations that CISOs must now meet with business fluency, not just technical insight.

The book goes further by offering actionable guidance, case studies, and real-world examples drawn from extensive experience across hundreds of security programs. It explores practical topics such as risk quantification, cyber insurance, and defining materiality, filling the gap left by more theory-heavy resources.

For aspiring CISOs, the book provides a clear path to transition from technical expertise to strategic leadership. For current CISOs, it delivers fresh insight into strengthening business acumen and boardroom credibility, enabling them to better drive value while protecting organizational assets.

My thought: This book’s strength lies in recognizing that the modern CISO role is no longer just about defending networks but about enabling business resilience and trust. By blending strategy with technical depth, it seems to prepare security leaders for the boardroom-level influence they now require. In an era where cybersecurity is a business risk, not just an IT issue, this perspective feels both timely and necessary.

Secure Your Business. Simplify Compliance. Gain Peace of Mind

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: CISO 3.0


Aug 17 2025

Benefits and drawbacks of using open-source models versus closed-source models under the AI Act

Category: AI,Information Securitydisc7 @ 1:36 pm

Objectives of EU AI Act is:

Harmonized rules for AI systems in the EU, prohibitions on certain AI practices, requirements for high
risk AI, transparency rules, market surveillance, and innovation support.

1. Overview: How the AI Act Treats Open-Source vs. Closed-Source Models

  • The EU AI Act (formalized in 2024) regulates AI systems using a risk-based framework that ranges from unacceptable to minimal risk. It also includes a specific layer for general-purpose AI (GPAI)—“foundation models” like large language models.
  • Open-source models enjoy limited exemptions, especially if:
    • They’re not high-risk,
    • Not unsafe or interacting directly with individuals,
    • Not monetized,
    • Or not deemed to present systemic risk.
  • Closed-source (proprietary) models don’t benefit from such leniency and must comply with all applicable obligations across risk categories.

2. Benefits of Open-Source Models under the AI Act

a) Greater Transparency & Documentation

  • Open-source code, weights, and architecture are accessible by default—aligning with transparency expectations (e.g., model cards, training data logs)—and often already publicly documented.
  • Independent auditing becomes more feasible through community visibility.
  • A Stanford study found open-source models tend to comply more readily with data and compute transparency requirements than closed-source alternatives.

b) Lower Compliance Burden (in Certain Cases)

  • Exemptions: Non-monetized open-source models that don’t pose systemic risk may dodge burdensome obligations like documentation or designated representatives.
  • For academic or purely scientific purposes, there’s additional leniency—even if models are open-source.

c) Encourages Innovation, Collaboration & Inclusion

  • Open-source democratizes AI access, reducing barriers for academia, startups, nonprofits, and regional players.
  • Wider collaboration speeds up innovation and enables localization (e.g., fine-tuning for local languages or use cases).
  • Diverse contributors help surface bias and ethical concerns, making models more inclusive.

3. Drawbacks of Open-Source under the AI Act

a) Disproportionate Regulatory Burden

  • The Act’s “one-size-fits-all” approach imposes heavy requirements (like ten-year documentation, third-party audits) even on decentralized, collectively developed models—raising feasibility concerns.
  • Who carries responsibility in distributed, open environments remains unclear.

b) Loopholes and Misuse Risks

  • The Act’s light treatment of non-monetized open-source models could be exploited by malicious actors to skirt regulations.
  • Open-source models can be modified or misused to generate disinformation, deepfakes, or hate content—without safeguards that closed systems enforce.

c) Still Subject to Core Obligations

  • Even under exemptions, open-source GPAI must still:
    • Disclose training content,
    • Respect EU copyright laws,
    • Possibly appoint authorized representatives if systemic risk is suspected.

d) Additional Practical & Legal Complications

  • Licensing: Some so-called “open-source” models carry restrictive terms (e.g., commercial restrictions, copyleft provisions) that may hinder compliance or downstream use.
  • Support disclaimers: Open-source licenses typically disclaim warranties—risking liability gaps.
  • Security vulnerabilities: Public availability of code may expose models to tampering or release of harmful versions.


4. Closed-Source Models: Benefits & Drawbacks

Benefits

  • Able to enforce usage restrictions, internal safety mechanisms, and fine-grained control over deployment—reducing misuse risk.
  • Clear compliance path: centralized providers can manage documentation, audits, and risk mitigation systematically.
  • Stable liability chain, with better alignment to legal frameworks.

Drawbacks

  • Less transparency: core workings are hidden, making audits and oversight harder.
  • Higher compliance burden: must meet all applicable obligations across risk categories without the possibility of exemptions.
  • Innovation lock-in: smaller players and researchers may face high entry barriers.

5. Synthesis: Choosing Between Open-Source and Closed-Source under the AI Act

DimensionOpen-SourceClosed-Source
Transparency & AuditingHigh—code, data, model accessibleLow—black box systems
Regulatory BurdenLower for non-monetized, low-risk models; heavy for complex, high-risk casesUniformly high, though manageable by central entities
Innovation & AccessibilityHigh—democratizes access, collaborationLimited—controlled by large orgs
Security & Misuse RiskHigher—modifiable, misuse easierLower—safeguarded, controlled deployment
Liability & AccountabilityDiffuse—decentralized contributors complicate oversightClear—central authority responsible

6. Final Thoughts

Under the EU AI Act, open-source AI is recognized and, in some respects, encouraged—but only under narrow, carefully circumscribed conditions. When models are non-monetized, low-risk, or aimed at scientific research, open-source opens up paths for innovation. The transparency and collaborative dynamics are strong virtues.

However, when open-source intersects with high risk, monetization, or systemic potential, the Act tightens its grip—subjecting models to many of the same obligations as proprietary ones. Worse, ambiguity in responsibility and enforcement may undermine both innovation and safety.

Conversely, closed-source models offer regulatory clarity, security, and control; but at the cost of transparency, higher compliance burden, and restricted access for smaller players.


TL;DR

  • Choose open-source if your goal is transparency, inclusivity, and innovation—so long as you keep your model non-monetized, transparently documented, and low-risk.
  • Choose closed-source when safety, regulatory oversight, and controlled deployment are paramount, especially in sensitive or high-risk applications.

Further reading on EU AI Act implications

https://www.barrons.com/articles/ai-tech-stocks-regulation-microsoft-google-amazon-meta-30424359?

https://apnews.com/article/a3df6a1a8789eea7fcd17bffc750e291?

The European Union flag stands inside the atrium at the European Council building in Brussels, June 17, 2024. (AP Photo/Omar Havana, file)

Agentic AI Security Risks: Why Enterprises Can’t Afford to Fly Blind

NIST Strengthens Digital Identity Security to Tackle AI-Driven Threats

Securing Agentic AI: Emerging Risks and Governance Imperatives

Building Trust with High-Risk AI: What Article 15 of the EU AI Act Means for Accuracy, Robustness & Cybersecurity

From Compliance to Confidence: How DISC LLC Delivers Strategic Cybersecurity Services That Scale

Expertise-in-Virtual-CISO-vCISO-Services-2Download

Secure Your Business. Simplify Compliance. Gain Peace of Mind

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: open-source models versus closed-source models under the AI Act


Aug 06 2025

Building Trust with High-Risk AI: What Article 15 of the EU AI Act Means for Accuracy, Robustness & Cybersecurity

Category: AI,Information Securitydisc7 @ 4:06 pm

As AI adoption accelerates, especially in regulated or high-impact sectors, the European Union is setting the bar for responsible development. Article 15 of the EU AI Act lays out clear obligations for providers of high-risk AI systems—focusing on accuracy, robustness, and cybersecurity throughout the AI system’s lifecycle. Here’s what that means in practice—and why it matters now more than ever.

1. Security and Reliability From Day One

The AI Act demands that high-risk AI systems be designed with integrity and resilience from the ground up. That means integrating controls for accuracy, robustness, and cybersecurity not only at deployment but throughout the entire lifecycle. It’s a shift from reactive patching to proactive engineering.

2. Accuracy Is a Design Requirement

Gone are the days of vague performance promises. Under Article 15, providers must define and document expected accuracy levels and metrics in the user instructions. This transparency helps users and regulators understand how the system should perform—and flags any deviation from those expectations.

3. Guarding Against Exploitation

AI systems must also be robust against manipulation, whether it’s malicious input, adversarial attacks, or system misuse. This includes protecting against changes to the AI’s behavior, outputs, or performance caused by vulnerabilities or unauthorized interference.

4. Taming Feedback Loops in Learning Systems

Some AI systems continue learning even after deployment. That’s powerful—but dangerous if not governed. Article 15 requires providers to minimize or eliminate harmful feedback loops, which could reinforce bias or lead to performance degradation over time.

5. Compliance Isn’t Optional—It’s Auditable

The Act calls for documented procedures that demonstrate compliance with accuracy, robustness, and security standards. This includes verifying third-party contributions to system development. Providers must be ready to show their work to market surveillance authorities (MSAs) on request.

6. Leverage the Cyber Resilience Act

If your high-risk AI system also falls under the scope of the EU Cyber Resilience Act (CRA), good news: meeting the CRA’s essential cybersecurity requirements can also satisfy the AI Act’s demands. Providers should assess the overlap and streamline their compliance strategies.

7. Don’t Forget the GDPR

When personal data is involved, Article 15 interacts directly with the GDPR—especially Articles 5(1)(d), 5(1)(f), and 32, which address accuracy and security. If your organization is already GDPR-compliant, you’re on the right track, but Article 15 still demands additional technical and operational precision.


Final Thought:

Article 15 raises the bar for how we build, deploy, and monitor high-risk AI systems. It doesn’t just aim to prevent failures—it pushes providers to deliver trustworthy, resilient, and secure AI from the start. For organizations that embrace this proactively, it’s not just about avoiding fines—it’s about building AI systems that earn trust and deliver long-term value.

EU AI Act concerning Risk Management Systems for High-Risk AI

Understanding the EU AI Act: A Risk-Based Framework for Trustworthy AI – Implications for U.S. Organizations

Interpretation of Ethical AI Deployment under the EU AI Act

Aligning with ISO 42001:2023 and/or the EU Artificial Intelligence (AI) Act

State of Agentic AI Security and Governance

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: Article 15, EU AI Act


Aug 06 2025

From Compliance to Confidence: How DISC LLC Delivers Strategic Cybersecurity Services That Scale

Category: Information Securitydisc7 @ 1:33 pm

Transforming Cybersecurity & Compliance into Strategic Strength

In an era of ever-tightening regulations and ever-evolving threats, Deura InfoSec Consulting (DISC LLC) stands out by turning compliance from a checkbox into a proactive asset.

🛡️ What We Offer: Core Services at a Glance

1. vCISO Services

Access seasoned CISO-level expertise—without the cost of a full-time executive. Our vCISO services provide strategic leadership, ongoing security guidance, executive reporting, and risk management aligned with your business needs.

2. Compliance & Certification Support

Whether you’re targeting ISO 27001, ISO 27701, ISO 42001, NIST, GDPR, SOC 2, HIPAA, or PCI DSS, DISC supports your entire journey—from assessments and gap analysis to policy creation, control implementation, and audit preparation.

3. Security Risk Assessments

Identify risks across infrastructure, cloud, vendors, and business-critical systems using frameworks such as MITRE ATT&CK (via CALDERA), with actionable risk scorecards and remediation roadmaps.

4. Risk‑based Strategic Planning

We bridge the gap from your current (“as‑is”) security state to your desired (“to‑be”) maturity level. Our process includes strategic roadmapping, metrics to measure progress, and embedding business-aligned security into operations.

5. Security Awareness & Training

Equip your workforce and leadership with tailored training programs—ranging from executive briefings to role-based education—in vital areas like governance, compliance, and emerging threats.

6. Penetration Testing & Tool Oversight

Using top-tier tools like Burp Suite Pro and OWASP ZAP, DISC uncovers vulnerabilities in web applications and APIs. These assessments are accompanied by remediation guidance and optional managed detection support.

7. At DISC LLC, we help organizations harness the power of data and artificial intelligence—responsibly. Our AIMS (Artificial Intelligence Management System) & Data Governance solutions are designed to reduce risk, ensure compliance, and build trust. We implement governance frameworks that align with ISO 27001, ISO 27701, ISO 42001, GDPR, EU AI ACT, HIPAA, and CCPA, supporting both data accuracy and AI accountability. From data classification policies to ethical AI guidelines, bias monitoring, and performance audits, our approach ensures your AI and data strategies are transparent, secure, and future-ready. By integrating AI and data governance, DISC empowers you to lead with confidence in a rapidly evolving digital world.


🔍 Why DISC Works

  • Fixed-fee, hands‑on approach: No bloated documents, just precise and efficient delivery aligned with your needs.
  • Expert-led services: With 20+ years in security and compliance, DISC’s consultants guide you at every stage.
  • Audit-ready processes: Leverage frameworks and tools like GRC platform to streamline compliance, reduce overhead, and stay audit-ready.
  • Tailored to SMBs & enterprises: From startups to established firms, DISC crafts solutions scalable to your size and skillset.


🚀 Ready to Elevate Your Security?

DISC LLC is more than a service provider—it’s your long-term advisor. Whether you’re combating cyber risk or scaling your compliance posture, our services deliver predictable value and empower you to make security a strategic advantage.

Get started today with a free consultation, including a one-hour session with a vCISO, to see where your organization stands—and where it needs to go.

Info@deurainfosec.com |   https://www.deurainfosec.com | 📞 (707) 998-5164

Secure Your Business. Simplify Compliance. Gain Peace of Mind

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security


Jul 30 2025

Shadow AI: The Hidden Threat Driving Data Breach Costs Higher

Category: AI,Information Securitydisc7 @ 9:17 am

1

IBM’s latest Cost of a Data Breach Report (2025) highlights a growing and costly issue: “shadow AI”—where employees use generative AI tools without IT oversight—is significantly raising breach expenses. Around 20% of organizations reported breaches tied to shadow AI, and those incidents carried an average $670,000 premium per breach, compared to firms with minimal or no shadow AI exposure IBM+Cybersecurity Dive.

The latest IBM/Ponemon Institute report reveals that the global average cost of a data breach fell by 9% in 2025, down to $4.44 million—the first decline in five years—mainly driven by faster breach identification and containment thanks to AI and automation. However, in the United States, breach costs surged 9%, reaching a record high of $10.22 million, attributed to higher regulatory fines, rising detection and escalation expenses, and slower AI governance adoption. Despite rapid AI deployment, many organizations lag in establishing oversight: about 63% have no AI governance policies, and some 87% lack AI risk mitigation processes, increasing exposure to vulnerabilities like shadow AI. Shadow AI–related breaches tend to cost more—adding roughly $200,000 per incident—and disproportionately involve compromised personally identifiable information and intellectual property. While AI is accelerating incident resolution—which for the first time dropped to an average of 241 days—the speed of adoption is creating a security oversight gap that could amplify long-term risks unless governance and audit practices catch up IBM.

2

Although only 13% of organizations surveyed reported breaches involving AI models or tools, a staggering 97% of those lacked proper AI access controls—showing that even a small number of incidents can have profound consequences when governance is poor IBM Newsroom.

3

When shadow AI–related breaches occurred, they disproportionately compromised critical data: personally identifiable information in 65% of cases and intellectual property in 40%, both higher than global averages for all breaches.

4

The absence of formal AI governance policies is striking. Nearly two‑thirds (63%) of breached organizations either don’t have AI governance in place or are still developing one. Even among those with policies, many lack approval workflows or audit processes for unsanctioned AI usage—fewer than half conduct regular audits, and 61% lack governance technologies.

5

Despite advances in AI‑driven security tools that help reduce detection and containment times (now averaging 241 days, a nine‑year low), the rapid, unchecked rollout of AI technologies is creating what IBM refers to as security debt, making organizations increasingly vulnerable over time.

6

Attackers are integrating AI into their playbooks as well: 16% of breaches studied involved use of AI tools—particularly for phishing schemes and deepfake impersonations, complicating detection and remediation efforts.

7

The financial toll remains steep. While the global average breach cost has dropped slightly to $4.44 million, US organizations now average a record $10.22 million per breach. In many cases, businesses reacted by raising prices—with nearly one‑third implementing hikes of 15% or more following a breach.

8

IBM recommends strengthening AI governance via root practices: access control, data classification, audit and approval workflows, employee training, collaboration between security and compliance teams, and use of AI‑powered security monitoring. Investing in these practices can help organizations adopt AI safely and responsibly IBM.


🧠 My Take

This report underscores how shadow AI isn’t just a budding IT curiosity—it’s a full-blown risk factor. The allure of convenient AI tools leads to shadow adoption, and without oversight, vulnerabilities compound rapidly. The financial and operational fallout can be severe, particularly when sensitive or proprietary data is exposed. While automation and AI-powered security tools are bringing detection times down, they can’t fully compensate for the lack of foundational governance.

Organizations must treat AI not as an optional upgrade, but as a core infrastructure requiring the same rigour: visibility, policy control, audits, and education. Otherwise, they risk building a house of cards: fast growth over fragile ground. The right blend of technology and policy isn’t optional—it’s essential to prevent shadow AI from becoming a shadow crisis.

The Invisible Threat: Shadow AI

Governance in The Age of Gen AI: A Director’s Handbook on Gen AI

Securing Generative AI : Protecting Your AI Systems from Emerging Threats

Understanding the EU AI Act: A Risk-Based Framework for Trustworthy AI – Implications for U.S. Organizations

What are the benefits of AI certification Like AICP by EXIN

Think Before You Share: The Hidden Privacy Costs of AI Convenience

The AI Readiness Gap: High Usage, Low Security

Mitigate and adapt with AICM (AI Controls Matrix)

DISC InfoSec’s earlier posts on the AI topic

Secure Your Business. Simplify Compliance. Gain Peace of Mind

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI Governance, Shadow AI


Next Page »