Aug 06 2025

IBM’s Five-Pillar Framework for Securing Generative AI: A Lifecycle-Based Approach to Risk Management

Category: AI | disc7 @ 7:39 am


IBM introduces a structured approach to securing generative AI by focusing on protection at each phase of the AI lifecycle. The framework emphasizes securing three critical elements: the data consumed by AI systems, the model itself (during development and training), and the usage environment (live inference). Two further pillars support these three: robust infrastructure controls, and governance mechanisms that oversee fairness, bias, and drift over time.


In the data collection and handling stage, risks include centralized repositories that grant broad access to intellectual property and personally identifiable information (PII). To mitigate threats like data exfiltration or misuse, IBM recommends rigorous access controls, encryption, and continuous risk assessments tailored to specific data types.
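
To make the data-stage controls concrete, here is a minimal sketch in Python (using the cryptography library; the key handling, role names, and function names are hypothetical) of field-level encryption for PII combined with a simple role-based access gate:

```python
# Minimal sketch of data-stage controls: field-level encryption plus a
# role-based access gate. Illustrative only; roles and names are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, load from a managed key store
cipher = Fernet(key)

ALLOWED_ROLES = {"data-steward", "ml-pipeline"}

def store_pii(value: str) -> bytes:
    """Encrypt a PII field before it lands in a shared data repository."""
    return cipher.encrypt(value.encode())

def read_pii(token: bytes, role: str) -> str:
    """Decrypt only for roles explicitly granted access."""
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"role {role!r} may not read PII")
    return cipher.decrypt(token).decode()

record = store_pii("jane.doe@example.com")
print(read_pii(record, "data-steward"))  # ok; any other role raises
```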


Next, during model development and training, the framework warns about threats such as data poisoning and the insertion of malicious code. It advises implementing secure development practices—scanning for vulnerabilities, enforcing access policies, and treating the model build process with the same rigor as secure software development.
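
One lightweight control in this spirit, sketched below under the assumption that training data lives in flat files, is an integrity manifest: hash the approved dataset once, then verify the hashes before every training run so silent tampering (one vector for data poisoning) is caught early. Paths and file patterns are illustrative:

```python
# Minimal sketch of a training-data integrity check: pin a SHA-256 manifest
# when data is approved, verify it before each training run.
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(data_dir: Path, manifest: Path) -> None:
    digests = {p.name: sha256(p) for p in sorted(data_dir.glob("*.csv"))}
    manifest.write_text(json.dumps(digests, indent=2))

def verify_manifest(data_dir: Path, manifest: Path) -> None:
    expected = json.loads(manifest.read_text())
    for name, digest in expected.items():
        if sha256(data_dir / name) != digest:
            raise RuntimeError(f"{name} changed since approval: possible tampering")
```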


When it comes to model inference and live deployment, organizations face risks like prompt injection, adversarial attacks, and unauthorized usage. IBM recommends real-time monitoring, anomaly detection, usage policies, and safeguards that validate inputs and outputs in live AI environments.
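
As a toy illustration of input/output validation (not IBM's tooling, and far shallower than a production guardrail), a pattern-based filter can reject the crudest prompt-injection attempts and screen responses before they leave the system. The patterns below are illustrative, not exhaustive:

```python
# Minimal sketch of inference-time validation. A deny-list like this catches
# only crude prompt-injection attempts; real deployments layer it with
# model-based classifiers and output filtering.
import re

INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system prompt|hidden instructions)", re.I),
]

def validate_prompt(prompt: str) -> str:
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt rejected: possible injection attempt")
    return prompt

def validate_output(text: str, banned_markers: list[str]) -> str:
    # e.g. block responses that echo system-prompt text or internal secrets
    for marker in banned_markers:
        if marker in text:
            return "[response withheld by output filter]"
    return text
```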


Beyond securing each phase of the pipeline, the framework emphasizes the importance of securing the underlying infrastructure—infrastructure-as-a-service, compute nodes, storage systems—so that large language models and associated applications operate in hardened, compliant environments.
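
As one small example of what an infrastructure posture check might look like on AWS (the framework itself is provider-agnostic), the sketch below uses boto3 to flag S3 buckets that lack a server-side encryption configuration. It assumes credentials are already configured:

```python
# Minimal sketch of an infrastructure posture check: list storage buckets
# with no server-side encryption configuration. AWS/boto3 is an assumption;
# the same idea applies to any cloud provider.
import boto3
from botocore.exceptions import ClientError

def unencrypted_buckets() -> list[str]:
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                flagged.append(name)
    return flagged
```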


Crucially, IBM insists on embedding strong AI governance: policies, oversight structures, and continuous monitoring to detect bias, drift, and compliance issues. Governance should integrate with established frameworks like the NIST AI Risk Management Framework and adapt alongside evolving regulations such as the EU AI Act.
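
Drift monitoring is one governance control that is easy to prototype. The sketch below, using synthetic stand-in data, compares a feature's live distribution against its training baseline with a two-sample Kolmogorov–Smirnov test from scipy; the alert threshold is illustrative:

```python
# Minimal sketch of a governance-stage drift check. The data here is
# synthetic; in practice, baseline comes from training-time feature values
# and live from recent production traffic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)  # stand-in for training distribution
live = rng.normal(0.4, 1.0, 5_000)      # stand-in for shifted production data

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:  # threshold is illustrative, tune to your tolerance
    print(f"drift detected (KS={stat:.3f}, p={p_value:.2e}): trigger review")
```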


Additionally, IBM’s broader work, including partnerships with AWS and offensive-security testing by its X‑Force Red team, has surfaced common gaps in security posture: many organizations prioritize innovation over security. Findings indicate that most active generative AI initiatives lack foundational controls across the five pillars: data, model, usage, infrastructure, and governance.


Opinion

IBM’s framework delivers a well-structured, holistic approach to the complex challenge of securing generative AI. By breaking security into discrete but interlinked pillars (data, model, usage, infrastructure, governance), it helps organizations methodically build defenses where vulnerabilities are most likely to emerge. It’s also valuable that IBM aligns its framework with broader models such as the NIST AI Risk Management Framework and incorporates continuous governance, which is essential in fast-moving AI environments.

That said, the real test lies in execution. Many enterprises still grapple with “shadow AI” (unsanctioned AI tools adopted by employees without approval), and IBM’s own recent breach research suggests that only around 3% of organizations studied have adequate AI access controls in place, even though shadow AI alone adds an average of roughly $670K to the cost of a breach. This gap between framework and reality underscores the need for cultural buy-in, investment in tooling, and staff training alongside technical controls.

All told, IBM’s Framework for Securing Generative AI is a strong starting point—especially when paired with governance, red teaming, infrastructure hardening, and awareness programs. But its impact will vary widely depending on how well organizations integrate its principles into everyday operations and security culture.

Generative AI, Cybersecurity, and Ethics

ISO 42001 Readiness: A 10-Step Guide to Responsible AI Governance

Aligning with ISO 42001:2023 and/or the EU Artificial Intelligence (AI) Act

The Strategic Synergy: ISO 27001 and ISO 42001 – A New Era in Governance

Clause 4 of ISO 42001: Understanding an Organization and Its Context and Why It Is Crucial to Get It Right

Think Before You Share: The Hidden Privacy Costs of AI Convenience

The AI Readiness Gap: High Usage, Low Security

Mitigate and adapt with AICM (AI Controls Matrix)

DISC InfoSec’s earlier posts on the AI topic


Tags: Generative AI Security, IBM's Five-Pillar Framework, Risk management


May 18 2025

Why GenAI SaaS is insecure and how to secure it

Category: AI, Cloud computing | disc7 @ 8:54 am

Many believe that Generative AI Software-as-a-Service (SaaS) tools, such as ChatGPT, are insecure because they train on user inputs and can retain data indefinitely. While these concerns are valid, there are ways to mitigate the risks, such as opting out of training, using enterprise versions, or implementing zero data retention (ZDR) policies. Self-hosting models carries its own challenges, however, such as cloud misconfigurations that can lead to data breaches.

The key to addressing AI security concerns is to adopt a balanced, risk-based approach that considers security, compliance, privacy, and business needs. It is crucial to avoid overcompensating for SaaS risks by inadvertently turning your organization into a data center company.

Another common myth is that organizations should start their AI program with security tools. While tools can be helpful, they should be implemented after establishing a solid foundation, such as maintaining an asset inventory, classifying data, and managing vendors.
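
To illustrate that “foundation first” point, a minimal AI asset inventory can be as simple as a structured record per AI system capturing owner, vendor, data classification, and retention terms. All fields and values below are hypothetical:

```python
# Minimal sketch of an AI asset inventory record: know what AI you run, who
# owns it, and what data it touches before layering security tooling on top.
from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str
    owner: str                # accountable business unit
    vendor: str               # e.g. "OpenAI", "self-hosted"
    data_classification: str  # e.g. "public", "internal", "PII"
    retention: str            # e.g. "zero-data-retention", "30 days"
    approved: bool = False

inventory = [
    AIAsset("support-chatbot", "Customer Ops", "OpenAI",
            "PII", "zero-data-retention", approved=True),
    AIAsset("code-assistant", "Engineering", "self-hosted",
            "internal", "n/a"),
]

unapproved = [a.name for a in inventory if not a.approved]
print(f"awaiting review: {unapproved}")  # -> ['code-assistant']
```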

Some organizations believe that once they have an AI governance committee, their work is done. However, this is a misconception. Committees can be helpful if structured correctly, with clear decision authority, an established risk appetite, and hard limits on response times.

If an AI governance committee turns into a debating club and cannot make decisions, it can hinder innovation. To avoid this, consider assigning AI risk management (but not ownership) to a single business unit before establishing a committee.

It is essential to re-evaluate your beliefs about AI governance if they are not serving your organization effectively. Common mistakes companies make in this area will be covered in a future post.

In short: GenAI SaaS is insecure by default because it can train on user inputs and retain data indefinitely, posing risks to data privacy and security. To secure it, organizations should adopt a balanced, risk-based approach that weighs security, compliance, privacy, and business needs, ideally under an AI management system (AIMS). Practical measures include opting out of data retention, using enterprise versions with enhanced security features, implementing zero data retention policies, or self-hosting models with proper cloud security configurations.
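
As one concrete, hedged example of such a measure, the sketch below redacts obvious PII from a prompt before it is sent to any GenAI SaaS endpoint. The regexes catch only simple patterns; a production deployment would use a dedicated PII-detection service:

```python
# Minimal sketch of a PII-redaction gate in front of a GenAI SaaS call.
# Patterns are illustrative, not a complete PII taxonomy.
import re

REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched PII with a labeled placeholder before it leaves."""
    for label, pattern in REDACTIONS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, SSN 123-45-6789, about her claim."))
# -> "Contact [EMAIL], SSN [SSN], about her claim."
```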

Generative AI Security: Theories and Practices

Step-by-Step: Build an Agent on AWS Bedrock

From Oversight to Override: Enforcing AI Safety Through Infrastructure

The Strategic Synergy: ISO 27001 and ISO 42001 – A New Era in Governance

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

DISC InfoSec’s earlier posts on the AI topic


Tags: GenAI, Generative AI Security, InsecureGenAI, saas