Nov 19 2024

Threat modeling your generative AI workload to evaluate security risk

Category: AI,Risk Assessmentdisc7 @ 8:40 am

AWS emphasizes the importance of threat modeling for securing generative AI workloads, focusing on balancing risk management and business outcomes. A robust threat model is essential across the AI lifecycle stages, including design, deployment, and operations. Risks specific to generative AI, such as model poisoning and data leakage, need proactive mitigation, with organizations tailoring risk tolerance to business needs. Regular testing for vulnerabilities, like malicious prompts, ensures resilience against evolving threats.

Generative AI applications follow a structured lifecycle, from identifying business objectives to monitoring deployed models. Security considerations should be integral from the start, with measures like synthetic threat simulations during testing. For applications on AWS, leveraging its security tools, such as Amazon Bedrock and OpenSearch, helps enforce role-based access controls and prevent unauthorized data exposure.

AWS promotes building secure AI solutions on its cloud, which offers over 300 security services. Customers can utilize AWS infrastructure’s compliance and privacy frameworks while tailoring controls to organizational needs. For instance, techniques like Retrieval-Augmented Generation ensure sensitive data is redacted before interaction with foundational models, minimizing risks.

Threat modeling is described as a collaborative process involving diverse roles—business stakeholders, developers, security experts, and adversarial thinkers. Consistency in approach and alignment with development workflows (e.g., Agile) ensures scalability and integration. Using existing tools for collaboration and issue tracking reduces friction, making threat modeling a standard step akin to unit testing.

Organizations are urged to align security practices with business priorities while maintaining flexibility. Regular audits and updates to models and controls help adapt to the dynamic AI threat landscape. AWS provides reference architectures and security matrices to guide organizations in implementing these best practices efficiently.

Threat composer threat statement builder

You can write and document these possible threats to your application in the form of threat statements. Threat statements are a way to maintain consistency and conciseness when you document your threat. At AWS, we adhere to a threat grammar which follows the syntax:

[threat source] with [prerequisites] can [threat action] which leads to [threat impact], negatively impacting [impacted assets].

This threat grammar structure helps you to maintain consistency and allows you to iteratively write useful threat statements. As shown in Figure 2, Threat Composer provides you with this structure for new threat statements and includes examples to assist you.

You can read the full article here

Proactive governance is a continuous process of risk and threat identification, analysis and remediation. In addition, it also includes proactively updating policies, standards and procedures in response to emerging threats or regulatory changes.

OWASP updated 2025 Top 10 Risks for Large Language Models (LLMs), a crucial resource for developers, security teams, and organizations working with AI.

How CISOs Can Drive the Adoption of Responsible AI Practices

The CISO’s Guide to Securing Artificial Intelligence

AI in Cyber Insurance: Risk Assessments and Coverage Decisions

Hackers will use machine learning to launch attacks

To fight AI-generated malware, focus on cybersecurity fundamentals

4 ways AI is transforming audit, risk and compliance

AI security bubble already springing leaks

Could APIs be the undoing of AI?

The Rise of AI Bots: Understanding Their Impact on Internet Security

How to Address AI Security Risks With ISO 27001

Comprehensive vCISO Services

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: LLM, OWASP, Threat modeling


Oct 15 2022

STRIDE covers threats to the CIA

Category: Information Security,Threat ModelingDISC @ 12:53 pm

I’ve been meaning to talk more about what I actually do, which is help the teams within Microsoft who are threat modeling (for our boxed software) to do their jobs better.  Better means faster, cheaper or more effectively.  There are good reasons to optimize for different points on that spectrum (of better/faster/cheaper) at different times in different products.   One of the things that I’ve learned is that we ask a lot of developers, testers, and PMs here.  They all have some exposure to security, but terms that I’ve been using for years are often new to them.

Larry Osterman is a longtime MS veteran, currently working in Windows audio.  He’s been a threat modeling advocate for years, and has been blogging a lot about our new processes, and describes in great detail the STRIDE per element process.   His recent posts are “Threat Modeling, Once Again,” “Threat modeling again. Drawing the diagram,” “Threat Modeling Again: STRIDE,” “Threat modeling again, STRIDE mitigations,” “Threat modeling again, what does STRIDE have to do with threat modeling,” “Threat modeling again, STRIDE per element,” “Threat modeling again, threat modeling playsound.”

I wanted to chime in and offer up this handy chart that we use.  It’s part of how we teach people to go from a diagram to a set of threats.  We used to ask them to brainstorm, and have discovered that that works a lot better with some structure.

Source:

Threat Modeling for security

Tags: STRIDE Chart, Threat modeling