Apr 09 2025

NIST: AI/ML Security Still Falls Short

Category: AI,Cyber Attack,cyber security,Cyber Threatsdisc7 @ 8:47 am

​The U.S. National Institute of Standards and Technology (NIST) has raised concerns about the security vulnerabilities inherent in artificial intelligence (AI) systems. In a recent report, NIST emphasizes that there is currently no foolproof method to defend AI technologies from adversarial attacks. The institute warns against accepting vendor claims of absolute AI security, noting that developers and users should be cautious of such assurances. ​

NIST’s research highlights several types of attacks that can compromise AI systems:​

  • Evasion Attacks: These occur when adversaries manipulate inputs to deceive AI models, leading to incorrect outputs.​
  • Poisoning Attacks: In these cases, attackers corrupt training data, causing the AI system to learn incorrect behaviors.​
  • Privacy Attacks: These involve extracting sensitive information from AI models, potentially leading to data breaches.​
  • Abuse Attacks: Here, legitimate sources of information are compromised to mislead the AI system’s operations. ​

NIST underscores that existing defenses against such attacks are insufficient and lack robust assurances. The agency calls on the broader tech community to develop more effective security measures to protect AI systems. ​

In response to these challenges, NIST has launched the Cybersecurity, Privacy, and AI Program. This initiative aims to support organizations in adapting their risk management strategies to address the evolving landscape of AI-related cybersecurity and privacy risks. ​

Overall, NIST’s findings serve as a cautionary reminder of the current limitations in AI security and the pressing need for continued research and development of robust defense mechanisms.

For further details, access the article here

While no AI system is fully immune, several practical strategies can reduce the risk of evasion, poisoning, privacy, and abuse attacks:


🔐 1. Evasion Attacks

(Manipulating inputs to fool the model)

  • Adversarial Training: Include adversarial examples in training data to improve robustness.
  • Input Validation: Use preprocessing techniques to sanitize or detect manipulated inputs.
  • Model Explainability: Apply tools like SHAP or LIME to understand decision logic and spot anomalies.


🧪 2. Poisoning Attacks

(Injecting malicious data into training sets)

  • Data Provenance & Validation: Track and vet data sources to prevent tampered datasets.
  • Anomaly Detection: Use statistical analysis to spot outliers in the training set.
  • Robust Learning Algorithms: Choose models that are more resistant to noise and outliers (e.g., RANSAC, robust SVM).


🔍 3. Privacy Attacks

(Extracting sensitive data from the model)

  • Differential Privacy: Add noise during training or inference to protect individual data points.
  • Federated Learning: Train models across multiple devices without centralizing data.
  • Access Controls: Limit who can query or download the model.


🎭 4. Abuse Attacks

(Misusing models in unintended ways)

  • Usage Monitoring: Log and audit usage patterns for unusual behavior.
  • Rate Limiting: Throttle access to prevent large-scale probing or abuse.
  • Red Teaming: Regularly simulate attacks to identify weaknesses.


📘 Bonus Best Practices

  • Threat Modeling: Apply STRIDE or similar frameworks focused on AI.
  • Model Watermarking: Identify ownership and detect unauthorized use.
  • Continuous Monitoring & Patching: Keep models and pipelines under review and updated.

STRIDE stands for a threat modeling methodology that categorizes security threats into six types: SpoofingTamperingRepudiationInformation DisclosureDenial of Service, and Elevation of Privilege

DISC InfoSec’s earlier post on the AI topic

Trust Me – ISO 42001 AI Management System

 Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional’s guide to AI attacks, threat modeling, and securing AI with MLSecOps

What You Are Not Told About ChatGPT: Key Insights into the Inner Workings of ChatGPT & How to Get the Most Out of It

Digital Ethics in the Age of AI – Navigating the ethical frontier today and beyond

Artificial intelligence – Ethical, social, and security impacts for the present and the future

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: AI security, ML Security


Mar 25 2025

The Developer’s Playbook for Large Language Model Security Review

Category: AI,Information Security,Security playbookdisc7 @ 12:06 pm

In “The Developer’s Playbook for Large Language Model Security,” Steve Wilson, Chief Product Officer at Exabeam, addresses the growing integration of large language models (LLMs) into various industries and the accompanying security challenges. Leveraging over two decades of experience in AI, cybersecurity, and cloud computing, Wilson offers a practical guide for security professionals to navigate the complex landscape of LLM vulnerabilities.

A notable aspect of the book is its alignment with the OWASP Top 10 for LLM Applications project, which Wilson leads. This connection ensures that the security risks discussed are vetted by a global network of experts. The playbook delves into critical threats such as data leakage, prompt injection attacks, and supply chain vulnerabilities, providing actionable mitigation strategies for each.

Wilson emphasizes the unique security challenges posed by LLMs, which differ from traditional web applications due to new trust boundaries and attack surfaces. The book offers defensive strategies, including runtime safeguards and input validation techniques, to harden LLM-based systems. Real-world case studies illustrate how attackers exploit AI-driven applications, enhancing the practical value of the guidance provided.

Structured to serve both as an introduction and a reference guide, “The Developer’s Playbook for Large Language Model Security” is an essential resource for security professionals tasked with safeguarding AI-driven applications. Its technical depth, practical strategies, and real-world examples make it a timely and relevant addition to the field of AI security.

Sources

The Developer’s Playbook for Large Language Model Security: Building Secure AI Applications

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: AI security, Large Language Model


Jan 29 2025

Basic Principle to Enterprise AI Security

Category: AIdisc7 @ 12:24 pm

Securing AI in the Enterprise: A Step-by-Step Guide

  1. Establish AI Security Ownership
    Organizations must define clear ownership and accountability for AI security. Leadership should decide whether AI governance falls under a cross-functional committee, IT/security teams, or individual business units. Establishing policies, defining decision-making authority, and ensuring alignment across departments are key steps in successfully managing AI security from the start.
  2. Identify and Mitigate AI Risks
    AI introduces unique risks, including regulatory compliance challenges, data privacy vulnerabilities, and algorithmic biases. Organizations must evaluate legal obligations (such as GDPR, HIPAA, and the EU AI Act), implement strong data protection measures, and address AI transparency concerns. Risk mitigation strategies should include continuous monitoring, security testing, clear governance policies, and incident response plans.
  3. Adopt AI Security Best Practices
    Businesses should follow security best practices, such as starting with small AI implementations, maintaining human oversight, establishing technical guardrails, and deploying continuous monitoring. Strong cybersecurity measures—such as encryption, access controls, and regular security audits—are essential. Additionally, comprehensive employee training programs help ensure responsible AI usage.
  4. Assess AI Needs and Set Measurable Goals
    AI implementation should align with business objectives, with clear milestones set for six months, one year, and beyond. Organizations should define success using key performance indicators (KPIs) such as revenue impact, efficiency improvements, and compliance adherence. Both quantitative and qualitative metrics should guide AI investments and decision-making.
  5. Evaluate AI Tools and Security Measures
    When selecting AI tools, organizations must assess security, accuracy, scalability, usability, and compliance. AI solutions should have strong data protection mechanisms, clear ROI, and effective customization options. Evaluating AI tools using a structured approach ensures they meet security and business requirements.
  6. Purchase and Implement AI Securely
    Before deploying AI solutions, businesses must ask key questions about effectiveness, performance, security, scalability, and compliance. Reviewing trial options, pricing models, and regulatory alignment (such as GDPR or CCPA compliance) is critical to selecting the right AI tool. AI security policies should be integrated into the organization’s broader cybersecurity framework.
  7. Launch an AI Pilot Program with Security in Mind
    Organizations should begin with a controlled AI pilot to assess risks, validate performance, and ensure compliance before full deployment. This includes securing high-quality training data, implementing robust authentication controls, continuously monitoring performance, and gathering user feedback. Clear documentation and risk management strategies will help refine AI adoption in a secure and scalable manner.

By following these steps, enterprises can securely integrate AI, protect sensitive data, and ensure regulatory compliance while maximizing AI’s potential.

Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional’s guide to AI attacks, threat modeling, and securing AI with MLSecOps

New regulations and AI hacks drive cyber security changes in 2025

Threat modeling your generative AI workload to evaluate security risk

How CISOs Can Drive the Adoption of Responsible AI Practices

Hackers will use machine learning to launch attacks

To fight AI-generated malware, focus on cybersecurity fundamentals

4 ways AI is transforming audit, risk and compliance

Artificial Intelligence Hacks

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: AI Governance, AI privacy, AI Risk Management, AI security


Oct 03 2024

AI security bubble already springing leaks

Category: AIdisc7 @ 1:17 pm

AI security bubble already springing leaks

The article highlights how the AI boom, especially in cybersecurity, is already showing signs of strain. Many AI startups, despite initial hype, are facing financial challenges, as they lack the funds to develop large language models (LLMs) independently. Larger companies are taking advantage by acquiring or licensing the technologies from these smaller firms at a bargain.

AI is just one piece of the broader cybersecurity puzzle, but it isn’t a silver bullet. Issues like system updates and cloud vulnerabilities remain critical, and AI-only security solutions may struggle without more comprehensive approaches.

Some efforts to set benchmarks for LLMs, like NIST, are underway, helping to establish standards in areas such as automated exploits and offensive security. However, AI startups face increasing difficulty competing with big players who have the resources to scale.

For more information, you can visit here

Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional’s guide to AI attacks, threat modeling, and securing AI with MLSecOps

Could APIs be the undoing of AI?

Previous posts on AI

AI Security risk assessment quiz

Implementing BS ISO/IEC 42001 will demonstrate that you’re developing AI responsibly

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: Adversarial AI Attacks, AI security


Sep 09 2024

AI cybersecurity needs to be as multi-layered as the system it’s protecting

The article emphasizes that AI cybersecurity must be multi-layered, like the systems it protects. Cybercriminals increasingly exploit large language models (LLMs) with attacks such as data poisoning, jailbreaks, and model extraction. To counter these threats, organizations must implement security strategies during the design, development, deployment, and operational phases of AI systems. Effective measures include data sanitization, cryptographic checks, adversarial input detection, and continuous testing. A holistic approach is needed to protect against growing AI-related cyber risks.

For more details, visit the full article here

Benefits and Concerns of AI in Data Security and Privacy

Predictive analytics provides substantial benefits in cybersecurity by helping organizations forecast and mitigate threats before they arise. Using statistical analysis, machine learning, and behavioral insights, it highlights potential risks and vulnerabilities. Despite hurdles such as data quality, model complexity, and the dynamic nature of threats, adopting best practices and tools enhances its efficacy in threat detection and response. As cyber risks evolve, predictive analytics will be essential for proactive risk management and the protection of organizational data assets.

AI raises concerns about data privacy and security. Ensuring that AI tools comply with privacy regulations and protect sensitive information.

AI systems must adhere to privacy laws and regulations, such as GDPR, CPRA to protect individuals’ information. Compliance ensures ethical data handling practices.

Implementing robust security measures to protect data (data governance) from unauthorized access and breaches is critical. Data protection practices safeguard sensitive information and maintain trust.

1. Predictive Analytics in Cybersecurity

Predictive analytics offers substantial benefits by helping organizations anticipate and prevent cyber threats before they occur. It leverages statistical models, machine learning, and behavioral analysis to identify potential risks. These insights enable proactive measures, such as threat mitigation and vulnerability management, ensuring an organization’s defenses are always one step ahead.

2. AI and Data Privacy

AI systems raise concerns regarding data privacy and security, especially as they process sensitive information. Ensuring compliance with privacy regulations like GDPR and CPRA is crucial. Organizations must prioritize safeguarding personal data while using AI tools to maintain trust and avoid legal ramifications.

3. Security and Data Governance

Robust security measures are essential to protect data from breaches and unauthorized access. Implementing effective data governance ensures that sensitive information is managed, stored, and processed securely, thus maintaining organizational integrity and preventing potential data-related crises.

Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional’s guide to AI attacks, threat modeling, and securing AI with MLSecOps

Data Governance: The Definitive Guide: People, Processes, and Tools to Operationalize Data Trustworthiness

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot

Tags: AI attacks, AI security, Data Governance


Jan 22 2024

AI AND SECURITY: ARE THEY AIDING EACH OTHER OR CREATING MORE ISSUES? EXPLORING THE COMPLEX RELATIONSHIP IN TECHNOLOGY

Category: AI,cyber securitydisc7 @ 12:13 pm

Artificial Intelligence (AI) has arisen as a wildly disruptive technology across many industries. As AI models continue to improve, more industries are sure to be disrupted and affected. One industry that is already feeling the effects of AI is digital security. The use of this new technology has opened up new avenues of protecting data, but it has also caused some concerns about its ethicality and effectiveness when compared with what we will refer to as traditional or established security practices.

This article will touch on the ways that this new tech is affecting already established practices, what new practices are arising, and whether or not they are safe and ethical.

HOW DOES AI AFFECT ALREADY ESTABLISHED SECURITY PRACTICES?

It is a fair statement to make that AI is still a nascent technology. Most experts agree that it is far from reaching its full potential, yet even so, it has still been able to disrupt many industries and practices. In terms of already established security practices, AI is providing operators with the opportunity to analyze huge amounts of data at incredible speed and with impressive accuracy. Identifying patterns and detecting anomalies is easy for AI to do, and incredibly useful for most traditional data security practices. 

Previously these systems would rely solely on human operators to perform the data analyses, which can prove time-consuming and would be prone to errors. Now, with AI help, human operators need only understand the refined data the AI is providing them and act on it.

IN WHAT WAYS CAN AI BE USED TO BOLSTER AND IMPROVE EXISTING SECURITY MEASURES?

AI can be used in several other ways to improve security measures. In terms of access protection, AI-driven facial recognition and other forms of biometric security can easily provide a relatively foolproof access protection solution. Using biometric access can eliminate passwords, which are often a weak link in data security.

AI’s ability to sort through large amounts of data means that it can be very effective in detecting and preventing cyber threats. An AI-supported network security program could, with relatively little oversight, analyze network traffic, identify vulnerabilities, and proactively defend against any incoming attacks. 

THE DIFFICULTIES IN UPDATING EXISTING SECURITY SYSTEMS WITH AI SOLUTIONS

The most pressing difficulty is that some old systems are simply not compatible with AI solutions. Security systems designed and built to be operated solely by humans are often not able to be retrofitted with AI algorithms, which means that any upgrades necessitate a complete, and likely expensive, overhaul of the security systems. 

One industry that has been quick to embrace AI-powered security systems is the online gambling industry. For those who are interested in seeing what AI-driven security can look like, visiting a casino online and investigating its security protocols will give you an idea of what is possible. Having an industry that has been an early adoption of such a disruptive technology can help other industries learn what to do and what not to do. In many cases, online casinos staged entire overhauls of their security suites to incorporate AI solutions, rather than trying to incorporate new tech, with older non-compatible security technology.

Another important factor in the difficulty of incorporating AI systems is that it takes a very large amount of data to properly train an AI algorithm. Thankfully, other companies are doing this work, and it should be possible to buy an already trained AI, fit to purpose. All that remains is trusting that the trainers did their due diligence and that the AI will be effective.

EFFECTIVENESS OF AI-DRIVEN SECURITY SYSTEMS

AI-driven security systems are, for the most part, lauded as being effective. With faster threat detection and response times quicker than humanly possible, the advantage of using AI for data security is clear.

AI has also proven resilient in terms of adapting to new threats. AI has an inherent ability to learn, which means that as new threats are developed and new vulnerabilities emerge, a well-built AI will be able to learn and eventually respond to new threats just as effectively as old ones.

It has been suggested that AI systems must completely replace traditional data security solutions shortly. Part of the reason for this is not just their inherent effectiveness, but there is an anticipation that incoming threats will also be using AI. Better to fight fire with fire.

IS USING AI FOR SECURITY DANGEROUS?

The short answer is no, the long answer is no, but. The main concern when using AI security measures with little human input is that they could generate false positives or false negatives. AI is not infallible, and despite being able to process huge amounts of data, it can still get confused.

It could also be possible for the AI security system to itself be attacked and become a liability. If an attack were to target and inject malicious code into the AI system, it could see a breakdown in its effectiveness which would potentially allow multiple breaches.

The best remedy for both of these concerns is likely to ensure that there is still an alert human component to the security system. By ensuring that well-trained individuals are monitoring the AI systems, the dangers of false positives or attacks on the AI system are reduced greatly.

ARE THERE LEGITIMATE ETHICAL CONCERNS WHEN AI IS USED FOR SECURITY?

Yes. The main ethical concern relating to AI when used for security is that the algorithm could have an inherent bias. This can occur if the data used for the training of the AI is itself biased or incomplete in some way. 

Another important ethical concern is that AI security systems are known to sort through personal data to do their job, and if this data were to be accessed or misused, privacy rights would be compromised.

Many AI systems also have a lack of transparency and accountability, which compounds the problem of the AI algorithm’s potential for bias. If an AI is concluding that a human operator cannot understand the reasoning, the AI system must be held suspect.

CONCLUSION

AI could be a great boon to security systems and is likely an inevitable and necessary upgrade. The inability of human operators to combat AI threats alone seems to suggest its necessity. Coupled with its ability to analyze and sort through mountains of data and adapt to threats as they develop, AI has a bright future in the security industry.

However, AI-driven security systems must be overseen by trained human operators who understand the complexities and weaknesses that AI brings to their systems.

Must Learn AI Security

Artificial Intelligence (AI) Governance and Cyber-Security: A beginner’s handbook on securing and governing AI systems

InfoSec tools | InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory

Tags: AI security, Artificial Intelligence (AI) Governance, Must Learn AI Security