Apr 09 2025

NIST: AI/ML Security Still Falls Short

Category: AI, Cyber Attack, Cyber Security, Cyber Threats
disc7 @ 8:47 am

The U.S. National Institute of Standards and Technology (NIST) has raised concerns about the security vulnerabilities inherent in artificial intelligence (AI) systems. In a recent report, NIST emphasizes that there is currently no foolproof method for defending AI technologies against adversarial attacks, and it warns developers and users not to take vendor claims of absolute AI security at face value.

NIST’s research highlights several types of attacks that can compromise AI systems:

  • Evasion Attacks: Adversaries manipulate inputs to deceive AI models, leading to incorrect outputs.
  • Poisoning Attacks: Attackers corrupt training data, causing the AI system to learn incorrect behaviors.
  • Privacy Attacks: Sensitive information is extracted from AI models, potentially leading to data breaches.
  • Abuse Attacks: Legitimate sources of information are compromised to mislead the AI system’s operations.

NIST underscores that existing defenses against such attacks are insufficient and lack robust assurances. The agency calls on the broader tech community to develop more effective security measures to protect AI systems.

In response to these challenges, NIST has launched the Cybersecurity, Privacy, and AI Program. This initiative aims to support organizations in adapting their risk management strategies to address the evolving landscape of AI-related cybersecurity and privacy risks.

Overall, NIST’s findings serve as a cautionary reminder of the current limitations in AI security and the pressing need for continued research and development of robust defense mechanisms.

For further details, access the article here

While no AI system is fully immune, several practical strategies can reduce the risk of evasion, poisoning, privacy, and abuse attacks:


🔐 1. Evasion Attacks

(Manipulating inputs to fool the model)

  • Adversarial Training: Include adversarial examples in training data to improve robustness (a minimal sketch follows this list).
  • Input Validation: Use preprocessing techniques to sanitize or detect manipulated inputs.
  • Model Explainability: Apply tools like SHAP or LIME to understand decision logic and spot anomalies.
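For example, adversarial training can be prototyped in a few lines. The sketch below assumes a PyTorch image classifier; `model`, `optimizer`, and the `epsilon` value are placeholders you would supply from your own pipeline. It generates FGSM (Fast Gradient Sign Method) examples and mixes them into each training step:

```python
# Minimal sketch of adversarial training with FGSM; model/optimizer/epsilon are assumptions.
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, epsilon=0.03):
    """Generate adversarial examples by stepping along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss, then clamp to a valid range.
    return torch.clamp(x_adv + epsilon * x_adv.grad.sign(), 0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """Train on a mix of clean and FGSM-perturbed inputs."""
    model.train()
    x_adv = fgsm_examples(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

FGSM is only the simplest adversary; stronger attacks such as PGD call for correspondingly stronger training loops, so treat this as a starting point rather than a robust defense.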


🧪 2. Poisoning Attacks

(Injecting malicious data into training sets)

  • Data Provenance & Validation: Track and vet data sources to prevent tampered datasets.
  • Anomaly Detection: Use statistical analysis to spot outliers in the training set (see the sketch after this list).
  • Robust Learning Algorithms: Choose models that are more resistant to noise and outliers (e.g., RANSAC, robust SVM).
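As a concrete illustration of the anomaly-detection bullet, the sketch below uses scikit-learn's IsolationForest to drop suspicious rows before training. The contamination rate and the synthetic data are illustrative assumptions, not values from the NIST report:

```python
# Minimal sketch of pre-training data sanitization with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

def filter_suspected_poison(X_train, y_train, contamination=0.02):
    """Drop training rows the outlier detector flags as anomalous."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    labels = detector.fit_predict(X_train)   # +1 = inlier, -1 = outlier
    mask = labels == 1
    return X_train[mask], y_train[mask]

# Example usage with synthetic data and a small cluster of "poisoned" rows:
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] > 0).astype(int)
X[:20] += 8.0                                # simulate tampered records
X_clean, y_clean = filter_suspected_poison(X, y)
print(f"kept {len(X_clean)} of {len(X)} rows")
```

Outlier filtering only catches poison that looks statistically unusual; subtle, targeted poisoning also needs provenance tracking and vetted data sources.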


🔍 3. Privacy Attacks

(Extracting sensitive data from the model)

  • Differential Privacy: Add noise during training or inference to protect individual data points (see the sketch after this list).
  • Federated Learning: Train models across multiple devices without centralizing data.
  • Access Controls: Limit who can query or download the model.
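To make the differential-privacy bullet concrete, here is a minimal sketch of the Laplace mechanism applied to an aggregate statistic (a released mean), rather than a full DP-SGD training loop. The epsilon and clipping bounds are illustrative assumptions:

```python
# Minimal sketch of the Laplace mechanism for a differentially private mean.
import numpy as np

def dp_mean(values, lower, upper, epsilon=1.0, rng=None):
    """Release a differentially private mean of `values` clipped to [lower, upper]."""
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    # Sensitivity of the clipped mean when one record changes is (upper - lower) / n.
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

ages = np.array([34, 29, 41, 53, 38, 47, 31, 60])
print(dp_mean(ages, lower=18, upper=90, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; protecting a full training run requires a DP training procedure (e.g., DP-SGD) rather than noising a single statistic.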


🎭 4. Abuse Attacks

(Misusing models in unintended ways)

  • Usage Monitoring: Log and audit usage patterns for unusual behavior.
  • Rate Limiting: Throttle access to prevent large-scale probing or abuse (see the sketch after this list).
  • Red Teaming: Regularly simulate attacks to identify weaknesses.
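Rate limiting, referenced above, can be as simple as a token bucket in front of the model endpoint. The sketch below is illustrative; in practice this logic usually lives in an API gateway, and rejected requests would feed the usage-monitoring logs:

```python
# Minimal sketch of per-client rate limiting with a token bucket.
import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate_per_sec=5.0, burst=20):
        self.rate = rate_per_sec                     # tokens refilled per second
        self.burst = burst                           # maximum bucket size
        self.tokens = defaultdict(lambda: burst)     # per-client token balance
        self.last = defaultdict(time.monotonic)      # per-client last-seen timestamp

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client_id]
        self.last[client_id] = now
        self.tokens[client_id] = min(self.burst, self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1.0:
            self.tokens[client_id] -= 1.0
            return True
        return False   # throttle this request and log it for review

limiter = TokenBucket(rate_per_sec=2.0, burst=5)
print(limiter.allow("client-42"))
```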


📘 Bonus Best Practices

  • Threat Modeling: Apply STRIDE or similar frameworks focused on AI.
  • Model Watermarking: Identify ownership and detect unauthorized use.
  • Continuous Monitoring & Patching: Keep models and pipelines under review and updated.

STRIDE is a threat modeling methodology that categorizes security threats into six types: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege.
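One way to put STRIDE to work on an AI system is a simple checklist mapping its six categories onto the four NIST attack classes. The mapping below is an illustrative starting point, not part of the NIST report:

```python
# Illustrative STRIDE checklist for an ML pipeline; the mapping is a judgment call.
STRIDE_FOR_ML = {
    "Spoofing":               ["Evasion: crafted inputs impersonate legitimate data"],
    "Tampering":              ["Poisoning: corrupted training data or model weights"],
    "Repudiation":            ["Missing audit logs for model queries and updates"],
    "Information Disclosure": ["Privacy: membership inference, model inversion"],
    "Denial of Service":      ["Abuse: resource-exhausting or flooding queries"],
    "Elevation of Privilege": ["Abuse: manipulated inputs driving unauthorized actions"],
}

for category, threats in STRIDE_FOR_ML.items():
    print(f"{category}: {'; '.join(threats)}")
```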

DISC InfoSec’s earlier post on the AI topic

Trust Me – ISO 42001 AI Management System

 Adversarial AI Attacks, Mitigations, and Defense Strategies: A cybersecurity professional’s guide to AI attacks, threat modeling, and securing AI with MLSecOps

What You Are Not Told About ChatGPT: Key Insights into the Inner Workings of ChatGPT & How to Get the Most Out of It

Digital Ethics in the Age of AI – Navigating the ethical frontier today and beyond

Artificial intelligence – Ethical, social, and security impacts for the present and the future


Tags: AI security, ML Security
