In a recent interview with Help Net Security, Brooke Johnson, Chief Legal Counsel and SVP of HR and Security at Ivanti, emphasized the critical role of legal departments in leading AI governance within organizations. She highlighted that unmanaged use of generative AI (GenAI) tools can introduce significant risks, including data privacy violations, algorithmic bias, and ethical concerns, particularly in sensitive areas like recruitment where flawed training data can lead to discriminatory outcomes.
Johnson advocates for a cross-functional approach to AI governance, involving collaboration among legal, HR, IT, and security teams. This strategy aims to create clear, enforceable policies that enable responsible innovation without stifling progress. At Ivanti, such collaboration has led to the establishment of an AI Governance Council (AIGC), which oversees the safe and ethical use of AI tools by reviewing applications and providing guidance on acceptable use cases.
Recognizing that a significant number of employees use GenAI tools without informing management, Johnson suggests that organizations should proactively assume AI is already in use. Legal teams should lead in defining safe usage parameters and provide practical training to employees, explaining the security implications and reasons behind certain restrictions.
To ensure AI policies are effectively operationalized, Johnson recommends conducting assessments to identify current AI tool usage, developing clear and pragmatic policies, and offering vetted, secure platforms to reduce reliance on unsanctioned alternatives. She stresses that AI governance should be treated as a dynamic process, with policies evolving alongside technological advances and emerging threats, maintained through ongoing collaboration across functions and geographies.
Securing AI in the Enterprise: A Step-by-Step Guide
Establish AI Security Ownership: Organizations must define clear ownership and accountability for AI security. Leadership should decide whether AI governance falls under a cross-functional committee, IT/security teams, or individual business units. Establishing policies, defining decision-making authority, and ensuring alignment across departments are key steps in successfully managing AI security from the start.
Identify and Mitigate AI Risks: AI introduces unique risks, including regulatory compliance challenges, data privacy vulnerabilities, and algorithmic biases. Organizations must evaluate legal obligations (such as GDPR, HIPAA, and the EU AI Act), implement strong data protection measures, and address AI transparency concerns. Risk mitigation strategies should include continuous monitoring, security testing, clear governance policies, and incident response plans.
Adopt AI Security Best Practices: Businesses should follow security best practices, such as starting with small AI implementations, maintaining human oversight, establishing technical guardrails, and deploying continuous monitoring. Strong cybersecurity measures, including encryption, access controls, and regular security audits, are essential. Additionally, comprehensive employee training programs help ensure responsible AI usage.
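To make "technical guardrails" concrete, here is a minimal sketch, assuming a Python-based gateway, of a guardrail that redacts likely PII from prompts before they reach an external GenAI service. The regex patterns and the send_to_genai stub are illustrative placeholders, not any specific vendor's API.

```python
import re

# Illustrative patterns only; a production guardrail would rely on a
# vetted DLP library with far broader coverage (names, addresses, keys).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with typed placeholders before the prompt leaves the org."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

def send_to_genai(prompt: str) -> str:
    """Stand-in for the organization's sanctioned GenAI client."""
    return f"(model response to: {prompt!r})"

print(send_to_genai(redact("Summarize the file for jane.doe@example.com, SSN 123-45-6789")))
```

Routing all GenAI traffic through one sanctioned gateway of this kind also gives security teams a single point for logging, monitoring, and policy enforcement.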
Assess AI Needs and Set Measurable Goals: AI implementation should align with business objectives, with clear milestones set for six months, one year, and beyond. Organizations should define success using key performance indicators (KPIs) such as revenue impact, efficiency improvements, and compliance adherence. Both quantitative and qualitative metrics should guide AI investments and decision-making.
Evaluate AI Tools and Security Measures: When selecting AI tools, organizations must assess security, accuracy, scalability, usability, and compliance. AI solutions should have strong data protection mechanisms, clear ROI, and effective customization options. Evaluating AI tools with a structured approach ensures they meet security and business requirements.
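One common structured approach is a weighted decision matrix. The sketch below scores candidate tools against the criteria named above; the weights, candidates, and ratings are assumptions that each organization would calibrate to its own requirements.

```python
# Assumed evaluation weights (must sum to 1.0); tune to your risk appetite.
WEIGHTS = {
    "security": 0.30, "accuracy": 0.25, "scalability": 0.15,
    "usability": 0.15, "compliance": 0.15,
}

def score_tool(ratings: dict[str, float]) -> float:
    """Ratings are 0-5 per criterion; returns a weighted score on the same scale."""
    return sum(WEIGHTS[c] * ratings.get(c, 0.0) for c in WEIGHTS)

candidates = {  # hypothetical vendor ratings gathered during evaluation
    "Tool A": {"security": 4, "accuracy": 5, "scalability": 3, "usability": 4, "compliance": 4},
    "Tool B": {"security": 3, "accuracy": 4, "scalability": 5, "usability": 5, "compliance": 3},
}

for name in sorted(candidates, key=lambda n: score_tool(candidates[n]), reverse=True):
    print(f"{name}: {score_tool(candidates[name]):.2f}")
```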
Purchase and Implement AI Securely: Before deploying AI solutions, businesses must ask key questions about effectiveness, performance, security, scalability, and compliance. Reviewing trial options, pricing models, and regulatory alignment (such as GDPR or CCPA compliance) is critical to selecting the right AI tool. AI security policies should be integrated into the organization's broader cybersecurity framework.
Launch an AI Pilot Program with Security in Mind: Organizations should begin with a controlled AI pilot to assess risks, validate performance, and ensure compliance before full deployment. This includes securing high-quality training data, implementing robust authentication controls, continuously monitoring performance, and gathering user feedback. Clear documentation and risk management strategies will help refine AI adoption in a secure and scalable manner.
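As a minimal sketch of two of the pilot controls mentioned above, authentication and monitoring, the snippet below gates a pilot inference endpoint behind API keys and writes an audit log entry per request. Flask, the in-memory key store, and the endpoint shape are illustrative assumptions, not a prescribed design.

```python
import hmac
import logging
import time

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
logging.basicConfig(filename="ai_pilot_audit.log", level=logging.INFO)

# Hypothetical key store; a real pilot would use a secrets manager.
VALID_KEYS = {"pilot-team-1": "k3y-aaaa", "pilot-team-2": "k3y-bbbb"}

@app.post("/v1/predict")
def predict():
    key = request.headers.get("X-API-Key", "")
    team = next((t for t, k in VALID_KEYS.items() if hmac.compare_digest(k, key)), None)
    if team is None:
        abort(401)  # reject unauthenticated callers
    payload = request.get_json(force=True)
    # Audit trail: who called, when, and how much data they sent.
    logging.info("ts=%.0f team=%s input_len=%d", time.time(), team, len(str(payload)))
    return jsonify({"result": "stub"})  # placeholder for the piloted model

if __name__ == "__main__":
    app.run(port=5001)
```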
By following these steps, enterprises can securely integrate AI, protect sensitive data, and ensure regulatory compliance while maximizing AI’s potential.
The IBM blog on AI risk management discusses how organizations can identify, mitigate, and address potential risks associated with AI technologies. AI risk management is a subset of AI governance, focusing specifically on preventing and addressing threats to AI systems. The blog outlines various types of risks—such as data, model, operational, and ethical/legal risks—and emphasizes the importance of frameworks like the NIST AI Risk Management Framework to ensure ethical, secure, and reliable AI deployment. Effective AI risk management enhances security, decision-making, regulatory compliance, and trust in AI systems.
AI risk management can help close the gap between rapid AI adoption and effective oversight, empowering organizations to harness AI systems' full potential without compromising AI ethics or security.
Understanding the risks associated with AI systems
Like other types of security risk, AI risk can be understood as a measure of how likely a potential AI-related threat is to affect an organization and how much damage that threat would do.
While each AI model and use case is different, the risks of AI generally fall into four buckets, illustrated by the risk-register sketch that follows this list:
Data risks
Model risks
Operational risks
Ethical and legal risks
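Combining the likelihood-times-impact definition above with these four buckets yields a simple risk register. The sketch below is illustrative; the entries and estimates are assumptions, not a prescribed taxonomy.

```python
from dataclasses import dataclass

CATEGORIES = ("data", "model", "operational", "ethical_legal")

@dataclass
class AIRisk:
    name: str
    category: str      # one of CATEGORIES
    likelihood: float  # 0.0-1.0, estimated probability of occurrence
    impact: int        # 1-5, estimated damage if it occurs

    @property
    def score(self) -> float:
        # Mirrors the definition above: risk = likelihood x impact.
        return self.likelihood * self.impact

register = [  # hypothetical entries for illustration
    AIRisk("Training-data poisoning", "data", 0.2, 5),
    AIRisk("Prompt injection against a GenAI app", "model", 0.6, 4),
    AIRisk("Model service outage", "operational", 0.3, 3),
    AIRisk("Biased hiring recommendations", "ethical_legal", 0.4, 5),
]

for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.score:.1f}  [{r.category}] {r.name}")
```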
The NIST AI Risk Management Framework (AI RMF)
In January 2023, the National Institute of Standards and Technology (NIST) published the AI Risk Management Framework (AI RMF) to provide a structured approach to managing AI risks. The NIST AI RMF has since become a benchmark for AI risk management.
The AI RMF’s primary goal is to help organizations design, develop, deploy and use AI systems in a way that effectively manages risks and promotes trustworthy, responsible AI practices.
Developed in collaboration with the public and private sectors, the AI RMF is entirely voluntary and applicable across any company, industry or geography.
The framework is divided into two parts. Part 1 offers an overview of the risks and characteristics of trustworthy AI systems. Part 2, the AI RMF Core, outlines four functions to help organizations address AI system risks:
Govern: Creating an organizational culture of AI risk management
Map: Framing AI risks in specific business contexts
Measure: Assessing, analyzing and tracking AI risks
Manage: Prioritizing and acting on AI risks
Predictive analytics offers significant benefits in cybersecurity by allowing organizations to foresee and mitigate potential threats before they occur. Using methods such as statistical analysis, machine learning, and behavioral analysis, predictive analytics can identify future risks and vulnerabilities. While challenges like data quality, model complexity, and evolving threats exist, employing best practices and suitable tools can improve its effectiveness in detecting cyber threats and managing risks. As cyber threats evolve, predictive analytics will be vital in proactively managing risks and protecting organizational information assets.
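As a small illustration of the machine-learning side of predictive analytics, the sketch below fits an unsupervised anomaly detector to baseline session features and flags an outlier. The features, the simulated data, and the choice of scikit-learn's IsolationForest are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated baseline sessions: [login_hour, MB_sent, failed_logins]
baseline = np.column_stack([
    rng.normal(13, 3, 500),   # mostly daytime logins
    rng.normal(20, 5, 500),   # typical data volumes
    rng.poisson(0.2, 500),    # auth failures are rare
])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

suspicious = np.array([[3, 400, 7]])  # 3 a.m. login, large upload, many failures
print(model.predict(suspicious))      # -1 flags an anomaly, 1 means normal
```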
Trust Me: ISO 42001 AI Management System is the first book about ISO 42001, the most important global standard for AI management systems. The standard is groundbreaking: as autonomous AI decision making becomes more prevalent, it stands to have an even greater impact than ISO 9001.
Why Is AI Important?
Autonomous AI decision making is all around us, in places we take for granted such as Siri and Alexa. AI is transforming how we live and work, and it is becoming critical that we understand and trust this prevalent technology:
“Artificial intelligence systems have become increasingly prevalent in everyday life and enterprise settings, and they’re now often being used to support human decision making. These systems have grown increasingly complex and efficient, and AI holds the promise of uncovering valuable insights across a wide range of applications. But broad adoption of AI systems will require humans to trust their output.” (Trustworthy AI, IBM website, 2024)
In a significant move, the U.S. Department of Defense, through the National Security Agency's Artificial Intelligence Security Center and partner agencies, has released a comprehensive guide for organizations deploying and operating AI systems designed and developed by another party.
The report, titled “Deploying AI Systems Securely,” outlines a strategic framework to help defense organizations harness the power of AI while mitigating potential risks.
The report was authored by the U.S. National Security Agency’s Artificial Intelligence Security Center (AISC), the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security (CCCS), the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom’s National Cyber Security Centre (NCSC).
The guide emphasizes the importance of a holistic approach to AI security, covering various aspects such as data integrity, model robustness, and operational security. It outlines a six-step process for secure AI deployment:
Understand the AI system and its context
Identify and assess risks
Develop a security plan
Implement security controls (one representative control is sketched after this list)
Monitor and maintain the AI system
Continuously improve security practices
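To ground one of these steps, the sketch below shows a single control that could fall under "implement security controls": verifying a model artifact against a pinned SHA-256 digest before loading it, a guard against the model-corruption risks discussed below. The path and digest are hypothetical.

```python
import hashlib
from pathlib import Path

# Hypothetical digest, recorded when the vetted model was approved.
APPROVED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def verify_model(path: str, expected_sha256: str) -> None:
    """Refuse to load a model artifact whose hash differs from the approved digest."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError(f"Model integrity check failed for {path}: got {digest}")

# Example (hypothetical artifact): raises unless the file matches the pinned digest.
# verify_model("models/classifier-v3.onnx", APPROVED_SHA256)
```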
"The future is here: AI systems are widely available and accessible. But with new systems come new risks. Along with partners, we're releasing a new set of best practices to help your org stay secure. Read "Deploying AI Systems Securely" now: https://t.co/2FzgkeRVfU" — Cybersecurity and Infrastructure Security Agency (@CISAgov), April 15, 2024
The report acknowledges the growing importance of AI in modern warfare while highlighting the unique security challenges that come with integrating these advanced technologies: "As the military increasingly relies on AI-powered systems, it is crucial that we address the potential vulnerabilities and ensure the integrity of these critical assets."
Some of the key security concerns outlined in the document include:
Adversarial AI attacks that could manipulate AI models to produce erroneous outputs
Data poisoning and model corruption during the training process
Insider threats and unauthorized access to sensitive AI systems
Lack of transparency and explainability in AI-driven decision-making
A Comprehensive Security Framework
The report proposes a comprehensive security framework for deploying AI systems within the military to address these challenges. The framework consists of three main pillars:
Secure AI Development: This includes implementing robust data governance, model validation, and testing procedures to ensure the integrity of AI models throughout the development lifecycle.
Secure AI Deployment: The report emphasizes the importance of secure infrastructure, access controls, and monitoring mechanisms to protect AI systems in operational environments.
Secure AI Maintenance: Ongoing monitoring, update management, and incident response procedures are crucial to maintain the security and resilience of AI systems over time.
Key Recommendations
The report provides detailed guidance on securely deploying AI systems, emphasizing careful setup and configuration and the application of traditional IT security best practices. Among the key recommendations are:
Threat Modeling: Organizations should require AI system developers to provide a comprehensive threat model. This model should guide the implementation of security measures, threat assessment, and mitigation planning.
Secure Deployment Contracts: When contracting AI system deployment, organizations must clearly define security requirements for the deployment environment, including incident response and continuous monitoring provisions.
Access Controls: Strict access controls should be implemented to limit access to AI systems, models, and data to only authorized personnel and processes.
Continuous Monitoring: AI systems must be continuously monitored for security issues, with established processes for incident response, patching, and system updates; a minimal drift-monitoring sketch follows this list.
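As a minimal sketch of what continuous monitoring can mean in practice, the snippet below computes a population-stability-style drift score between training-time inputs and live traffic, alerting past a common rule-of-thumb threshold. The simulated data, bin count, and 0.2 threshold are assumptions.

```python
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two 1-D samples (higher = more drift)."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    base_frac = np.clip(np.histogram(baseline, edges)[0] / len(baseline), 1e-6, None)
    live_frac = np.clip(np.histogram(live, edges)[0] / len(live), 1e-6, None)
    return float(np.sum((live_frac - base_frac) * np.log(live_frac / base_frac)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 5000)  # feature distribution seen at training time
live = rng.normal(0.8, 1.3, 1000)      # shifted production traffic

if psi(baseline, live) > 0.2:          # common rule-of-thumb drift threshold
    print("ALERT: input drift detected; trigger incident response and review")
```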
Collaboration and Continuous Improvement
The report also stresses the importance of cross-functional collaboration and continuous improvement in AI security: "Securing AI systems is not a one-time effort; it requires a sustained, collaborative approach involving experts from various domains."
The Department of Defense plans to work closely with industry partners, academic institutions, and other government agencies to further refine and implement the security framework outlined in the report.
Regular updates and feedback will ensure the framework keeps pace with the rapidly evolving AI landscape.
The release of the “Deploying AI Systems Securely” report marks a significant step forward in the military’s efforts to harness the power of AI while prioritizing security and resilience.
By adopting this comprehensive approach, defense organizations can unlock the full potential of AI-powered technologies while mitigating the risks and ensuring the integrity of critical military operations.