Jul 02 2025

 ISO/IEC 42001:2023 – from establishing to maintain an AI management system

Category: AIdisc7 @ 12:06 pm

AI businesses are at risk due to growing cyber threats, regulatory pressure, and ethical concerns. They often process vast amounts of sensitive data, making them prime targets for breaches and data misuse. Malicious actors can exploit AI systems through model manipulation, adversarial inputs, or unauthorized access. Additionally, lack of standardized governance and compliance frameworks exposes them to legal and reputational damage. As AI adoption accelerates, so do the risks.

AI businesses are at risk because they often handle large volumes of sensitive data, rely on complex algorithms that may be vulnerable to manipulation, and operate in a rapidly evolving regulatory landscape. Threats include data breaches, model poisoning, IP theft, bias in decision-making, and misuse of AI tools by attackers. Additionally, unclear accountability and lack of standardized AI security practices increase their exposure to legal, reputational, and operational risks.

Why it matters

It matters because the integrity, security, and trustworthiness of AI systems directly impact business reputation, customer trust, and regulatory compliance. A breach or misuse of AI can lead to financial loss, legal penalties, and harm to users. As AI becomes more embedded in critical decision-making—like healthcare, finance, and security—the risks grow more severe. Ensuring responsible and secure AI isn’t just good practice—it’s essential for long-term success and societal trust.

To reduce risks in AI businesses, we can:

  1. Implement strong governance with AIMS – Define clear accountability, policies, and oversight for AI development and use.
  2. Secure data and models – Encrypt sensitive data, restrict access, and monitor for tampering or misuse.
  3. Conduct risk assessments – Regularly evaluate threats, vulnerabilities, and compliance gaps in AI systems.
  4. Ensure transparency and fairness – Use explainable AI and audit algorithms for bias or unintended consequences.
  5. Stay compliant – Align with evolving regulations like GDPR, NIST AI RMF, or the EU AI Act.
  6. Train teams – Educate employees on AI ethics, security best practices, and safe use of generative tools.

Proactive risk management builds trust, protects assets, and positions AI businesses for sustainable growth.

 ISO/IEC 42001:2023 – from establishing to maintain an AI management system (AIMS)

BSI ISO 31000 is standard for any organization seeking risk management guidance

ISO/IEC 27001 and ISO/IEC 42001, both standards address risk and management systems, but with different focuses. ISO/IEC 27001 is centered on information security—protecting data confidentiality, integrity, and availability—while ISO/IEC 42001 is the first standard designed specifically for managing artificial intelligence systems responsibly. ISO/IEC 42001 includes considerations like AI-specific risks, ethical concerns, transparency, and human oversight, which are not fully addressed in ISO 27001. Organizations working with AI should not rely solely on traditional information security controls.

While ISO/IEC 27001 remains critical for securing data, ISO/IEC 42001 complements it by addressing broader governance and accountability issues unique to AI. The article suggests that companies developing or deploying AI should integrate both standards to build trust and meet growing stakeholder and regulatory expectations. Applying ISO 42001 can help demonstrate responsible AI practices, ensure explainability, and mitigate unintended consequences, positioning organizations to lead in a more regulated AI landscape.

AI Act & ISO 42001 Gap Analysis Tool

Agentic AI: Navigating Risks and Security Challenges

Artificial Intelligence: The Next Battlefield in Cybersecurity

AI and The Future of Cybersecurity: Navigating the New Digital Battlefield

“Whether you’re a technology professional, policymaker, academic, or simply a curious reader, this book will arm you with the knowledge to navigate the complex intersection of AI, security, and society.”

AI Governance Is a Boardroom Imperative—The SEC Just Raised the Stakes on AI Hype

How AI Is Transforming the Cybersecurity Leadership Playbook

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AIMS, ISO 42001, ISO/IEC 42001


Jul 02 2025

Emerging AI Security and Privacy Challenges and Risks

Several posts published recently discuss AI security and privacy, highlighting different perspectives and concerns. Here’s a summary of the most prominent themes and posts:

Emerging Concerns and Risks:

  • Growing Anxiety around AI Data Privacy: A recent survey found that a significant majority of Americans (91%) are concerned about social media platforms using their data to train AI models, with 69% aware of this practice.
  • AI-Powered Cyber Threats on the Rise: AI is increasingly being used to generate sophisticated phishing attacks and malware, making it harder to distinguish between legitimate and malicious content.
  • Gap between AI Adoption and Security Measures: Many organizations are quickly adopting AI but lag in implementing necessary security controls, creating a major vulnerability for data leaks and compliance issues.
  • Deepfakes and Impersonation Scams: The use of AI in creating realistic deepfakes is fueling a surge in impersonation scams, increasing privacy risks.
  • Opaque AI Models and Bias: The “black box” nature of some AI models makes it difficult to understand how they make decisions, raising concerns about potential bias and discrimination. 

Regulatory Developments:

  • Increasing Regulatory Scrutiny: Governments worldwide are focusing on regulating AI, with the EU AI Act setting a risk-based framework and China implementing comprehensive regulations for generative AI.
  • Focus on Data Privacy and User Consent: New regulations emphasize data minimization, purpose limitation, explicit user consent for data collection and processing, and requirements for data deletion upon request. 

Best Practices and Mitigation Strategies:

  • Robust Data Governance: Organizations must establish clear data governance frameworks, including data inventories, provenance tracking, and access controls.
  • Privacy by Design: Integrating privacy considerations from the initial stages of AI system development is crucial.
  • Utilizing Privacy-Preserving Techniques: Employing techniques like differential privacy, federated learning, and synthetic data generation can enhance data protection.
  • Continuous Monitoring and Threat Detection: Implementing tools for continuous monitoring, anomaly detection, and security audits helps identify and address potential threats.
  • Employee Training: Educating employees about AI-specific privacy risks and best practices is essential for building a security-conscious culture. 

Specific Mentions:

  • NSA’s CSI Guidance: The National Security Agency (NSA) released joint guidance on AI data security, outlining best practices for organizations.
  • Stanford’s 2025 AI Index Report: This report highlighted a significant increase in AI-related privacy and security incidents, emphasizing the need for stronger governance frameworks.
  • DeepSeek AI App Risks: Experts raised concerns about the DeepSeek AI app, citing potential security and privacy vulnerabilities. 

Based on current trends and recent articles, it’s evident that AI security and privacy are top-of-mind concerns for individuals, organizations, and governments alike. The focus is on implementing strong data governance, adopting privacy-preserving techniques, and adapting to evolving regulatory landscapes. 

The rapid rise of AI has introduced new cyber threats, as bad actors increasingly exploit AI tools to enhance phishing, social engineering, and malware attacks. Generative AI makes it easier to craft convincing deepfakes, automate hacking tasks, and create realistic fake identities at scale. At the same time, the use of AI in security tools also raises concerns about overreliance and potential vulnerabilities in AI models themselves. As AI capabilities grow, so does the urgency for organizations to strengthen AI governance, improve employee awareness, and adapt cybersecurity strategies to meet these evolving risks.

There is a lack of comprehensive federal security and privacy regulations in the U.S., but violations of international standards often lead to substantial penalties abroad for U.S. organizations. Penalties imposed abroad effectively become a cost of doing business for U.S. organizations.

Meta has faced dozens of fines and settlements across multiple jurisdictions, with at least a dozen significant penalties totaling tens of billions of dollars/euros cumulatively.

Artificial intelligence (AI) and large language models (LLMs) emerging as the top concern for security leaders. For the first time, AI, including tools such as LLMs, has overtaken ransomware as the most pressing issue.

AI-Driven Security: Enhancing Large Language Models and Cybersecurity: Large Language Models (LLMs) Security

AI Security Essentials: Strategies for Securing Artificial Intelligence Systems with the NIST AI Risk Management Framework (Artificial Intelligence (AI) Security)

ISO 42001 Readiness: A 10-Step Guide to Responsible AI Governance

AI Act & ISO 42001 Gap Analysis Tool

Agentic AI: Navigating Risks and Security Challenges

Artificial Intelligence: The Next Battlefield in Cybersecurity

AI and The Future of Cybersecurity: Navigating the New Digital Battlefield

“Whether you’re a technology professional, policymaker, academic, or simply a curious reader, this book will arm you with the knowledge to navigate the complex intersection of AI, security, and society.”

AI Governance Is a Boardroom Imperative—The SEC Just Raised the Stakes on AI Hype

How AI Is Transforming the Cybersecurity Leadership Playbook

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI privacy, AI Security Essentials, AI Security Risks, AI-Driven Security


Jul 01 2025

ISO 42001 Readiness: A 10-Step Guide to Responsible AI Governance

Category: AI,ISO 27k,ISO 42001disc7 @ 10:51 am

The ISO 42001 readiness checklist structured into ten key sections, followed by my feedback at the end:


1. Context & Scope
Identify internal and external factors affecting AI use, clarify stakeholder requirements, and define the scope of your AI Management System (AIMS)

2. Leadership & Governance
Secure executive sponsorship, assign AIMS responsibilities, establish an ethics‐driven AI policy, and communicate roles and accountability clearly

3. Planning
Perform a gap analysis to benchmark current state, conduct a risk and opportunity assessment, set measurable AI objectives, and integrate risk practices throughout the AI lifecycle.

4. Support & Resources
Dedicate resources for AIMS, create training around AI ethics, safety, and governance, raise awareness, establish communication protocols, and maintain documentation.

5. Operational Controls
Outline stages of the AI lifecycle (design to monitoring), conduct risk assessments (bias, safety, legal), ensure transparency and explainability, maintain data quality and privacy, and implement incident response.

6. Change Management
Implement structured change control—assessing proposed AI modifications, conducting ethical and feasibility reviews, cross‐functional governance, staged rollouts, and post‐implementation audits.

7. Performance Evaluation
Monitor AIMS effectiveness using KPIs, conduct internal audits, and hold management reviews to validate performance and compliance.

8. Nonconformity & Corrective Action
Identify and document nonconformities, implement corrective measures, review their efficacy, and update the AIMS accordingly.

9. Certification Preparation
Collect evidence for internal audits, address gaps, assemble required documentation (including SoA), choose an accredited certification body, and finalize pre‐audit preparations .

10. External Audit & Continuous Improvement
Engage auditors, facilitate assessments, resolve audit findings, publicly share certification results, and embed continuous improvement in AIMS operations.


📝 Feedback

  • Comprehensive but heavy: The checklist covers every facet of AI governance—from initial scoping and leadership engagement to external audits and continuous improvement.
  • Aligns well with ISO 27001: Many controls are familiar to ISMS practitioners, making ISO 42001 a viable extension.
  • Resource-intensive: Expect demands on personnel, training, documentation, and executive involvement.
  • Change management focus is smart: The dedication to handling AI updates (design, rollout, monitoring) is a notable strength.
  • Documentation is key: Templates like Statement of Applicability and impact assessment forms (e.g., AISIA) significantly streamline preparation.
  • Recommendation: Prioritize gap analysis early, leverage existing ISMS frameworks, and allocate clear roles—this positions you well for a smooth transition to certification readiness.

Overall, ISO 42001 readiness is achievable by taking a methodical, risk-based, and well-resourced approach. Let me know if you’d like templates or help mapping this to your current ISMS.

AI Act & ISO 42001 Gap Analysis Tool

Agentic AI: Navigating Risks and Security Challenges

Artificial Intelligence: The Next Battlefield in Cybersecurity

AI and The Future of Cybersecurity: Navigating the New Digital Battlefield

“Whether you’re a technology professional, policymaker, academic, or simply a curious reader, this book will arm you with the knowledge to navigate the complex intersection of AI, security, and society.”

AI Governance Is a Boardroom Imperative—The SEC Just Raised the Stakes on AI Hype

How AI Is Transforming the Cybersecurity Leadership Playbook

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: ISO 42001 Readiness


Jun 30 2025

Why AI agents could be the next insider threat

Category: AI,Risk Assessment,Security Risk Assessmentdisc7 @ 5:11 pm

1. Invisible, Over‑Privileged Agents
Help Net Security highlights how AI agents—autonomous software acting on behalf of users—are increasingly embedded in enterprise systems without proper oversight. They often receive excessive permissions, operate unnoticed, and remain outside traditional identity governance controls

2. Critical Risks in Healthcare
Arun Shrestha from BeyondID emphasizes the healthcare sector’s vulnerability. AI agents there handle Protected Health Information (PHI) and system access, increasing risks to patient privacy, safety, and regulatory compliance (e.g., HIPAA)

3. Identity Blind Spots
Research shows many firms lack clarity about which AI agents have access to critical systems. AI agents can impersonate users or take unauthorized actions—yet these “non‑human identities” are seldom treated as significant security threats.

4. Growing Threat from Impersonation
TechRepublic’s data indicates only roughly 30% of US organizations map AI agent access, and 37% express concern over agents posing as users. In healthcare, up to 61% report experiencing attacks involving AI agents

5. Five Mitigation Steps
Shrestha outlines five key defenses: (1) inventory AI agents, (2) enforce least privilege, (3) monitor their actions, (4) integrate them into identity governance processes, and (5) establish human oversight—ensuring no agent operates unchecked.

6. Broader Context
This video builds on earlier insights about securing agentic AI, such as monitoring, prompt‑injection protection, and privilege scoping. The core call: treat AI agents like any high-risk insider.


📝 Feedback (7th paragraph):
This adeptly brings attention to a critical and often overlooked risk: AI agents as non‑human insiders. The healthcare case strengthens the urgency, yet adding quantitative data—such as what percentage of enterprises currently enforce least privilege on agents—would provide stronger impact. Explaining how to align these steps with existing frameworks like ISO 27001 or NIST would add practical value. Overall, it raises awareness and offers actionable controls, but would benefit from deeper technical guidance and benchmarks to empower concrete implementation.

Source Help Net security: Why AI agents could be the next insider threat

Agentic AI: Navigating Risks and Security Challenges

Artificial Intelligence: The Next Battlefield in Cybersecurity

AI and The Future of Cybersecurity: Navigating the New Digital Battlefield

“Whether you’re a technology professional, policymaker, academic, or simply a curious reader, this book will arm you with the knowledge to navigate the complex intersection of AI, security, and society.”

AI Governance Is a Boardroom Imperative—The SEC Just Raised the Stakes on AI Hype

How AI Is Transforming the Cybersecurity Leadership Playbook

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI Agents, Insider Threat


Jun 30 2025

Artificial Intelligence: The Next Battlefield in Cybersecurity

Category: AI,cyber securitydisc7 @ 8:56 am

Artificial Intelligence (AI) stands as a paradox in the cybersecurity landscape. While it empowers attackers with tools to launch faster, more convincing scams, it also offers defenders unmatched capabilities—if used strategically.

1. AI: A Dual-Edged Sword
The post emphasizes AI’s paradox in cybersecurity—it empowers attackers to launch sophisticated assaults while offering defenders potent tools to counteract those very threats

2. Rising Threats from Adversarial AI
AI emerging risks, such as data poisoning and adversarial inputs that can subtly mislead or manipulate AI systems deployed for defense

3. Secure AI Lifecycle Practices
To mitigate these threats, the article recommends implementing security across the entire AI lifecycle—covering design, development, deployment, and continual monitoring

4. Regulatory and Framework Alignment
It points out the importance of adhering to standards like ISO and NIST, as well as upcoming regulations around AI safety, to ensure both compliance and security .

5. Human-AI Synergy
A key insight is blending AI with human oversight/processes, such as threat modeling and red teaming, to maximize AI’s effectiveness while maintaining accountability

6. Continuous Adaptation and Education

Modern social engineering attacks have evolved beyond basic phishing emails. Today, they may come as deepfake videos of executives, convincingly realistic invoices, or well-timed scams exploiting current events or behavioral patterns.

The sophistication of these AI-powered attacks has rendered traditional cybersecurity tools inadequate. Defenders can no longer rely solely on static rules and conventional detection methods.

To stay ahead, organizations must counter AI threats with AI-driven defenses. This means deploying systems that can analyze behavioral patterns, verify identity authenticity, and detect subtle anomalies in real time.

Forward-thinking security teams are embedding AI into critical areas like endpoint protection, authentication, and threat detection. These adaptive systems provide proactive security rather than reactive fixes.

Ultimately, the goal is not to fear AI but to outsmart the adversaries who use it. By mastering and leveraging the same tools, defenders can shift the balance of power.

🧠 Case Study: AI-Generated Deepfake Voice Scam — $35 Million Heist

In 2023, a multinational company in the UK fell victim to a highly sophisticated AI-driven voice cloning attack. Fraudsters used deepfake audio to impersonate the company’s CEO, directing a senior executive to authorize a $35 million transfer to a fake supplier account. The cloned voice was realistic enough to bypass suspicion, especially because the attackers timed the call during a period when the CEO was known to be traveling.

This attack exploited AI-based social engineering and psychological trust cues, bypassing traditional cybersecurity defenses such as spam filters and endpoint protection.

Defense Lesson:
To prevent such attacks, organizations are now adopting AI-enabled voice biometrics, real-time anomaly detection, and multi-factor human-in-the-loop verification for high-value transactions. Some are also training employees to identify subtle behavioral or contextual red flags, even when the source seems authentic.

In early 2023, a multinational company in Hong Kong lost over $25 million after employees were tricked by a deepfake video call featuring AI-generated replicas of senior executives. The attackers used AI to mimic voices and appearances convincingly enough to authorize fraudulent transfers—highlighting how far social engineering has advanced with AI.

Source: [CNN Business, Feb 2024 – “Scammers used deepfake video call to steal millions”]

This example reinforces the urgency of integrating AI into threat detection and identity verification systems, showing how traditional security tools are no longer sufficient against such deception.

AI and The Future of Cybersecurity: Navigating the New Digital Battlefield

“Whether you’re a technology professional, policymaker, academic, or simply a curious reader, this book will arm you with the knowledge to navigate the complex intersection of AI, security, and society.”

Digital Ethics in the Age of AI – Navigating the ethical frontier today and beyond

AI Governance Is a Boardroom Imperative—The SEC Just Raised the Stakes on AI Hype

How AI Is Transforming the Cybersecurity Leadership Playbook

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI and Security, artificial intelligence, Digital Battlefield, Digital Ethics, Ethical Frontier


Jun 25 2025

AI Governance Is a Boardroom Imperative—The SEC Just Raised the Stakes on AI Hype

Category: AI,IT Governancedisc7 @ 7:18 am

The SEC has charged a major tech company for deceiving investors by exaggerating its use of AI—highlighting that the falsehood was about AI itself, not just product features. This signals a shift: AI governance has now become a boardroom-level issue, and many organizations are unprepared.

Advice for CISOs and execs:

  1. Be audit-ready—any AI claims must be verifiable.
  2. Involve GRC early—AI governance is about managing risk, enforcing controls, and ensuring transparency.
  3. Educate your board—they don’t need to understand algorithms, but they must grasp the associated risks and mitigation plans.

If your current AI strategy is nothing more than a slide deck and hope, it’s time to build something real.

AI Washing

The Securities and Exchange Commission (SEC) has been actively pursuing actions against companies for misleading statements about their use of Artificial Intelligence (AI), a practice often referred to as “AI washing”. 

Here are some examples of recent SEC actions in this area:

  • Presto Automation: The SEC charged Presto Automation for making misleading statements about its AI-powered voice technology used for drive-thru order taking. Presto allegedly failed to disclose that it was using a third party’s AI technology, not its own, and also misrepresented the extent of human involvement required for the product to function.
  • Delphia and Global Predictions: These two investment advisers were charged with making false and misleading statements about their use of AI in their investment processes. The SEC found that they either didn’t have the AI capabilities they claimed or didn’t use them to the extent they advertised.
  • Nate, Inc.: The founder of Nate, Inc. was charged by both the SEC and the DOJ for allegedly misleading investors about the company’s AI-powered app, claiming it automated online purchases when they were primarily processed manually by human contractors. 

Key takeaways from these cases and SEC guidance:

  • Transparency and Accuracy: Companies need to ensure their AI-related disclosures are accurate and avoid making vague or exaggerated claims.
  • Distinguish Capabilities: It’s important to clearly distinguish between current AI capabilities and future aspirations.
  • Substantiation: Companies should have a reasonable basis and supporting evidence for their AI-related claims.
  • Disclosure Controls: Companies should establish and maintain disclosure controls to ensure the accuracy of their AI-related statements in SEC filings and other communications. 

The SEC has made it clear that “AI washing” is a top enforcement priority, and companies should be prepared for heightened scrutiny of their AI-related disclosures. 

THE ILLUSION OF AI: How Companies Are Misleading You with Artificial Intelligence and What That Could Mean for Your Future

Digital Ethics in the Age of AI – Navigating the ethical frontier today and beyond

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

How AI Is Transforming the Cybersecurity Leadership Playbook

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI Governance, AI Hype, AI Washing, Boardroom Imperative, Digital Ethics, SEC, THE ILLUSION OF AI


Jun 24 2025

OWASP Releases AI Testing Guide to Strengthen Security and Trust in AI Systems

Category: AI,Information Securitydisc7 @ 9:03 am

The Open Web Application Security Project (OWASP) has released the AI Testing Guide (AITG)—a structured, technology-agnostic framework to test and secure artificial intelligence systems. Developed in response to the growing adoption of AI in sensitive and high-stakes sectors, the guide addresses emerging AI-specific threats, such as adversarial attacks, model poisoning, and prompt injection. It is led by security experts Matteo Meucci and Marco Morana and is designed to support a wide array of stakeholders, including developers, architects, data scientists, and risk managers.

The guide provides comprehensive resources across the AI lifecycle, from design to deployment. It emphasizes the need for rigorous and repeatable testing processes to ensure AI systems are secure, trustworthy, and aligned with compliance requirements. The AITG also helps teams formalize testing efforts through structured documentation, thereby enhancing audit readiness and regulatory transparency. It supports due diligence efforts that are crucial for organizations operating in heavily regulated sectors like finance, healthcare, and critical infrastructure.

A core premise of the guide is that AI testing differs significantly from conventional software testing. Traditional applications exhibit deterministic behavior, while AI systems—especially machine learning models—are probabilistic in nature. They produce varying outputs depending on input variability and data distribution. Therefore, testing must account for issues such as data drift, fairness, transparency, and robustness. The AITG stresses that evaluating model performance alone is insufficient; testers must probe how models react to both benign and malicious changes in data.

Another standout feature of the AITG is its deep focus on adversarial robustness. AI systems can be deceived through carefully engineered inputs that appear normal to humans but cause erroneous model behavior. The guide provides methodologies to assess and mitigate such risks. Additionally, it includes techniques like differential privacy to protect individual data within training sets—critical in the age of stringent data protection regulations. This holistic testing approach strengthens confidence in AI systems both internally and among external stakeholders.

The AITG also acknowledges the fluid nature of AI environments. Models can silently degrade over time due to data drift or concept shift. To address this, the guide recommends implementing continuous monitoring frameworks that detect such degradation early and trigger automated responses. It incorporates fairness assessments and bias mitigation strategies, which are particularly important in ensuring that AI systems remain equitable and inclusive over time.

Importantly, the guide equips security professionals with specialized AI-centric penetration testing tools. These include tests for membership inference (to determine if a specific record was in the training data), model extraction (to recreate or steal the model), and prompt injection (particularly relevant for LLMs). These techniques are crucial for evaluating AI’s real-world attack surface, making the AITG a practical resource not just for developers, but also for red teams and security auditors.

Feedback:
The OWASP AI Testing Guide is a timely and well-structured contribution to the AI security landscape. It effectively bridges the gap between software engineering practices and the emerging realities of machine learning systems. Its technology-agnostic stance and lifecycle coverage make it broadly applicable across industries and AI maturity levels. However, the guide’s ultimate impact will depend on how well it is adopted by practitioners, particularly in fast-paced AI environments. OWASP might consider developing companion tools, templates, and case studies to accelerate practical adoption. Overall, this is a foundational step toward building secure, transparent, and accountable AI systems.

How AI Is Transforming the Cybersecurity Leadership Playbook

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AITG, ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards, OWASP guide


Jun 23 2025

How AI Is Transforming the Cybersecurity Leadership Playbook

Category: AI,CISO,Information Security,Security playbook,vCISOdisc7 @ 12:13 pm

1. AI transforms cybersecurity roles

AI isn’t just another tool—it’s a paradigm shift. CISOs must now integrate AI-driven analytics into real-time threat detection and incident response. These systems analyze massive volumes of data faster and surface patterns humans might miss.

2. New vulnerabilities from AI use

Deploying AI creates unique risks: biased outputs, prompt injection, data leakage, and compliance challenges across global jurisdictions. CISOs must treat models themselves as attack surfaces, ensuring robust governance.

3. AI amplifies offensive threats

Adversaries now weaponize AI to automate reconnaissance, craft tailored phishing lures or deepfakes, generate malicious code, and launch fast-moving credential‑stuffing campaigns.

4. Building an AI‑enabled cyber team

Moving beyond tool adoption, CISOs need to develop core data capabilities: quality pipelines, labeled datasets, and AI‑savvy talent. This includes threat‑hunting teams that grasp both AI defense and AI‑driven offense.

5. Core capabilities & controls

The playbook highlights foundational strategies:

  • Data governance (automated discovery and metadata tagging).
  • Zero trust and adaptive access controls down to file-system and AI pipelines.
  • AI-powered XDR and automated IR workflows to reduce dwell time.

6. Continuous testing & offensive security

CISOs must adopt offensive measures—AI pen testing, red‑teaming models, adversarial input testing, and ongoing bias audits. This mirrors traditional vulnerability management, now adapted for AI-specific threats.

7. Human + machine synergy

Ultimately, AI acts as a force multiplier—not a surrogate. Humans must oversee, interpret, understand model limitations, and apply context. A successful cyber‑AI strategy relies on continuous training and board engagement .


🧩 Feedback

  • Comprehensive: Excellent balance of offense, defense, data governance, and human oversight.
  • Actionable: Strong emphasis on building capabilities—not just buying tools—is a key differentiator.
  • Enhance with priorities: Highlighting fast-moving threats like prompt‑injection or autonomous AI agents could sharpen urgency.
  • Communications matter: Reminding CISOs to engage leadership with justifiable ROI and scenario planning ensures support and budget.

A CISO’s AI Playbook

AI transforms the cybersecurity role—especially for CISOs—in several fundamental ways:


1. From Reactive to Predictive

Traditionally, security teams react to alerts and known threats. AI shifts this model by enabling predictive analytics. AI can detect anomalies, forecast potential attacks, and recommend actions before damage is done.

2. Augmented Decision-Making

AI enhances the CISO’s ability to make high-stakes decisions under pressure. With tools that summarize incidents, prioritize risks, and assess business impact, CISOs move from gut instinct to data-informed leadership.

3. Automation of Repetitive Tasks

AI automates tasks like log analysis, malware triage, alert correlation, and even generating incident reports. This allows security teams to focus on strategic, higher-value work, such as threat modeling or security architecture.

4. Expansion of Threat Surface Oversight

With AI deployed in business functions (e.g., chatbots, LLMs, automation platforms), the CISO must now secure AI models and pipelines themselves—treating them as critical assets subject to attack and misuse.

5. Offensive AI Readiness

Adversaries are using AI too—to craft phishing campaigns, generate polymorphic malware, or automate social engineering. The CISO’s role expands to understanding offensive AI tactics and defending against them in real time.

6. AI Governance Leadership

CISOs are being pulled into AI governance: setting policies around responsible AI use, bias detection, explainability, and model auditing. Security leadership now intersects with ethical AI oversight and compliance.

7. Cross-Functional Influence

Because AI touches every function—HR, legal, marketing, product—the CISO must collaborate across departments, ensuring security is baked into AI initiatives from the ground up.


Summary:
AI transforms the CISO from a control enforcer into a strategic enabler who drives predictive defense, leads governance, secures machine intelligence, and shapes enterprise-wide digital resilience. It’s a shift from gatekeeping to guiding responsible, secure innovation.

CISO Playbook: Mastering Risk Quantification

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: Cybersecurity Leadership Playbook


Jun 19 2025

Aligning with ISO 42001:2023 and/or the EU Artificial Intelligence (AI) Act

Category: AI,Information Securitydisc7 @ 9:14 am

Mapping against ISO 42001:2023 and/or the EU Artificial Intelligence (AI) Act

The AI Act & ISO 42001 Gap Analysis Tool is a dual-purpose resource that helps organizations assess their current AI practices against both legal obligations under the EU AI Act and international standards like ISO/IEC 42001:2023. It allows users to perform a tailored gap analysis based on their specific needs, whether aligning with ISO 42001, the EU AI Act, or both. The tool facilitates early-stage project planning by identifying compliance gaps and setting actionable priorities.

With the EU AI Act now in force and enforcement of its prohibitions on high-risk AI systems beginning in February 2025, organizations face growing pressure to proactively manage AI risk. Implementing an AI management system (AIMS) aligned with ISO 42001 can reduce compliance risk and meet rising international expectations. As AI becomes more embedded in business operations, conducting a gap analysis has become essential for shaping a sound, legally compliant, and responsible AI strategy.

Feedback:
This tool addresses a timely and critical need in the AI governance landscape. By combining legal and best-practice assessments into one streamlined solution, it helps reduce complexity for compliance teams. Highlighting the upcoming enforcement deadlines and the benefits of ISO 42001 certification reinforces urgency and practicality.

The AI Act & ISO 42001 Gap Analysis Tool is a user-friendly solution that helps organizations quickly and effectively assess their current AI practices against both the EU AI Act and the ISO/IEC 42001:2023 standard. With intuitive features, customizable inputs, and step-by-step guidance, the tool adapts to your organization’s specific needs—whether you’re looking to meet regulatory obligations, align with international best practices, or both. Its streamlined interface allows even non-technical users to conduct a thorough gap analysis with minimal training.

Designed to integrate seamlessly into your project planning process, the tool delivers clear, actionable insights into compliance gaps and priority areas. As enforcement of the EU AI Act begins in early 2025, and with increasing global focus on AI governance, this tool provides not only legal clarity but also practical, accessible support for developing a robust AI management system. By simplifying the complexity of AI compliance, it empowers teams to make informed, strategic decisions faster.

What does the tool provide?

  • Split into two sections, EU AI Act and ISO 42001, so you can perform analyses for both or an individual analysis.
  • The EU AI Act section is divided into six sets of questions: general requirements, entity requirements, assessment and registration, general-purpose AI, measures to support innovation and post-market monitoring.
  • Identify which requirements and sections of the AI Act are applicable by completing the provided screening questions. The tool will automatically remove any non-applicable questions.
  • The ISO 42001 section is divided into two sets of questions: ISO 42001 six clauses and ISO 42001 controls as outlined in Annex A.
  • Executive summary pages for both analyses, including by section or clause/control, the number of requirements met and compliance percentage totals.
  • A clear indication of strong and weak areas through colour-coded analysis graphs and tables to highlight key areas of development and set project priorities.

The tool is designed to work in any Microsoft environment; it does not need to be installed like software, and does not depend on complex databases. It is reliant on human involvement.

Items that can support an ISO 42001 (AIMS) implementation project

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: EU AI Act, ISO 42001


Jun 13 2025

Prompt injection attacks can have serious security implications

Category: AI,App Securitydisc7 @ 11:50 am

Prompt injection attacks can have serious security implications, particularly for AI-driven applications. Here are some potential consequences:

  • Unauthorized data access: Attackers can manipulate AI models to reveal sensitive information that should remain protected.
  • Bypassing security controls: Malicious inputs can override built-in safeguards, leading to unintended outputs or actions.
  • System prompt leakage: Attackers may extract internal configurations or instructions meant to remain hidden.
  • False content generation: AI models can be tricked into producing misleading or harmful information.
  • Persistent manipulation: Some attacks can alter AI behavior across multiple interactions, making mitigation more difficult.
  • Exploitation of connected tools: If an AI system integrates with external APIs or automation tools, attackers could misuse these connections for unauthorized actions.

Preventing prompt injection attacks requires a combination of security measures and careful prompt design. Here are some best practices:

  • Separate user input from system instructions: Avoid directly concatenating user input with system prompts to prevent unintended command execution.
  • Use structured input formats: Implement XML or JSON-based structures to clearly differentiate user input from system directives.
  • Apply input validation and sanitization: Filter out potentially harmful instructions and restrict unexpected characters or phrases.
  • Limit model permissions: Ensure AI systems have restricted access to sensitive data and external tools to minimize exploitation risks.
  • Monitor and log interactions: Track AI responses for anomalies that may indicate an attempted injection attack.
  • Implement guardrails: Use predefined security policies and response filtering to prevent unauthorized actions.

Strengthen your AI system against prompt injection attacks, here are some tailored strategies:

  • Define clear input boundaries: Ensure user inputs are handled separately from system instructions to avoid unintended command execution.
  • Use predefined response templates: This limits the ability of injected prompts to influence output behavior.
  • Regularly audit and update security measures: AI models evolve, so keeping security protocols up to date is essential.
  • Restrict model privileges: Minimize the AI’s access to sensitive data and external integrations to mitigate risks.
  • Employ adversarial testing: Simulate attacks to identify weaknesses and improve defenses before exploitation occurs.
  • Educate users and developers: Understanding potential threats helps in maintaining secure interactions.
  • Leverage external validation: Implement third-party security reviews to uncover vulnerabilities from an unbiased perspective.

Source: https://security.googleblog.com/2025/06/mitigating-prompt-injection-attacks.html

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: prompt Injection


Jun 11 2025

Three Essentials for Agentic AI Security

Category: AIdisc7 @ 11:11 am

The article “Three Essentials for Agentic AI Security” explores the security challenges posed by AI agents, which operate autonomously across multiple systems. While these agents enhance productivity and streamline workflows, they also introduce vulnerabilities that businesses must address. The article highlights how AI agents interact with APIs, core data systems, and cloud infrastructures, making security a critical concern. Despite their growing adoption, many companies remain unprepared, with only 42% of executives balancing AI development with adequate security measures.

A Brazilian health care provider’s experience serves as a case study for managing agentic AI security risks. The company, with over 27,000 employees, relies on AI agents to optimize operations across various medical services. However, the autonomous nature of these agents necessitates a robust security framework to ensure compliance and data integrity. The article outlines a three-phase security approach that includes threat modeling, security testing, and runtime protections.

The first phase, threat modeling, involves identifying potential risks associated with AI agents. This step helps organizations anticipate vulnerabilities before deployment. The second phase, security testing, ensures that AI tools undergo rigorous assessments to validate their resilience against cyber threats. The final phase, runtime protections, focuses on continuous monitoring and response mechanisms to mitigate security breaches in real time.

The article emphasizes that trust in AI agents cannot be assumed—it must be built through proactive security measures. Companies that successfully integrate AI security strategies are more likely to achieve operational efficiency and financial performance. The research suggests that businesses investing in agentic architectures are 4.5 times more likely to see enterprise-level value from AI adoption.

In conclusion, the article underscores the importance of balancing AI innovation with security preparedness. As AI agents become more autonomous, organizations must implement comprehensive security frameworks to safeguard their systems. The Brazilian health care provider’s approach serves as a valuable blueprint for businesses looking to enhance their AI security posture.

Feedback: The article provides a compelling analysis of the security risks associated with AI agents and offers practical solutions. The three-phase framework is particularly insightful, as it highlights the need for a proactive security strategy rather than a reactive one. However, the discussion could benefit from more real-world examples beyond the Brazilian case study to illustrate diverse industry applications. Overall, the article is a valuable resource for organizations navigating the complexities of AI security.

The three-phase security approach for agentic AI focuses on ensuring that AI agents operate securely while interacting with various systems. Here’s a breakdown of each phase:

  1. Threat Modeling – This initial phase involves identifying potential security risks associated with AI agents before deployment. Organizations assess how AI interacts with APIs, databases, and cloud environments to pinpoint vulnerabilities. By understanding possible attack vectors, companies can proactively design security measures to mitigate risks.
  2. Security Testing – Once threats are identified, AI agents undergo rigorous testing to validate their resilience against cyber threats. This phase includes penetration testing, adversarial simulations, and compliance checks to ensure that AI systems can withstand real-world security challenges. Testing helps organizations refine their security protocols before AI agents are fully integrated into business operations.
  3. Runtime Protections – The final phase focuses on continuous monitoring and response mechanisms. AI agents operate dynamically, meaning security measures must adapt in real time. Organizations implement automated threat detection, anomaly monitoring, and rapid response strategies to prevent breaches. This ensures that AI agents remain secure throughout their lifecycle.

This structured approach helps businesses balance AI innovation with security preparedness. By implementing these phases, companies can safeguard their AI-driven workflows while maintaining compliance and data integrity. You can explore more details in the original article here.

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: Agentic AI Security


Jun 09 2025

Securing Enterprise AI Agents: Managing Access, Identity, and Sensitive Data

Category: AIdisc7 @ 11:29 pm

1. Deploying AI agents in enterprise environments comes with a range of security and safety concerns, particularly when the agents are customized for internal use. These concerns must be addressed thoroughly before allowing such agents to operate in production systems.

2. Take the example of an HR agent handling employee requests. If it has broad access to an HR database, it risks exposing sensitive information — not just for the requesting employee but potentially for others as well. This scenario highlights the importance of data isolation and strict access protocols.

3. To prevent such risks, enterprises must implement fine-grained access controls (FGACs) and role-based access controls (RBACs). These mechanisms ensure that agents only access the data necessary for their specific role, in alignment with security best practices like the principle of least privilege.

4. It’s also essential to follow proper protocols for handling personally identifiable information (PII). This includes compliance with PII transfer regulations and adopting an identity fabric to manage digital identities and enforce secure interactions across systems.

5. In environments where multiple agents interact, secure communication protocols become critical. These protocols must prevent data leaks during inter-agent collaboration and ensure encrypted transmission of sensitive data, in accordance with regulatory standards.


6. Feedback:
This passage effectively outlines the critical need for layered security when deploying AI agents in enterprise contexts. However, it could benefit from specific examples of implementation strategies or frameworks already in use (e.g., Zero Trust Architecture or identity and access management platforms). Additionally, highlighting the consequences of failing to address these concerns (e.g., data breaches, compliance violations) would make the risks more tangible for decision-makers.

AI Agents in Action

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: AI Agents, AI Agents in Action


Jun 03 2025

IBM’s model-routing approach

Category: AIdisc7 @ 4:14 pm

IBM’s model-routing approach—where a model-routing algorithm acts as an orchestrator—is part of a growing trend in AI infrastructure known as multi-model inference orchestration. Let’s break down what this approach involves and why it matters:


🔄 What It Is

Instead of using a single large model (like a general-purpose LLM) for all inference tasks, IBM’s approach involves multiple specialized models—each potentially optimized for different domains, tasks, or modalities (e.g., text, code, image, or legal reasoning).

At the center of this architecture sits a routing algorithm, which functions like a traffic controller. When an inference request (e.g., a user prompt) comes in, the router analyzes it and predicts which model is best suited to handle it based on context, past performance, metadata, or learned patterns.


⚙️ How It Works (Simplified Flow)

  1. Request Input: A user sends a prompt (e.g., a question or task).
  2. Router Evaluation: The orchestrator examines the request’s content—this might involve analyzing intent, complexity, or topic (e.g., legal vs. creative writing).
  3. Model Selection: Based on predefined rules, statistical learning, or even another ML model, the router selects the optimal model from a pool.
  4. Forwarding & Inference: The request is forwarded to the chosen model, which generates the response.
  5. Feedback Loop (optional): Performance outcomes can be fed back to improve future routing decisions.


🧠 Why It’s Powerful

  • Efficiency: Lighter or more task-specific models can be used instead of always relying on a massive general model—saving compute costs.
  • Performance: Task-optimized models may outperform general LLMs in niche domains (e.g., finance, medicine, or law).
  • Scalability: Multiple models can be run in parallel and updated independently.
  • Modularity: Easier to plug in or retire models without affecting the whole system.


📊 Example Use Case

Suppose a user asks:

  • “Summarize this legal contract.”
    The router detects legal language and routes to a model fine-tuned on legal documents.

If instead the user asks:

  • “Write a poem about space,”
    It could route to a creative-writing-optimized model.

AI Value Creators: Beyond the Generative AI User Mindset

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: IBM model-routing


Jun 03 2025

Top 5 AI-Powered Scams to Watch Out for in 2025

Category: AI,Security Awarenessdisc7 @ 8:00 am

1. Deep-fake celebrity impersonations
Scammers now mass-produce AI-generated videos, photos, or voice clips that convincingly mimic well-known figures. The fake “celebrity” pushes a giveaway, investment tip, or app download, lending instant credibility and reach across social platforms and ads. Because the content looks and sounds authentic, victims lower their guard and click through.

2. “Too-good-to-fail” crypto investments
Fraud rings promise eye-watering returns on digital-currency schemes, often reinforced by forged celebrity endorsements or deep-fake interviews. Once funds are transferred to the scammers’ wallets, they vanish—and the cross-border nature of the crime makes recovery almost impossible.

3. Cloned apps and look-alike websites
Attackers spin up near-pixel-perfect copies of banking apps, customer-support portals, or employee login pages. Entering credentials or card details hands them straight to the crooks, who may also drop malware for future access or ransom. Even QR codes and app-store listings are spoofed to lure downloads.

4. Landing-page cloaking
To dodge automated scanners, scammers show Google’s crawlers a harmless page while serving users a malicious one—often phishing forms or scareware purchase screens. The mismatch (“cloaking”) lets the fraudulent ad or search result slip past filters until victims report it.

5. Event-driven hustles
Whenever a big election, disaster, eclipse, or sporting final hits the headlines, fake charities, ticket sellers, or NASA-branded “special glasses” pop up overnight. The timely hook plus fabricated urgency (“donate now or miss out”) drives impulsive clicks and payments before scrutiny kicks in.

6. Quick take
Google’s May-2025 advisory is a solid snapshot of how criminals are weaponizing generative AI and marketing tactics in real time. Its tips (check URLs, doubt promises, use Enhanced Protection, etc.) are sound, but the bigger lesson is behavioral: pause before you pay, download, or share credentials—especially when a message leans on urgency or authority. Technology can flag threats, yet habitual skepticism remains the best last-mile defense.

Protecting Yourself: Stay Away from AI Scams

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: AI Fraud, AI scams, AI-Powered Scams


Jun 02 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

Category: AI,CISO,Information Security,vCISOdisc7 @ 5:12 pm

  1. Aaron McCray, Field CISO at CDW, discusses the evolving role of the Chief Information Security Officer (CISO) in the age of artificial intelligence (AI). He emphasizes that CISOs are transitioning from traditional cybersecurity roles to strategic advisors who guide enterprise-wide AI governance and risk management. This shift, termed “CISO 3.0,” involves aligning AI initiatives with business objectives and compliance requirements.
  2. McCray highlights the challenges of integrating AI-driven security tools, particularly regarding visibility, explainability, and false positives. He notes that while AI can enhance security operations, it also introduces complexities, such as the need for transparency in AI decision-making processes and the risk of overwhelming security teams with irrelevant alerts. Ensuring that AI tools integrate seamlessly with existing infrastructure is also a significant concern.
  3. The article underscores the necessity for CISOs and their teams to develop new skill sets, including proficiency in data science and machine learning. McCray points out that understanding how AI models are trained and the data they rely on is crucial for managing associated risks. Adaptive learning platforms that simulate real-world scenarios are mentioned as effective tools for closing the skills gap.
  4. When evaluating third-party AI tools, McCray advises CISOs to prioritize accountability and transparency. He warns against tools that lack clear documentation or fail to provide insights into their decision-making processes. Red flags include opaque algorithms and vendors unwilling to disclose their AI models’ inner workings.
  5. In conclusion, McCray emphasizes that as AI becomes increasingly embedded across business functions, CISOs must lead the charge in establishing robust governance frameworks. This involves not only implementing effective security measures but also fostering a culture of continuous learning and adaptability within their organizations.

Feedback

  1. The article effectively captures the transformative impact of AI on the CISO role, highlighting the shift from technical oversight to strategic leadership. This perspective aligns with the broader industry trend of integrating cybersecurity considerations into overall business strategy.
  2. By addressing the practical challenges of AI integration, such as explainability and infrastructure compatibility, the article provides valuable insights for organizations navigating the complexities of modern cybersecurity landscapes. These considerations are critical for maintaining trust in AI systems and ensuring their effective deployment.
  3. The emphasis on developing new skill sets underscores the dynamic nature of cybersecurity roles in the AI era. Encouraging continuous learning and adaptability is essential for organizations to stay ahead of evolving threats and technological advancements.
  4. The cautionary advice regarding third-party AI tools serves as a timely reminder of the importance of due diligence in vendor selection. Transparency and accountability are paramount in building secure and trustworthy AI systems.
  5. The article could further benefit from exploring specific case studies or examples of organizations successfully implementing AI governance frameworks. Such insights would provide practical guidance and illustrate the real-world application of the concepts discussed.
  6. Overall, the article offers a comprehensive overview of the evolving responsibilities of CISOs in the context of AI integration. It serves as a valuable resource for cybersecurity professionals seeking to navigate the challenges and opportunities presented by AI technologies.

For further details, access the article here

AI is rapidly transforming systems, workflows, and even adversary tactics, regardless of whether our frameworks are ready. It isn’t bound by tradition and won’t wait for governance to catch up…When AI evaluates risks, it may enhance the speed and depth of risk management but only when combined with human oversight, governance frameworks, and ethical safeguards.

A new ISO standard, ISO 42005 provides organizations a structured, actionable pathway to assess and document AI risks, benefits, and alignment with global compliance frameworks.

A New Era in Governance

The CISO 3.0: A Guide to Next-Generation Cybersecurity Leadership

Interpretation of Ethical AI Deployment under the EU AI Act

AI in the Workplace: Replacing Tasks, Not People

AIMS and Data Governance

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: AI Governance, CISO 3.0


Jun 01 2025

AI in the Workplace: Replacing Tasks, Not People

Category: AIdisc7 @ 3:48 pm

  1. Establishing an AI Strategy and Guardrails:
    To effectively integrate AI into an organization, leadership must clearly articulate the company’s AI strategy to all employees. This includes defining acceptable and unacceptable uses of AI, legal boundaries, and potential risks. Setting clear guardrails fosters a culture of responsibility and mitigates misuse or misunderstandings.
  2. Transparency and Job Impact Communication:
    Transparency is essential, especially since many employees may worry that AI initiatives threaten their roles. Leaders should communicate that those who adapt to AI will outperform those who resist it. It’s also important to outline how AI will alter jobs by automating routine tasks, thereby allowing employees to focus on higher-value work.
  3. Redefining Roles Through AI Integration:
    For instance, HR professionals may shift from administrative tasks—like managing transfers or answering policy questions—to more strategic work such as improving onboarding processes. This demonstrates how AI can enhance job roles rather than eliminate them.
  4. Addressing Employee Sentiments and Fears:
    Leaders must pay attention to how employees feel and what they discuss informally. Creating spaces for feedback and development helps surface concerns early. Ignoring this can erode culture, while addressing it fosters trust and connection. Open conversations and vulnerability from leadership are key to dispelling fear.
  5. Using AI to Facilitate Dialogue and Action:
    AI tools can aid in gathering and classifying employee feedback, sparking relevant discussions, and supporting ongoing engagement. Digital check-ins powered by AI-generated prompts offer structured ways to begin conversations and address concerns constructively.
  6. Equitable Participation and Support Mechanisms:
    Organizations must ensure all employees are given equal opportunity to engage with AI tools and upskilling programs. While individuals will respond differently, support systems like centralized feedback platforms and manager check-ins can help everyone feel included and heard.

Feedback and Organizational Tone Setting:
This approach sets a progressive and empathetic tone for AI adoption. It balances innovation with inclusion by emphasizing transparency, emotional intelligence, and support. Leaders must model curiosity and vulnerability, signaling that learning is a shared journey. Most importantly, the strategy recognizes that successful AI integration is as much about culture and communication as it is about technology. When done well, it transforms AI from a job threat into a tool for empowerment and growth.

Resolving Routine Business Activities by Harnessing the Power of AI: A Competency-Based Approach that Integrates Learning and Information with … Workbooks for Structured Learning

p.s. “AGI shouldn’t be confused with GenAI. GenAI is a tool. AGI is a
goal of evolving that tool to the extent that its capabilities match
human cognitive abilities, or even surpasses them, across a wide
range of tasks. We’re not there yet, perhaps never will be, or per
haps it’ll arrive sooner than we expected. But when it comes to
AGI, think about LLMs demonstrating and exceeding humanlike
intelligence”

DATA RESIDENT : AN ADVANCED APPROACH TO DATA QUALITY, PROVENANCE, AND CONTINUITY IN DYNAMIC ENVIRONMENTS

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: AI Governance


May 29 2025

Why CISOs Must Prioritize Data Provenance in AI Governance

Category: AI,IT Governancedisc7 @ 9:29 am

In the rapidly evolving landscape of artificial intelligence (AI), Chief Information Security Officers (CISOs) are grappling with the challenges of governance and data provenance. As AI tools become increasingly integrated into various business functions, often without centralized oversight, the traditional methods of data governance are proving inadequate. The core concern lies in the assumption that popular or “enterprise-ready” AI models are inherently secure and compliant, leading to a dangerous oversight of data provenance—the ability to trace the origin, transformation, and handling of data.

Data provenance is crucial in AI governance, especially with large language models (LLMs) that process and generate data in ways that are often opaque. Unlike traditional systems where data lineage can be reconstructed, LLMs can introduce complexities where prompts aren’t logged, outputs are copied across systems, and models may retain information without clear consent. This lack of transparency poses significant risks in regulated domains like legal, finance, or privacy, where accountability and traceability are paramount.

The decentralized adoption of AI tools across enterprises exacerbates these challenges. Various departments may independently implement AI solutions, leading to a sprawl of tools powered by different LLMs, each with its own data handling policies and compliance considerations. This fragmentation means that security organizations often lose visibility and control over how sensitive information is processed, increasing the risk of data breaches and compliance violations.

Contrary to the belief that regulations are lagging behind AI advancements, many existing data protection laws like GDPR, CPRA, and others already encompass principles applicable to AI usage. The issue lies in the systems’ inability to respond to these regulations effectively. LLMs blur the lines between data processors and controllers, making it challenging to determine liability and ownership of AI-generated outputs. In audit scenarios, organizations must be able to demonstrate the actions and decisions made by AI tools, a capability many currently lack.

To address these challenges, modern AI governance must prioritize infrastructure over policy. This includes implementing continuous, automated data mapping to track data flows across various interfaces and systems. Records of Processing Activities (RoPA) should be updated to include model logic, AI tool behavior, and jurisdictional exposure. Additionally, organizations need to establish clear guidelines for AI usage, ensuring that data handling practices are transparent, compliant, and secure.

Moreover, fostering a culture of accountability and awareness around AI usage is essential. This involves training employees on the implications of using AI tools, encouraging responsible behavior, and establishing protocols for monitoring and auditing AI interactions. By doing so, organizations can mitigate risks associated with AI adoption and ensure that data governance keeps pace with technological advancements.

CISOs play a pivotal role in steering their organizations toward robust AI governance. They must advocate for infrastructure that supports data provenance, collaborate with various departments to ensure cohesive AI strategies, and stay informed about evolving regulations. By taking a proactive approach, CISOs can help their organizations harness the benefits of AI while safeguarding against potential pitfalls.

In conclusion, as AI continues to permeate various aspects of business operations, the importance of data provenance in AI governance cannot be overstated. Organizations must move beyond assumptions of safety and implement comprehensive strategies that prioritize transparency, accountability, and compliance. By doing so, they can navigate the complexities of AI adoption and build a foundation of trust and security in the digital age.

For further details, access the article here on Data provenance

DATA RESIDENT : AN ADVANCED APPROACH TO DATA QUALITY, PROVENANCE, AND CONTINUITY IN DYNAMIC ENVIRONMENTS

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: data provenance


May 23 2025

Interpretation of Ethical AI Deployment under the EU AI Act

Category: AIdisc7 @ 5:39 am

Scenario: A healthcare startup in the EU develops an AI system to assist doctors in diagnosing skin cancer from images. The system uses machine learning to classify lesions as benign or malignant.

1. Risk-Based Classification

  • EU AI Act Requirement: Classify the AI system into one of four risk categories: unacceptable, high-risk, limited-risk, minimal-risk.
  • Interpretation in Scenario:
    The diagnostic system qualifies as a high-risk AI because it affects people’s health decisions, thus requiring strict compliance with specific obligations.

2. Data Governance & Quality

  • EU AI Act Requirement: High-risk AI systems must use high-quality datasets to avoid bias and ensure accuracy.
  • Interpretation in Scenario:
    The startup must ensure that training data are representative of all demographic groups (skin tones, age ranges, etc.) to reduce bias and avoid misdiagnosis.

3. Transparency & Human Oversight

  • EU AI Act Requirement: Users should be aware they are interacting with an AI system; meaningful human oversight is required.
  • Interpretation in Scenario:
    Doctors must be clearly informed that the diagnosis is AI-assisted and retain final decision-making authority. The system should offer explainability features (e.g., heatmaps on images to show reasoning).

4. Robustness, Accuracy, and Cybersecurity

  • EU AI Act Requirement: High-risk AI systems must be technically robust and secure.
  • Interpretation in Scenario:
    The AI tool must maintain high accuracy under diverse conditions and protect patient data from breaches. It should include fallback mechanisms if anomalies are detected.

5. Accountability and Documentation

  • EU AI Act Requirement: Maintain detailed technical documentation and logs to demonstrate compliance.
  • Interpretation in Scenario:
    The startup must document model architecture, training methodology, test results, and monitoring processes, and be ready to submit these to regulators if required.

6. Registration and CE Marking

  • EU AI Act Requirement: High-risk systems must be registered in an EU database and undergo conformity assessments.
  • Interpretation in Scenario:
    The startup must submit their system to a notified body, demonstrate compliance, and obtain CE marking before deployment.

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: Digital Ethics, EU AI Act, ISO 42001


May 22 2025

AI Data Security Report

Category: AI,data securitydisc7 @ 1:41 pm

Summary of the AI Data Security Report

The AI Data Security report, jointly authored by the NSA, CISA, FBI, and cybersecurity agencies from Australia, New Zealand, and the UK, provides comprehensive guidance on securing data throughout the AI system lifecycle. It emphasizes the critical importance of data integrity and confidentiality in ensuring the reliability of AI outcomes. The report outlines best practices such as implementing data encryption, digital signatures, provenance tracking, secure storage solutions, and establishing a robust trust infrastructure. These measures aim to protect sensitive, proprietary, or mission-critical data used in AI systems.

Key Risk Areas and Mitigation Strategies

The report identifies three primary data security risks in AI systems:

  1. Data Supply Chain Vulnerabilities: Risks associated with sourcing data from external providers, which may introduce compromised or malicious datasets.
  2. Poisoned Data: The intentional insertion of malicious data into training datasets to manipulate AI behavior.
  3. Data Drift: The gradual evolution of data over time, which can degrade AI model performance if not properly managed.

To mitigate these risks, the report recommends rigorous validation of data sources, continuous monitoring for anomalies, and regular updates to AI models to accommodate changes in data patterns.

Feedback and Observations

The report offers a timely and thorough framework for organizations to enhance the security of their AI systems. By addressing the entire data lifecycle, it underscores the necessity of integrating security measures from the initial stages of AI development through deployment and maintenance. However, the implementation of these best practices may pose challenges, particularly for organizations with limited resources or expertise in AI and cybersecurity. Therefore, additional support in the form of training, standardized tools, and collaborative initiatives could be beneficial in facilitating widespread adoption of these security measures.

For further details, access the report: AI Data Security Report

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: AI Data Security


May 22 2025

AI in the Legislature: Promise, Pitfalls, and the Future of Lawmaking

Category: AI,Security and privacy Lawdisc7 @ 9:00 am

Bruce Schneier’s essay, “AI-Generated Law,” delves into the emerging role of artificial intelligence in legislative processes, highlighting both its potential benefits and inherent risks. He examines global developments, such as the United Arab Emirates’ initiative to employ AI for drafting and updating laws, aiming to accelerate legislative procedures by up to 70%. This move is part of a broader strategy to transform the UAE into an “AI-native” government by 2027, with a substantial investment exceeding $3 billion. While this approach has garnered attention, it’s not entirely unprecedented. In 2023, Porto Alegre, Brazil, enacted a local ordinance on water meter replacement, drafted with the assistance of ChatGPT—a fact that was not disclosed to the council members at the time. Such instances underscore the growing trend of integrating AI into legislative functions worldwide.

Schneier emphasizes that the integration of AI into lawmaking doesn’t necessitate formal procedural changes. Legislators can independently utilize AI tools to draft bills, much like they rely on staffers or lobbyists. This democratization of legislative drafting tools means that AI can be employed at various governmental levels without institutional mandates. For example, since 2020, Ohio has leveraged AI to streamline its administrative code, eliminating approximately 2.2 million words of redundant regulations. Such applications demonstrate AI’s capacity to enhance efficiency in legislative processes.

The essay also addresses the potential pitfalls of AI-generated legislation. One concern is the phenomenon of “confabulation,” where AI systems might produce plausible-sounding but incorrect or nonsensical information. However, Schneier argues that human legislators are equally prone to errors, citing the Affordable Care Act’s near downfall due to a typographical mistake. Moreover, he points out that in non-democratic regimes, laws are often arbitrary and inhumane, regardless of whether they are drafted by humans or machines. Thus, the medium of law creation—human or AI—doesn’t inherently guarantee justice or fairness.

A significant concern highlighted is the potential for AI to exacerbate existing power imbalances. Given AI’s capabilities, there’s a risk that those in power might use it to further entrench their positions, crafting laws that serve specific interests under the guise of objectivity. This could lead to a veneer of neutrality while masking underlying biases or agendas. Schneier warns that without transparency and oversight, AI could become a tool for manipulation rather than a means to enhance democratic processes.

Despite these challenges, Schneier acknowledges the potential benefits of AI in legislative contexts. AI can assist in drafting clearer, more consistent laws, identifying inconsistencies, and ensuring grammatical precision. It can also aid in summarizing complex bills, simulating potential outcomes of proposed legislation, and providing legislators with rapid analyses of policy impacts. These capabilities can enhance the legislative process, making it more efficient and informed.

The essay underscores the inevitability of AI’s integration into lawmaking, driven by the increasing complexity of modern governance and the demand for efficiency. As AI tools become more accessible, their adoption in legislative contexts is likely to grow, regardless of formal endorsements or procedural changes. This organic integration poses questions about accountability, transparency, and the future role of human judgment in crafting laws.

In reflecting on Schneier’s insights, it’s evident that while AI offers promising tools to enhance legislative efficiency and precision, it also brings forth challenges that necessitate careful consideration. Ensuring transparency in AI-assisted lawmaking processes is paramount to maintain public trust. Moreover, establishing oversight mechanisms can help mitigate risks associated with bias or misuse. As we navigate this evolving landscape, a balanced approach that leverages AI’s strengths while safeguarding democratic principles will be crucial.

For further details, access the article here

Artificial Intelligence: Legal Issues, Policy, and Practical Strategies

AIMS and Data Governance

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services

Tags: #Lawmaking, AI, AI Laws, AI legislature


« Previous PageNext Page »