Jun 30 2025

Artificial Intelligence: The Next Battlefield in Cybersecurity

Category: AI, cyber security | disc7 @ 8:56 am

Artificial Intelligence (AI) stands as a paradox in the cybersecurity landscape. While it empowers attackers with tools to launch faster, more convincing scams, it also offers defenders unmatched capabilities—if used strategically.

1. AI: A Double-Edged Sword
The post emphasizes AI’s paradox in cybersecurity: it empowers attackers to launch sophisticated assaults while offering defenders potent tools to counteract those very threats.

2. Rising Threats from Adversarial AI
AI introduces emerging risks, such as data poisoning and adversarial inputs, that can subtly mislead or manipulate AI systems deployed for defense.

3. Secure AI Lifecycle Practices
To mitigate these threats, the article recommends implementing security across the entire AI lifecycle, covering design, development, deployment, and continual monitoring.

4. Regulatory and Framework Alignment
It points out the importance of adhering to standards and frameworks from ISO and NIST, as well as upcoming regulations around AI safety, to ensure both compliance and security.

5. Human-AI Synergy
A key insight is blending AI with human oversight and processes, such as threat modeling and red teaming, to maximize AI’s effectiveness while maintaining accountability.

6. Continuous Adaptation and Education

Modern social engineering attacks have evolved beyond basic phishing emails. Today, they may come as deepfake videos of executives, convincingly realistic invoices, or well-timed scams exploiting current events or behavioral patterns.

The sophistication of these AI-powered attacks has rendered traditional cybersecurity tools inadequate. Defenders can no longer rely solely on static rules and conventional detection methods.

To stay ahead, organizations must counter AI threats with AI-driven defenses. This means deploying systems that can analyze behavioral patterns, verify identity authenticity, and detect subtle anomalies in real time.
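As a toy illustration of the real-time anomaly detection described above, here is a minimal sketch (not a production detector): it baselines a single metric, such as hourly login volume, with a rolling z-score and flags sharp deviations. The window size, warm-up count, and threshold are arbitrary assumptions chosen for the example.

```python
from collections import deque
from math import sqrt

def make_anomaly_detector(window=50, threshold=3.0):
    """Flag values that deviate sharply from recent behavior.

    A rolling mean/std-dev z-score: a crude stand-in for the
    behavioral baselining that commercial AI defenses perform.
    """
    history = deque(maxlen=window)

    def check(value):
        if len(history) >= 10:  # require a minimal baseline first
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = sqrt(var) or 1e-9  # guard against zero variance
            anomalous = abs(value - mean) / std > threshold
        else:
            anomalous = False
        history.append(value)
        return anomalous

    return check

# Example: typical login volumes per hour, then a sudden spike
check = make_anomaly_detector()
baseline = [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 101]
flags = [check(v) for v in baseline]
print(any(flags))  # False: normal traffic
print(check(500))  # True: spike flagged as anomalous
```

Real products model many signals at once (timing, geography, device fingerprints); the point here is only the shape of the idea: learn a baseline, then flag deviations as they happen.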

Forward-thinking security teams are embedding AI into critical areas like endpoint protection, authentication, and threat detection. These adaptive systems provide proactive security rather than reactive fixes.

Ultimately, the goal is not to fear AI but to outsmart the adversaries who use it. By mastering and leveraging the same tools, defenders can shift the balance of power.

🧠 Case Study: AI-Generated Deepfake Voice Scam — $35 Million Heist

In 2020, a company in the UAE fell victim to a highly sophisticated AI-driven voice cloning attack. Fraudsters used deepfake audio to impersonate a senior company director, directing a manager to authorize $35 million in transfers to fraudulent accounts. The cloned voice was realistic enough to bypass suspicion, especially because the attackers timed the call during a period when the director was known to be unavailable in person.

This attack exploited AI-based social engineering and psychological trust cues, bypassing traditional cybersecurity defenses such as spam filters and endpoint protection.

Defense Lesson:
To prevent such attacks, organizations are now adopting AI-enabled voice biometrics, real-time anomaly detection, and multi-factor human-in-the-loop verification for high-value transactions. Some are also training employees to identify subtle behavioral or contextual red flags, even when the source seems authentic.
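The multi-factor human-in-the-loop verification mentioned above can be sketched as a simple policy gate. Everything here, the threshold, the function names, and the out-of-band confirmation hook, is hypothetical and meant only to show the control flow: high-value or unfamiliar transfers are held until a person confirms them over a separate, pre-established channel (not the channel the request arrived on).

```python
def requires_human_verification(amount, payee_known, threshold=10_000):
    """Policy sketch: any transfer above the threshold, or to an
    unknown payee, needs out-of-band human confirmation."""
    return amount >= threshold or not payee_known

def process_transfer(amount, payee_known, confirm_out_of_band):
    """confirm_out_of_band is a callable performing the human check,
    e.g. a call-back to a phone number on file (hypothetical hook)."""
    if requires_human_verification(amount, payee_known):
        if not confirm_out_of_band():
            return "blocked"
    return "approved"

# A $35M request to a new supplier is held; the call-back fails
print(process_transfer(35_000_000, payee_known=False,
                       confirm_out_of_band=lambda: False))  # blocked
# A routine payment to a known payee passes straight through
print(process_transfer(500, payee_known=True,
                       confirm_out_of_band=lambda: False))  # approved
```

The design point is that the confirmation channel is independent of the request channel, so a cloned voice on a phone call cannot also satisfy the call-back.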

In early 2024, a multinational company in Hong Kong lost over $25 million after employees were tricked by a deepfake video call featuring AI-generated replicas of senior executives. The attackers used AI to mimic voices and appearances convincingly enough to authorize fraudulent transfers—highlighting how far social engineering has advanced with AI.

Source: [CNN Business, Feb 2024 – “Scammers used deepfake video call to steal millions”]

This example reinforces the urgency of integrating AI into threat detection and identity verification systems, showing how traditional security tools are no longer sufficient against such deception.

AI and The Future of Cybersecurity: Navigating the New Digital Battlefield

“Whether you’re a technology professional, policymaker, academic, or simply a curious reader, this book will arm you with the knowledge to navigate the complex intersection of AI, security, and society.”

Digital Ethics in the Age of AI – Navigating the ethical frontier today and beyond

AI Governance Is a Boardroom Imperative—The SEC Just Raised the Stakes on AI Hype

How AI Is Transforming the Cybersecurity Leadership Playbook

Previous AI posts

IBM’s model-routing approach

Top 5 AI-Powered Scams to Watch Out for in 2025

Summary of CISO 3.0: Leading AI Governance and Security in the Boardroom

AI in the Workplace: Replacing Tasks, Not People

Why CISOs Must Prioritize Data Provenance in AI Governance

Interpretation of Ethical AI Deployment under the EU AI Act

AI Governance: Applying AI Policy and Ethics through Principles and Assessments

ISO/IEC 42001:2023, First Edition: Information technology – Artificial intelligence – Management system

ISO 42001 Artificial Intelligence Management Systems (AIMS) Implementation Guide: AIMS Framework | AI Security Standards

Businesses leveraging AI should prepare now for a future of increasing regulation.

Digital Ethics in the Age of AI 

DISC InfoSec’s earlier posts on the AI topic

InfoSec services | InfoSec books | Follow our blog | DISC llc is listed on The vCISO Directory | ISO 27k Chat bot | Comprehensive vCISO Services | ISMS Services | Security Risk Assessment Services | Mergers and Acquisition Security

Tags: AI and Security, artificial intelligence, Digital Battlefield, Digital Ethics, Ethical Frontier


Jun 25 2025

AI Governance Is a Boardroom Imperative—The SEC Just Raised the Stakes on AI Hype

Category: AI, IT Governance | disc7 @ 7:18 am

The SEC has charged a major tech company for deceiving investors by exaggerating its use of AI—highlighting that the falsehood was about AI itself, not just product features. This signals a shift: AI governance has now become a boardroom-level issue, and many organizations are unprepared.

Advice for CISOs and execs:

  1. Be audit-ready—any AI claims must be verifiable.
  2. Involve GRC early—AI governance is about managing risk, enforcing controls, and ensuring transparency.
  3. Educate your board—they don’t need to understand algorithms, but they must grasp the associated risks and mitigation plans.

If your current AI strategy is nothing more than a slide deck and hope, it’s time to build something real.

AI Washing

The Securities and Exchange Commission (SEC) has been actively pursuing actions against companies for misleading statements about their use of Artificial Intelligence (AI), a practice often referred to as “AI washing”. 

Here are some examples of recent SEC actions in this area:

  • Presto Automation: The SEC charged Presto Automation for making misleading statements about its AI-powered voice technology used for drive-thru order taking. Presto allegedly failed to disclose that it was using a third party’s AI technology, not its own, and also misrepresented the extent of human involvement required for the product to function.
  • Delphia and Global Predictions: These two investment advisers were charged with making false and misleading statements about their use of AI in their investment processes. The SEC found that they either didn’t have the AI capabilities they claimed or didn’t use them to the extent they advertised.
  • Nate, Inc.: The founder of Nate, Inc. was charged by both the SEC and the DOJ for allegedly misleading investors about the company’s AI-powered app, claiming it automated online purchases when they were primarily processed manually by human contractors. 

Key takeaways from these cases and SEC guidance:

  • Transparency and Accuracy: Companies need to ensure their AI-related disclosures are accurate and avoid making vague or exaggerated claims.
  • Distinguish Capabilities: It’s important to clearly distinguish between current AI capabilities and future aspirations.
  • Substantiation: Companies should have a reasonable basis and supporting evidence for their AI-related claims.
  • Disclosure Controls: Companies should establish and maintain disclosure controls to ensure the accuracy of their AI-related statements in SEC filings and other communications. 

The SEC has made it clear that “AI washing” is a top enforcement priority, and companies should be prepared for heightened scrutiny of their AI-related disclosures. 

THE ILLUSION OF AI: How Companies Are Misleading You with Artificial Intelligence and What That Could Mean for Your Future


Tags: AI Governance, AI Hype, AI Washing, Boardroom Imperative, Digital Ethics, SEC, THE ILLUSION OF AI


May 23 2025

Interpretation of Ethical AI Deployment under the EU AI Act

Category: AI | disc7 @ 5:39 am

Scenario: A healthcare startup in the EU develops an AI system to assist doctors in diagnosing skin cancer from images. The system uses machine learning to classify lesions as benign or malignant.

1. Risk-Based Classification

  • EU AI Act Requirement: Classify the AI system into one of four risk categories: unacceptable, high-risk, limited-risk, minimal-risk.
  • Interpretation in Scenario:
    The diagnostic system qualifies as a high-risk AI because it affects people’s health decisions, thus requiring strict compliance with specific obligations.
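A toy sketch of how such a risk-tier triage might look in code. The sets and rules below are illustrative simplifications, not the Act’s actual legal test, which turns on the Annex III use cases and formal assessment:

```python
# Hypothetical, simplified mapping of EU AI Act risk tiers.
UNACCEPTABLE_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"healthcare", "employment", "credit", "law_enforcement"}

def classify_risk(use_case: str, domain: str, interacts_with_humans: bool) -> str:
    """Return one of the four EU AI Act risk categories (simplified)."""
    if use_case in UNACCEPTABLE_USES:
        return "unacceptable"
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    if interacts_with_humans:  # e.g. chatbots: transparency duties apply
        return "limited-risk"
    return "minimal-risk"

# The diagnostic system from the scenario lands in the high-risk tier
print(classify_risk("diagnosis_support", "healthcare", True))  # high-risk
```

In practice, classification is a legal determination made with counsel; code like this is only useful as an internal screening aid.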

2. Data Governance & Quality

  • EU AI Act Requirement: High-risk AI systems must use high-quality datasets to avoid bias and ensure accuracy.
  • Interpretation in Scenario:
    The startup must ensure that training data are representative of all demographic groups (skin tones, age ranges, etc.) to reduce bias and avoid misdiagnosis.
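One minimal sketch of such a data-governance check: measuring whether any demographic group is under-represented in the training set. The 10% threshold and the `skin_tone` attribute are illustrative assumptions, not regulatory figures.

```python
from collections import Counter

def representation_gaps(samples, attribute, min_share=0.10):
    """Report attribute values whose share of the dataset falls
    below a minimum share (threshold is illustrative)."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    return {v: n / total for v, n in counts.items() if n / total < min_share}

# Toy dataset: Fitzpatrick skin type VI is badly under-represented
data = ([{"skin_tone": "I"}] * 45
        + [{"skin_tone": "III"}] * 50
        + [{"skin_tone": "VI"}] * 5)
print(representation_gaps(data, "skin_tone"))  # {'VI': 0.05}
```

A check like this would run during dataset curation, flagging groups that need more collected or augmented samples before training.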

3. Transparency & Human Oversight

  • EU AI Act Requirement: Users should be aware they are interacting with an AI system; meaningful human oversight is required.
  • Interpretation in Scenario:
    Doctors must be clearly informed that the diagnosis is AI-assisted and retain final decision-making authority. The system should offer explainability features (e.g., heatmaps on images to show reasoning).

4. Robustness, Accuracy, and Cybersecurity

  • EU AI Act Requirement: High-risk AI systems must be technically robust and secure.
  • Interpretation in Scenario:
    The AI tool must maintain high accuracy under diverse conditions and protect patient data from breaches. It should include fallback mechanisms if anomalies are detected.

5. Accountability and Documentation

  • EU AI Act Requirement: Maintain detailed technical documentation and logs to demonstrate compliance.
  • Interpretation in Scenario:
    The startup must document model architecture, training methodology, test results, and monitoring processes, and be ready to submit these to regulators if required.

6. Registration and CE Marking

  • EU AI Act Requirement: High-risk systems must be registered in an EU database and undergo conformity assessments.
  • Interpretation in Scenario:
    The startup must submit its system to a notified body, demonstrate compliance, and obtain CE marking before deployment.


Tags: Digital Ethics, EU AI Act, ISO 42001


Apr 01 2025

Things You May Not Want to Tell ChatGPT

Category: AI, Information Privacy | disc7 @ 8:37 am

Engaging with AI chatbots like ChatGPT offers numerous benefits, but it’s crucial to be mindful of the information you share to safeguard your privacy. Sharing sensitive data can lead to security risks, including data breaches or unauthorized access. To protect yourself, avoid disclosing personal identity details, medical information, financial account data, proprietary corporate information, and login credentials during your interactions with ChatGPT.
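One practical way to follow this advice is to scrub obvious sensitive values from a prompt before it ever leaves your machine. The patterns below are a minimal, illustrative sketch; real PII detection needs far broader coverage than three regular expressions.

```python
import re

# Illustrative patterns only; real PII detection needs more coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched sensitive values with placeholder tags
    before the text is sent to an external chatbot."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact me at jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))  # Contact me at [EMAIL], SSN [SSN].
```

Redaction is a complement to, not a substitute for, the account-level controls discussed below, such as disabling model training and using temporary chats.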

Chat histories with AI tools may be stored and could potentially be accessed by unauthorized parties, especially if the AI company faces legal actions or security breaches. To mitigate these risks, it’s advisable to regularly delete your conversation history and utilize features like temporary chat modes that prevent the saving of your interactions.

Implementing strong security measures can further enhance your privacy. Use robust passwords and enable multifactor authentication for your accounts associated with AI services. These steps add layers of security, making unauthorized access more difficult.

Some AI companies, including OpenAI, provide options to manage how your data is used. For instance, you can disable model training, which prevents your conversations from being utilized to improve the AI model. Additionally, opting for temporary chats ensures that your interactions aren’t stored or used for training purposes.

For tasks involving sensitive or confidential information, consider using enterprise versions of AI tools designed with enhanced security features suitable for professional environments. These versions often come with stricter data handling policies and provide better protection for your information.

By being cautious about the information you share and utilizing available privacy features, you can enjoy the benefits of AI chatbots like ChatGPT while minimizing potential privacy risks. Staying informed about the data policies of the AI services you use and proactively managing your data sharing practices are key steps in protecting your personal and sensitive information.

For further details, access the article here

DISC InfoSec’s earlier post on the AI topic

What You Are Not Told About ChatGPT: Key Insights into the Inner Workings of ChatGPT & How to Get the Most Out of It


Tags: AI Ethics, AI privacy, ChatGPT, Digital Ethics, privacy


Aug 28 2022

Digital Ethics Book Bundle

Category: cyber security, Information Security | DISC @ 12:53 pm

As technology advances, so must our ability to use such technology ethically. The rise of AI (artificial intelligence) and big data raises concerns about data privacy and cyber security. ITG have combined their latest titles into one bundle, saving you 20% – ideal for bank holiday reading.

Understand the growing social, ethical and security concerns of advancing technology with this new collection:

Digital Earth – Cyber threats, privacy and ethics in an age of paranoia

Artificial Intelligence – Ethical, social, and security impacts for the present and the future

The Art of Cyber Security – A practical guide to winning the war on cyber crime

Save 20% when you buy the Digital Ethics Book Bundle online (RRP: £80.85).

Tags: Digital Ethics