InfoSec and Compliance – With 20 years of blogging experience, DISC InfoSec blog is dedicated to providing trusted insights and practical solutions for professionals and organizations navigating the evolving cybersecurity landscape. From cutting-edge threats to compliance strategies, this blog is your reliable resource for staying informed and secure. Dive into the content, connect with the community, and elevate your InfoSec expertise!
Overview: This academic paper examines the growing ethical and regulatory challenges brought on by AI's integration with cybersecurity. It traces the evolution of AI regulation, highlights pressing concerns (bias, transparency, accountability, and data privacy), and emphasizes the tension between innovation and risk mitigation.
Key Insights:
AI systems raise unique privacy/security issues due to their opacity and lack of human oversight.
Current regulations are fragmented, varying by sector, with no unified global approach.
Bridging the regulatory gap requires improved AI literacy, public engagement, and cooperative policymaking to shape responsible frameworks.
Source: Authored by Vikram Kulothungan and published in January 2025, this paper cogently calls for a globally harmonized regulatory strategy and multi-stakeholder collaboration to ensure AI's secure deployment.
Why This Post Stands Out
Comprehensive: Tackles both cybersecurity and privacy within the AI context, not just one or the other.
Forward-Looking: Addresses systemic concerns, laying the groundwork for future regulation rather than retrofitting rules around current technology.
Action-Oriented: Frames AI regulation as a collaborative challenge involving policymakers, technologists, and civil society.
Additional Noteworthy Commentary on AI Regulation
1. Anthropic CEO's NYT Op-ed: A Call for Sensible Transparency
Anthropic CEO Dario Amodei criticized a proposed 10-year ban on state-level AI regulation as "too blunt." He advocates a federal transparency standard requiring AI developers to disclose testing methods, risk mitigation, and pre-deployment safety measures.
2. California's AI Policy Report: Guarding Against Irreversible Harms
A report commissioned by Governor Newsom warns of AI's potential to facilitate biological and nuclear threats. It advocates "trust but verify" frameworks, increased transparency, whistleblower protections, and independent safety validation.
3. Mutually Assured Deregulation: The Risks of a Race Without Guardrails
Gilad Abiri argues that dismantling AI safety oversight in the name of competition is dangerous. Deregulation doesn't give lasting advantages; it undermines long-term security, enabling proliferation of harmful AI capabilities like bioweapon creation or unstable AGI.
Broader Context & Insights
Fragmented Landscape: U.S. lacks unified privacy or AI laws; even executive orders remain limited in scope.
Data Risk: Many organizations suffer from unintended AI data exposure and poor governance despite having some policies in place.
Regulatory Innovation: Texas passed a law focusing only on government AI use, signaling a partial step toward regulation, but private-sector oversight remains limited.
International Efforts: The Council of Europe's AI Convention (2024) is a rare international treaty aligning AI development with human rights and democratic values.
Research Proposals: Techniques like blockchain-enabled AI governance are being explored as transparency-heavy, cross-border compliance tools.
Opinion
AI's pace of innovation is extraordinary, and so are its risks. We're at a crossroads where lack of regulation isn't a neutral stance: it accelerates inequity, privacy violations, and even public safety threats.
What's needed:
Layered Regulation: From sector-specific rules to overarching international frameworks, we need both precision and stability.
Transparency Mandates: Companies must be held to explicit standards covering model testing practices, bias mitigation, data usage, and safety protocols.
Public Engagement & Literacy: AI literacy shouldn't be limited to technologists. Citizens, policymakers, and enforcement institutions must be equipped to participate meaningfully.
Safety as Innovation Avenue: Strong regulation doesn't kill innovation; it guides it. Clear rules create reliable markets, investor confidence, and socially acceptable products.
The paper "Securing the AI Frontier" sets the right tone, urging collaboration, ethics, and systemic governance. Pair that with state-level transparency measures (like Newsom's report) and critiques of over-deregulation (like Abiri's essay), and we get a multi-faceted strategy toward responsible AI.
Connected vehicles have rapidly proliferated across Europe, brimming with sophisticated software, myriad sensors, and continuous connectivity. While these advancements deliver conveniences like remote-control features and intelligent navigation, they simultaneously expand the vehicle's digital attack surface: what enhances "smartness" inherently introduces fresh cybersecurity vulnerabilities.
A recent study, both technical and survey-based, questioned roughly 300 mostly European participants about their awareness and attitudes regarding smart-car security and privacy. The findings indicate that most people understand their vehicles share data with both manufacturers and third parties, particularly those driving newer models. Western Europeans showed greater awareness of these data flows than respondents from Eastern Europe.
Despite rising awareness, many drivers lack clarity about what "smart car" actually entails. Consumers tend to emphasize visible functionalities, such as self-driving aids or entertainment systems, while overlooking the less visible but critical issue of how data is managed, stored, or potentially exploited.
The existing regulatory environment is striving to catch up. Frameworks like UN R155 and R156, already in effect, mandate systematic cybersecurity management and secure software-update mechanisms for connected cars. Similarly, from July 2024, EU rules require that new vehicles cannot be registered unless they guarantee robust cybersecurity, pushing automakers toward "security by design."
Moreover, Europe is developing additional protective technologies. For example, the EU-funded SELFY project is building a toolkit to safeguard connected traffic systems, aiming to issue cybersecurity certificates and bolster defenses against cyber threats. The European Commission is also establishing protocols around testing, data recording, safety monitoring, and incident reporting for advanced automated and driverless vehicle systems.
Nevertheless, gaps remain, particularly between policy progress and public trust. Even as regulations evolve and technical tools mature, many vehicle users remain uncertain about the extent of data collection, storage, and sharing. Without stronger transparency, consumer trust is likely to lag behind technological and regulatory advancements.
Car Security and Privacy
Connected cars represent a defining shift in the mobility landscape, offering unprecedented convenience but accompanied by elevated risks. The central paradox is clear: as vehicles become more connected and intelligent, they become more exposed. This isn't just a matter of potential remote hacking; it's about data flow: where, how, and by whom vehicle data is used.
Europe is taking commendable steps by enforcing cybersecurity mandates (like R155/R156) and promoting proactive, security-by-design approaches. Projects like SELFY and structured regulatory initiatives around automated vehicles signal forward motion.
However, the real challenge lies in closing the trust gap. Many drivers still don't have a clear understanding of data practices. Communicating complex cybersecurity architecture in accessible terms is essential. Automakers and regulators must both educate and reassure, perhaps through public dashboards, standardized labels on data practices, or periodic transparency reports that explain what data is collected, why, who has access, and how it's protected.
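To make the transparency idea concrete, here is a minimal sketch of what a machine-readable data-practices label for a vehicle could look like. The field names and values are illustrative assumptions, not an existing industry schema:

```python
import json

# Hypothetical "data practices label" a manufacturer could publish per model;
# every field name here is illustrative, not drawn from any standard.
data_practices_label = {
    "vehicle_model": "Example EV 2025",
    "data_collected": [
        {"category": "location", "purpose": "navigation", "retention_days": 90},
        {"category": "driving_behavior", "purpose": "safety analytics", "retention_days": 365},
    ],
    "shared_with": ["manufacturer", "navigation_provider"],
    "driver_controls": {"opt_out_available": True, "export_supported": True},
    "protection": {"encrypted_in_transit": True, "encrypted_at_rest": True},
}

print(json.dumps(data_practices_label, indent=2))
```

A label like this could back both a public dashboard and a standardized sticker, letting buyers compare data practices across models the way they compare fuel economy.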
For drivers, vigilance remains crucial. Prioritize vehicles that support secure over-the-air updates, enforce two-factor authentication for vehicle apps, and carefully review privacy settings. As consumers, push for clarity and accountability: our vehicles shouldn't just be smart; they should also be secure and respectful of our privacy.
ISO 42001 is the international standard for AI Management Systems (AIMS), similar in structure to ISO 27001 for information security. Published in December 2023, the standard is still new to many organizations, but the main requirements for an internal audit of an ISO 42001 AIMS can be outlined based on common audit principles and the standard's clauses. Here's a structured view:
1. Audit Scope and Objectives
Define what parts of the AI management system will be audited (processes, teams, AI models, AI governance, data handling, etc.).
Ensure the audit covers all ISO 42001 clauses relevant to your organization.
Determine audit objectives (a sketch of a scope-and-objectives structure follows this list), e.g.:
Compliance with ISO 42001.
Effectiveness of risk management for AI.
Alignment with organizational AI strategy and policies.
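As a minimal sketch, the scope and objectives above could be captured in a structure like the following; the names and references are illustrative, not an official ISO template:

```python
# Illustrative ISO 42001 internal audit plan skeleton (not an official template).
audit_plan = {
    "scope": ["AI governance", "model lifecycle", "data handling", "third-party AI"],
    "exclusions": ["legacy rule-based systems outside the AIMS"],
    "objectives": [
        "Compliance with applicable ISO 42001 clauses",
        "Effectiveness of AI risk management",
        "Alignment with organizational AI strategy and policies",
    ],
    "criteria": ["ISO/IEC 42001:2023", "internal AI policy"],
    "auditors": ["internal auditor independent of the audited area"],
}
```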
2. Compliance with AIMS Requirements
Check whether the organization's AI management system meets ISO 42001 requirements, which include:
AI governance framework.
Risk management for AI (AI lifecycle, bias, safety, privacy).
Policies and procedures for AI development, deployment, and monitoring.
Data management and ethical AI principles.
Roles, responsibilities, and competency requirements for AI personnel.
3. Documentation and Records
Verify that documentation exists and is maintained, e.g.:
AI policies, procedures, and guidelines.
Risk assessments, impact assessments, and mitigation plans.
Training records and personnel competency evaluations.
Records of AI incidents, anomalies, or failures.
Audit logs of AI models and data handling activities.
4. Risk Management and Controls
Review whether risks related to AI (bias, safety, security, privacy) are identified, assessed, and mitigated.
Check implementation of controls:
Data quality and integrity controls.
Model validation and testing (see the validation-gate sketch after this list).
Human oversight and accountability mechanisms.
Compliance with relevant regulations and ethical standards.
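To illustrate the "model validation and testing" control, here is a minimal sketch of a pre-deployment validation gate; the metric names and thresholds are assumptions an organization would set for itself:

```python
# Minimal pre-deployment gate: block release unless documented accuracy and a
# simple group-fairness check both pass. Thresholds are illustrative.
def validation_gate(metrics: dict, min_accuracy: float = 0.90,
                    max_group_gap: float = 0.05) -> bool:
    accuracy_ok = metrics["accuracy"] >= min_accuracy
    group_scores = metrics["group_accuracy"].values()
    fairness_ok = (max(group_scores) - min(group_scores)) <= max_group_gap
    return accuracy_ok and fairness_ok

candidate = {"accuracy": 0.93, "group_accuracy": {"group_a": 0.94, "group_b": 0.91}}
assert validation_gate(candidate)  # passes; a failing model is remediated and retested
```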
5. Performance Monitoring and Improvement
Evaluate monitoring and measurement processes:
Metrics for AI model performance and compliance.
Monitoring of ethical and legal adherence.
Feedback loops for continuous improvement.
Assess whether corrective actions and improvements are identified and implemented.
6. Internal Audit Process Requirements
Audits should be planned, objective, and systematic.
Auditors must be independent of the area being audited.
Audit reports must include:
Findings (compliance, nonconformities, opportunities for improvement).
Recommendations.
Follow-up to verify closure of nonconformities.
7. Management Review Alignment
Internal audit results should feed into management reviews for:
AI risk mitigation effectiveness.
Resource allocation.
Policy updates and strategic AI decisions.
Key takeaway: An ISO 42001 internal audit is not just about checking boxes; it's about verifying that AI systems are governed, ethical, and risk-managed throughout their lifecycle, with evidence, controls, and continuous improvement in place.
An Internal Audit agreement aligned with ISO 42001 should include the following key components, each described below to ensure clarity and operational relevance:
🧭 Scope of Services
The agreement should clearly define the consultant's role in leading and advising the internal audit team. This includes directing the audit process, training team members on ISO 42001 methodologies, and overseeing all phases, from planning to reporting. It should also specify advisory responsibilities such as interpreting ISO 42001 requirements, identifying compliance gaps, and validating governance frameworks. The scope must emphasize the consultant's authority to review and approve all audit work to ensure alignment with professional standards.
📄 Deliverables
A detailed list of expected outputs should be included, such as a comprehensive audit report with an executive summary, gap analysis, and risk assessment. The agreement should also cover a remediation plan with prioritized actions, implementation guidance, and success metrics. Supporting materials like policy templates, training recommendations, and compliance monitoring frameworks should be outlined. Finally, it should ensure the development of a capable internal audit team and documentation of audit procedures for future use.
⏳ Timeline
The agreement must specify key milestones, including project start and completion dates, training deadlines, audit phase completion, and approval checkpoints for draft and final reports. This timeline ensures accountability and helps coordinate internal resources effectively.
💰 Compensation
This section should detail the total project fee, payment terms, and a milestone-based payment schedule. It should also clarify reimbursable expenses (e.g., travel) and note that internal team costs and facilities are the client's responsibility. Transparency in financial terms helps prevent disputes and ensures mutual understanding.
👥 Client Responsibilities
The client's obligations should be clearly stated, including assigning qualified internal audit team members, ensuring their availability, designating a project coordinator, and providing access to necessary personnel, systems, and facilities. The agreement should also require timely feedback on deliverables and commitment from the internal team to complete audit tasks under the consultant's guidance.
🎓 Consultant Responsibilities
The consultant's duties should include providing expert leadership, training the internal team, reviewing and approving all work products, maintaining quality standards, and being available for ongoing consultation. This ensures the consultant remains accountable for the integrity and effectiveness of the audit process.
🔐 Confidentiality
A robust confidentiality clause should protect proprietary information shared during the engagement. It should specify the duration of confidentiality obligations post-engagement and ensure that internal audit team members are bound by equivalent terms. This builds trust and safeguards sensitive data.
💡 Intellectual Property
The agreement should clarify ownership of work products, stating that outputs created by the internal team under the consultantâs guidance belong to the client. It should also allow the consultant to retain general methodologies and templates for future use, while jointly owning training materials and audit frameworks.
⚖️ Limitation of Liability
This clause should cap the consultant's liability to the total fee paid and exclude consequential or punitive damages. It should reinforce that ISO 42001 compliance is ultimately the client's responsibility, with the consultant providing guidance and oversight, not execution.
🛑 Termination
The agreement should include provisions for termination with advance notice, payment for completed work, delivery of all completed outputs, and survival of confidentiality obligations. It should also ensure that any training and knowledge transfer remains with the client post-termination.
📜 General Terms
Standard legal provisions should be included, such as independent contractor status, governing law, severability, and a clause stating that the agreement represents the entire understanding between parties. These terms provide legal clarity and protect both sides.
BruteForceAI is a free, open-source penetration testing tool that enhances traditional brute-force attacks by integrating large language models (LLMs). It automates identification of login form elements, such as username and password fields, by analyzing HTML content and deducing the correct selectors.
After mapping out the login structure, the tool conducts multi-threaded brute-force or password-spraying attacks. It simulates human-like behavior by randomizing timing, introducing slight delays, and varying the user agent, concealing its activity from conventional detection systems.
Intended for legitimate security use, BruteForceAI is geared toward authorized penetration testing, academic research, self-assessment of one's own applications, and participation in bug bounty programs, always within proper legal and ethical bounds. It is freely available on GitHub for practitioners to explore and deploy.
By combining intelligence-powered analysis and automated attack execution, BruteForceAI streamlines what used to be a tedious and manual process. It automates both discovery (login field detection) and exploitation (attack execution). This dual capability can significantly speed up testing workflows for security professionals.
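The article does not show BruteForceAI's internals, but conceptually the discovery step might look like the sketch below, where `ask_llm` is a hypothetical stand-in for whatever model client the tool actually uses:

```python
import json

# Conceptual sketch only: ask an LLM to read a login page and return CSS
# selectors for its fields. Prompt wording, parsing, and the model client
# are assumptions, not BruteForceAI's actual implementation.
PROMPT = ("Given this HTML, return JSON with CSS selectors for the "
          "username field, password field, and submit button:\n{html}")

def find_login_selectors(html: str, ask_llm) -> dict:
    raw = ask_llm(PROMPT.format(html=html))
    # Expected shape: {"username": "#user", "password": "#pass", "submit": "button[type=submit]"}
    return json.loads(raw)
```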
BruteForceAI
BruteForceAI represents a meaningful leap in how penetration testers can validate and improve authentication safeguards. On the positive side, its automation and intelligent behavior modeling could expedite thorough and realistic attack simulations, especially useful for uncovering overlooked vulnerabilities hidden in login logic or form implementations.
That said, such power is a double-edged sword. There's an inherent risk that malicious actors could repurpose the tool for unauthorized attacks, given its stealthy methods and automation. Its detection-evasion tactics, mimicking human activity to avoid being flagged, could be exploited by bad actors to evade traditional defenses. For defenders, this heightens the importance of deploying robust controls like rate limiting, behavioral monitoring, anomaly detection, and multi-factor authentication.
In short, as a security tool it's impressive and helpful, if used responsibly. Ensuring it remains in the hands of ethical professionals and not abused requires awareness, cautious deployment, and informed defense strategies.
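For defenders, one of the controls mentioned above, rate limiting, can start as simply as the sketch below; the window and attempt thresholds are illustrative, and a real deployment would pair this with lockouts, MFA, and behavioral monitoring:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # sliding window; illustrative value
MAX_ATTEMPTS = 5      # attempts allowed per window; illustrative value
_attempts: dict[str, deque] = defaultdict(deque)

def allow_login_attempt(key: str) -> bool:
    """Throttle login attempts per key (e.g., source IP or username)."""
    now = time.monotonic()
    window = _attempts[key]
    while window and now - window[0] > WINDOW_SECONDS:  # drop stale attempts
        window.popleft()
    if len(window) >= MAX_ATTEMPTS:
        return False  # too many recent attempts; reject or challenge
    window.append(now)
    return True
```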
Download
This tool is designed for responsible and ethical use, including authorized penetration testing, security research and education, testing your own applications, and participating in bug bounty programs within the proper scope.
Regulatory Alignment: ISO 42001 supports GDPR, HIPAA, and EU AI Act compliance.
Client Trust: Demonstrates responsible AI governance to enterprise clients.
Competitive Edge: Positions ShareVault as a forward-thinking, standards-compliant VDR provider.
Audit Readiness: Facilitates internal and external audits of AI systems and data handling.
If ShareVault were to pursue ISO 42001 certification, it would not only strengthen its AI governance but also reinforce its reputation in regulated industries like life sciences, finance, and legal services.
Here’s a tailored ISO/IEC 42001 implementation roadmap for a Virtual Data Room (VDR) provider like ShareVault, focusing on responsible AI governance, risk mitigation, and regulatory alignment.
🗺️ ISO/IEC 42001 Implementation Roadmap for ShareVault
Phase 1: Initiation & Scoping
🔹 Objective: Define the scope of AI use and align with business goals.
Identify AI-powered features (e.g., smart search, document tagging, access analytics).
Define scope of the AI Management System (AIMS): which systems, processes, and data are covered.
Appoint an AI Governance Lead or Steering Committee.
Phase 2: Gap Analysis & Risk Assessment
🔹 Objective: Understand current state vs. ISO 42001 requirements.
Conduct a gap analysis against ISO 42001 clauses.
Evaluate risks related to:
Data privacy (e.g., GDPR, HIPAA)
Bias in AI-driven document classification
Misuse of access analytics
Review existing controls and identify vulnerabilities.
Phase 3: Policy & Governance Framework
🔹 Objective: Establish foundational policies and oversight mechanisms.
Draft an AI Policy aligned with ethical principles and legal obligations.
Define roles and responsibilities for AI oversight.
Create procedures for:
Human oversight and intervention
Incident reporting and escalation
Lifecycle management of AI models
Phase 4: Data & Model Governance
🔹 Objective: Ensure trustworthy data and model practices.
Implement controls for training and testing data quality.
Document data sources, preprocessing steps, and validation methods.
Establish model documentation standards (e.g., model cards, audit trails; a skeleton model card follows this list).
Define retention and retirement policies for outdated models.
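A skeleton model card for a hypothetical VDR document-tagging feature might look like this; the fields and values are illustrative assumptions, not prescribed by ISO 42001:

```python
# Illustrative model card skeleton for an AI document-tagging feature.
model_card = {
    "model_name": "doc-tagger",
    "version": "1.3.0",
    "intended_use": "Suggest tags for uploaded documents; human review required",
    "training_data": {"sources": ["licensed document corpus"], "cutoff": "2024-12"},
    "evaluation": {"accuracy": 0.91, "dataset": "held-out client-like corpus"},
    "limitations": ["Lower accuracy on scanned, low-quality PDFs"],
    "retirement": {"review_by": "2026-06", "policy": "retire if accuracy drops below 0.85"},
}
```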
Phase 5: Operational Controls & Monitoring
🔹 Objective: Embed AI governance into daily operations.
Integrate AI risk controls into DevOps and product workflows.
Set up performance monitoring dashboards for AI features.
Enable logging and traceability of AI decisions (see the logging sketch after this list).
Conduct regular internal audits and reviews.
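The logging-and-traceability item above could start with a record like the one sketched here; the field names are assumptions, chosen so each AI decision can be traced and any human override recorded:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

log = logging.getLogger("ai_decisions")

def log_ai_decision(feature: str, model_version: str, inputs_ref: str,
                    output: str, confidence: float, overridden_by: str = ""):
    """Append one traceable AI decision record to the audit log."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "feature": feature,              # e.g., "smart_search" or "access_analytics"
        "model_version": model_version,
        "inputs_ref": inputs_ref,        # pointer to stored inputs, not raw data
        "output": output,
        "confidence": confidence,
        "overridden_by": overridden_by,  # non-empty when a human intervened
    }
    log.info(json.dumps(record))
```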
Phase 6: Stakeholder Engagement & Transparency
🔹 Objective: Build trust with users and clients.
Communicate AI capabilities and limitations clearly in the UI.
Provide opt-out or override options for AI-driven decisions.
Engage clients in defining acceptable AI behavior and use cases.
Train staff on ethical AI use and ISO 42001 principles.
Phase 7: Certification & Continuous Improvement
🔹 Objective: Achieve compliance and evolve responsibly.
Prepare documentation for ISO 42001 certification audit.
Conduct mock audits and address gaps.
Establish feedback loops for continuous improvement.
Monitor regulatory changes (e.g., EU AI Act, U.S. AI bills) and update policies accordingly.
🧠 Bonus Tip: Align with Other Standards
ShareVault can integrate ISO 42001 with:
ISO 27001 (Information Security)
ISO 9001 (Quality Management)
SOC 2 (Trust Services Criteria)
EU AI Act (for high-risk AI systems)
Below is a visual roadmap for implementing ISO/IEC 42001, tailored to a Virtual Data Room (VDR) provider like ShareVault:
🗂️ ISO 42001 Implementation Roadmap for VDR Providers
Each phase is mapped to a monthly milestone, showing how AI governance can be embedded step-by-step:
📌 Milestone Highlights
Month 1 – Initiation & Scoping: Define AI use cases (e.g., smart search, access analytics), map stakeholders, appoint a governance lead.
Month 2 – Gap Analysis & Risk Assessment: Evaluate risks like bias in document tagging, privacy breaches, and misuse of analytics.
Month 3 – Policy & Governance Framework: Draft the AI policy, define oversight roles, and create procedures for human intervention and incident handling.
Month 4 – Data & Model Governance: Implement controls for training data, document model behavior, and set retention policies.
Month 5 – Operational Controls & Monitoring: Embed governance into workflows, monitor AI performance, and conduct internal audits.
Month 6 – Stakeholder Engagement & Transparency: Communicate AI capabilities to users, engage clients in ethical discussions, and train staff.
Month 7 – Certification & Continuous Improvement: Prepare for the ISO audit, conduct mock assessments, and monitor evolving regulations like the EU AI Act.
The age of AI-assisted hacking is no longer looming; it's here. Hackers of all stripes, from state actors to cybercriminals, are now integrating AI tools into their operations, while defenders are racing to catch up.
Key Developments
In mid-2025, Russian intelligence reportedly sent phishing emails to Ukrainians containing AI-powered attachments that automatically scanned victims' computers for sensitive files and transmitted them back to Russia (NBC Bay Area).
AI models like ChatGPT have become highly adept at translating natural language into code, helping hackers automate their work and scale operations (NBC Bay Area).
AI hasn't ushered in a hacking revolution that enables novices to bring down power grids, but it is significantly enhancing the efficiency and reach of skilled hackers (NBC Bay Area).
On the Defensive Side
Cybersecurity defenders are also turning to AI: Google's "Gemini" model helped identify over 20 software vulnerabilities, speeding up bug detection and patching.
Alexei Bulazel of the White House's National Security Council believes defenders currently hold a slight edge over attackers, thanks to America's tech infrastructure, but that balance may shift as agentic (autonomous) AI tools proliferate.
A notable milestone: an AI called "Xbow" topped the HackerOne leaderboard, prompting the platform to create a separate category for AI-generated hacking tools.
My Take
This article paints a vivid picture of an escalating AI arms race in cybersecurity. My view? It's a dramatic turning point:
AI is already tipping the scale, but not overwhelmingly. Hackers are more efficient, but full-scale automated digital threats haven't arrived. Still, what used to require deep expertise is becoming accessible to more people.
Defenders aren't standing idle. AI-assisted scanning and rapid vulnerability detection are powerful tools in the white-hat arsenal, and may remain decisive, especially when backed by robust tech ecosystems.
The real battleground is trust. As AI makes exploits more sophisticated and deception more believable (e.g., deepfakes or phishing), trust becomes the most vulnerable asset. This echoes broader reports showing attacks are increasingly AI-powered, whether via deceptive audio/video or tailored phishing campaigns.
Vigilance must evolve. Automated defenses and rapid detection will be key. Organizations should also invest in digital literacy, training humans to recognize deception even as AI tools become ever more convincing.
Related Reading Highlights
Here are some recent news pieces that complement the NBC article, reinforcing the duality of AI's role in cyber threats:
This book positions itself not just as a technical guide but as a strategic roadmap for the future of cybersecurity leadership. It emphasizes that in today's complex threat environment, CISOs must evolve beyond technical mastery and step into the role of business leaders who weave cybersecurity into the very fabric of organizational strategy.
The core message challenges the outdated view of CISOs as purely technical experts. Instead, it calls for a strategic shift toward business alignment, measurable risk management, and adoption of emerging technologies like AI and machine learning. This evolution reflects growing expectations from boards, executives, and regulators, expectations that CISOs must now meet with business fluency, not just technical insight.
The book goes further by offering actionable guidance, case studies, and real-world examples drawn from extensive experience across hundreds of security programs. It explores practical topics such as risk quantification, cyber insurance, and defining materiality, filling the gap left by more theory-heavy resources.
For aspiring CISOs, the book provides a clear path to transition from technical expertise to strategic leadership. For current CISOs, it delivers fresh insight into strengthening business acumen and boardroom credibility, enabling them to better drive value while protecting organizational assets.
My thought: This book's strength lies in recognizing that the modern CISO role is no longer just about defending networks but about enabling business resilience and trust. By blending strategy with technical depth, it seems to prepare security leaders for the boardroom-level influence they now require. In an era where cybersecurity is a business risk, not just an IT issue, this perspective feels both timely and necessary.
The EU AI Act establishes harmonized rules for AI systems in the EU, prohibitions on certain AI practices, requirements for high-risk AI, transparency rules, market surveillance, and innovation support.
1. Overview: How the AI Act Treats Open-Source vs. Closed-Source Models
The EU AI Act (formalized in 2024) regulates AI systems using a risk-based framework that ranges from unacceptable to minimal risk. It also includes a specific layer for general-purpose AI (GPAI): "foundation models" like large language models.
Open-source models enjoy limited exemptions, especially if:
They're not high-risk,
Not unsafe or interacting directly with individuals,
Not monetized,
Or not deemed to present systemic risk.
Closed-source (proprietary) models don't benefit from such leniency and must comply with all applicable obligations across risk categories.
2. Benefits of Open-Source Models under the AI Act
a) Greater Transparency & Documentation
Open-source code, weights, and architecture are accessible by default, aligning with transparency expectations (e.g., model cards, training data logs), and are often already publicly documented.
Independent auditing becomes more feasible through community visibility.
A Stanford study found open-source models tend to comply more readily with data and compute transparency requirements than closed-source alternatives.
b) Lower Compliance Burden (in Certain Cases)
Exemptions: Non-monetized open-source models that don't pose systemic risk may dodge burdensome obligations like documentation or designated representatives.
For academic or purely scientific purposes, there's additional leniency, even if models are open-source.
c) Encourages Innovation, Collaboration & Inclusion
Open-source democratizes AI access, reducing barriers for academia, startups, nonprofits, and regional players.
Wider collaboration speeds up innovation and enables localization (e.g., fine-tuning for local languages or use cases).
Diverse contributors help surface bias and ethical concerns, making models more inclusive.
3. Drawbacks of Open-Source under the AI Act
a) Disproportionate Regulatory Burden
The Act's "one-size-fits-all" approach imposes heavy requirements (like ten-year documentation and third-party audits) even on decentralized, collectively developed models, raising feasibility concerns.
Who carries responsibility in distributed, open environments remains unclear.
b) Loopholes and Misuse Risks
The Act's light treatment of non-monetized open-source models could be exploited by malicious actors to skirt regulations.
Open-source models can be modified or misused to generate disinformation, deepfakes, or hate content, without the safeguards that closed systems enforce.
c) Still Subject to Core Obligations
Even under exemptions, open-source GPAI must still:
Disclose training content,
Respect EU copyright laws,
Possibly appoint authorized representatives if systemic risk is suspected.
d) Additional Practical & Legal Complications
Licensing: Some so-called "open-source" models carry restrictive terms (e.g., commercial restrictions, copyleft provisions) that may hinder compliance or downstream use.
Support disclaimers: Open-source licenses typically disclaim warranties, risking liability gaps.
Security vulnerabilities: Public availability of code may expose models to tampering or release of harmful versions.
4. Closed-Source Models: Benefits & Drawbacks
Benefits
Able to enforce usage restrictions, internal safety mechanisms, and fine-grained control over deploymentâreducing misuse risk.
Clear compliance path: centralized providers can manage documentation, audits, and risk mitigation systematically.
Stable liability chain, with better alignment to legal frameworks.
Drawbacks
Less transparency: core workings are hidden, making audits and oversight harder.
Higher compliance burden: must meet all applicable obligations across risk categories without the possibility of exemptions.
Innovation lock-in: smaller players and researchers may face high entry barriers.
5. Synthesis: Choosing Between Open-Source and Closed-Source under the AI Act
| Dimension | Open-Source | Closed-Source |
| --- | --- | --- |
| Transparency & Auditing | High: code, data, and model accessible | Low: black-box systems |
| Regulatory Burden | Lower for non-monetized, low-risk models; heavy for complex, high-risk cases | Uniformly high, though manageable by central entities |
Under the EU AI Act, open-source AI is recognized and, in some respects, encouraged, but only under narrow, carefully circumscribed conditions. When models are non-monetized, low-risk, or aimed at scientific research, open-source opens up paths for innovation. The transparency and collaborative dynamics are strong virtues.
However, when open-source intersects with high risk, monetization, or systemic potential, the Act tightens its grip, subjecting models to many of the same obligations as proprietary ones. Worse, ambiguity in responsibility and enforcement may undermine both innovation and safety.
Conversely, closed-source models offer regulatory clarity, security, and control; but at the cost of transparency, higher compliance burden, and restricted access for smaller players.
TL;DR
Choose open-source if your goal is transparency, inclusivity, and innovation, so long as you keep your model non-monetized, transparently documented, and low-risk.
Choose closed-source when safety, regulatory oversight, and controlled deployment are paramount, especially in sensitive or high-risk applications.
As AI adoption accelerates, especially in regulated or high-impact sectors, the European Union is setting the bar for responsible development. Article 15 of the EU AI Act lays out clear obligations for providers of high-risk AI systems, focusing on accuracy, robustness, and cybersecurity throughout the AI system's lifecycle. Here's what that means in practice, and why it matters now more than ever.
1. Security and Reliability From Day One
The AI Act demands that high-risk AI systems be designed with integrity and resilience from the ground up. That means integrating controls for accuracy, robustness, and cybersecurity not only at deployment but throughout the entire lifecycle. It's a shift from reactive patching to proactive engineering.
2. Accuracy Is a Design Requirement
Gone are the days of vague performance promises. Under Article 15, providers must define and document expected accuracy levels and metrics in the user instructions. This transparency helps users and regulators understand how the system should perform, and flags any deviation from those expectations.
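As a hedged illustration of this obligation, a provider might declare expected accuracy in the user instructions and flag runtime deviation like this; the metric and tolerance are example values, not figures taken from the Act:

```python
# Declared performance from the user instructions (example values).
DECLARED = {"metric": "F1", "expected": 0.92, "tolerance": 0.03}

def check_deviation(observed: float) -> bool:
    """Return True if observed performance deviates from the documented level."""
    deviates = abs(observed - DECLARED["expected"]) > DECLARED["tolerance"]
    if deviates:
        print(f"ALERT: observed {DECLARED['metric']}={observed:.2f} is outside "
              f"the declared range; investigate and log for market surveillance")
    return deviates
```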
3. Guarding Against Exploitation
AI systems must also be robust against manipulation, whether it's malicious input, adversarial attacks, or system misuse. This includes protecting against changes to the AI's behavior, outputs, or performance caused by vulnerabilities or unauthorized interference.
4. Taming Feedback Loops in Learning Systems
Some AI systems continue learning even after deployment. That's powerful, but dangerous if not governed. Article 15 requires providers to minimize or eliminate harmful feedback loops, which could reinforce bias or lead to performance degradation over time.
5. Compliance Isn't Optional: It's Auditable
The Act calls for documented procedures that demonstrate compliance with accuracy, robustness, and security standards. This includes verifying third-party contributions to system development. Providers must be ready to show their work to market surveillance authorities (MSAs) on request.
6. Leverage the Cyber Resilience Act
If your high-risk AI system also falls under the scope of the EU Cyber Resilience Act (CRA), good news: meeting the CRA's essential cybersecurity requirements can also satisfy the AI Act's demands. Providers should assess the overlap and streamline their compliance strategies.
7. Don't Forget the GDPR
When personal data is involved, Article 15 interacts directly with the GDPR, especially Articles 5(1)(d), 5(1)(f), and 32, which address accuracy and security. If your organization is already GDPR-compliant, you're on the right track, but Article 15 still demands additional technical and operational precision.
Final Thought:
Article 15 raises the bar for how we build, deploy, and monitor high-risk AI systems. It doesn't just aim to prevent failures; it pushes providers to deliver trustworthy, resilient, and secure AI from the start. For organizations that embrace this proactively, it's not just about avoiding fines; it's about building AI systems that earn trust and deliver long-term value.
Transforming Cybersecurity & Compliance into Strategic Strength
In an era of ever-tightening regulations and ever-evolving threats, Deura InfoSec Consulting (DISC LLC) stands out by turning compliance from a checkbox into a proactive asset.
🛡️ What We Offer: Core Services at a Glance
1. vCISO Services
Access seasoned CISO-level expertise without the cost of a full-time executive. Our vCISO services provide strategic leadership, ongoing security guidance, executive reporting, and risk management aligned with your business needs.
2. Compliance & Certification Support
Whether you’re targeting ISO 27001, ISO 27701, ISO 42001, NIST, GDPR, SOCâŻ2, HIPAA, or PCI DSS, DISC supports your entire journeyâfrom assessments and gap analysis to policy creation, control implementation, and audit preparation.
3. Security Risk Assessments
Identify risks across infrastructure, cloud, vendors, and business-critical systems using frameworks such as MITRE ATT&CK (via CALDERA), with actionable risk scorecards and remediation roadmaps.
4. Risk-based Strategic Planning
We bridge the gap from your current ("as-is") security state to your desired ("to-be") maturity level. Our process includes strategic roadmapping, metrics to measure progress, and embedding business-aligned security into operations.
5. Security Awareness & Training
Equip your workforce and leadership with tailored training programsâranging from executive briefings to role-based educationâin vital areas like governance, compliance, and emerging threats.
6. Penetration Testing & Tool Oversight
Using top-tier tools like Burp Suite Pro and OWASP ZAP, DISC uncovers vulnerabilities in web applications and APIs. These assessments are accompanied by remediation guidance and optional managed detection support.
7. AIMS & Data Governance
At DISC LLC, we help organizations harness the power of data and artificial intelligence, responsibly. Our AIMS (Artificial Intelligence Management System) & Data Governance solutions are designed to reduce risk, ensure compliance, and build trust. We implement governance frameworks that align with ISO 27001, ISO 27701, ISO 42001, GDPR, the EU AI Act, HIPAA, and CCPA, supporting both data accuracy and AI accountability. From data classification policies to ethical AI guidelines, bias monitoring, and performance audits, our approach ensures your AI and data strategies are transparent, secure, and future-ready. By integrating AI and data governance, DISC empowers you to lead with confidence in a rapidly evolving digital world.
🔍 Why DISC Works
Fixed-fee, hands-on approach: No bloated documents, just precise and efficient delivery aligned with your needs.
Expert-led services: With 20+ years in security and compliance, DISC's consultants guide you at every stage.
Audit-ready processes: Leverage frameworks and tools like a GRC platform to streamline compliance, reduce overhead, and stay audit-ready.
Tailored to SMBs & enterprises: From startups to established firms, DISC crafts solutions scalable to your size and skillset.
🚀 Ready to Elevate Your Security?
DISC LLC is more than a service provider; it's your long-term advisor. Whether you're combating cyber risk or scaling your compliance posture, our services deliver predictable value and empower you to make security a strategic advantage.
Get started today with a free consultation, including a one-hour session with a vCISO, to see where your organization stands, and where it needs to go.
IBM's latest Cost of a Data Breach Report (2025) highlights a growing and costly issue: "shadow AI", where employees use generative AI tools without IT oversight, is significantly raising breach expenses. Around 20% of organizations reported breaches tied to shadow AI, and those incidents carried an average $670,000 premium per breach compared to firms with minimal or no shadow AI exposure (IBM, Cybersecurity Dive).
The latest IBM/Ponemon Institute report reveals that the global average cost of a data breach fell by 9% in 2025, down to $4.44 million, the first decline in five years, mainly driven by faster breach identification and containment thanks to AI and automation. However, in the United States, breach costs surged 9%, reaching a record high of $10.22 million, attributed to higher regulatory fines, rising detection and escalation expenses, and slower AI governance adoption. Despite rapid AI deployment, many organizations lag in establishing oversight: about 63% have no AI governance policies, and some 87% lack AI risk mitigation processes, increasing exposure to vulnerabilities like shadow AI. Shadow AI-related breaches tend to cost more, adding roughly $200,000 per incident, and disproportionately involve compromised personally identifiable information and intellectual property. While AI is accelerating incident resolution, which for the first time dropped to an average of 241 days, the speed of adoption is creating a security oversight gap that could amplify long-term risks unless governance and audit practices catch up (IBM).
2. Although only 13% of organizations surveyed reported breaches involving AI models or tools, a staggering 97% of those lacked proper AI access controls, showing that even a small number of incidents can have profound consequences when governance is poor (IBM Newsroom).
3. When shadow AI-related breaches occurred, they disproportionately compromised critical data: personally identifiable information in 65% of cases and intellectual property in 40%, both higher than global averages for all breaches.
4. The absence of formal AI governance policies is striking. Nearly two-thirds (63%) of breached organizations either don't have AI governance in place or are still developing it. Even among those with policies, many lack approval workflows or audit processes for unsanctioned AI usage: fewer than half conduct regular audits, and 61% lack governance technologies.
5. Despite advances in AI-driven security tools that help reduce detection and containment times (now averaging 241 days, a nine-year low), the rapid, unchecked rollout of AI technologies is creating what IBM refers to as security debt, making organizations increasingly vulnerable over time.
6. Attackers are integrating AI into their playbooks as well: 16% of breaches studied involved use of AI tools, particularly for phishing schemes and deepfake impersonations, complicating detection and remediation efforts.
7. The financial toll remains steep. While the global average breach cost has dropped slightly to $4.44 million, US organizations now average a record $10.22 million per breach. In many cases, businesses reacted by raising prices, with nearly one-third implementing hikes of 15% or more following a breach.
8. IBM recommends strengthening AI governance via root practices: access control, data classification, audit and approval workflows, employee training, collaboration between security and compliance teams, and use of AI-powered security monitoring. Investing in these practices can help organizations adopt AI safely and responsibly (IBM).
🧠 My Take
This report underscores how shadow AI isn't just a budding IT curiosity; it's a full-blown risk factor. The allure of convenient AI tools leads to shadow adoption, and without oversight, vulnerabilities compound rapidly. The financial and operational fallout can be severe, particularly when sensitive or proprietary data is exposed. While automation and AI-powered security tools are bringing detection times down, they can't fully compensate for the lack of foundational governance.
Organizations must treat AI not as an optional upgrade, but as core infrastructure requiring the same rigour: visibility, policy control, audits, and education. Otherwise, they risk building a house of cards: fast growth over fragile ground. The right blend of technology and policy isn't optional; it's essential to prevent shadow AI from becoming a shadow crisis.
President Trump's long-anticipated 20-page "AI Action Plan" was unveiled during his "Winning the AI Race" speech in Washington, D.C. The document outlines a wide-ranging federal push to accelerate U.S. leadership in artificial intelligence.
The plan is built around three central pillars: Infrastructure, Innovation, and Global Influence. Each pillar includes specific directives aimed at streamlining permitting, deregulating, and boosting American influence in AI globally.
Under the infrastructure pillar, the plan proposes fast-tracking data center permitting and modernizing the U.S. electrical grid, including expanding new power sources, to meet AI's intensive energy demands.
On innovation, it calls for removing regulatory red tape, promoting open-weight (open-source) AI models for broader adoption, and federal efforts to pre-empt or symbolically block state AI regulations to create uniform national policy.
The global influence component emphasizes exporting American-built AI models and chips to allies to forestall dependence on Chinese AI technologies such as DeepSeek or Qwen, positioning U.S. technology as the global standard.
A series of executive orders complemented the strategy, including one to ban "woke" or ideologically biased AI in federal procurement, requiring that models be "truthful," neutral, and free from DEI or political content.
The plan also repealed or rescinded previous Biden-era AI regulations and dismantled the AI Safety Institute, replacing it with a pro-innovation U.S. Center for AI Standards and Innovation focused on economic growth rather than ethical guardrails.
Workforce development received attention through new funding streams, AI literacy programs, and the creation of a Department of Labor AI Workforce Research Hub. These seek to prepare for economic disruption but are limited in scope compared to the scale of potential AI-driven change.
Observers have praised the emphasis on domestic infrastructure, streamlined permitting, and investment in open-source models. Yet critics warn that corporate interests, especially from major tech and energy industries, may benefit most, sometimes at the expense of public safeguards and long-term viability.
⚠️ Lack of regulatory guardrails
The AI Action Plan notably lacks meaningful guardrails or regulatory frameworks. It strips back environmental permitting requirements, discourages state-level regulation by threatening funding withdrawals, bans ideological considerations like DEI from federal AI systems, and eliminates previously established safety standards. While advocating a "try-first" deployment mindset, the strategy overlooks critical issues ranging from bias, misinformation, copyright, and data use to climate impact and energy strain. Experts argue this deregulation-heavy stance risks creating brittle, misaligned, and unsafe AI ecosystems with little accountability or public oversight.
Below is a comparison of Trump's AI Action Plan and the EU AI Act, focusing on guardrails, safety, security, human rights, and accountability:
1. Regulatory Guardrails
EU AI Act: Introduces a risk-based regulatory framework. High-risk AI systems (e.g., in critical infrastructure, law enforcement, and health) must comply with strict obligations before deployment. There are clear enforcement mechanisms with penalties for non-compliance.
Trump AI Plan: Focuses on deregulation and rapid deployment, removing many guardrails such as environmental and ethical oversight. It rescinds Biden-era safety mandates and discourages state-level regulation, offering minimal federal oversight or compliance mandates.
➡ Verdict: The EU prioritizes regulated innovation, while the Trump plan emphasizes unregulated speed and growth.
2. AI Safety
EU AI Act: Requires transparency, testing, documentation, and human oversight for high-risk AI systems. Emphasizes pre-market evaluation and post-market monitoring for safety assurance.
Trump AI Plan: Shutters the U.S. AI Safety Institute and replaces it with a pro-growth Center for AI Standards, focused more on competitiveness than technical safety. No mandatory safety evaluations for commercial AI systems.
➡ Verdict: The EU mandates safety as a prerequisite; the U.S. plan defers safety to industry discretion.
3. Cybersecurity and Technical Robustness
EU AI Act: Requires cybersecurity-by-design for AI systems, including resilience against manipulation or data poisoning. High-risk AI systems must ensure integrity, robustness, and resilience.
Trump AI Plan: Encourages rapid development and deployment but provides no explicit cybersecurity requirements for AI models or infrastructure beyond vague infrastructure support.
➡ Verdict: The EU embeds security controls, while the Trump plan omits structured cyber risk considerations.
4. Human Rights and Discrimination
EU AI Act: Prohibits AI systems that pose unacceptable risks to fundamental rights (e.g., social scoring, manipulative behavior). Strong safeguards for non-discrimination, privacy, and civil liberties.
Trump AI Plan: Bans AI models in federal use that promote "woke" or DEI-related content, aiming for so-called "neutrality." Critics argue this amounts to ideological filtering, not real neutrality, and may undermine protections for marginalized groups.
➡ Verdict: The EU safeguards rights through legal obligations; the U.S. approach is politicized and lacks rights-based protections.
5. Accountability and Oversight
EU AI Act: Creates a comprehensive governance structure including a European AI Office and national supervisory authorities. Clear roles for compliance, enforcement, and redress.
Trump AI Plan: No formal accountability mechanisms for private AI developers or federal use beyond procurement preferences. Lacks redress channels for affected individuals.
➡ Verdict: The EU embeds accountability through regulation; Trump's plan leaves accountability vague and market-driven.
6. Transparency Requirements
EU AI Act: Requires AI systems (especially those interacting with humans) to disclose their AI nature. High-risk models must document datasets, performance, and design logic.
Trump AI Plan: No transparency mandates for AI models, either in federal procurement or commercial deployment.
➡ Verdict: The EU enforces transparency, while the Trump plan favors developer discretion.
7. Bias and Fairness
EU AI Act: Demands bias detection and mitigation for high-risk AI, with auditing and dataset scrutiny.
Trump AI Plan: Frames anti-bias mandates (like DEI or fairness audits) as ideological interference, and bans such requirements from federal procurement.
➡ Verdict: The EU takes bias seriously as a safety issue; Trump's plan politicizes and rejects fairness frameworks.
8. Stakeholder and Public Participation
EU AI Act: Drafted after years of consultation with stakeholders: civil society, industry, academia, and governments.
Trump AI Plan: Developed behind closed doors with little public engagement and strong industry influence, especially from tech and energy sectors.
➡ Verdict: The EU Act is consensus-based, while Trump's plan is executive-driven.
9. Strategic Approach
EU AI Act: Balances innovation with protection, ensuring AI benefits society while minimizing harm.
Trump AI Plan: Views AI as an economic and geopolitical race, prioritizing speed, scale, and market dominance over systemic safeguards.
⚠️ Conclusion: Lack of Guardrails in the Trump AI Plan
The Trump AI Action Plan aggressively promotes AI innovation but does so by removing guardrails rather than installing them. It lacks structured safety testing, human rights protections, bias mitigation, and cybersecurity controls. With no regulatory accountability, no national AI oversight body, and an emphasis on ideological neutrality over ethical safeguards, it risks unleashing AI systems that are fast and powerful, but potentially misaligned, unsafe, and unjust.
In contrast, the EU AI Act may slow innovation at times but ensures it unfolds within a trusted, accountable, and rights-respecting framework. The contrast positions the U.S. as prioritizing rapid innovation with minimal oversight, while the EU takes a structured, rules-based approach to AI development. Calling the U.S. the "Wild Wild West" of AI governance isn't far off: it captures the perception that, in the U.S., AI developers operate with few legal constraints, limited government oversight, and an emphasis on market freedom rather than public safeguards.
A Nation of Laws or a Race Without Rules?
America has long stood as a beacon of democratic governance, built on the foundation of laws, accountability, and institutional checks. But in the race to dominate artificial intelligence, that tradition appears to be slipping. The Trump AI Action Plan prioritizes speed over safety, deregulation over oversight, and ideology over ethical alignment.
In stark contrast, the EU AI Act reflects a commitment to structured, rights-based governance â even if it means moving slower. This emerging divide raises a critical question: Is the U.S. still a nation of laws when it comes to emerging technologies, or is it becoming the Wild West of AI?
If America aims to lead the world in AI, not just through dominance but by earning global trust, it may need to return to the foundational principles that once positioned it as a leader in setting international standards, rather than treating non-compliance as a mere business expense. Notably, Meta has chosen not to sign the EU's voluntary Code of Practice for general-purpose AI (GPAI) models.
The penalties outlined in the EU AI Act do enforce compliance. The Act is equipped with substantial enforcement provisions to ensure that operators, such as AI providers, deployers, importers, and distributors, adhere to its rules. As an example, consider the question below: what is the appropriate penalty for an explicitly prohibited use of an AI system under the EU AI Act?
A technology company was found to be using an AI system for real-time remote biometric identification, which is explicitly prohibited by the AI Act. What is the appropriate penalty for this violation?
A) A formal warning without financial penalties
B) An administrative fine of up to €7.5 million or 1% of the total global annual turnover in the previous financial year
C) An administrative fine of up to €15 million or 3% of the total global annual turnover in the previous financial year
D) An administrative fine of up to €35 million or 7% of the total global annual turnover in the previous financial year
Integrating ISO standards across business functions, particularly Governance, Risk, and Compliance (GRC), has become not just a best practice but a necessity in the age of Artificial Intelligence (AI). As AI systems increasingly permeate operations, decision-making, and customer interactions, the need for standardized controls, accountability, and risk mitigation is more urgent than ever. ISO standards provide a globally recognized framework that ensures consistency, security, quality, and transparency in how organizations adopt and manage AI technologies.
In the GRC domain, ISO standards like ISO/IEC 27001 (information security), ISO/IEC 38500 (IT governance), ISO 31000 (risk management), and ISO/IEC 42001 (AI management systems) offer a structured approach to managing risks associated with AI. These frameworks guide organizations in aligning AI use with regulatory compliance, internal controls, and ethical use of data. For example, ISO 27001 helps in safeguarding data fed into machine learning models, while ISO 31000 aids in assessing emerging AI risks such as bias, algorithmic opacity, or unintended consequences.
The integration of ISO standards helps unify siloed departments, such as IT, legal, HR, and operations, by establishing a common language and baseline for risk and control. This cohesion is particularly crucial when AI is used across multiple departments. AI doesn't respect organizational boundaries, and its risks ripple across all functions. Without standardized governance structures, businesses risk deploying fragmented, inconsistent, and potentially harmful AI systems.
ISO standards also support transparency and accountability in AI deployment. As regulators worldwide introduce new AI regulations, such as the EU AI Act, standards like ISO/IEC 42001 help organizations demonstrate compliance, build trust with stakeholders, and prepare for audits. This is especially important in industries like healthcare, finance, and defense, where the margin for error is small and ethical accountability is critical.
Moreover, standards-driven integration supports scalability. As AI initiatives grow from isolated pilot projects to enterprise-wide deployments, ISO frameworks help maintain quality and control at scale. ISO 9001, for instance, ensures continuous improvement in AI-supported processes, while ISO/IEC 27017 and 27018 address cloud security and data privacy, key concerns for AI systems operating in the cloud.
AI systems also introduce new third-party and supply chain risks. ISO standards such as ISO/IEC 27036 help in managing vendor security, and when integrated into GRC workflows, they ensure AI solutions procured externally adhere to the same governance rigor as internal developments. This is vital in preventing issues like AI-driven data breaches or compliance gaps due to poorly vetted partners.
Importantly, ISO integration fosters a culture of risk-aware innovation. Instead of slowing down AI adoption, standards provide guardrails that enable responsible experimentation and faster time to trust. They help organizations embed privacy, ethics, and accountability into AI from the design phase, rather than retrofitting compliance after deployment.
In conclusion, ISO standards are no longer optional checkboxes; they are strategic enablers in the age of AI. For GRC leaders, integrating these standards across business functions ensures that AI is not only powerful and efficient but also safe, transparent, and aligned with organizational values. As AI's influence grows, ISO-based governance will distinguish mature, trusted enterprises from reckless adopters.
What does BS ISO/IEC 42001 – Artificial intelligence management system cover? BS ISO/IEC 42001:2023 specifies requirements and provides guidance for establishing, implementing, maintaining and continually improving an AI management system within the context of an organization.
ISO/IEC 42001:2023 – from establishing to maintaining an AI management system.
ISO/IEC 27701:2019 Standard – Published in August 2019, ISO 27701 is a standard for information and data privacy. Your organization can benefit from integrating ISO 27701 with your existing security management system, as doing so can help you comply with the GDPR and improve your data security.
In today's landscape, cyber threats are no longer a question of "if" but "when." The financial and reputational costs of data breaches can be devastating. Traditionally, encryption has served as the frontline defense, locking data away. But tokenization offers a different, and arguably superior, approach: remove sensitive data entirely, and hackers end up breaking into an empty vault.
Tokenization works much like casino chips. Instead of walking around with cash, players use chips that only hold value within the casino. If stolen, these chips are useless outside the establishment. Similarly, sensitive information (like credit card numbers) is stored in a highly secure "token vault." The system returns a non-sensitive, randomized token to your application: a placeholder with zero intrinsic value.
Once your systems are operating solely with tokens, real data never touches them. This minimizes the risk: even if your servers are compromised, attackers only obtain meaningless tokens. The sensitive data remains locked away, accessible only through secure channels to the token vault.
Tokenization significantly reduces your "risk profile." Without sensitive data in your environment, the biggest asset that cybercriminals target disappears. This process, often referred to as "data de-scoping," eliminates your core liability: if you don't store sensitive data, you can't lose it.
For businesses handling payment cards, tokenization simplifies compliance with PCI DSS. Most mandates apply only when real cardholder data enters your systems. By outsourcing tokenization to a certified provider, you dramatically shrink your audit scope and compliance burden, translating into cost and time savings.
Unlike many masking methods, tokenization preserves the utility of data. Tokens can mirror the format of the original data, such as 16-digit numbers preserving the last four digits. This allows you to perform analytics, generate reports, and support loyalty systems without ever exposing the actual data.
More than just an enhanced security layer, tokenization is a strategic data management tool. It fundamentally reduces the value of what resides in your systems, making them less enticing and more resilient. This dual benefit, heightened security and operational efficiency, forms the basis for a more robust and trustworthy enterprise.
🔒 Key Benefits of Tokenization
Risk Reduction: Sensitive data is removed from core systems, minimizing exposure to breaches.
Simplified Compliance: Limits PCI DSS scope and lowers audit complexity and costs.
Operational Flexibility: Maintains usability of data for analytics and reporting.
Security by Design: Reduces the attack surface; no valuable data means no incentive for theft.
🔄 Step-by-Step Example (Credit Card Payment)
Scenario: A customer enters their credit card number on an e-commerce site.
Original Data Collected: Customer enters: 4111 1111 1111 1111.
Tokenization Process Begins: The payment processor sends the card number to a tokenization service.
Token Issued: The service generates a random token, like A94F-Z83D-J1K9-X72B, and stores the actual card number securely in its token vault.
Token Returned: The merchant's system only stores and uses the token (A94F-Z83D-J1K9-X72B), not the real card number.
Transaction Authorization: When needed (e.g. to process a refund), the merchant sends the token to the tokenization provider, which maps it back to the original card and processes the transaction securely.
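The whole flow fits in a small sketch. This is a minimal, self-contained illustration: in practice the vault is an external, certified service with hardened storage and strict access controls, and the class and method names here are hypothetical.

```python
import secrets

class TokenVault:
    """Minimal illustrative token vault mapping random tokens to card numbers.
    Real tokenization delegates this to a certified external provider."""

    def __init__(self):
        self._vault = {}  # token -> real card number (PAN), never exposed

    def tokenize(self, pan: str) -> str:
        # Generate a random token; the PAN is stored only inside the vault.
        token = "-".join(secrets.token_hex(2).upper() for _ in range(4))
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault can map a token back to the real PAN.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                    # e.g. '9FA3-07BC-55D1-E2A4' (format illustrative)
print(vault.detokenize(token))  # real PAN, recovered only via the vault
```

Note that the merchant's systems would hold only `token`; the `detokenize` step happens on the provider's side, for example when a refund is processed.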
Most risk assessments fail to support real decisions. Learn how to turn risk management into a strategic advantage, not just a compliance task.
1. In many organizations, risk assessments are treated as checklist exercises, completed to meet compliance requirements rather than to drive action. They often lack relevance to current business decisions and serve more as formalities than strategic tools.
2. When no real decision is being considered, a risk assessment becomes little more than paperwork. It consumes time, effort, and even credibility without providing meaningful value to the business. In such cases, risk teams can become disconnected from the core priorities of the organization.
3. This disconnect is reflected in recent research. According to PwC's 2023 Global Risk Survey, while 73% of executives agree that risk management is critical to strategic decisions, only 22% believe it is effectively influencing those decisions. Gartner's 2023 survey also found that over half of organizations see risk functions as too siloed to support enterprise-wide decisions.
4. Even more concerning is the finding from NC State's ERM Initiative: over 60% of risk assessments are performed without a clear decision-making context. This means that most risk work happens in a vacuum, far removed from the actual choices business leaders are making.
5. Risk management should not be a separate track from the business; it should be a core driver of decision-making under uncertainty. Its value lies in making trade-offs explicit, identifying blind spots, and empowering leaders to act with clarity and confidence.
6. Before launching into a new risk register update or a 100-plus-page report, organizations should ask a sharper, business-related question: what business decision are we trying to support with this assessment? When risk is framed this way, it becomes a strategic advantage, not an overhead cost.
7. By shifting focus from managing risks to enabling better decisions, risk management becomes a force multiplier for strategy, innovation, and resilience. It helps business leaders act not just with caution, but with confidence.
Conclusion: A well-executed risk assessment helps businesses prioritize what matters, allocate resources wisely, and protect value while pursuing growth. To be effective, risk assessments must be decision-driven, timely, and integrated into business conversations. Don't treat them as routine reports; use them as decision tools that connect uncertainty to action.
ASM Is Evolving Into Holistic, Proactive Defense
Attack Surface Management has grown from merely tracking exposed vulnerabilities to encompassing all digital assets: cloud systems, IoT devices, internal apps, corporate premises, and supplier infrastructure. Modern ASM solutions don't just catalog known risks; they continuously discover new assets and alert on changes in real time. This shift from reactive to proactive defense helps organizations anticipate threats before they materialize.
AI, Machine Learning & Threat Intelligence Drive Detection
AI/ML is now foundational in ASM tools, capable of scanning vast data sets to find misconfigurations, blind spots, and chained vulnerabilities faster than human operators could. Integrated threat-intel feeds then enrich these findings, enabling contextual prioritization: your team can focus on what top adversaries are actively attacking.
Zero Trust & Continuous Monitoring Are Essential
ASM increasingly integrates with Zero Trust principles, ensuring every device, user, or connection is verified before granting access. Combined with ongoing asset monitoring, both EASM (external) and CAASM (internal), this provides a comprehensive visibility framework. Such alignment enables security teams to detect unexpected changes or suspicious behaviors in hybrid environments.
Third-Party, IoT/OT & Shadow Assets in Focus
Attack surfaces are no longer limited to corporate servers. IoT and OT devices, along with shadow IT and third-party vendor infrastructure, are prime targets. ASM platforms now emphasize uncovering default credentials and misconfigured firmware, and standardizing access across partner ecosystems. This expanded view helps mitigate supply-chain and vendor-based risks.
ASM Is a Continuous Service, Not a One-Time Scan
Today's ASM is about ongoing exposure assessment. Whether delivered in-house or via ASM-as-a-Service, the goal is to map, monitor, validate, and remediate 24/7. Context-rich alerts backed by human-friendly dashboards empower teams to tackle the most critical risks first. While tools offer automation, the human element remains vital: security teams need to connect ASM findings to business context.
In short, ASM in 2025 is about persistent, intelligent, and context-aware attack surface management spanning internal environments, cloud, IoT, and third-party ecosystems. It blends AI-powered insights, Zero Trust philosophy, and continuous monitoring to detect vulnerabilities proactively and prioritize them based on real-world threat context.
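To make "continuously discover new assets and alert on changes" concrete, here is a minimal sketch of an inventory-diff loop; `discover_assets` is a hypothetical stand-in for real discovery sources (DNS enumeration, cloud provider APIs, network scans), and the cadence is an arbitrary choice.

```python
# Minimal sketch of continuous attack-surface monitoring: snapshot the asset
# inventory, diff it against the previous run, and alert on changes.
import json
import time
from pathlib import Path

STATE = Path("asm_inventory.json")  # last known inventory

def discover_assets() -> set[str]:
    # Hypothetical placeholder: a real tool would aggregate DNS records,
    # cloud APIs, scan results, and CMDB entries here.
    return {"app1.example.com", "vpn.example.com", "10.0.4.17:8443"}

def run_once() -> None:
    current = discover_assets()
    previous = set(json.loads(STATE.read_text())) if STATE.exists() else set()
    for asset in sorted(current - previous):
        print(f"[ALERT] new asset exposed: {asset}")   # route to triage
    for asset in sorted(previous - current):
        print(f"[INFO] asset disappeared: {asset}")    # possible decommission
    STATE.write_text(json.dumps(sorted(current)))

if __name__ == "__main__":
    while True:            # a continuous service, not a one-time scan
        run_once()
        time.sleep(3600)   # hourly cadence; tune to your environment
```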
In today's fast-evolving AI landscape, rapid innovation is accompanied by serious challenges. Organizations must grapple with ethical dilemmas, data privacy issues, and uncertain regulatory environments, all while striving to stay competitive. These complexities make it critical to approach AI development and deployment with both caution and strategy.
Despite the hurdles, AI continues to unlock major advantages. From streamlining operations to improving decision-making and generating new roles across industries, the potential is undeniable. However, realizing these benefits demands responsible and transparent management of AI technologies.
That's where ISO/IEC 42001:2023 comes into play. This global standard introduces a structured framework for implementing Artificial Intelligence Management Systems (AIMS). It empowers organizations to approach AI development with accountability, safety, and compliance at the core.
Deura InfoSec LLC (deurainfosec.com) specializes in helping businesses align with the ISO 42001 standard. Our consulting services are designed to help organizations assess AI risks, implement strong governance structures, and comply with evolving legal and ethical requirements.
We support clients in building AI systems that are not only technically sound but also trustworthy and socially responsible. Through our tailored approach, we help you realize AI's full potential while minimizing its risks.
If your organization is looking to adopt AI in a secure, ethical, and future-ready way, Deura InfoSec LLC is your partner. Visit Deura InfoSec to discover how our ISO 42001 consulting services can guide your AI journey.
We guide companies through ISO/IEC 42001 implementation, helping them design a tailored AI Management System (AIMS) aligned with both regulatory expectations and ethical standards. Our team conducts comprehensive risk assessments, implements governance controls, and builds processes for ongoing monitoring and accountability.
👉 Visit Deura Infosec to start your AI compliance journey.
ISO 42001 is the first international standard for managing artificial intelligence. Developed for organizations that design, deploy, or oversee AI, it is set to become the ISO 9001 of AI: a universal framework for trustworthy, transparent, and responsible AI.
"Whether you're a technology professional, policymaker, academic, or simply a curious reader, this book will arm you with the knowledge to navigate the complex intersection of AI, security, and society."
At Deura InfoSec, we help small to mid-sized businesses navigate the complex world of cybersecurity and compliance, without the confusion, cost, or delays of traditional approaches. Whether you're facing a looming audit, need to meet ISO 27001, NIST, HIPAA, or other regulatory standards, or just want to know where your risks are, we've got you covered.
We offer fixed-price compliance assessments, vCISO services, and easy-to-understand risk scorecards so you know exactly where you stand and what to fix, fast. No bloated reports. No endless consulting hours. Just actionable insights that move you forward.
Our proven SGRC frameworks, automated tools, and real-world expertise help you stay audit-ready, reduce business risk, and build trust with customers.
📌 ISO 27001 | ISO 42001 | SOC 2 | HIPAA | NIST | Privacy | TPRM | M&A 📌 Risk & Gap Assessments | vCISO | Internal Audit 📌 Security Roadmaps | AI & InfoSec Governance | Awareness Training
Start with our Compliance Self-Assessment and discover how secure (and compliant) you really are.
Several posts published recently discuss AI security and privacy, highlighting different perspectives and concerns. Here’s a summary of the most prominent themes and posts:
Emerging Concerns and Risks:
Growing Anxiety around AI Data Privacy: A recent survey found that a significant majority of Americans (91%) are concerned about social media platforms using their data to train AI models, with 69% aware of this practice.
AI-Powered Cyber Threats on the Rise: AI is increasingly being used to generate sophisticated phishing attacks and malware, making it harder to distinguish between legitimate and malicious content.
Gap between AI Adoption and Security Measures: Many organizations are quickly adopting AI but lag in implementing necessary security controls, creating a major vulnerability for data leaks and compliance issues.
Deepfakes and Impersonation Scams: The use of AI in creating realistic deepfakes is fueling a surge in impersonation scams, increasing privacy risks.
Opaque AI Models and Bias: The “black box” nature of some AI models makes it difficult to understand how they make decisions, raising concerns about potential bias and discrimination.
Regulatory Developments:
Increasing Regulatory Scrutiny: Governments worldwide are focusing on regulating AI, with the EU AI Act setting a risk-based framework and China implementing comprehensive regulations for generative AI.
Focus on Data Privacy and User Consent: New regulations emphasize data minimization, purpose limitation, explicit user consent for data collection and processing, and requirements for data deletion upon request.
Best Practices and Mitigation Strategies:
Robust Data Governance: Organizations must establish clear data governance frameworks, including data inventories, provenance tracking, and access controls.
Privacy by Design: Integrating privacy considerations from the initial stages of AI system development is crucial.
Utilizing Privacy-Preserving Techniques: Employing techniques like differential privacy, federated learning, and synthetic data generation can enhance data protection (a minimal sketch follows this list).
Continuous Monitoring and Threat Detection: Implementing tools for continuous monitoring, anomaly detection, and security audits helps identify and address potential threats.
Employee Training: Educating employees about AI-specific privacy risks and best practices is essential for building a security-conscious culture.
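As a concrete taste of the privacy-preserving techniques above, the sketch below adds Laplace noise to a count query, the classic differential-privacy mechanism; the epsilon value and the query itself are illustrative choices, not a production library.

```python
import random

def dp_count(values: list[bool], epsilon: float = 1.0) -> float:
    """Differentially private count: true count plus Laplace(0, 1/epsilon) noise.
    A counting query has sensitivity 1, so the noise scale is 1/epsilon;
    smaller epsilon means more noise and stronger privacy."""
    # Difference of two exponentials yields a Laplace-distributed sample.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return sum(values) + noise

# Example: release how many of 1,000 users opted in, with noise added.
opted_in = [random.random() < 0.3 for _ in range(1000)]
print(f"true: {sum(opted_in)}, private release: {dp_count(opted_in):.1f}")
```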
Specific Mentions:
NSA’s CSI Guidance: The National Security Agency (NSA) released joint guidance on AI data security, outlining best practices for organizations.
Stanford’s 2025 AI Index Report: This report highlighted a significant increase in AI-related privacy and security incidents, emphasizing the need for stronger governance frameworks.
DeepSeek AI App Risks: Experts raised concerns about the DeepSeek AI app, citing potential security and privacy vulnerabilities.
Based on current trends and recent articles, it's evident that AI security and privacy are top-of-mind concerns for individuals, organizations, and governments alike. The focus is on implementing strong data governance, adopting privacy-preserving techniques, and adapting to evolving regulatory landscapes.
The rapid rise of AI has introduced new cyber threats, as bad actors increasingly exploit AI tools to enhance phishing, social engineering, and malware attacks. Generative AI makes it easier to craft convincing deepfakes, automate hacking tasks, and create realistic fake identities at scale. At the same time, the use of AI in security tools also raises concerns about overreliance and potential vulnerabilities in AI models themselves. As AI capabilities grow, so does the urgency for organizations to strengthen AI governance, improve employee awareness, and adapt cybersecurity strategies to meet these evolving risks.
There is a lack of comprehensive federal security and privacy regulation in the U.S., but violations of international standards often lead to substantial penalties abroad for U.S. organizations; such penalties effectively become a cost of doing business.
Meta has faced dozens of fines and settlements across multiple jurisdictions, with at least a dozen significant penalties totaling tens of billions of dollars/euros cumulatively.
Artificial intelligence (AI) and large language models (LLMs) are emerging as the top concern for security leaders. For the first time, AI, including tools such as LLMs, has overtaken ransomware as the most pressing issue.
The NIST Gap Assessment Tool is a structured resource (typically a checklist, questionnaire, or software tool) used to evaluate an organization's current cybersecurity or risk management posture against a specific NIST framework. The goal is to identify gaps between existing practices and the standards outlined by NIST, so organizations can plan and prioritize improvements.
The NIST SP 800-171 standard is primarily used by non-federal organizations, especially contractors and subcontractors, that handle Controlled Unclassified Information (CUI) on behalf of the U.S. federal government.
Specifically, it’s used by:
Defense Contractors – working with the Department of Defense (DoD).
Contractors/Subcontractors – serving other civilian federal agencies (e.g., DOE, DHS, GSA).
Universities & Research Institutions – receiving federal research grants and handling CUI.
IT Service Providers – managing federal data in cloud, software, or managed service environments.
Manufacturers & Suppliers – in the Defense Industrial Base (DIB) that process CUI in any digital or physical format.
Why it matters:
Compliance with NIST 800-171 is required under DFARS 252.204-7012 for DoD contractors and is becoming a baseline for other federal supply chains. Organizations must implement the 110 security controls outlined in NIST 800-171 to protect the confidentiality of CUI.
✅ NIST 800-171 Compliance Checklist
1. Access Control (AC)
Limit system access to authorized users.
Separate duties of users to reduce risk.
Control remote and internal access to CUI.
Manage session timeout and lock settings.
2. Awareness & Training (AT)
Train users on security risks and responsibilities.
Provide CUI handling training.
Update training regularly.
3. Audit & Accountability (AU)
Generate audit logs for events.
Protect audit logs from modification.
Review and analyze logs regularly.
4. Configuration Management (CM)
Establish baseline configurations.
Control changes to systems.
Implement least functionality principle.
5. Identification & Authentication (IA)
Use unique IDs for users.
Enforce strong password policies.
Implement multifactor authentication.
6. Incident Response (IR)
Establish an incident response plan.
Detect, report, and track incidents.
Conduct incident response training and testing.
7. Maintenance (MA)
Perform system maintenance securely.
Control and monitor maintenance tools and activities.
8. Media Protection (MP)
Protect and label CUI on media.
Sanitize or destroy media before disposal.
Restrict media access and transfer.
9. Physical Protection (PE)
Limit physical access to systems and facilities.
Escort visitors and monitor physical areas.
Protect physical entry points.
10. Personnel Security (PS)
Screen individuals prior to system access.
Ensure CUI access is revoked upon termination.
11. Risk Assessment (RA)
Conduct regular risk assessments.
Identify and evaluate vulnerabilities.
Document risk mitigation strategies.
12. Security Assessment (CA)
Develop and maintain security plans.
Conduct periodic security assessments.
Monitor and remediate control effectiveness.
13. System & Communications Protection (SC)
Protect CUI during transmission.
Separate system components handling CUI.
Implement boundary protections (e.g., firewalls).
14. System & Information Integrity (SI)
Monitor systems for malicious code.
Apply security patches promptly.
Report and correct flaws quickly.
The NIST Gap Assessment Toolkit lets you cost-effectively assess your organization against the NIST SP 800-171 standard. It will help you to:
Understand the NIST SP 800-171 requirements for storing, processing, and transmitting CUI (Controlled Unclassified Information)
Quickly identify your NIST SP 800-171 compliance gaps
Plan and prioritize your NIST SP 800-171 project to ensure data handling meets U.S. DoD (Department of Defense) requirements
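To make the idea of a gap assessment concrete, here is a minimal sketch, assuming a hypothetical control sample and status values; real toolkits cover all 110 controls and track evidence, owners, and remediation dates.

```python
# Minimal sketch of a NIST SP 800-171 gap assessment: record a status per
# control, then report gaps by family. Control IDs follow the 800-171
# "family.requirement" numbering; the statuses below are hypothetical.
from collections import defaultdict

assessment = {
    "3.1.1":  ("Access Control",            "implemented"),      # limit access to authorized users
    "3.5.3":  ("Identification & Auth",     "partial"),          # multifactor authentication
    "3.6.1":  ("Incident Response",         "not_implemented"),  # incident-handling capability
    "3.13.1": ("System & Comms Protection", "implemented"),      # boundary protection
}

gaps = defaultdict(list)
for control, (family, status) in assessment.items():
    if status != "implemented":
        gaps[family].append((control, status))

for family, items in sorted(gaps.items()):
    print(f"{family}: {len(items)} gap(s)")
    for control, status in items:
        print(f"  {control}: {status}")  # candidates for the remediation plan
```

Scoring the checklist this way makes the remediation backlog explicit: every control that is not fully implemented becomes a line item you can prioritize and track to closure.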