InfoSec and Compliance – With 20 years of blogging experience, DISC InfoSec blog is dedicated to providing trusted insights and practical solutions for professionals and organizations navigating the evolving cybersecurity landscape. From cutting-edge threats to compliance strategies, this blog is your reliable resource for staying informed and secure. Dive into the content, connect with the community, and elevate your InfoSec expertise!
AI risk management and governance, so aligning your risk management policy means integrating AI-specific considerations alongside your existing risk framework. Here’s a structured approach:
1. Understand ISO 42001 Scope and Requirements
ISO 42001 sets standards for AI governance, risk management, and compliance across the AI lifecycle.
Key areas include:
Risk identification and assessment for AI systems.
Mitigation strategies for bias, errors, security, and ethical concerns.
Transparency, explainability, and accountability of AI models.
Compliance with legal and regulatory requirements (GDPR, EU AI Act, etc.).
2. Map Your Current Risk Policy
Identify where your existing policy addresses:
Risk assessment methodology
Roles and responsibilities
Monitoring and reporting
Incident response and corrective actions
Note gaps related to AI-specific risks, such as algorithmic bias, model explainability, or data provenance.
3. Integrate AI-Specific Risk Controls
AI Risk Identification: Add controls for data quality, model performance, and potential bias.
Risk Assessment: Include likelihood, impact, and regulatory consequences of AI failures.
Mitigation Strategies: Document methods like model testing, monitoring, human-in-the-loop review, or bias audits.
Governance & Accountability: Assign clear ownership for AI system oversight and compliance reporting.
4. Ensure Regulatory and Ethical Alignment
Map your AI systems against applicable standards:
EU AI Act (high-risk AI systems)
GDPR or HIPAA for data privacy
ISO 31000 for general risk management principles
Document how your policy addresses ethical AI principles, including fairness, transparency, and accountability.
5. Update Policy Language and Procedures
Add a dedicated “AI Risk Management” section to your policy.
Include:
Scope of AI systems covered
Risk assessment processes
Monitoring and reporting requirements
Training and awareness for stakeholders
Ensure alignment with ISO 42001 clauses (risk identification, evaluation, mitigation, monitoring).
6. Implement Monitoring and Continuous Improvement
Establish KPIs and metrics for AI risk monitoring.
Include regular audits and reviews to ensure AI systems remain compliant.
Integrate lessons learned into updates of the policy and risk register.
7. Documentation and Evidence
Keep records of:
AI risk assessments
Mitigation plans
Compliance checks
Incident responses
This will support ISO 42001 certification or internal audits.
ISO 42001 is the upcoming standard for AI Management Systems (AIMS), similar in structure to ISO 27001 for information security. While the full standard is not yet widely published, the main requirements for an internal audit of an ISO 42001 AIMS can be outlined based on common audit principles and the expected clauses in the standard. Here’s a structured view:
1. Audit Scope and Objectives
Define what parts of the AI management system will be audited (processes, teams, AI models, AI governance, data handling, etc.).
Ensure the audit covers all ISO 42001 clauses relevant to your organization.
Determine audit objectives, e.g.,:
Compliance with ISO 42001.
Effectiveness of risk management for AI.
Alignment with organizational AI strategy and policies.
2. Compliance with AIMS Requirements
Check whether the organization’s AI management system meets ISO 42001 requirements, which likely include:
AI governance framework.
Risk management for AI (AI lifecycle, bias, safety, privacy).
Policies and procedures for AI development, deployment, and monitoring.
Data management and ethical AI principles.
Roles, responsibilities, and competency requirements for AI personnel.
3. Documentation and Records
Verify that documentation exists and is maintained, e.g.:
AI policies, procedures, and guidelines.
Risk assessments, impact assessments, and mitigation plans.
Training records and personnel competency evaluations.
Records of AI incidents, anomalies, or failures.
Audit logs of AI models and data handling activities.
4. Risk Management and Controls
Review whether risks related to AI (bias, safety, security, privacy) are identified, assessed, and mitigated.
Check implementation of controls:
Data quality and integrity controls.
Model validation and testing.
Human oversight and accountability mechanisms.
Compliance with relevant regulations and ethical standards.
5. Performance Monitoring and Improvement
Evaluate monitoring and measurement processes:
Metrics for AI model performance and compliance.
Monitoring of ethical and legal adherence.
Feedback loops for continuous improvement.
Assess whether corrective actions and improvements are identified and implemented.
6. Internal Audit Process Requirements
Audits should be planned, objective, and systematic.
Auditors must be independent of the area being audited.
Audit reports must include:
Findings (compliance, nonconformities, opportunities for improvement).
Recommendations.
Follow-up to verify closure of nonconformities.
7. Management Review Alignment
Internal audit results should feed into management reviews for:
AI risk mitigation effectiveness.
Resource allocation.
Policy updates and strategic AI decisions.
Key takeaway: An ISO 42001 internal audit is not just about checking boxes—it’s about verifying that AI systems are governed, ethical, and risk-managed throughout their lifecycle, with evidence, controls, and continuous improvement in place.
An Internal Audit agreement aligned with ISO 42001 should include the following key components, each described below to ensure clarity and operational relevance:
🧭 Scope of Services
The agreement should clearly define the consultant’s role in leading and advising the internal audit team. This includes directing the audit process, training team members on ISO 42001 methodologies, and overseeing all phases—from planning to reporting. It should also specify advisory responsibilities such as interpreting ISO 42001 requirements, identifying compliance gaps, and validating governance frameworks. The scope must emphasize the consultant’s authority to review and approve all audit work to ensure alignment with professional standards.
📄 Deliverables
A detailed list of expected outputs should be included, such as a comprehensive audit report with an executive summary, gap analysis, and risk assessment. The agreement should also cover a remediation plan with prioritized actions, implementation guidance, and success metrics. Supporting materials like policy templates, training recommendations, and compliance monitoring frameworks should be outlined. Finally, it should ensure the development of a capable internal audit team and documentation of audit procedures for future use.
⏳ Timeline
The agreement must specify key milestones, including project start and completion dates, training deadlines, audit phase completion, and approval checkpoints for draft and final reports. This timeline ensures accountability and helps coordinate internal resources effectively.
💰 Compensation
This section should detail the total project fee, payment terms, and a milestone-based payment schedule. It should also clarify reimbursable expenses (e.g., travel) and note that internal team costs and facilities are the client’s responsibility. Transparency in financial terms helps prevent disputes and ensures mutual understanding.
👥 Client Responsibilities
The client’s obligations should be clearly stated, including assigning qualified internal audit team members, ensuring their availability, designating a project coordinator, and providing access to necessary personnel, systems, and facilities. The agreement should also require timely feedback on deliverables and commitment from the internal team to complete audit tasks under the consultant’s guidance.
🎓 Consultant Responsibilities
The consultant’s duties should include providing expert leadership, training the internal team, reviewing and approving all work products, maintaining quality standards, and being available for ongoing consultation. This ensures the consultant remains accountable for the integrity and effectiveness of the audit process.
🔐 Confidentiality
A robust confidentiality clause should protect proprietary information shared during the engagement. It should specify the duration of confidentiality obligations post-engagement and ensure that internal audit team members are bound by equivalent terms. This builds trust and safeguards sensitive data.
💡 Intellectual Property
The agreement should clarify ownership of work products, stating that outputs created by the internal team under the consultant’s guidance belong to the client. It should also allow the consultant to retain general methodologies and templates for future use, while jointly owning training materials and audit frameworks.
⚖️ Limitation of Liability
This clause should cap the consultant’s liability to the total fee paid and exclude consequential or punitive damages. It should reinforce that ISO 42001 compliance is ultimately the client’s responsibility, with the consultant providing guidance and oversight—not execution.
🛑 Termination
The agreement should include provisions for termination with advance notice, payment for completed work, delivery of all completed outputs, and survival of confidentiality obligations. It should also ensure that any training and knowledge transfer remains with the client post-termination.
📜 General Terms
Standard legal provisions should be included, such as independent contractor status, governing law, severability, and a clause stating that the agreement represents the entire understanding between parties. These terms provide legal clarity and protect both sides.
Integrating ISO standards across business functions—particularly Governance, Risk, and Compliance (GRC)—has become not just a best practice but a necessity in the age of Artificial Intelligence (AI). As AI systems increasingly permeate operations, decision-making, and customer interactions, the need for standardized controls, accountability, and risk mitigation is more urgent than ever. ISO standards provide a globally recognized framework that ensures consistency, security, quality, and transparency in how organizations adopt and manage AI technologies.
In the GRC domain, ISO standards like ISO/IEC 27001 (information security), ISO/IEC 38500 (IT governance), ISO 31000 (risk management), and ISO/IEC 42001 (AI management systems) offer a structured approach to managing risks associated with AI. These frameworks guide organizations in aligning AI use with regulatory compliance, internal controls, and ethical use of data. For example, ISO 27001 helps in safeguarding data fed into machine learning models, while ISO 31000 aids in assessing emerging AI risks such as bias, algorithmic opacity, or unintended consequences.
The integration of ISO standards helps unify siloed departments—such as IT, legal, HR, and operations—by establishing a common language and baseline for risk and control. This cohesion is particularly crucial when AI is used across multiple departments. AI doesn’t respect organizational boundaries, and its risks ripple across all functions. Without standardized governance structures, businesses risk deploying fragmented, inconsistent, and potentially harmful AI systems.
ISO standards also support transparency and accountability in AI deployment. As regulators worldwide introduce new AI regulations—such as the EU AI Act—standards like ISO/IEC 42001 help organizations demonstrate compliance, build trust with stakeholders, and prepare for audits. This is especially important in industries like healthcare, finance, and defense, where the margin for error is small and ethical accountability is critical.
Moreover, standards-driven integration supports scalability. As AI initiatives grow from isolated pilot projects to enterprise-wide deployments, ISO frameworks help maintain quality and control at scale. ISO 9001, for instance, ensures continuous improvement in AI-supported processes, while ISO/IEC 27017 and 27018 address cloud security and data privacy—key concerns for AI systems operating in the cloud.
AI systems also introduce new third-party and supply chain risks. ISO standards such as ISO/IEC 27036 help in managing vendor security, and when integrated into GRC workflows, they ensure AI solutions procured externally adhere to the same governance rigor as internal developments. This is vital in preventing issues like AI-driven data breaches or compliance gaps due to poorly vetted partners.
Importantly, ISO integration fosters a culture of risk-aware innovation. Instead of slowing down AI adoption, standards provide guardrails that enable responsible experimentation and faster time to trust. They help organizations embed privacy, ethics, and accountability into AI from the design phase, rather than retrofitting compliance after deployment.
In conclusion, ISO standards are no longer optional checkboxes; they are strategic enablers in the age of AI. For GRC leaders, integrating these standards across business functions ensures that AI is not only powerful and efficient but also safe, transparent, and aligned with organizational values. As AI’s influence grows, ISO-based governance will distinguish mature, trusted enterprises from reckless adopters.
What does BS ISO/IEC 42001 – Artificial intelligence management system cover? BS ISO/IEC 42001:2023 specifies requirements and provides guidance for establishing, implementing, maintaining and continually improving an AI management system within the context of an organization.
ISO/IEC 42001:2023 – from establishing to maintain an AI management system.
ISO/IEC 27701 2019 Standard – Published in August of 2019, ISO 27701 is a new standard for information and data privacy. Your organization can benefit from integrating ISO 27701 with your existing security management system as doing so can help you comply with GDPR standards and improve your data security.
In today’s fast-evolving AI landscape, rapid innovation is accompanied by serious challenges. Organizations must grapple with ethical dilemmas, data privacy issues, and uncertain regulatory environments—all while striving to stay competitive. These complexities make it critical to approach AI development and deployment with both caution and strategy.
Despite the hurdles, AI continues to unlock major advantages. From streamlining operations to improving decision-making and generating new roles across industries, the potential is undeniable. However, realizing these benefits demands responsible and transparent management of AI technologies.
That’s where ISO/IEC 42001:2023 comes into play. This global standard introduces a structured framework for implementing Artificial Intelligence Management Systems (AIMS). It empowers organizations to approach AI development with accountability, safety, and compliance at the core.
Deura InfoSec LLC (deurainfosec.com) specializes in helping businesses align with the ISO 42001 standard. Our consulting services are designed to help organizations assess AI risks, implement strong governance structures, and comply with evolving legal and ethical requirements.
We support clients in building AI systems that are not only technically sound but also trustworthy and socially responsible. Through our tailored approach, we help you realize AI’s full potential—while minimizing its risks.
If your organization is looking to adopt AI in a secure, ethical, and future-ready way, ISO Consulting LLC is your partner. Visit Deura InfoSec to discover how our ISO 42001 consulting services can guide your AI journey.
We guide company through ISO/IEC 42001 implementation, helping them design a tailored AI Management System (AIMS) aligned with both regulatory expectations and ethical standards. Our team conduct a comprehensive risk assessment, implemented governance controls, and built processes for ongoing monitoring and accountability.
👉 Visit Deura Infosec to start your AI compliance journey.
ISO 42001—the first international standard for managing artificial intelligence. Developed for organizations that design, deploy, or oversee AI, ISO 42001 is set to become the ISO 9001 of AI: a universal framework for trustworthy, transparent, and responsible AI.
“Whether you’re a technology professional, policymaker, academic, or simply a curious reader, this book will arm you with the knowledge to navigate the complex intersection of AI, security, and society.”
AI businesses are at risk due to growing cyber threats, regulatory pressure, and ethical concerns. They often process vast amounts of sensitive data, making them prime targets for breaches and data misuse. Malicious actors can exploit AI systems through model manipulation, adversarial inputs, or unauthorized access. Additionally, lack of standardized governance and compliance frameworks exposes them to legal and reputational damage. As AI adoption accelerates, so do the risks.
AI businesses are at risk because they often handle large volumes of sensitive data, rely on complex algorithms that may be vulnerable to manipulation, and operate in a rapidly evolving regulatory landscape. Threats include data breaches, model poisoning, IP theft, bias in decision-making, and misuse of AI tools by attackers. Additionally, unclear accountability and lack of standardized AI security practices increase their exposure to legal, reputational, and operational risks.
Why it matters
It matters because the integrity, security, and trustworthiness of AI systems directly impact business reputation, customer trust, and regulatory compliance. A breach or misuse of AI can lead to financial loss, legal penalties, and harm to users. As AI becomes more embedded in critical decision-making—like healthcare, finance, and security—the risks grow more severe. Ensuring responsible and secure AI isn’t just good practice—it’s essential for long-term success and societal trust.
To reduce risks in AI businesses, we can:
Implement strong governancewith AIMS – Define clear accountability, policies, and oversight for AI development and use.
Secure data and models – Encrypt sensitive data, restrict access, and monitor for tampering or misuse.
Conduct risk assessments – Regularly evaluate threats, vulnerabilities, and compliance gaps in AI systems.
Ensure transparency and fairness – Use explainable AI and audit algorithms for bias or unintended consequences.
Stay compliant – Align with evolving regulations like GDPR, NIST AI RMF, or the EU AI Act.
Train teams – Educate employees on AI ethics, security best practices, and safe use of generative tools.
Proactive risk management builds trust, protects assets, and positions AI businesses for sustainable growth.
ISO/IEC 42001:2023 – from establishing to maintain an AI management system (AIMS)
BSI ISO 31000 is standard for any organization seeking risk management guidance
ISO/IEC 27001 and ISO/IEC 42001, both standards address risk and management systems, but with different focuses. ISO/IEC 27001 is centered on information security—protecting data confidentiality, integrity, and availability—while ISO/IEC 42001 is the first standard designed specifically for managing artificial intelligence systems responsibly. ISO/IEC 42001 includes considerations like AI-specific risks, ethical concerns, transparency, and human oversight, which are not fully addressed in ISO 27001. Organizations working with AI should not rely solely on traditional information security controls.
While ISO/IEC 27001 remains critical for securing data, ISO/IEC 42001 complements it by addressing broader governance and accountability issues unique to AI. The article suggests that companies developing or deploying AI should integrate both standards to build trust and meet growing stakeholder and regulatory expectations. Applying ISO 42001 can help demonstrate responsible AI practices, ensure explainability, and mitigate unintended consequences, positioning organizations to lead in a more regulated AI landscape.
“Whether you’re a technology professional, policymaker, academic, or simply a curious reader, this book will arm you with the knowledge to navigate the complex intersection of AI, security, and society.”
As AI systems become increasingly integrated into critical sectors such as finance, healthcare, and defense, their unpredictable and opaque behavior introduces significant risks to society. Traditional safety protocols may not be sufficient to manage the potential threats posed by highly advanced AI, especially those capable of causing existential harm. To address this, researchers propose Guillotine, a hypervisor-based architecture designed to securely sandbox powerful AI models.
Guillotine leverages established virtualization techniques but also introduces fundamentally new isolation strategies tailored for AI with existential-risk potential. Unlike typical software, such AI may attempt to analyze and subvert the very systems meant to contain them. This requires a deep co-design of hypervisor software with the underlying hardware—CPU, memory, network interfaces, and storage—to prevent side-channel leaks and eliminate avenues for reflective exploitation.
Beyond technical isolation, Guillotine incorporates physical fail-safes inspired by systems in nuclear power plants and aviation. These include hardware-level disconnection mechanisms and even radical approaches like data center flooding to forcibly shut down or destroy rogue AI. These physical controls offer a final layer of defense should digital barriers fail.
The underlying concern is that many current AI safety frameworks rely on policy rather than technical enforcement. As AI becomes more capable, it may learn to bypass or manipulate these soft controls. Guillotine directly confronts this problem by embedding enforcement into the architecture itself—creating systems that can’t be talked out of enforcing the rules.
In essence, Guillotine represents a shift from trust-based AI safety toward hardened, tamper-resistant infrastructure. It acknowledges that if AI is to be trusted with mission-critical roles—or if it poses existential threats—we must engineer control systems with the same rigor and physical safeguards used in other high-risk industries.
Managing AI Risks: A Strategic Imperative – responsibility and disruption must coexist
Artificial Intelligence (AI) is transforming sectors across the board—from healthcare and finance to manufacturing and logistics. While its potential to drive innovation and efficiency is clear, AI also introduces complex risks that can impact fairness, transparency, security, and compliance. To ensure these technologies are used responsibly, organizations must implement structured governance mechanisms to manage AI-related risks proactively.
Understanding the Key Risks
Unchecked AI systems can lead to serious problems. Biases embedded in training data can produce discriminatory outcomes. Many models function as opaque “black boxes,” making their decisions difficult to explain or audit. Security threats like adversarial attacks and data poisoning also pose real dangers. Additionally, with evolving regulations like the EU AI Act, non-compliance could result in significant penalties and reputational harm. Perhaps most critically, failure to demonstrate transparency and accountability can erode public trust, undermining long-term adoption and success.
ISO/IEC 42001: A Framework for Responsible AI
To address these challenges, ISO/IEC 42001—the first international AI management system standard—offers a structured, auditable framework. Published in 2023, it helps organizations govern AI responsibly, much like ISO 27001 does for information security. It supports a risk-based approach that accounts for ethical, legal, and societal expectations.
Key Components of ISO/IEC 42001
Contextual Risk Assessment: Tailors risk management to the organization’s specific environment, mission, and stakeholders.
Defined Governance Roles: Assigns clear responsibilities for managing AI systems.
Life Cycle Risk Management: Addresses AI risks across development, deployment, and ongoing monitoring.
Ethics and Transparency: Encourages fairness, explainability, and human oversight.
Continuous Improvement: Promotes regular reviews and updates to stay aligned with technological and regulatory changes.
Benefits of Certification
Pursuing ISO 42001 certification helps organizations preempt security, operational, and legal risks. It also enhances credibility with customers, partners, and regulators by demonstrating a commitment to responsible AI. Moreover, as regulations tighten, ISO 42001 provides a compliance-ready foundation. The standard is scalable, making it practical for both startups and large enterprises, and it can offer a competitive edge during audits, procurement processes, and stakeholder evaluations.
Practical Steps to Get Started
To begin implementing ISO 42001:
Inventory your existing AI systems and assess their risk profiles.
Identify governance and policy gaps against the standard’s requirements.
Develop policies focused on fairness, transparency, and accountability.
Train teams on responsible AI practices and ethical considerations.
Final Recommendation
AI is no longer optional—it’s embedded in modern business. But its power demands responsibility. Adopting ISO/IEC 42001 enables organizations to build AI systems that are secure, ethical, and aligned with regulatory expectations. Managing AI risk effectively isn’t just about compliance—it’s about building systems people can trust.
Planning AI compliance within the next 12–24 months reflects:
The time needed to inventory AI use, assess risk, and integrate policies
The emerging maturity of frameworks like ISO 42001, NIST AI RMF, and others
The expectation that vendors will demand AI assurance from partners by 2026
Companies not planning to do anything (the 6%) are likely in less regulated sectors or unaware of the pace of change. But even that 6% will feel pressure from insurers, regulators, and B2B customers.
Here are the Top 7 GenAI Security Practices that organizations should adopt to protect their data, users, and reputation when deploying generative AI tools:
1. Data Input Sanitization
Why: Prevent leakage of sensitive or confidential data into prompts.
How: Strip personally identifiable information (PII), secrets, and proprietary info before sending input to GenAI models.
2. Model Output Filtering
Why: Avoid toxic, biased, or misleading content from being released to end users.
How: Use automated post-processing filters and human review where necessary to validate output.
3. Access Controls & Authentication
Why: Prevent unauthorized use of GenAI systems, especially those integrated with sensitive internal data.
How: Enforce least privilege access, strong authentication (MFA), and audit logs for traceability.
4. Prompt Injection Defense
Why: Attackers can manipulate model behavior through cleverly crafted prompts.
How: Sanitize user input, use system-level guardrails, and test for injection vulnerabilities during development.
5. Data Provenance & Logging
Why: Maintain accountability for both input and output for auditing, compliance, and incident response.
How: Log inputs, model configurations, and outputs with timestamps and user attribution.
6. Secure Model Hosting & APIs
Why: Prevent model theft, abuse, or tampering via insecure infrastructure.
How: Use secure APIs (HTTPS, rate limiting), encrypt models at rest/in transit, and monitor for anomalies.
7. Regular Testing and Red-Teaming
Why: Proactively identify weaknesses before adversaries exploit them.
How: Conduct adversarial testing, red-teaming exercises, and use third-party GenAI security assessment tools.
The Strategic Synergy: ISO 27001 and ISO 42001 – A New Era in Governance
After years of working closely with global management standards, it’s deeply inspiring to witness organizations adopting what I believe to be one of the most transformative alliances in modern governance:ISO 27001 and the newly introduced ISO 42001.
ISO 42001, developed for AI Management Systems, was intentionally designed to align with the well-established information security framework of ISO 27001. This alignment wasn’t incidental—it was a deliberate acknowledgment that responsible AI governance cannot exist without a strong foundation of information security.
Together, these two standards create a governance model that is not only comprehensive but essential for the future:
ISO 27001 fortifies the integrity, confidentiality, and availability of data—ensuring that information is secure and trusted.
ISO 42001 builds on that by governing how AI systems use this data—ensuring those systems operate in a transparent, ethical, and accountable manner.
This integration empowers organizations to:
Extend trust from data protection to decision-making processes.
Safeguard digital assets while promoting responsible AI outcomes.
Bridge security, compliance, and ethical innovation under one cohesive framework.
In a world increasingly shaped by AI, the combined application of ISO 27001 and ISO 42001 is not just a best practice—it’s a strategic imperative.
High-level summary of the ISO/IEC 42001 Readiness Checklist
1. Understand the Standard
Purchase and study ISO/IEC 42001 and related annexes.
Familiarize yourself with AI-specific risks, controls, and life cycle processes.
Review complementary ISO standards (e.g., ISO 22989, 31000, 38507).
2. Define AI Governance
Create and align AI policies with organizational goals.
Assign roles, responsibilities, and allocate resources for AI systems.
Establish procedures to assess AI impacts and manage their life cycles.
Ensure transparency and communication with stakeholders.
3. Conduct Risk Assessment
Identify potential risks: data, security, privacy, ethics, compliance, and reputation.
Use Annex C for AI-specific risk scenarios.
4. Develop Documentation and Policies
Ensure AI policies are relevant, aligned with broader org policies, and kept up to date.
Maintain accessible, centralized documentation.
5. Plan and Implement AIMS (AI Management System)
Conduct a gap analysis with input from all departments.
Create a step-by-step implementation plan.
Deliver training and build monitoring systems.
6. Internal Audit and Management Review
Conduct internal audits to evaluate readiness.
Use management reviews and feedback to drive improvements.
Track and resolve non-conformities.
7. Prepare for and Undergo External Audit
Select a certified and reputable audit partner.
Hold pre-audit meetings and simulations.
Designate a central point of contact for auditors.
Address audit findings with action plans.
8. Focus on Continuous Improvement
Establish a team to monitor post-certification compliance.
Regularly review and enhance the AIMS.
Avoid major system changes during initial implementation.