InfoSec and Compliance – With 20 years of blogging experience, DISC InfoSec blog is dedicated to providing trusted insights and practical solutions for professionals and organizations navigating the evolving cybersecurity landscape. From cutting-edge threats to compliance strategies, this blog is your reliable resource for staying informed and secure. Dive into the content, connect with the community, and elevate your InfoSec expertise!
The General Data Protection Regulation (GDPR) has raised many controversies, and one of the biggest is certainly which documents are required. For example, companies often assume that having a privacy policy and a consent form on their website is enough; however, these are only a small part of the documents required to be fully compliant with this privacy regulation.
Therefore, we created a list of GDPR documentation requirements to help you find all the mandatory documents in one place. Please note that the names of the documents are not prescribed by the GDPR, so you may use other titles; you can also merge some of these documents.
Mandatory documents and records required by EU GDPR
Here are the documents that you must have if you want to be fully GDPR compliant:
Privacy Notice (Articles 12, 13, and 14) – this document (which can also be published on your website) explains in simple words how you will process personal data of your customers, website visitors, and others.
Employee Privacy Notice (Articles 12, 13 and 14) – explains how your company is going to process personal data of your employees (which could include health records, criminal records, etc.).
Data Retention Policy (Articles 5, 13, 17, and 30) – describes the process of deciding how long a particular type of personal data will be kept, and how it will be securely destroyed.
Data Retention Schedule (Article 30) – lists all of your personal data and describes how long each type of data will be kept.
Parental Consent Form (Article 8) – if the data subject is below the age of 16 years, then a parent needs to provide the consent for processing personal data.
Supplier Data Processing Agreement (Articles 28, 32, and 82) – you need this document to regulate data protection with a processor or any other supplier.
Data Breach Register (Article 33) – this is where you’ll record all of your data breaches. (Hopefully, it will be very short.)
Data Breach Notification Form to the Supervisory Authority (Article 33) – in case you do have a data breach, you’ll need to notify the Supervisory Authority in a formal way.
Data Breach Notification Form to Data Subjects (Article 34) – again, in case of a data breach, you’ll have the unpleasant duty to notify data subjects in a formal way.
I just wanted to inform you that, at the end of September, Advisera is running its “Second Course Exam for Free” promotional campaign. The campaign will start on September 22 and end on September 29, 2022.
During this promotion, the second course exam is completely FREE OF CHARGE.
The bundles are displayed on two landing pages, one with bundles related to ISO 9001 and another with bundles related to ISO 27001.
Foundations course exam bundles:
ISO 9001 Foundations exam + ISO 14001 Foundations exam
ISO 9001 Foundations exam + ISO 27001 Foundations exam
ISO 9001 Foundations exam + ISO 13485 Foundations exam
ISO 9001 Foundations exam + ISO 45001 Foundations exam
ISO 14001 Foundations exam + ISO 45001 Foundations exam
Internal Auditor course exam bundles:
ISO 9001 Internal Auditor exam + ISO 14001 Internal Auditor exam
ISO 9001 Internal Auditor exam + ISO 27001 Internal Auditor exam
ISO 9001 Internal Auditor exam + ISO 13485 Internal Auditor exam
ISO 9001 Internal Auditor exam + ISO 45001 Internal Auditor exam
ISO 14001 Internal Auditor exam + ISO 45001 Internal Auditor exam
Lead Auditor course exam bundles:
ISO 9001 Lead Auditor exam + ISO 14001 Lead Auditor exam
ISO 9001 Lead Auditor exam + ISO 13485 Lead Auditor exam
ISO 9001 Lead Auditor exam + ISO 45001 Lead Auditor exam
ISO 14001 Lead Auditor exam + ISO 45001 Lead Auditor exam
Lead Implementer course exam bundles:
ISO 9001 Lead Implementer exam + ISO 14001 Lead Implementer exam
ISO 9001 Lead Implementer exam + ISO 13485 Lead Implementer exam
ISO 9001 Lead Implementer exam + ISO 45001 Lead Implementer exam
ISO 14001 Lead Implementer exam + ISO 45001 Lead Implementer exam
ISO 27001/EU GDPR-related bundles:
ISO 27001 Foundations exam + EU GDPR Foundations exam
ISO 27001 Foundations exam + ISO 9001 Foundations exam
ISO 27001 Internal Auditor exam + EU GDPR Data Protection Officer exam
ISO 27001 Internal Auditor exam + ISO 9001 Internal Auditor exam
ISO 27001 Lead Auditor exam + ISO 9001 Lead Auditor exam
ISO 27001 Lead Implementer exam + ISO 9001 Lead Implementer exam
Take the ISO 27001 course exam and get the EU GDPR course exam for free
French data protection authority says Google Analytics is in violation of GDPR
The French national data protection authority, CNIL, issued a formal notice to managers of an unnamed local website today arguing that its use of Google Analytics is in violation of the European Union’s General Data Protection Regulation, following a similar decision by Austria last month.
The issue stems from the website’s use of Google Analytics, which functions as a tool for managers to track content performance and page visits. CNIL said the tool’s use and transfer of personal data to the U.S. fails to abide by landmark European regulations because the U.S. was deemed not to have equivalent privacy protections.
European regulators including CNIL have been investigating such complaints over the last two years, following a decision by the EU’s top court that invalidated the U.S.’s “Privacy Shield” agreement on data transfers. NOYB, the European Center for Digital Rights, reported 101 complaints in 27 member states of the EU and 3 states in the European Economic Area against data controllers who conduct the transatlantic transfers.
Privacy Shield, which went into effect in August of 2016, was a “self-certification mechanism for companies established in the United States of America,” according to CNIL.
Originally, the Privacy Shield was considered by the European Commission to be a sufficient safeguard for transferring personal data from European entities to the United States. However, in 2020 that adequacy decision was invalidated because the arrangement no longer met European standards.
An equivalency test comparing European and U.S. regulations established that the U.S. fails to adequately protect the data of non-U.S. citizens. European citizens would remain unaware of whether and how their data is being used, and they cannot be compensated for any misuse of that data, CNIL found.
CNIL concluded that Google Analytics does not provide adequate supervision or regulation, and the risks for French users of the tool are too great.
“Indeed, if Google has adopted additional measures to regulate data transfers within the framework of the Google Analytics functionality, these are not sufficient to exclude the possibility of access by American intelligence services to this data,” CNIL said.
The unnamed site manager has been given a month to bring its operations into compliance with the GDPR. If the tool cannot meet the regulations, CNIL suggests moving away from Google Analytics and replacing it with a different tool that does not transmit the data.
The privacy watchdog does not call for a ban of Google Analytics, but rather suggests revisions that follow the guidelines. “Concerning the audience measurement and analysis services of a website, the CNIL recommends that these tools be used only to produce anonymous statistical data, thus allowing an exemption from consent if the data controller ensures that there are no illegal transfers,” the watchdog said.
Most management systems, compliance, and certification projects require documented policies, procedures, and work instructions. GDPR compliance is no exception: documentation of policies and processes is vital to achieving compliance.
ITG GDPR Documentation Toolkit gives you a complete set of easily customizable GDPR-compliant documentation templates to help you demonstrate your compliance with the GDPR’s requirements quickly, easily, and affordably.
“Having recently kicked off a GDPR project with a large international organisation I was tasked with creating their Privacy Compliance Framework. The GDPR toolkit provided by IT Governance proved to be invaluable, providing the project with a well organised framework of template documents covering all elements of the PIMS framework. It covers areas such as Subject Access Request Procedure, Retention of Records Procedure and Data Protection Impact Assessment Procedure, helping you to put into practice policies and procedures to enable the effective management of personal information on individuals. For anyone seeking some support with their GDPR plans the toolkit is well worth consideration.”
Two-thirds of remote workers risk potentially breaching GDPR guidelines by printing out work-related documents at home, according to a new study from Go Shred.
The confidential shredding and records management company discovered that 66% of home workers have printed work-related documents since they began working from home, averaging five documents every week. Such documents include meeting notes/agendas (42%), internal documents including procedure manuals (32%), contracts and commercial documents (30%) and receipts/expense forms (27%).
Furthermore, 20% of home workers admitted to printing confidential employee information including payroll, addresses and medical information, with 13% having printed CVs or application forms.
The issue is that, to comply with the GDPR, all companies that store or process personal information about EU citizens within EU states are required to have an effective, documented, auditable process in place for the collection, storage and destruction of personal information.
However, when asked whether they have disposed of any printed documents since working from home, 24% of respondents said they haven’t disposed of them yet because they plan to take them back to the office, and a further 24% said they used a home shredding machine but disposed of the documents in their own waste. This method of disposal is not recommended because personal waste bins do not provide enough security for confidential waste, still leaving employers open to a data breach and potential fines, Go Shred pointed out.
Most concerning of all, 8% of those polled said they have no plans to dispose of the work-related documents they have printed at home, with 7% saying they haven’t done so because they do not know how to.
Personal data breach notification procedures under the GDPR
Organizations must create a procedure that applies in the event of a personal data breach under Article 33 – “Notification of a personal data breach to the supervisory authority” – and Article 34 of the GDPR – “Communication of a personal data breach to the data subject”.
Help with creating a data breach notification template
A data breach notification procedure template – available from the market-leading EU GDPR Documentation Toolkit – sets out the scope of the procedure, the responsibilities, and the steps the organization will take to communicate the breach.
A privacy notice is a public statement of how your organisation applies data protection principles to processing data. It should be a clear and concise document that is accessible by individuals.
Articles 12, 13 and 14 of the GDPR outline the requirements on giving privacy information to data subjects. These are more detailed and specific than in the UK Data Protection Act 1998 (DPA).
The GDPR says that the information you provide must be:
Concise, transparent, intelligible and easily accessible;
Written in clear and plain language, particularly if addressed to a child; and
Free of charge.
Help with creating a privacy notice template
The privacy notice should sufficiently inform the data subject by addressing, among other things: the identity and contact details of the controller, the purposes of and legal basis for the processing, the recipients of the personal data, the retention period, the data subject’s rights (including the right to lodge a complaint with a supervisory authority), and whether the data will be transferred outside the EU.
If you are looking for a complete set of GDPR templates to help with your compliance project, you may be interested in the market-leading EU GDPR Documentation Toolkit. This toolkit is designed and developed by expert GDPR practitioners, and has been used by thousands of organisations worldwide. It includes:
A complete set of easy-to-use and customisable documentation templates, which will save you time and money and ensure GDPR compliance;
Helpful dashboards and project tools to ensure complete GDPR coverage;
With the advent of the European Union (EU) deadline for General Data Protection Regulation (GDPR) (EU 2016/679 regulation) coming up on 25 May 2018, many organizations are addressing their data gathering, protection and retention needs concerning the privacy of their data for EU citizens and residents. This regulation has many parts, as ISACA has described in many of its recent publications and events, but all of the efforts revolve around the protection and retention of the EU participants’ personal information. The 6 main areas for data protection defined in this regulation are:
Data security controls need to be, by default, active at all times. Allowing security controls to be optional is not recommended or even suggested. “Always on” is the mantra for protection.
These controls and the protection they provide must be embedded inside all applications. The GDPR view is that privacy is an essential part of functionality, the security of the system and its processing activities.
Along with embedding the data protection controls in applications, the system must maintain data privacy across the entire processing effort for the affected data. This end-to-end need for protection includes collection efforts, retention requirements and even the new “right to be forgotten” requirement, wherein the customer has the right to request removal of their data from an organization’s storage.
Complete data protection and privacy means full functionality: security requirements and business requirements are both built into any processing system under this framework, and data protection requirements are treated as equally important to business requirements throughout the business process.
The primary requirement for protection within the GDPR framework demands the security and privacy controls implemented are proactive rather than reactive. As its principal goal, the system needs to prevent issues, releases and successful attacks. The system is to keep privacy events from occurring in the first place.
With all of these areas needed under GDPR, the most important point for organizations to understand about GDPR is transparency. The EU wants full disclosure of an organization’s efforts, documentation, reviews, assessments and results available for independent third-party review at any point. The goal is to ensure privacy managed by these companies is not dependent upon technology or business practices. It needs to be provable to outside parties and, therefore, acceptable. The EU has purposely placed some strong fine structures and responses into this regulation to ensure compliance.
Having reviewed various organizational efforts in preparation for GDPR implementation, I have found it good practice to apply these 6 areas to all collected and retained data, not just EU-based data. This zero-tolerance approach to data breaches is purposely designed to be stringent and strong. Good luck to all in meeting and maintaining the data privacy and security requirements of GDPR.
Those who have studied the Regulation will be aware that there are many references to certification schemes, seals and marks. The GDPR encourages the use of certification schemes like ISO 27001 to serve the purpose of demonstrating that the organisation is actively managing its data security in line with international best practice.
Managing people, processes and technology
ISO 27001 is the international best practice standard for information security, and is a certifiable standard that is broad-based and encompasses the three essential aspects of a comprehensive information security regime: people, processes and technology. By implementing measures to protect information using this three-pronged approach, the company is able to defend itself from not only technology-based risks, but other, more common threats, such as poorly informed staff or ineffective procedures.
By implementing ISO 27001, your organisation will be deploying an ISMS (information security management system): a system that is supported by top leadership, incorporated into your organisation’s culture and strategy, and which is constantly monitored, updated and reviewed. Using a process of continual improvement, your organisation will be able to ensure that the ISMS adapts to changes – both in the environment and inside the organisation – to continually identify and reduce risks.
What does the GDPR say?
The GDPR states clearly in Article 32 that “the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:
the pseudonymisation and encryption of personal data;
the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;
the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;
a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.”
Let’s look at these items separately:
Encryption of data is recommended by ISO 27001 as one of the measures that can and should be taken to reduce the identified risks. ISO 27001:2013 outlines 114 controls that can be used to reduce information security risks. Since the controls an organisation implements are based on the outcomes of an ISO 27001-compliant risk assessment, the organisation will be able to identify which assets are at risk and require encryption to adequately protect them.
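As a rough illustration of the first two measures quoted from Article 32, the sketch below shows keyed pseudonymisation of an identifier and symmetric encryption of a personal-data field in Python, using the widely used cryptography package. It is a minimal sketch under assumed key management, not a complete implementation.

```python
import hmac, hashlib
from cryptography.fernet import Fernet  # pip install cryptography

# Pseudonymisation: replace a direct identifier with a keyed hash so records
# can still be linked internally without revealing the identifier itself.
PSEUDONYM_KEY = b"keep-this-secret-and-stored-separately"

def pseudonymise(identifier: str) -> str:
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Encryption: protect the personal data itself at rest or in transit.
fernet = Fernet(Fernet.generate_key())

def encrypt(value: str) -> bytes:
    return fernet.encrypt(value.encode())

def decrypt(token: bytes) -> str:
    return fernet.decrypt(token).decode()

record = {
    "customer_ref": pseudonymise("alice@example.com"),   # pseudonymised identifier
    "order_notes": encrypt("Delivered to home address"),  # encrypted free text
}
```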
One of ISO 27001’s core tenets is the importance of ensuring the ongoing confidentiality, integrity and availability of information. Not only is confidentiality important, but the integrity and availability of such data is critical as well. If the data is available but in a format that is not usable because of a system disruption, then the integrity of that data has been compromised; if the data is protected but inaccessible to those who need to use it as part of their jobs, then the availability of that data has been compromised.
Risk assessment
ISO 27001 mandates that organisations conduct a thorough risk assessment, identifying the threats and vulnerabilities that can affect their information assets, and take steps to ensure the confidentiality, integrity and availability (CIA) of that data. The GDPR specifically requires a risk assessment to ensure an organisation has identified risks that can impact personal data.
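ISO 27001 does not prescribe a particular scoring method; a common, simple approach is to rate likelihood and impact and multiply them. The sketch below assumes 1-to-5 scales and an illustrative acceptance threshold, which each organisation would set for itself.

```python
# A minimal risk-scoring sketch: likelihood x impact on assumed 1-5 scales.
# ISO 27001 does not prescribe a scoring method; this is one common approach.

RISK_THRESHOLD = 12  # assumed acceptance threshold, set by the organisation

risks = [
    # (asset, threat, likelihood 1-5, impact 1-5)
    ("Customer database", "SQL injection", 3, 5),
    ("HR laptop", "Theft of unencrypted device", 2, 4),
    ("Email archive", "Accidental disclosure", 4, 2),
]

for asset, threat, likelihood, impact in risks:
    score = likelihood * impact
    decision = "treat" if score >= RISK_THRESHOLD else "accept / monitor"
    print(f"{asset}: {threat} -> score {score} ({decision})")
```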
Business continuity
ISO 27001 addresses the importance of business continuity management, providing a set of controls that help the organisation protect the availability of information in the event of an incident and protect critical business processes from the effects of major disasters, ensuring their timely resumption.
Testing and assessments
Lastly, organisations that opt for certification to ISO 27001 will have their ISMSs independently assessed and audited by an accredited certification body to ensure that the management system meets the requirements of the Standard. Companies need to regularly review their ISMS and conduct the necessary assessments as prescribed by the Standard in order to ensure it continues protecting the company’s information. Achieving accredited certification to ISO 27001 delivers an independent, expert assessment of whether you have implemented adequate measures to protect your data.
The requirements to achieve compliance with ISO 27001 of course do not stop there. Being a broad standard, it covers many other elements, including the importance of staff awareness training and leadership support. ISO 27001 has already been adopted by thousands of organisations globally, and, given the current rate and severity of data breaches, it is also one of the fastest growing management system standards today.
The GDPR will replace existing national data protection laws with a pan-European regulatory framework effective from 25 May 2018. The GDPR applies to all EU organizations – whether commercial business or public authority – that collect, store or process the personal data (PII) of EU individuals.
Organizations based outside the EU that monitor or offer goods and services to individuals in the EU will have to observe the new European rules and adhere to the same level of protection of personal data. This potentially includes organizations everywhere in the world, regardless of how difficult it may be to enforce the Regulation. Compliance consultants must know the following 9 tenets of the GDPR.
Supervisory Authority – A one-stop shop provision means that organizations will only have to deal with a single supervisory authority, not one for each of the EU’s 28 member states, making it simpler and cheaper for companies to do business in the EU.
Breach Disclosure – Organizations must disclose and document the causes of breaches, effects of breaches, and actions taken to address them.
Data Processors – Processors must be able to provide “sufficient guarantees to implement appropriate technical and organizational measures” to ensure that processing will comply with the GDPR and that data subjects’ rights are protected. This requirement flows down the supply chain, so a processor cannot subcontract work to a second processor without the controller’s explicit authorization. If requested by the data subject, you must cease processing and using their data for a limited period of time.
Data Consent – The Regulation imposes stricter requirements on obtaining valid consent from individuals to justify the processing of their personal data. Consent must be a “freely given, specific, informed and unambiguous indication of the individual’s wishes”. The organization must also keep records so it can demonstrate that consent has been given by the relevant individual (a minimal record-keeping sketch follows this list). Data can only be used for the purposes to which the data subject originally and explicitly consented, and you must obtain and document consent for only one specific purpose at a time.
Right to be forgotten – Individuals have a right to require the data controller to erase all personal data held about them in certain circumstances, such as where the data is no longer necessary for the purposes for which it was collected. If requested by subject, you must erase their data on premises, in apps and on devices.
Data portability – Individuals will have the right to transfer personal data from one data controller to another where processing is based on consent or necessity for the performance of a contract, or where processing is carried out by automated means.
Documentation – The Regulation requires quite a bit of documentation. In addition to the explicit and implicit requirements for specific records (especially including proof of consent from data subjects), you should also ensure that you have documented how you comply with the GDPR so that you have some evidence to support your claims if the supervisory authority has any cause to investigate.
Fines – Major noncompliance with the Regulation will be punishable by fines of up to €20 million or 4% of group annual worldwide turnover, whichever is greater.
Data protection by design – Organizations must ensure data security and data privacy across cloud and endpoints, and design their systems and processes to protect against unauthorized data access and malware. Specifically, organizations must take appropriate technical and organizational measures before data processing begins to ensure that it meets the requirements of the Regulation. Data privacy risks must be properly assessed, and controllers may use adherence to approved codes of conduct or management system certifications, such as ISO 27001, to demonstrate their compliance.
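As mentioned under Data Consent above, here is a minimal sketch of the kind of consent record that would let a controller demonstrate consent under Article 7(1) and support the documentation requirement. The field names are illustrative assumptions, not prescribed by the Regulation.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ConsentRecord:
    """Evidence that a data subject consented, kept so the controller can
    demonstrate consent (GDPR Article 7(1)). Field names are illustrative."""
    subject_id: str            # pseudonymised reference to the data subject
    purpose: str               # the single, specific purpose consented to
    wording_shown: str         # exact consent text presented at the time
    given_at: datetime         # when consent was given
    channel: str               # e.g. "web form", "paper form"
    withdrawn_at: Optional[datetime] = None  # set when consent is withdrawn

    @property
    def is_valid(self) -> bool:
        return self.withdrawn_at is None

# Example (fictitious) record
consents = [
    ConsentRecord("subj-8431", "email newsletter",
                  "Send me the monthly newsletter",
                  datetime(2024, 5, 2, 14, 0), "web form"),
]
```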
How to improve information security under the GDPR
Although many businesses understand the importance of implementing the right procedures to detect, report and investigate a data breach, not many are aware of how to go about this effectively, especially during the implementation phase.
Seven steps that can help you prevent a data breach:
Find out where your personal information resides and prioritize your data.
Identify all the risks that could cause a breach of your personal data.
Apply the most appropriate measures (controls) to mitigate those risks.
Implement the necessary policies and procedures to support the controls.
Conduct regular tests and audits to make sure the controls are working as intended.
Review, report and update your plans regularly.
Implement a comprehensive and robust ISMS.
ISO 27001, the international information security standard, can help you achieve all of the above and protect all your other confidential company information, too. To achieve GDPR compliance, feel free to contact us for more detail on implementation.
As part of an EU General Data Protection Regulation (GDPR) compliance project, organisations will need to map their data and information flows in order to assess their privacy risks. This is also an essential first step for completing a data protection impact assessment (DPIA), which is mandatory for certain types of processing.
The key elements of data mapping
To effectively map your data, you need to understand the information flow, describe it and identify its key elements.
1. Understand the information flow
An information flow is a transfer of information from one location to another, for example:
From inside to outside the European Union; or
From suppliers and sub-suppliers through to customers.
2. Describe the information flow
Walk through the information lifecycle to identify unforeseen or unintended uses of data. This also helps to minimise what data is collected.
Make sure the people who will be using the information are consulted on the practical implications.
Consider the potential future uses of the information collected, even if it is not immediately necessary.
3. Identify its key elements
Data items
What kind of data is being processed (name, email, address, etc.) and what category does it fall into (health data, criminal records, location data, etc.)?
Formats
In what format do you store data (hardcopy, digital, database, bring your own device, mobile phones, etc.)?
Transfer method
How do you collect data (post, telephone, social media) and how do you share it internally (within your organisation) and externally (with third parties)?
Location
What locations are involved within the data flow (offices, the Cloud, third parties, etc.)?
Accountability
Who is accountable for the personal data? Often this changes as the data moves throughout the organisation.
Access
Who has access to the data in question?
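One practical way to capture these key elements is a structured record per information flow. The sketch below is a minimal, hypothetical example in Python; the field names mirror the elements above but are not prescribed by the GDPR.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DataFlow:
    """One entry in a data flow map, covering the key elements listed above.
    Field names are illustrative, not prescribed by the GDPR."""
    name: str                  # e.g. "New customer sign-up"
    data_items: List[str]      # name, email, address, ...
    data_category: str         # e.g. "contact data", "health data"
    data_format: str           # hardcopy, database, mobile device, ...
    collected_via: str         # post, telephone, web form, ...
    shared_with: List[str]     # internal teams and external third parties
    locations: List[str]       # offices, cloud regions, third-party sites
    accountable_owner: str     # who is accountable at this stage
    access: List[str]          # roles or teams with access

signup_flow = DataFlow(
    name="New customer sign-up",
    data_items=["name", "email"],
    data_category="contact data",
    data_format="SaaS CRM database",
    collected_via="web form",
    shared_with=["Sales team", "Email marketing provider"],
    locations=["EU cloud region", "US SaaS vendor"],
    accountable_owner="Head of Marketing",
    access=["Sales", "Marketing operations"],
)
```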
The key challenges of data mapping
Identifying personal data
Personal data can reside in a number of locations and be stored in a number of formats, such as paper, electronic and audio. Your first challenge is deciding what information you need to record and in what format.
Identifying appropriate technical and organizational safeguards
The second challenge is likely to be identifying the appropriate technology – and the policy and procedures for its use – to protect information while also determining who controls access to it.
Understanding legal and regulatory obligations
Your final challenge is determining what your organisation’s legal and regulatory obligations are. As well as the GDPR, this can include other compliance standards, such as the Payment Card Industry Data Security Standard (PCI DSS) and ISO 27001. Once you’ve addressed these three challenges, you’ll be in a position to move forward, gaining the trust and confidence of your key stakeholders.
Data flow mapping
To help you gather the above information and consolidate it into one area, Vigilant Software, a subsidiary of IT Governance, has developed a data flow mapping tool with a specific focus on the GDPR.
Accelerate your GDPR compliance implementation project with the market-leading EU GDPR Documentation Toolkit used by hundreds of organizations worldwide, now with significant improvements and new content for summer 2017:
A complete set of easy-to-use and customizable documentation templates, which will save you time and money, and ensure compliance with the GDPR.
Easy-to-use dashboards and project tools to ensure complete coverage of the GDPR.
Direction and guidance from expert GDPR practitioners.
Includes two licenses for the GDPR Staff Awareness E-learning Course.
The General Data Protection Regulation (GDPR) is a new law that will harmonize data protection in the European Union (EU) and will be enforced from May 25, 2018. It aims to protect EU residents from data and privacy breaches, and has been introduced to keep up with the modern digital landscape.
Who needs to comply with the GDPR?
The GDPR will apply to all organizations outside of the EU that process the personal data of EU residents.
Non-compliance can result in hefty fines of up to 4% of annual global turnover or €20 million ($23.5 million) – whichever is greater.
Organizations that are compliant with the new Regulation will also find that their processes and contractual relationships are more robust and reliable.
What do US organizations need to do to comply with the GDPR?
The transition period for compliance with the GDPR ends in May 2018. This means that organizations now have less than ten months to make sure they are compliant.
For US organizations, the most significant change concerns the territorial reach of the GDPR.
The GDPR will supersede the current EU Data Protection Directive. Under the current Directive, organizations without a physical presence or employees in the EU have one main compliance issue to deal with: how to legally transfer data out of the EU. The EU–US Privacy Shield provides such a mechanism for compliance.
Almost all US organizations that collect or process EU residents’ data will need to comply fully with the requirements of the GDPR. US organizations without a physical EU presence must also appoint a GDPR representative based in a Member State.
Save 10% on your essential guide to the GDPR and the EU–US Privacy Shield
August’s book of the month is the ideal resource for anyone wanting a clear primer on the principles of data protection and their new obligations under the GDPR and the EU–US Privacy Shield.
Strengthen Your Supply Chain with a Vendor Security Posture Assessment
In today’s hyper-connected world, vendor security is not just a checkbox—it’s a business imperative. One weak link in your third-party ecosystem can expose your entire organization to breaches, compliance failures, and reputational harm.
At DeuraInfoSec, our Vendor Security Posture Assessment delivers complete visibility into your third-party risk landscape. We combine ISO 27002:2022 control mapping with CMMI-based maturity evaluations to give you a clear, data-driven view of each vendor’s security readiness.
Our assessment evaluates critical domains including governance, personnel security, IT risk management, access controls, software development, third-party oversight, and business continuity—ensuring no gaps go unnoticed.
✅ Key Benefits:
Identify and mitigate vendor security risks before they impact your business.
Gain measurable insights into each partner’s security maturity level.
Strengthen compliance with ISO 27001, SOC 2, GDPR, and other frameworks.
Build trust and transparency across your supply chain.
Support due diligence and audit requirements with documented, evidence-based results.
Protect your organization from hidden third-party risks—get a Vendor Security Posture Assessment today.
At DeuraInfoSec, our vendor security assessments combine ISO 27002:2022 control mapping with CMMI maturity evaluations to provide a holistic view of a vendor’s security posture. Assessments measure maturity across key domains such as governance, HR and personnel security, IT risk management, access management, software development, third-party management, and business continuity.
Why Vendor Assessments Matter Third-party vendors often handle sensitive information or integrate with your systems, creating potential risk exposure. A structured assessment identifies gaps in security programs, policies, controls, and processes, enabling proactive remediation before issues escalate.
Key Insights from a Typical Assessment
Overall Maturity: Vendors are often at Level 2 (“Managed”) maturity, indicating processes exist but may be reactive rather than proactive.
Critical Gaps: Common areas needing immediate attention include governance policies, security program scope, incident response, background checks, access management, encryption, and third-party risk management.
Remediation Roadmap: Improvements are phased—from immediate actions addressing critical gaps within 30 days, to medium- and long-term strategies targeting full compliance and optimized security processes.
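To illustrate how domain-level CMMI ratings can roll up into an overall maturity picture, here is a minimal sketch. The domain names follow the assessment description above, but the example scores, the simple average, and the gap threshold are illustrative assumptions rather than the exact DeuraInfoSec scoring methodology.

```python
# CMMI-style maturity levels: 1 = Initial, 2 = Managed, 3 = Defined,
# 4 = Quantitatively Managed, 5 = Optimizing. Scores below are fictitious.

domain_scores = {
    "Governance": 2,
    "HR and personnel security": 2,
    "IT risk management": 2,
    "Access management": 3,
    "Software development": 2,
    "Third-party management": 1,
    "Business continuity": 2,
}

overall = sum(domain_scores.values()) / len(domain_scores)
critical_gaps = [domain for domain, score in domain_scores.items() if score < 2]

print(f"Overall maturity: {overall:.1f} (roughly Level 2 'Managed' here)")
print("Domains needing immediate attention:", critical_gaps or "none")
```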
The Benefits of a Structured Assessment
Risk Reduction: Address vulnerabilities before they impact your organization.
Compliance Preparedness: Prepare for ISO 27001, SOC 2, GDPR, HIPAA, PCI DSS, and other regulatory standards.
Continuous Improvement: Establish metrics and KPIs to track security progress over time.
Confidence in Partnerships: Ensure that vendors meet contractual and regulatory obligations, safeguarding your business reputation.
Next Steps Organizations should schedule executive reviews to approve remediation budgets, assign ownership for gap closure, and implement monitoring and measurement frameworks. Follow-up assessments ensure ongoing improvement and alignment with industry best practices.
You may ask your critical vendors to complete the following assessment and share the full assessment results along with the remediation guidance in a PDF report.
Vendor Security Assessment
$57.00 USD
ISO 27002:2022 Control Mapping with CMMI Maturity Assessment – our vendor security assessments combine ISO 27002:2022 control mapping with CMMI maturity evaluations to provide a holistic view of a vendor’s security posture. Assessments measure maturity across key domains such as governance, HR and personnel security, IT risk management, access management, software development, third-party management, and business continuity. The assessment contains 10 profile questions and 47 assessment questions.
DeuraInfoSec Services We help organizations enhance vendor security readiness and achieve compliance with industry standards. Our services include ISO 27001 certification preparation, SOC 2 readiness, virtual CISO (vCISO) support, AI governance consulting, and full security program management.
For organizations looking to strengthen their third-party risk management program and achieve measurable security improvements, a vendor assessment is the first crucial step.
Organizations using AI must adopt governance practices that enable trust, transparency, and ethical deployment. In the governance perspective of CAF-AI, AWS highlights that as AI adoption scales, deployment practices must also guarantee alignment with business priorities, ethical norms, data quality, and regulatory obligations.
A new foundational capability named “Responsible use of AI” is introduced. This capability is added alongside others such as risk management and data curation. Its aim is to enable organizations to foster ongoing innovation while ensuring that AI systems are used in a manner consistent with acceptable ethical and societal norms.
Responsible AI emphasizes mechanisms to monitor systems, evaluate their performance (and unintended outcomes), define and enforce policies, and ensure systems are updated when needed. Organizations are encouraged to build oversight mechanisms for model behaviour, bias, fairness, and transparency.
The lifecycle of AI deployments must incorporate controls for data governance (both for training and inference), model validation and continuous monitoring, and human oversight where decisions have significant impact. This ensures that AI is not a “black box” but a system whose effects can be understood and managed.
The paper points out that as organizations scale AI initiatives—from pilot to production to enterprise-wide roll-out—the challenges evolve: data drift, model degradation, new risks, regulatory change, and cost structures become more complex. Proactive governance and responsible-use frameworks help anticipate and manage these shifts.
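As one concrete example of monitoring for data drift, the sketch below computes the population stability index (PSI) between a training-time sample and a production sample of a single model input. The "PSI > 0.2" rule of thumb in the comment is a common convention, not part of the AWS guidance, and the synthetic data is purely illustrative.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a training-time ("expected") sample and a production
    ("actual") sample of one model input. A common rule of thumb treats
    PSI > 0.2 as meaningful drift; the threshold is a convention."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) for empty bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training_sample = rng.normal(0, 1, 10_000)
production_sample = rng.normal(0.4, 1.2, 10_000)  # simulated drift
print(f"PSI: {population_stability_index(training_sample, production_sample):.3f}")
```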
Part of responsible usage also involves aligning AI systems with societal values — ensuring fairness (avoiding discrimination), explainability (making results understandable), privacy and security (handling data appropriately), robust behaviour (resilience to misuse or unexpected inputs), and transparency (users know what the system is doing).
From a practical standpoint, embedding responsible-AI practices means defining who in the organization is accountable (e.g., data scientists, product owners, governance team), setting clear criteria for safe use, documenting limitations of the systems, and providing users with feedback or recourse when outcomes go astray.
It also means continuous learning: organizations must update policies, retrain or retire models if they become unreliable, adapt to new regulations, and evolve their guardrails and monitoring as AI capabilities advance (especially generative AI). The whitepaper stresses a journey, not a one-time fix.
Ultimately, AWS frames responsible use of AI not just as a compliance burden, but as a competitive advantage: organizations that shape, monitor, and govern their AI systems well can build trust with customers, reduce risk (legal, reputational, operational), and scale AI more confidently.
My opinion: Given my background in information security and compliance, this responsible-AI framing resonates strongly. The shift to view responsible use of AI as a foundational capability aligns with the risk-centric mindset I already bring to vCISO work. In practice, I believe the most valuable elements are: (a) embedding human-in-the-loop oversight, especially where decisions impact individuals; (b) ensuring ongoing monitoring of models for drift and unintended bias; (c) making clear disclosures and being transparent about AI system limitations; and (d) viewing governance not as a one-off checklist but as an evolving process tied to business outcomes and regulatory change.
In short: responsible use of AI is not just ethical “nice to have” — it’s essential for sustainable, trustworthy AI deployment and an important differentiator for service providers (such as vCISOs) who guide clients through AI adoption and its risks.
Here’s a concise, ready-to-use vCISO AI Compliance Checklist based on the AWS Responsible Use of AI guidance, tailored for small to mid-sized enterprises or client advisory use. It’s structured for practicality—one page, action-oriented, and easy to share with executives or operational teams.
vCISO AI Compliance Checklist
1. Governance & Accountability
Assign AI governance ownership (board, CISO, product owner).
Define escalation paths for AI incidents.
Align AI initiatives with organizational risk appetite and compliance obligations.
2. Policy Development
Establish AI policies on ethics, fairness, transparency, security, and privacy.
Define rules for sensitive data usage and regulatory compliance (GDPR, HIPAA, CCPA).
Document roles, responsibilities, and AI lifecycle procedures.
3. Data Governance
Ensure training and inference data quality, lineage, and access control.
Track consent, privacy, and anonymization requirements.
Audit datasets periodically for bias or inaccuracies.
4. Model Oversight
Validate models before production deployment.
Continuously monitor for bias, drift, or unintended outcomes.
Maintain a model inventory and lifecycle documentation.
5. Monitoring & Logging
Implement logging of AI inputs, outputs, and behaviors (see the sketch after this checklist).
Deploy anomaly detection for unusual or harmful results.
Retain logs for audits, investigations, and compliance reporting.
6. Human-in-the-Loop Controls
Enable human review for high-risk AI decisions.
Provide guidance on interpretation and system limitations.
Establish feedback loops to improve models and detect misuse.
7. Transparency & Explainability
Generate explainable outputs for high-impact decisions.
Document model assumptions, limitations, and risks.
Communicate AI capabilities clearly to internal and external stakeholders.
8. Continuous Learning & Adaptation
Retrain or retire models as data, risks, or regulations evolve.
Update governance frameworks and risk assessments regularly.
Monitor emerging AI threats, vulnerabilities, and best practices.
9. Integration with Enterprise Risk Management
Align AI governance with ISO 27001, ISO 42001, NIST AI RMF, or similar standards.
Include AI risk in enterprise risk management dashboards.
Report responsible AI metrics to executives and boards.
✅ Tip for vCISOs: Use this checklist as a living document. Review it quarterly or when major AI projects are launched, ensuring policies and monitoring evolve alongside technology and regulatory changes.
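To make checklist item 5 (Monitoring & Logging) concrete, here is a minimal sketch of structured audit logging for AI interactions. It uses Python's standard logging module; the field names, the local log file, and the decision to store full prompt text are illustrative assumptions and should follow your own retention and privacy policy.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal audit logging for AI interactions (checklist item 5). In practice
# the log sink would be a tamper-evident store with retention controls; a
# local file is used here only for illustration.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_ai_interaction(user_id: str, model: str, prompt: str, response: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,           # pseudonymised user reference
        "model": model,               # model name and version
        "prompt_chars": len(prompt),  # size metadata for quick triage
        "response_chars": len(response),
        "prompt": prompt,             # store full text only if policy allows
        "response": response,
    }
    logging.info(json.dumps(entry))

log_ai_interaction("subj-8431", "example-llm-v1",
                   "Summarise our data retention policy",
                   "The policy keeps customer records for 6 years ...")
```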
AI adversarial attacks exploit vulnerabilities in machine learning systems, often leading to serious consequences such as misinformation, security breaches, and loss of trust. These attacks are increasingly sophisticated and demand proactive defense strategies.
The article from Mindgard outlines six major types of adversarial attacks that threaten AI systems:
1. Evasion Attacks
These occur when malicious inputs are crafted to fool AI models during inference. For example, a slightly altered image might be misclassified by a vision model. This is especially dangerous in autonomous vehicles or facial recognition systems, where misclassification can lead to physical harm or privacy violations.
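For readers who want to see how small such perturbations can be, the sketch below implements the classic fast gradient sign method (FGSM) in PyTorch: each pixel is nudged by a small epsilon in the direction that increases the model's loss. It is an illustrative example of an evasion technique, not code from the Mindgard article.

```python
import torch

def fgsm_example(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                 eps: float = 0.03) -> torch.Tensor:
    """Craft an evasion example with FGSM: shift each input value by eps in
    the direction that increases the classification loss. `model` is any
    differentiable classifier; `eps` is the perturbation budget."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    perturbed = x_adv + eps * x_adv.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```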
2. Poisoning Attacks
Here, attackers tamper with the training data to corrupt the model’s learning process. By injecting misleading samples, they can manipulate the model’s behavior long-term. This undermines the integrity of AI systems and can be used to embed backdoors or biases.
3. Model Extraction Attacks
These involve reverse-engineering a deployed model to steal its architecture or parameters. Once extracted, attackers can replicate the model or identify its weaknesses. This poses a threat to intellectual property and opens the door to further exploitation.
4. Inference Attacks
Attackers attempt to deduce sensitive information from the model’s outputs. For instance, they might infer whether a particular individual’s data was used in training. This compromises privacy and violates data protection regulations like GDPR.
5. Backdoor Attacks
These are stealthy manipulations where a model behaves normally until triggered by a specific input. Once activated, it performs malicious actions. Backdoors are particularly insidious because they’re hard to detect and can be embedded during training or deployment.
6. Denial-of-Service (DoS) Attacks
By overwhelming the model with inputs or queries, attackers can degrade performance or crash the system entirely. This disrupts service availability and can have cascading effects in critical infrastructure.
Consequences
The consequences of these attacks range from loss of trust and reputational damage to regulatory non-compliance and physical harm. They also hinder the scalability and adoption of AI in sensitive sectors like healthcare, finance, and defense.
My take: Adversarial attacks highlight a fundamental tension in AI development: the race for performance often outpaces security. While innovation drives capabilities, it also expands the attack surface. I believe that robust adversarial testing, explainability, and secure-by-design principles should be non-negotiable in AI governance frameworks. As AI systems become more embedded in society, resilience against adversarial threats must evolve from a technical afterthought to a strategic imperative.
The observation that “the race for performance often outpaces security” is especially true in the United States, because there is no single, comprehensive federal cybersecurity or data protection law governing AI across all industries, as the EU AI Act does in Europe.
There is currently an absence of well-defined regulatory frameworks governing the use of generative AI. As this technology advances at a rapid pace, existing laws and policies often lag behind, creating grey areas in accountability, ownership, and ethical use. This regulatory gap can give rise to disputes over intellectual property rights, data privacy, content authenticity, and liability when AI-generated outputs cause harm, infringe copyrights, or spread misinformation. Without clear legal standards, organizations and developers face growing uncertainty about compliance and responsibility in deploying generative AI systems.
1. Costly Implementation: Developing, deploying, and maintaining AI systems can be highly expensive. Costs include infrastructure, data storage, model training, specialized talent, and continuous monitoring to ensure accuracy and compliance. Poorly managed AI investments can lead to financial losses and limited ROI.
2. Data Leaks: AI systems often process large volumes of sensitive data, increasing the risk of exposure. Improper data handling or insecure model training can lead to breaches involving confidential business information, personal data, or proprietary code.
3. Regulatory Violations: Failure to align AI operations with privacy and data protection regulations—such as GDPR, HIPAA, or AI-specific governance laws—can result in penalties, reputational damage, and loss of customer trust.
4. Hallucinations and Deepfakes: Generative AI may produce false or misleading outputs, known as “hallucinations.” Additionally, deepfake technology can manipulate audio, images, or videos, creating misinformation that undermines credibility, security, and public trust.
5. Over-Reliance on AI for Decision-Making: Dependence on AI systems without human oversight can lead to flawed or biased decisions. Inaccurate models or insufficient contextual awareness can negatively affect business strategy, hiring, credit scoring, or security decisions.
6. Security Vulnerabilities in AI Applications: AI software can contain exploitable flaws. Attackers may use methods like data poisoning, prompt injection, or model inversion to manipulate outcomes, exfiltrate data, or compromise integrity.
7. Bias and Discrimination: AI systems trained on biased datasets can perpetuate or amplify existing inequities. This may result in unfair treatment, reputational harm, or non-compliance with anti-discrimination laws.
8. Intellectual Property (IP) Risks: AI models may inadvertently use copyrighted or proprietary material during training or generation, exposing organizations to legal disputes and ethical challenges.
9. Ethical and Accountability Concerns: Lack of transparency and explainability in AI systems can make it difficult to assign accountability when things go wrong. Ethical lapses—such as privacy invasion or surveillance misuse—can erode trust and trigger regulatory action.
10. Environmental Impact: Training and operating large AI models consume significant computing power and energy, raising sustainability concerns and increasing an organization’s carbon footprint.
Recently, a college student learned the hard way that conversations with AI can be used against them. The Springfield Police Department reported that the student vandalized 17 vehicles in a single morning, damaging windshields, side mirrors, wipers, and hoods.
Evidence against the student included his own statements, but notably, law enforcement obtained transcripts of his conversation with ChatGPT from his iPhone. In these chats, the student reportedly asked the AI what would happen if he “smashed the sh*t out of multiple cars” and commented that “no one saw me… and even if they did, they don’t know who I am.”
While the case has a somewhat comical angle, it highlights an important lesson: AI conversations should not be assumed private. Users must treat interactions with AI as potentially recorded and accessible in the future.
Organizations implementing generative AI should address confidentiality proactively. A key consideration is whether user input is used to train or fine-tune models. Questions include whether prompt data, conversation history, or uploaded files contribute to model improvement and whether users can opt out.
Another consideration is data retention and access. Organizations need to define where user input is stored, for how long, and who can access it. Proper encryption at rest and in transit, along with auditing and logging access, is critical. Law enforcement access should also be anticipated under legal processes.
Consent and disclosure are central to responsible AI usage. Users should be informed clearly about how their data will be used, whether explicit consent is required, and whether terms of service align with federal and global privacy standards.
De-identification and anonymity are also crucial. Any data used for training should be anonymized, with safeguards preventing re-identification. Organizations should clarify whether synthetic or real user data is used for model refinement.
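A minimal sketch of one de-identification step is shown below: redacting obvious identifiers from text before it is stored or reused for model improvement. The regular expressions are illustrative and catch only simple patterns; real de-identification needs broader tooling and human review.

```python
import re

# Minimal redaction of obvious identifiers before a prompt is logged or used
# for model improvement. These patterns are intentionally simple examples.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Contact me at jane.doe@example.com or +1 415 555 0100."))
# -> Contact me at [REDACTED EMAIL] or [REDACTED PHONE].
```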
Legal and ethical safeguards are necessary to mitigate risks. Organizations should consider indemnifying clients against misuse of sensitive data, undergoing independent audits, and ensuring compliance with GDPR, CPRA, and other privacy regulations.
AI conversations can have real-world consequences. Even casual or hypothetical discussions with AI might be retrieved and used in investigations or legal proceedings. Awareness of this reality is essential for both individuals and organizations.
In conclusion, this incident serves as a cautionary tale: AI interactions are not inherently private. Users and organizations must implement robust policies, technical safeguards, and clear communication to manage risks. Treat every AI chat as potentially observable, and design systems with privacy, consent, and accountability in mind.
Opinion: This case is a striking reminder of how AI is reshaping accountability and privacy. It’s not just about technology—it’s about legal, ethical, and organizational responsibility. Anyone using AI should assume that nothing is truly confidential and plan accordingly.