
Summary of the key points from the Joint Statement on AI-Generated Imagery and the Protection of Privacy, published on 23 February 2026 by the Global Privacy Assembly’s International Enforcement Cooperation Working Group (IEWG), coordinated by data protection authorities including the UK’s Information Commissioner’s Office (ICO):
📌 What the Statement is:
Data protection regulators from 61 jurisdictions around the world issued a coordinated statement raising serious concerns about AI systems that generate realistic images and videos of identifiable individuals without their consent. This includes content that can be intimate, defamatory, or otherwise harmful.
📌 Core Concerns:
The authorities emphasize that while AI can bring benefits, current developments — especially image and video generation integrated into widely accessible platforms — have enabled misuse that poses significant risks to privacy, dignity, safety, and especially the welfare of children and other vulnerable groups.
📌 Expectations and Principles for Organisations:
Signatories outlined a set of fundamental principles that must guide the development and use of AI content generation systems:
- Implement robust safeguards to prevent misuse of personal information and avoid creation of harmful, non-consensual content.
- Ensure meaningful transparency about system capabilities, safeguards, appropriate use, and risks.
- Provide mechanisms for individuals to request removal of harmful content and respond swiftly.
- Address specific risks to children and vulnerable people with enhanced protections and clear communication.
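To make the expectation of a "swift" response concrete, here is a minimal sketch of how a removal-request intake might track response deadlines. All names are hypothetical, and the 72-hour SLA (tightened for reports involving minors) is an illustrative assumption; the statement itself prescribes no specific deadline.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Illustrative SLA, not a regulatory requirement.
REMOVAL_SLA = timedelta(hours=72)

@dataclass
class RemovalRequest:
    request_id: str
    content_url: str
    reporter_relation: str          # e.g. "depicted person", "parent/guardian"
    involves_minor: bool
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def due_by(self) -> datetime:
        # Reports involving minors get a tighter (illustrative) deadline.
        sla = REMOVAL_SLA / 3 if self.involves_minor else REMOVAL_SLA
        return self.received_at + sla

    def is_overdue(self, now: datetime) -> bool:
        return now > self.due_by()

req = RemovalRequest("R-001", "https://example.com/img/123",
                     "depicted person", involves_minor=True)
print(req.due_by() - req.received_at)  # 24-hour deadline for minor-related reports
```

The point of the sketch is that "respond swiftly" only becomes auditable when every request carries a recorded receipt time and a computable deadline.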
📌 Why It Matters:
By coordinating a global position, regulators are signaling that companies developing or deploying generative AI imagery tools must proactively comply with privacy and data protection laws — and that creating harmful content depicting identifiable individuals without consent can already constitute a criminal offence in many jurisdictions.
How the 23 February 2026 Joint Statement by data protection regulators on AI-generated imagery, co-signed by the UK Information Commissioner’s Office, will affect the future of AI governance globally:
🔎 What the Statement Says (Recap)
The joint statement, coordinated by the Global Privacy Assembly’s International Enforcement Cooperation Working Group (IEWG) and signed by 61 data protection and privacy authorities worldwide, addresses AI systems that can generate realistic images and videos of real people without their knowledge or consent. It restates the principles summarised above: robust safeguards against misuse of personal data, transparency about capabilities, risks, and guardrails, effective removal mechanisms for harmful content involving identifiable individuals, and enhanced protections for children and vulnerable groups. It also stresses compliance with existing privacy and data protection laws and notes that generating non-consensual intimate imagery is already a criminal offence in many places.
🧭 How This Will Shape AI Governance
1. 📈 Raising the Bar on Responsible AI Development
This statement signals a shift from voluntary guidelines to expectations that privacy and human-rights protections must be embedded early in development lifecycles.
- Privacy-by-design will no longer be just a GDPR buzzword – regulators expect demonstrable safeguards from the outset.
- Systems must be transparent about their risks and limitations.
- Organisations failing to do so are more likely to attract enforcement attention, especially where harms affect children or vulnerable groups. (EDPB)
This creates a global baseline of expectations even where laws differ — a powerful signal to tech companies and AI developers.
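What "demonstrable safeguards" might mean in practice can be sketched as a pre-generation check that records every decision. This is an illustrative toy only: a real system would rely on trained classifiers and human policy review rather than a keyword list, and every name and pattern below is an assumption, not anything the statement or any regulator specifies.

```python
import re
import json
from datetime import datetime, timezone

# Toy rule list standing in for a real abuse classifier (assumption).
BLOCKED_PATTERNS = [
    r"\bnude\b", r"\bundress\b", r"\bintimate\b",
]

def screen_prompt(prompt: str) -> dict:
    """Return an auditable decision record for a generation request."""
    hits = [p for p in BLOCKED_PATTERNS if re.search(p, prompt, re.IGNORECASE)]
    decision = {
        "prompt": prompt,
        "allowed": not hits,
        "matched_rules": hits,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }
    # Persisting every decision record is what makes the safeguard demonstrable.
    print(json.dumps(decision))
    return decision

screen_prompt("a watercolor landscape at dusk")      # allowed
screen_prompt("undress the person in this photo")    # blocked
```

The design point is the decision record itself: a safeguard a regulator cannot inspect after the fact is, for enforcement purposes, indistinguishable from no safeguard.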
2. 🛡️ Stronger Enforcement and Coordination Between Regulators
Because 61 authorities co-signed the statement and pledged to share information on enforcement approaches, we should expect:
- More coordinated investigations and inquiries, particularly against major platforms that host or enable AI image generation.
- Cross-border enforcement actions, especially where harmful content is widely distributed.
- Regulators referencing each other’s decisions when assessing compliance with privacy and data protection law. (EDPB)
This cooperation could make compliance more uniform globally, reducing “regulatory arbitrage” where companies try to escape strict rules by operating in lax jurisdictions.
3. ⚖️ Clarifying Legal Risks for Harmful AI Outputs
Two implications for AI governance and compliance:
- Non-consensual image creation may be treated as criminal or civil harm in many places — not just a policy issue. Regulators explicitly said it can already be a crime in many jurisdictions.
- Organisations may face tougher liability and accountability obligations when identifiable individuals are involved — particularly where children are depicted.
This adds legal pressure on AI developers and platforms to ensure their systems don’t facilitate defamation, harassment, or exploitation.
4. 🤝 Encouraging Proactive Engagement Between Industry and Regulators
The statement encourages organisations to engage proactively with regulators, not reactively:
- Early risk assessments
- Regular compliance outreach
- Open dialogue on mitigations
This marks a shift from regulators policing after harm to requiring proactive risk governance — a trend increasingly reflected in broader AI regulation such as the EU AI Act. (mlex.com)
5. 🌐 Contributing to Emerging Global Norms
Even without a single binding law or treaty, this statement helps build international norms for AI governance:
- Shared principles help align diverse legal frameworks (e.g., the GDPR, local privacy laws, and the EU AI Act as its obligations phase in).
- Sets the stage for future binding rules or standards in areas like content provenance, watermarking, and transparency.
- Helps civil society and industry advocate for consistent global risk standards for AI content generation.
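The content-provenance idea mentioned above can be sketched with a signed manifest that binds a claim ("this content is AI-generated, by this generator") to the exact bytes of an image. Real-world provenance uses standards such as C2PA with certificate-based signing; the HMAC key, field names, and manifest layout below are illustrative assumptions only.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"  # assumption: real systems use PKI

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Bind a provenance claim to the content's hash, then sign it."""
    manifest = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and that the content still matches its hash."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["content_sha256"] == hashlib.sha256(image_bytes).hexdigest())

img = b"\x89PNG...fake image bytes"
m = make_manifest(img, "example-model-v1")
print(verify_manifest(img, m))           # True: content matches its manifest
print(verify_manifest(img + b"x", m))    # False: tampered content fails
```

Even this toy version shows why regulators care about provenance: once the claim is cryptographically bound to the content, tampering is detectable by anyone holding the verification key.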
📌 Bottom Line
This joint statement is more than a warning — it’s a governance pivot point. It signals that:
✅ Privacy and data protection are now core governance criteria for generative AI — not a nice-to-have.
✅ Regulators globally are ready to coordinate enforcement.
✅ Companies that build or deploy AI systems will increasingly be held accountable for the real-world harms their outputs can cause.
In short, the statement helps shift AI governance from frameworks and principles toward operational compliance and enforceable expectations.
Source: https://ico.org.uk/media2/fb1br3d4/20260223-iewg-joint-statement-on-ai-generated-imagery.pdf

At DISC InfoSec, we help organizations navigate this landscape by aligning AI risk management, governance, security, and compliance into a single, practical roadmap. Whether you are experimenting with AI or deploying it at scale, we help you choose and operationalize the right frameworks to reduce risk and build trust. Learn more at DISC InfoSec.


