The objectives of the EU AI Act are: harmonized rules for AI systems in the EU, prohibitions on certain AI practices, requirements for high-risk AI, transparency rules, market surveillance, and innovation support.
1. Overview: How the AI Act Treats Open-Source vs. Closed-Source Models
- The EU AI Act (formalized in 2024) regulates AI systems using a risk-based framework that ranges from unacceptable to minimal risk. It also includes a specific layer for general-purpose AI (GPAI)—“foundation models” like large language models.
- Open-source models enjoy limited exemptions, especially if:
- They're not high-risk,
- They're not a prohibited practice and don't interact directly with individuals (which would trigger transparency obligations),
- They're not monetized,
- And they're not deemed to present systemic risk.
- Closed-source (proprietary) models don’t benefit from such leniency and must comply with all applicable obligations across risk categories.
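The exemption criteria above can be read as a simple conjunction of conditions. The sketch below encodes that reading as a helper function; the field names and the `likely_exempt` check are illustrative assumptions for this article, not legal tests from the Act itself (the actual criteria in Regulation (EU) 2024/1689 are more nuanced).

```python
# Hypothetical sketch of the open-source exemption logic described above.
# Field names are illustrative, not legal terms of art.
from dataclasses import dataclass

@dataclass
class Model:
    open_source: bool            # released under a free and open-source licence
    monetized: bool              # placed on the market for payment
    high_risk: bool              # falls into a high-risk category
    interacts_with_people: bool  # triggers transparency obligations
    systemic_risk: bool          # GPAI deemed to present systemic risk

def likely_exempt(m: Model) -> bool:
    """Rough reading of section 1: the open-source carve-out applies
    only when none of the disqualifying conditions hold."""
    return (m.open_source
            and not m.monetized
            and not m.high_risk
            and not m.interacts_with_people
            and not m.systemic_risk)

# A non-monetized research model with no risk flags would likely qualify:
print(likely_exempt(Model(True, False, False, False, False)))  # True

# The same weights behind a paid API lose the exemption:
print(likely_exempt(Model(True, True, False, False, False)))   # False
```

The point of the conjunction is that any single disqualifier (monetization, high-risk use, systemic risk) is enough to pull an open-source model back into the full compliance regime.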
2. Benefits of Open-Source Models under the AI Act
a) Greater Transparency & Documentation
- Open-source code, weights, and architecture are accessible by default—aligning with transparency expectations (e.g., model cards, training data logs)—and often already publicly documented.
- Independent auditing becomes more feasible through community visibility.
- A Stanford study found open-source models tend to comply more readily with data and compute transparency requirements than closed-source alternatives.
b) Lower Compliance Burden (in Certain Cases)
- Exemptions: Non-monetized open-source models that don't pose systemic risk may avoid burdensome obligations such as detailed technical documentation or appointing an authorized representative.
- For academic or purely scientific purposes, there’s additional leniency—even if models are open-source.
c) Encourages Innovation, Collaboration & Inclusion
- Open-source democratizes AI access, reducing barriers for academia, startups, nonprofits, and regional players.
- Wider collaboration speeds up innovation and enables localization (e.g., fine-tuning for local languages or use cases).
- Diverse contributors help surface bias and ethical concerns, making models more inclusive.
3. Drawbacks of Open-Source under the AI Act
a) Disproportionate Regulatory Burden
- The Act’s “one-size-fits-all” approach imposes heavy requirements (like ten-year documentation, third-party audits) even on decentralized, collectively developed models—raising feasibility concerns.
- Who carries responsibility in distributed, open environments remains unclear.
b) Loopholes and Misuse Risks
- The Act’s light treatment of non-monetized open-source models could be exploited by malicious actors to skirt regulations.
- Open-source models can be modified or misused to generate disinformation, deepfakes, or hate content—without safeguards that closed systems enforce.
c) Still Subject to Core Obligations
- Even under exemptions, open-source GPAI must still:
- Disclose training content,
- Respect EU copyright laws,
- Possibly appoint authorized representatives if systemic risk is suspected.
d) Additional Practical & Legal Complications
- Licensing: Some so-called “open-source” models carry restrictive terms (e.g., commercial restrictions, copyleft provisions) that may hinder compliance or downstream use.
- Support disclaimers: Open-source licenses typically disclaim warranties—risking liability gaps.
- Security vulnerabilities: Public availability of code may expose models to tampering or release of harmful versions.
4. Closed-Source Models: Benefits & Drawbacks
Benefits
- Able to enforce usage restrictions, internal safety mechanisms, and fine-grained control over deployment—reducing misuse risk.
- Clear compliance path: centralized providers can manage documentation, audits, and risk mitigation systematically.
- Stable liability chain, with better alignment to legal frameworks.
Drawbacks
- Less transparency: core workings are hidden, making audits and oversight harder.
- Higher compliance burden: must meet all applicable obligations across risk categories without the possibility of exemptions.
- Innovation lock-in: smaller players and researchers may face high entry barriers.
5. Synthesis: Choosing Between Open-Source and Closed-Source under the AI Act
| Dimension | Open-Source | Closed-Source |
|---|---|---|
| Transparency & Auditing | High: code, data, and model accessible | Low: black-box systems |
| Regulatory Burden | Lower for non-monetized, low-risk models; heavy for complex, high-risk cases | Uniformly high, though manageable by central entities |
| Innovation & Accessibility | High: democratizes access and collaboration | Limited: controlled by large orgs |
| Security & Misuse Risk | Higher: modifiable, misuse easier | Lower: safeguarded, controlled deployment |
| Liability & Accountability | Diffuse: decentralized contributors complicate oversight | Clear: central authority responsible |
6. Final Thoughts
Under the EU AI Act, open-source AI is recognized and, in some respects, encouraged—but only under narrow, carefully circumscribed conditions. When models are non-monetized, low-risk, or aimed at scientific research, open-source opens up paths for innovation. The transparency and collaborative dynamics are strong virtues.
However, when open-source intersects with high risk, monetization, or systemic potential, the Act tightens its grip—subjecting models to many of the same obligations as proprietary ones. Worse, ambiguity in responsibility and enforcement may undermine both innovation and safety.
Conversely, closed-source models offer regulatory clarity, security, and control; but at the cost of transparency, higher compliance burden, and restricted access for smaller players.
TL;DR
- Choose open-source if your goal is transparency, inclusivity, and innovation—so long as you keep your model non-monetized, transparently documented, and low-risk.
- Choose closed-source when safety, regulatory oversight, and controlled deployment are paramount, especially in sensitive or high-risk applications.
Further reading on EU AI Act implications
https://www.barrons.com/articles/ai-tech-stocks-regulation-microsoft-google-amazon-meta-30424359
https://apnews.com/article/a3df6a1a8789eea7fcd17bffc750e291
