Aug 21 2025

How to Classify an AI System into One of the EU AI Act Risk Categories: Unacceptable Risk, High Risk, Limited Risk, or Minimal/No Risk

Category: AI, Information Classification | disc7 @ 1:25 pm

🔹 1. Unacceptable Risk (Prohibited AI)

These are AI practices banned outright because they pose a clear threat to safety, rights, or democracy.
Examples:

  • Social scoring by governments (like assigning citizens a “trust score”).
  • Real-time biometric identification in public spaces for mass surveillance (with narrow exceptions like serious crime).
  • Manipulative AI that exploits vulnerabilities (e.g., toys with voice assistants that encourage dangerous behavior in kids).

👉 If your system falls here → it cannot be marketed or used in the EU.


🔹 2. High Risk

These are AI systems with significant impact on people’s rights, safety, or livelihoods. They are allowed but subject to strict compliance (risk management, testing, transparency, human oversight, etc.).
Examples:

  • AI in recruitment (CV screening, job interview analysis).
  • Credit scoring or AI used for approving loans.
  • Medical AI (diagnosis, treatment recommendations).
  • AI in critical infrastructure (electricity grid management, transport safety systems).
  • AI in education (grading, admissions decisions).

👉 If your system is high-risk → must undergo conformity assessment and registration before use.


🔹 3. Limited Risk

These systems carry transparency obligations, but not the full compliance burden that high-risk systems face.
Examples:

  • Chatbots (users must know they’re talking to AI, not a human).
  • AI systems generating deepfakes (must disclose synthetic nature unless for law enforcement/artistic/expressive purposes).
  • Emotion recognition systems in non-high-risk contexts.

👉 If limited risk → inform users clearly that they are interacting with AI; the obligations are lighter than for high-risk systems.


🔹 4. Minimal or No Risk

The majority of AI applications fall here. They’re largely unregulated beyond general EU laws.
Examples:

  • Spam filters.
  • AI-powered video games.
  • Recommendation systems for e-commerce or music streaming.
  • AI-driven email autocomplete.

👉 If minimal/no risk → free use with no extra requirements.


⚖️ Rule of Thumb for Classification:

  • If it manipulates or surveils → often unacceptable risk.
  • If it affects health, jobs, education, finance, safety, or fundamental rights → high risk.
  • If it interacts with humans but without major consequences → limited risk.
  • If it’s just convenience or productivity-related → minimal/no risk.
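To make this rule of thumb concrete, the four tiers and their headline obligations can be captured in a small lookup. Here is a minimal Python sketch; the enum names and obligation summaries paraphrase this post, not the official Act text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal or no risk"

# Headline obligation per tier, paraphrased from the summaries above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: cannot be marketed or used in the EU.",
    RiskTier.HIGH: "Conformity assessment, risk management, registration, human oversight.",
    RiskTier.LIMITED: "Transparency: disclose AI use to users.",
    RiskTier.MINIMAL: "No extra requirements beyond general EU law.",
}
```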

A decision tree you can use to classify any AI system under the EU AI Act risk framework:


🧭 EU AI Act AI System Risk Classification Decision Tree

Step 1: Check for Prohibited Practices

👉 Does the AI system do any of the following?

  • Social scoring of individuals by governments or large-scale ranking of citizens?
  • Manipulative AI that exploits vulnerable groups (e.g., children, people with disabilities, or people with addictions)?
  • Real-time biometric identification in public spaces (mass surveillance), except for narrow law enforcement use?
  • Subliminal manipulation that harms people?

Yes → UNACCEPTABLE RISK (Prohibited, not allowed in EU).
No → go to Step 2.
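As a sketch, Step 1 can be written as a predicate over a set of descriptive tags attached to the system under review. The tag names below are illustrative shorthand for the practices listed above, not official EU AI Act terminology.

```python
# Illustrative shorthand for the prohibited practices listed above.
PROHIBITED_TAGS = {
    "social_scoring",
    "exploits_vulnerable_groups",
    "realtime_public_biometric_id",   # outside the narrow law-enforcement exceptions
    "harmful_subliminal_manipulation",
}

def is_prohibited(tags: set[str]) -> bool:
    """Step 1: any prohibited practice makes the system unacceptable risk."""
    return bool(tags & PROHIBITED_TAGS)
```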


Step 2: Check for High-Risk Use Cases

👉 Does the AI system significantly affect people’s safety, rights, or livelihoods, such as:

  • Biometrics (facial recognition, identification, sensitive categorization)?
  • Education (grading, admissions, student assessment)?
  • Employment (recruitment, CV screening, promotion decisions)?
  • Essential services (credit scoring, access to welfare, healthcare)?
  • Law enforcement & justice (predictive policing, evidence analysis, judicial decision support)?
  • Critical infrastructure (transport, energy, water, safety systems)?
  • Medical devices or health AI (diagnosis, treatment recommendations)?

Yes → HIGH RISK (Strict obligations: conformity assessment, risk management, registration, oversight).
No → go to Step 3.
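Continuing the sketch, Step 2 checks whether the system operates in any of the listed high-risk domains (again, the tag names are illustrative):

```python
# Illustrative shorthand for the high-risk domains listed above.
HIGH_RISK_TAGS = {
    "biometrics", "education", "employment", "essential_services",
    "law_enforcement", "critical_infrastructure", "medical",
}

def is_high_risk(tags: set[str]) -> bool:
    """Step 2: use in a listed high-risk domain triggers strict obligations."""
    return bool(tags & HIGH_RISK_TAGS)
```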


Step 3: Check for Transparency Requirements (Limited Risk)

👉 Does the AI system:

  • Interact with humans in a way that users might think they are talking to a human (e.g., chatbot, voice assistant)?
  • Generate or manipulate content that could be mistaken for real (e.g., deepfakes, synthetic media)?
  • Use emotion recognition or biometric categorization outside high-risk cases?

Yes → LIMITED RISK (Transparency obligations: disclose AI use to users).
No → go to Step 4.
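Step 3 follows the same pattern, checking for the transparency triggers just listed:

```python
# Illustrative shorthand for the transparency triggers listed above.
TRANSPARENCY_TAGS = {
    "human_like_interaction",   # chatbots, voice assistants
    "synthetic_content",        # deepfakes, generated or manipulated media
    "emotion_recognition",      # outside high-risk contexts
}

def requires_transparency(tags: set[str]) -> bool:
    """Step 3: the system must disclose its AI nature to users."""
    return bool(tags & TRANSPARENCY_TAGS)
```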


Step 4: Everything Else

👉 Is the AI system just for convenience, productivity, personalization, or entertainment without major societal or legal impact?

Yes → MINIMAL or NO RISK (Free use, no extra regulation).
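Putting the four steps together (and assuming the RiskTier enum and the three predicate sketches from earlier), the tree reduces to a first-match-wins walk:

```python
def classify(tags: set[str]) -> RiskTier:
    """Walk the decision tree top-down; the first match wins."""
    if is_prohibited(tags):            # Step 1
        return RiskTier.UNACCEPTABLE
    if is_high_risk(tags):             # Step 2
        return RiskTier.HIGH
    if requires_transparency(tags):    # Step 3
        return RiskTier.LIMITED
    return RiskTier.MINIMAL            # Step 4: everything else
```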


⚖️ Quick Classification Examples:

  • Social scoring AI → ❌ Unacceptable Risk
  • AI for medical diagnosis → 🚨 High Risk
  • AI chatbot for customer service → ⚠️ Limited Risk
  • Spam filter / recommender system → ✅ Minimal Risk
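Feeding these quick examples through the classify sketch above (with the illustrative tags) reproduces the same results:

```python
examples = {
    "social scoring AI": {"social_scoring"},
    "medical diagnosis AI": {"medical"},
    "customer service chatbot": {"human_like_interaction"},
    "spam filter / recommender": set(),
}
for name, tags in examples.items():
    print(f"{name} -> {classify(tags).value}")
# social scoring AI -> unacceptable risk
# medical diagnosis AI -> high risk
# customer service chatbot -> limited risk
# spam filter / recommender -> minimal or no risk
```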


Tags: AI categories, AI System, EU AI Act


Sep 02 2022

What is ISO 27001 Information Classification?

Category: Information Classification, ISO 27k | DISC @ 10:50 am

Information classification is a process in which organisations assess the data that they hold and the level of protection it should be given.

Organisations usually classify information in terms of confidentiality – i.e. who is granted access to view it. A typical system contains four levels of confidentiality:

  • Confidential (only senior management have access)
  • Restricted (most employees have access)
  • Internal (all employees have access)
  • Public information (everyone has access)

As you might expect, larger and more complex organisations will need more levels, with each one accounting for specific groups of employees who need access to certain information.

The levels shouldn’t be based on employees’ seniority but on the information that’s necessary to perform certain job functions.

Take the healthcare sector for example. Doctors and nurses need access to patients’ personal data, including their medical histories, which is highly sensitive.

However, they shouldn’t have access to other types of sensitive information, such as financial records.

In these cases, a separate classification should be created to distinguish between sensitive medical information and sensitive administrative information.


Where does ISO 27001 fit in?

Organizations that are serious about data protection should follow ISO 27001.

The Standard describes best practices for creating and maintaining an ISMS (information security management system), and the classification of information plays a crucial role.

Control objective A.8.2 is titled ‘Information Classification’ and requires organisations to “ensure that information receives an appropriate level of protection”.

ISO 27001 doesn’t explain how you should do that, but the process is straightforward. You just need to follow four simple steps.

1) Enter your assets into an inventory

The first step is to collate all your information into an inventory (or asset register).

You should also note who is responsible for it (who owns it) and what format it’s in (electronic documents, databases, paper documents, storage media, etc.).
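As a minimal sketch of such a register, each entry might record the asset, its owner and its format. The field names here are illustrative, not mandated by ISO 27001.

```python
from dataclasses import dataclass

@dataclass
class AssetRecord:
    name: str    # what the information is
    owner: str   # who is responsible for it
    fmt: str     # electronic document, database, paper document, storage media, ...

inventory = [
    AssetRecord("Patient medical histories", "Head of Clinical Records", "database"),
    AssetRecord("Supplier contracts", "Procurement Manager", "paper documents"),
]
```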

2) Classification

Next, you need to classify the information.

Asset owners are responsible for this, but it’s a good idea for senior management to provide guidelines based on the results of the organization’s ISO 27001 risk assessment.

Information that would be affected by more significant risks should usually be given a higher level of confidentiality. But be careful, because this isn’t always the case.

There will be instances where sensitive information must be made available to a broader set of employees for them to do their job. The information may well pose a threat if its confidentiality is compromised, but the organisation must make it widely available in order to function.
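One way to express such guidelines is a default mapping from risk-assessment scores to confidentiality levels, which asset owners can then override where wider availability is justified. The score scale and thresholds below are illustrative assumptions, not part of the Standard.

```python
def default_classification(risk_score: int) -> str:
    """Suggest a confidentiality level from a 1-10 risk score; asset owners may override."""
    if risk_score >= 8:
        return "Confidential"
    if risk_score >= 5:
        return "Restricted"
    if risk_score >= 3:
        return "Internal"
    return "Public"
```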

3) Labelling

Once you’ve classified your information, the asset owner must create a system for labelling it.

You’ll need different processes for information that’s stored digitally and physically, but the labelling scheme should be consistent and clear.

For example, you might decide that paper documents will be labelled on the cover page, the top-right corner of each subsequent page and the folder containing the document.

For digital files, you might list the classification in a column on your databases, on the front page of the document and the header of each subsequent page.
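For digital files, the labelling process can be as simple as prepending a classification banner to a document’s contents. This is a minimal sketch, and the banner format is an assumption rather than a prescribed convention.

```python
def label_document(text: str, classification: str) -> str:
    """Prepend a classification banner to a digital document's contents."""
    banner = f"CLASSIFICATION: {classification.upper()}"
    return f"{banner}\n\n{text}"

print(label_document("Q3 salary review notes ...", "Confidential"))
# CLASSIFICATION: CONFIDENTIAL
#
# Q3 salary review notes ...
```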

4) Handling

Finally, you must establish rules for how to protect each information asset based on its classification and format.

For example, you might say that internal paper documents can be kept in an unlocked cabinet that all employees can access.

By contrast, restricted information should be placed in a locked cabinet, and confidential information stored in a secure location.

Additional rules should be established for data in transit – whether it’s being posted, emailed or carried by employees.

You can keep track of all these rules by using a table like this:

[Image: example information classification table, showing how a table simplifies the data handling documentation process.]
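The original table image isn’t reproduced here, but a hypothetical version of such a handling matrix can be expressed as a small lookup. The rules below are illustrative examples drawn from this post, not a complete policy.

```python
# Hypothetical handling matrix: classification level -> rules per format.
HANDLING_RULES = {
    "Internal": {
        "paper": "Unlocked cabinet accessible to all employees",
        "digital": "Shared drive with company-wide access",
        "in transit": "Internal mail or plain email",
    },
    "Restricted": {
        "paper": "Locked cabinet",
        "digital": "Access-controlled folder",
        "in transit": "Encrypted email",
    },
    "Confidential": {
        "paper": "Secure location, senior management only",
        "digital": "Encrypted storage with named-user access",
        "in transit": "Encrypted transfer or tracked courier",
    },
}
```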

Source: What is ISO 27001 Information Classification

Introduction to Cataloging and Classification

Tags: classification, Introduction to Cataloging and Classification