How to Classify Your AI System Under the EU AI Act
Understanding the EU AI Act's Risk-Based Approach
The EU AI Act (Regulation 2024/1689) takes a risk-based approach to AI regulation. Rather than applying the same rules to every AI system, it categorizes systems into four risk tiers, with obligations proportional to the level of risk. Classification matters because it determines which obligations apply to you and how much work compliance will actually require.
This guide walks you through the classification process step by step, covering the legal framework, practical decision points, and common pitfalls. It should give you a solid working knowledge of how to classify AI systems in your organization, though edge cases will always benefit from legal counsel.
The Four Risk Categories
The EU AI Act defines four risk levels:
1. Prohibited (Unacceptable Risk)
AI practices that are outright banned because they pose an unacceptable risk to fundamental rights. These are defined in Article 5 and include social scoring, manipulative AI, and certain uses of biometric identification. There is no compliance pathway here. Prohibited systems cannot be deployed in the EU.
2. High-Risk
AI systems that pose significant risks to health, safety, or fundamental rights. These must meet extensive requirements before being placed on the market or put into service: risk management, data governance, technical documentation, transparency, human oversight, and more. The two pathways to high-risk classification are defined in Article 6, with the specific use cases listed in Annex III.
3. Limited Risk
AI systems with specific transparency obligations. Users must be informed they are interacting with an AI system. This covers chatbots, emotion recognition systems, and AI systems that generate or manipulate content (deepfakes). The relevant provisions are in Article 50.
4. Minimal Risk
AI systems that pose negligible risk. No mandatory requirements apply, though voluntary codes of conduct are encouraged under Article 95. Most AI applications fall here: spam filters, AI-powered video games, inventory management systems, and so on.
Step 1: Check for Prohibited Practices (Article 5)
Start by checking whether your AI system falls under the prohibited practices defined in Article 5. The key questions:
- Does the system use subliminal, manipulative, or deceptive techniques to materially distort a person's behavior?
- Does it exploit vulnerabilities of a specific group due to age, disability, or social or economic situation?
- Does it perform social scoring, evaluating or classifying people based on their social behavior or personal characteristics in a way that leads to detrimental treatment?
- Does it perform real-time remote biometric identification in publicly accessible spaces for law enforcement purposes?
- Does it create or expand facial recognition databases through untargeted scraping?
- Does it infer emotions in the workplace or in educational institutions?
- Does it categorize people based on biometric data to infer sensitive attributes like race, political opinions, or sexual orientation?
If the answer to any of these is yes, your system is prohibited. Some narrow exceptions exist (for example, law enforcement biometric identification under strict conditions), but these require careful legal analysis.
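To make the screening concrete, here is a minimal sketch of Step 1 as a yes/no checklist in Python. The class and field names are hypothetical paraphrases of the Article 5 questions above, not an official checklist, and the narrow exceptions still call for legal judgment.

```python
from dataclasses import dataclass

@dataclass
class Article5Screen:
    # Hypothetical yes/no answers to the screening questions above
    manipulative_or_deceptive_techniques: bool
    exploits_vulnerabilities: bool
    social_scoring: bool
    realtime_remote_biometric_id_for_law_enforcement: bool
    untargeted_facial_image_scraping: bool
    emotion_inference_at_work_or_school: bool
    biometric_categorisation_of_sensitive_attributes: bool

def is_prohibited(screen: Article5Screen) -> bool:
    # A single "yes" answer places the practice under Article 5
    # (subject to the narrow exceptions noted above).
    return any(vars(screen).values())
```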
Step 2: Check for High-Risk Classification (Article 6 and Annex III)
If your system is not prohibited, the next question is whether it is high-risk. There are two pathways to high-risk classification:
Pathway A: Safety Component of Regulated Products (Article 6(1))
An AI system is high-risk if it is:
- Intended to be used as a safety component of a product, or is itself a product, covered by the EU harmonized legislation listed in Annex I (such as medical devices, machinery, toys, elevators, radio equipment), and
- Required to undergo a third-party conformity assessment under that legislation.
For example, an AI system embedded in a medical device that requires CE marking through a notified body would be high-risk under Pathway A.
Pathway B: Annex III Use Cases (Article 6(2))
An AI system is high-risk if it falls under one of the use case areas listed in Annex III. These eight areas are:
- Biometrics, including remote biometric identification (excluding real-time use in publicly accessible spaces for law enforcement, which is prohibited under Article 5), biometric categorization, and emotion recognition systems.
- Critical infrastructure, covering AI used as safety components in digital infrastructure, road traffic, and the supply of water, gas, heating, or electricity.
- Education and vocational training. This includes AI that determines access to educational institutions, evaluates learning outcomes, assesses appropriate education levels, or monitors exam behavior.
- Employment and workers' management, such as AI for recruitment, screening and filtering job applications, promotion and termination decisions, task allocation based on personal traits, and employee performance monitoring.
- Access to essential services. Think AI that evaluates eligibility for public benefits, creditworthiness for loans, risk in life and health insurance, and emergency services dispatch.
- Law enforcement, including risk assessment of individuals, polygraphs, evidence reliability evaluation, profiling in criminal investigations, and crime analytics.
- Migration, asylum and border control, covering risk assessment, document authentication, and examination of asylum applications.
- Administration of justice and democratic processes, meaning AI that assists judicial authorities in researching and interpreting facts and law, as well as AI intended to influence the outcome of an election or referendum or people's voting behavior.
There is an exception worth knowing about. Under Article 6(3), an AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. This applies when the system performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing or influencing human assessment, or performs a preparatory task to an assessment. The exception never applies, however, where the system performs profiling of natural persons.
If you rely on this exception, document your assessment before placing the system on the market, register the system in the EU database, and be ready to provide the documentation to the national competent authority on request. The authority can still disagree with your assessment.
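As a rough illustration, the Step 2 logic can be captured like this. The names are hypothetical, the Annex III match and the derogation conditions still have to be assessed by a person, and the sketch simply mirrors the two pathways and the Article 6(3) conditions described above.

```python
from dataclasses import dataclass

@dataclass
class HighRiskScreen:
    # Pathway A (Article 6(1))
    safety_component_of_annex_i_product: bool
    third_party_conformity_assessment_required: bool
    # Pathway B (Article 6(2))
    annex_iii_use_case: bool
    # Article 6(3) derogation conditions
    narrow_procedural_task: bool = False
    improves_completed_human_activity: bool = False
    detects_patterns_without_influencing_assessment: bool = False
    preparatory_task_only: bool = False
    performs_profiling: bool = False  # profiling rules out the derogation

def is_high_risk(s: HighRiskScreen) -> bool:
    pathway_a = (s.safety_component_of_annex_i_product
                 and s.third_party_conformity_assessment_required)
    derogation = (not s.performs_profiling) and any([
        s.narrow_procedural_task,
        s.improves_completed_human_activity,
        s.detects_patterns_without_influencing_assessment,
        s.preparatory_task_only,
    ])
    # If you rely on the derogation, document the assessment and register
    # the system as described above.
    pathway_b = s.annex_iii_use_case and not derogation
    return pathway_a or pathway_b
```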
Step 3: Check for Limited-Risk Transparency Obligations (Article 50)
If your system is not prohibited or high-risk, check whether it triggers transparency obligations under Article 50. These apply to:
- AI systems that interact with people. Users must be informed they are interacting with an AI system (e.g., chatbots must identify themselves as AI).
- Emotion recognition and biometric categorization systems. People subjected to these must be informed.
- AI-generated or manipulated content, including deepfakes, which must be labeled as such in a machine-readable format.
One thing to watch: these transparency obligations can apply in addition to high-risk requirements. A high-risk AI system that also interacts with users must satisfy both sets of obligations.
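A minimal sketch of the Article 50 triggers, again with hypothetical names:

```python
def article_50_applies(interacts_with_users: bool,
                       emotion_recognition_or_biometric_categorisation: bool,
                       generates_or_manipulates_content: bool) -> bool:
    # These duties stack with high-risk requirements rather than replacing them.
    return (interacts_with_users
            or emotion_recognition_or_biometric_categorisation
            or generates_or_manipulates_content)
```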
Step 4: Classify as Minimal Risk
If your AI system is not prohibited, not high-risk under either pathway, and does not trigger limited-risk transparency obligations, it is minimal risk. No mandatory requirements apply, but you are encouraged to voluntarily adopt codes of conduct (Article 95) and ensure AI literacy within your organization (Article 4).
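Putting the four steps together, the decision order looks roughly like this. This is a sketch of the sequencing only, not a substitute for the detailed checks in Steps 1 through 3 or for legal review.

```python
def classify(prohibited: bool, high_risk: bool, transparency: bool) -> str:
    # Order matters: prohibited practices are checked first, then the two
    # high-risk pathways, then the Article 50 transparency triggers.
    if prohibited:
        return "prohibited"
    if high_risk:
        # Article 50 duties apply on top of the high-risk requirements.
        return "high-risk (plus transparency duties)" if transparency else "high-risk"
    if transparency:
        return "limited risk"
    return "minimal risk"
```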
Common Classification Mistakes
Here are the errors we see most often when organizations go through this process.
Embedded AI gets overlooked
Many organizations focus on standalone AI applications but forget about AI components embedded in larger products or services. A recommendation engine in an HR platform, for example, could be high-risk even if the platform vendor does not market it as an "AI product."
The Article 6(3) exception is narrower than you think
The exception for AI systems in Annex III that do not pose significant risk sounds broad, but the four qualifying conditions (narrow procedural task, improvement of completed activity, detection without replacement, preparatory task) are quite specific. Document your reasoning carefully. Regulators will scrutinize it.
Provider vs. deployer confusion
Your obligations differ based on whether you are a provider or deployer. This distinction trips people up especially when modifications are involved: if you significantly modify a high-risk AI system, you may become its new provider, inheriting all provider obligations including the conformity assessment.
GPAI model integration
If your AI system uses a general-purpose AI model (like a large language model), the GPAI model itself has separate obligations under Chapter V. But building a high-risk application on top of a GPAI model does not exempt you. The high-risk obligations still apply to your system.
Treating classification as a one-time exercise
Changes to your AI system's purpose, functionality, or deployment context can shift its risk level. You need ongoing monitoring and a process for re-classification when things change.
Practical Tips for Classification
Start with an AI inventory. You cannot classify what you have not identified. Before starting classification, create an inventory of all AI systems in your organization, including third-party tools and embedded AI components.
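A lightweight way to structure the inventory is one record per system. The fields below are purely illustrative; adapt them to your own governance process.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InventoryEntry:
    name: str
    owner: str                      # accountable team or person
    vendor: Optional[str]           # third-party supplier, if any
    embedded_in: Optional[str]      # parent product, if the AI is a component
    intended_purpose: str
    affected_population: str        # e.g. job applicants, patients, customers
    deployment_context: str         # where and how the system is used
    uses_gpai_model: bool           # built on a general-purpose AI model?
    classification: Optional[str] = None  # filled in after the assessment
```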
Get the right people in the room. Classification requires input from legal, technical, and business teams. Legal understands the regulatory framework. Technical knows what the system actually does. Business knows how it is deployed and who it affects.
Document your reasoning. Whatever the classification outcome, write it down. Regulators will expect to see evidence of a thorough classification process, especially if you classify a system as non-high-risk.
Use tooling where you can. Manual classification with spreadsheets does not scale well and is error-prone. Structured assessment tools that walk you through the decision tree are worth the investment. Our free risk classification wizard covers each step with contextual help and instant results.
Build in re-classification triggers. Any change to the system's intended purpose, the population it affects, or its deployment context should prompt a fresh review. Make this part of your AI governance process rather than an afterthought.
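Reusing the hypothetical InventoryEntry above, a re-classification trigger can be as simple as comparing the fields that matter:

```python
def needs_reclassification(old: InventoryEntry, new: InventoryEntry) -> bool:
    # Any change to purpose, affected population, or deployment context
    # should prompt a fresh review.
    return (old.intended_purpose != new.intended_purpose
            or old.affected_population != new.affected_population
            or old.deployment_context != new.deployment_context)
```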
What Comes After Classification?
Classification determines your compliance roadmap. If a system is prohibited, you need to cease deployment or redesign it fundamentally. High-risk systems require the full set of obligations: risk management, data governance, Annex IV documentation, logging, transparency, human oversight, accuracy and robustness, quality management, conformity assessment, EU database registration, and post-market monitoring. Limited-risk systems need transparency measures per Article 50. Minimal-risk systems have no mandatory requirements, though voluntary codes of conduct and AI literacy are encouraged.
If you want help working through this, our free classification wizard can walk you through the decision tree for your specific systems. For a broader overview of the regulation, see our complete EU AI Act guide.