EU AI Act Risk Categories Explained
The EU AI Act classifies AI systems into four risk tiers. Each tier carries different compliance obligations — from an outright ban to no mandatory requirements. Understanding where your AI systems fall is the foundation of compliance.
Prohibited AI Practices
Article 5 · Enforceable from 2 February 2025
Certain AI applications are deemed an unacceptable risk to fundamental rights, safety, and democratic values. These are completely banned under the EU AI Act, with no conformity assessment pathway — they simply cannot be placed on the market or used within the EU.
Banned Practices Include:
- Social scoring by public authorities — AI systems that evaluate or classify individuals based on social behaviour or personal characteristics, leading to detrimental or unfavourable treatment unrelated to the context in which the data was originally generated.
- Real-time remote biometric identification in public spaces — Using AI for live facial recognition in publicly accessible areas for law enforcement purposes. Narrow exceptions exist for searching for missing children, preventing imminent terrorist threats, and locating suspects of serious crimes (subject to judicial authorisation).
- Subliminal manipulation — AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort behaviour in a manner likely to cause physical or psychological harm.
- Exploitation of vulnerabilities — AI that targets specific groups — based on age, disability, or social or economic situation — to distort their behaviour in harmful ways.
- Emotion recognition in workplaces and schools — AI systems used to infer emotions of employees or students, with limited exceptions for medical or safety purposes.
- Untargeted facial image scraping — Building facial recognition databases by scraping images from the internet or CCTV footage without consent.
- Predictive policing based on profiling — AI that assesses the risk of an individual committing a criminal offence based solely on profiling or personality traits (not objective, verifiable facts).
What You Must Do
If any of your AI systems fall into these categories, you must discontinue their use immediately. There is no transition period — these prohibitions have been enforceable since 2 February 2025. Violations carry the highest penalties: up to €35 million or 7% of global annual turnover, whichever is higher.
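The penalty cap works as the greater of the two amounts, so for large companies the turnover-based figure dominates. A minimal sketch of the arithmetic, with hypothetical turnover figures:

```python
def prohibited_practice_penalty_cap(global_annual_turnover_eur: float) -> float:
    """Maximum fine for prohibited-practice violations under the EU AI Act:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A company with EUR 1 billion turnover: 7% (EUR 70m) exceeds EUR 35m.
large_cap = prohibited_practice_penalty_cap(1_000_000_000)

# A company with EUR 100 million turnover: the EUR 35m floor applies.
small_cap = prohibited_practice_penalty_cap(100_000_000)
```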
High-Risk AI Systems
Articles 6–7, Annex III · Applicable from 2 August 2026
High-risk AI systems pose significant risks to health, safety, or fundamental rights but are permitted on the market provided they comply with stringent requirements. This is the most compliance-intensive category.
Annex III Categories
AI systems are classified as high-risk if they fall into one of these domains:
- Biometric identification and categorisation — Remote biometric identification (excluding the prohibited real-time use in public spaces) and biometric categorisation based on sensitive attributes.
- Critical infrastructure — Management and operation of road traffic, water, gas, heating, and electricity supply, including AI used as safety components.
- Education and vocational training — AI for admissions, student assessment, monitoring behaviour during exams, or assigning educational content based on learning levels.
- Employment and worker management — Recruitment screening, interview evaluation, promotion and termination decisions, task allocation, and performance monitoring.
- Access to essential services — Credit scoring, insurance risk assessment, eligibility for public benefits, and emergency service dispatch prioritisation.
- Law enforcement — Risk assessments for victims and offenders, polygraph tools, evidence evaluation, and crime analytics (where not prohibited).
- Migration and border control — Visa application screening, asylum application assessment, and risk assessment tools at borders.
- Administration of justice — AI used to research and interpret facts and law, or to apply rules to concrete facts.
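The classification logic above can be sketched as a simple triage helper. This is an illustrative simplification, not legal advice: the domain names are shorthand for the Annex III categories, and a real determination requires analysis of the system's specific purpose and the Act's exemptions.

```python
# Hypothetical triage helper mirroring the tiers in this guide.
# Domain keys are shorthand for the Annex III categories listed above.
ANNEX_III_DOMAINS = {
    "biometric_identification",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "administration_of_justice",
}

def risk_tier(domain: str, prohibited: bool = False) -> str:
    """Return a first-pass EU AI Act risk tier for an AI system.

    Prohibited practices (Article 5) override everything else;
    Annex III domains indicate high-risk; anything else still needs
    an Article 50 transparency check before assuming minimal risk.
    """
    if prohibited:
        return "prohibited"
    if domain in ANNEX_III_DOMAINS:
        return "high-risk"
    return "limited-risk or minimal-risk (check Article 50)"
```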
What You Must Do
High-risk systems require a comprehensive compliance programme: risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy and robustness testing, a quality management system, conformity assessment, and EU database registration. See our high-risk systems guide for the full requirements breakdown.
Limited-Risk AI Systems
Article 50 · Transparency obligations
Limited-risk AI systems do not carry the full weight of high-risk compliance but must meet specific transparency obligations designed to ensure users know they are interacting with AI or viewing AI-generated content.
Systems Covered
- Chatbots and conversational AI — Any AI system that interacts directly with natural persons must clearly disclose that the person is interacting with an AI system, unless this is obvious from the circumstances.
- Deepfakes and AI-generated content — Content that has been artificially generated or manipulated (images, audio, video) must be labelled as such. Providers must ensure outputs are marked in a machine-readable format where technically feasible.
- Emotion recognition systems — Where not prohibited (i.e., outside workplace/education contexts), emotion recognition systems must inform the persons exposed to them.
- Biometric categorisation systems — Systems that categorise individuals based on biometric data must inform those being categorised.
What You Must Do
The primary obligation is transparency. Ensure that users are informed when they interact with an AI system, that AI-generated or manipulated content is labelled, and that individuals subjected to emotion recognition or biometric categorisation are notified. Penalties for non-compliance with transparency obligations can still reach €15 million or 3% of global annual turnover, whichever is higher.
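One way to approach the machine-readable labelling obligation is to attach a disclosure field to content metadata at generation time. The field names below are purely illustrative assumptions, not a standard mandated by the Act; in practice you would follow an established provenance scheme for your content format.

```python
def label_ai_generated(content_metadata: dict) -> dict:
    """Attach a machine-readable AI-generation disclosure to content
    metadata. Field names are illustrative, not a regulatory standard."""
    labelled = dict(content_metadata)  # avoid mutating the caller's dict
    labelled["ai_generated"] = True
    labelled["disclosure"] = (
        "This content was generated or manipulated by an AI system."
    )
    return labelled

meta = label_ai_generated({"title": "Product photo", "format": "image/png"})
```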
Minimal-Risk AI Systems
No mandatory requirements
The vast majority of AI systems in use today fall into the minimal-risk category. These are AI applications that do not pose significant risks to fundamental rights or safety and are therefore subject to no mandatory requirements under the EU AI Act.
Examples
- AI-powered spam filters and email categorisation
- AI in video games and entertainment
- Inventory management and demand forecasting systems
- AI-assisted code completion and developer tools
- Basic content recommendation engines (non-manipulative)
- Search algorithms and information retrieval
- Manufacturing quality control through image recognition
Voluntary Compliance
While no mandatory requirements exist, the EU AI Act encourages providers of minimal-risk AI to voluntarily adopt codes of conduct that mirror the principles applied to higher-risk systems. This includes transparency about AI use, basic performance monitoring, and adherence to ethical AI principles. Voluntary compliance can demonstrate corporate responsibility and build user trust.
Even for minimal-risk AI, other EU regulations may still apply. The GDPR remains relevant whenever personal data is processed, and sector-specific regulations (such as financial services or medical devices) may impose additional requirements.
Not sure which category your AI falls into?
Our interactive assessment walks you through the classification criteria in under 10 minutes. No account required.
Classify your AI system
Get the complete compliance picture
From classification to conformity assessment — Haffa.ai guides you through every step.