EU AI Act Risk Classification Guide
The EU AI Act uses a risk-based approach. Not all AI systems get the same treatment. A spam filter and a recruitment algorithm operate under completely different rules because the potential for harm is different. The regulation sorts AI into four tiers, and your obligations depend on where your system lands.
Tier 1: Unacceptable Risk — Banned
Some AI applications are prohibited outright. No conformity assessment or mitigation measure makes them acceptable; the only carve-outs are the narrow law-enforcement exceptions noted below. The ban took effect on February 2, 2025.
Banned systems include:
- Social scoring by public authorities that leads to detrimental treatment of individuals based on their social behavior or personal characteristics.
- Real-time remote biometric identification in publicly accessible spaces by law enforcement, with three narrow exceptions: searching for missing persons and victims of abduction or trafficking, preventing an imminent terrorist attack or threat to life, and locating suspects of specific serious crimes.
- Subliminal or deliberately manipulative techniques that materially distort a person's behavior, and systems that exploit vulnerabilities (age, disability, social or economic situation) in ways that cause or are likely to cause harm.
- Emotion recognition in workplaces and education, except where necessary for safety or medical reasons.
- Untargeted facial image scraping from the internet or CCTV to build recognition databases.
If your system does any of these things, stop. There is no compliance path.
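If you maintain an internal inventory of AI systems, this tier is worth automating as a hard stop. A minimal sketch in Python, assuming a hypothetical tagging scheme (the tag names and screen_for_prohibited are our own shorthand, not terms from the Act):

```python
# Hypothetical pre-deployment screen against the Tier 1 bans above.
# Tag names are internal shorthand, not legal terminology.
PROHIBITED_PRACTICES = {
    "social_scoring_by_public_authorities",
    "realtime_remote_biometric_id_in_public",      # outside the narrow law-enforcement exceptions
    "subliminal_or_exploitative_manipulation",
    "emotion_recognition_workplace_or_education",  # unless for safety or medical reasons
    "untargeted_facial_image_scraping",
}

def screen_for_prohibited(system_tags: set[str]) -> None:
    """Fail fast if a system matches any ban; there is no compliance path."""
    hits = system_tags & PROHIBITED_PRACTICES
    if hits:
        raise ValueError(f"Prohibited under the EU AI Act: {sorted(hits)}")
```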
Tier 2: High Risk — Regulated Heavily
This is where the regulation has the most teeth. High-risk systems are legal but come with extensive obligations covering documentation, testing, monitoring, and human oversight.
Two pathways land an AI system in the high-risk category:
Annex I: AI in regulated products
AI embedded in products already covered by EU safety legislation: medical devices, vehicles, toys, machinery, elevators, aviation systems. These go through existing conformity assessment procedures, now updated to include AI-specific requirements.
Annex III: Standalone high-risk use cases
Eight categories of AI applications:
- Biometrics: remote biometric identification, biometric categorization based on sensitive attributes, and emotion recognition (where not banned outright).
- Critical infrastructure management: AI controlling water, gas, heating, electricity supply, or digital infrastructure.
- Education and vocational training: systems that determine access, admission, assign students to groups, or assess learning outcomes.
- Employment and workers management: recruitment tools, promotion decisions, task allocation, performance monitoring, termination decisions.
- Essential services access: credit scoring, risk assessment and pricing for life and health insurance, eligibility for public benefits, emergency call triage and dispatch prioritization.
- Law enforcement: risk assessment of individuals, polygraph systems, evidence reliability evaluation, crime prediction.
- Migration and border control: visa and asylum application assessment, border surveillance, risk screening.
- Justice and democratic processes: AI assisting judicial authorities in researching and interpreting facts and applying the law, and systems intended to influence elections or voting behavior.
For details on compliance obligations for these systems, see our high-risk AI systems guide.
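For inventory purposes, the eight categories are easy to track as an enumeration. A sketch under the same hypothetical tagging scheme; the identifiers are our shorthand and the value strings paraphrase the Annex headings:

```python
from enum import Enum

class AnnexIIICategory(Enum):
    BIOMETRICS = "biometric identification and categorization"
    CRITICAL_INFRASTRUCTURE = "critical infrastructure management"
    EDUCATION = "education and vocational training"
    EMPLOYMENT = "employment and workers management"
    ESSENTIAL_SERVICES = "access to essential services"
    LAW_ENFORCEMENT = "law enforcement"
    MIGRATION_BORDER = "migration and border control"
    JUSTICE_DEMOCRACY = "justice and democratic processes"
```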
Tier 3: Limited Risk — Transparency Only
Limited-risk systems have one primary obligation: tell people they are interacting with AI.
This applies to:
- Chatbots — users must know they are talking to a machine, not a human. Read our chatbot requirements breakdown.
- Deepfakes and AI-generated content — must be labeled as artificially generated or manipulated.
- Emotion recognition systems (outside the banned workplace and education contexts) — users must be informed that emotion detection is occurring.
Most customer support chatbots fall here. The obligation is straightforward: disclose the AI nature of the interaction at the point of contact.
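In practice, the simplest implementation is making the disclosure the first message of every session. A minimal sketch; the wording and the send_message callback are hypothetical, since the Act requires that users be informed, not any particular phrasing:

```python
AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human."

def open_chat_session(send_message) -> None:
    # Disclose the AI nature of the interaction before any other content.
    send_message(AI_DISCLOSURE)

open_chat_session(print)  # example transport: prints the disclosure
```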
Tier 4: Minimal Risk — No Specific Obligations
The vast majority of AI systems in use today fall into this category: spam filters, content recommendation engines, AI-assisted inventory management, weather prediction models, AI in video games.
The regulation imposes no specific requirements on these systems. Companies can voluntarily adopt codes of conduct, but nothing is mandated.
How to Determine Your System's Risk Tier
Walk through this sequence (a code sketch of the same logic follows the list):
- Is your system on the prohibited list? If yes, you cannot deploy it.
- Is it embedded in a product covered by Annex I EU legislation? If yes, it is high risk.
- Does it fall under one of the eight Annex III categories? If yes, it is presumed high risk. But note: Article 6(3) lets an Annex III system escape the high-risk classification if it does not pose a significant risk of harm, for instance because it only performs a narrow procedural or preparatory task or does not materially influence decision outcomes. The carve-out never applies to systems that profile natural persons.
- Does it interact directly with people, generate synthetic content, or detect emotions? If yes, it is limited risk.
- None of the above? Minimal risk.
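Here is that sequence as code. The boolean fields are hypothetical stand-ins for legal analysis under Annex I, Annex III, and Article 6(3); a real assessment is a legal judgment, not a set of flags:

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    prohibited_practice: bool    # matches any Tier 1 ban
    annex_i_product: bool        # embedded in an Annex I regulated product
    annex_iii_use_case: bool     # falls under one of the eight Annex III categories
    article_6_3_carve_out: bool  # narrow procedural/preparatory task, no significant risk
    transparency_trigger: bool   # chatbot, synthetic content, or emotion detection

def classify(profile: AISystemProfile) -> str:
    if profile.prohibited_practice:
        return "unacceptable risk: cannot be deployed"
    if profile.annex_i_product:
        return "high risk (Annex I)"
    if profile.annex_iii_use_case and not profile.article_6_3_carve_out:
        return "high risk (Annex III)"
    if profile.transparency_trigger:
        return "limited risk: transparency obligations"
    return "minimal risk"

# Example: a recruitment screening tool with a chat front end.
print(classify(AISystemProfile(False, False, True, False, True)))  # high risk (Annex III)
```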
When in doubt, classify up. The penalties for getting it wrong far outweigh the cost of over-compliance.
What Comes Next
Once you know your risk tier, the next step is matching it to specific obligations. Our compliance checklist walks through what each tier requires. For the broader regulatory timeline, check the EU AI Act implementation schedule.
For context on how this fits with existing data protection rules, see EU AI Act vs GDPR.