High-Risk AI Systems Under the EU AI Act
High-risk AI systems sit at the core of the EU AI Act's regulatory framework. They are legal to deploy but carry the heaviest compliance burden: risk management, technical documentation, data governance, human oversight, logging, and conformity assessment. Getting this classification wrong, in either direction, costs money.
Classify too low and you face enforcement penalties. Classify too high and you spend resources on unnecessary compliance. This guide covers exactly what qualifies as high risk and what the designation requires.
Two Paths to High-Risk Classification
Path 1: AI in regulated products (Annex I)
If your AI system is a safety component of a product covered by the EU harmonization legislation listed in Annex I, or is itself such a product, and that product must undergo third-party conformity assessment under that legislation, it is high risk. This covers AI in:
- Medical devices (Regulation 2017/745)
- In-vitro diagnostic devices (Regulation 2017/746)
- Motor vehicles (Regulation 2019/2144)
- Aviation (Regulation 2018/1139)
- Machinery (Regulation 2023/1230)
- Toys, lifts, pressure equipment, radio equipment
These systems follow existing CE marking procedures, updated to incorporate AI-specific requirements. Full enforcement for Annex I systems: August 2, 2027.
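For teams mapping an AI inventory against Annex I, it can help to make the sector-to-legislation lookup explicit. A minimal sketch in Python; the sector keys and the dict structure are our own illustration, not terminology from the Act:

```python
# Hypothetical lookup: the Annex I harmonisation legislation listed above,
# keyed by sector labels of our own choosing (not terms from the Act).
ANNEX_I_LEGISLATION = {
    "medical_devices": "Regulation (EU) 2017/745",
    "in_vitro_diagnostics": "Regulation (EU) 2017/746",
    "motor_vehicles": "Regulation (EU) 2019/2144",
    "aviation": "Regulation (EU) 2018/1139",
    "machinery": "Regulation (EU) 2023/1230",
}

def annex_i_basis(sector: str) -> str | None:
    """Return the product legislation a sector falls under, if tracked."""
    return ANNEX_I_LEGISLATION.get(sector)

print(annex_i_basis("medical_devices"))  # Regulation (EU) 2017/745
```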
Path 2: Standalone high-risk use cases (Annex III)
Annex III lists eight categories of AI systems that are high risk regardless of the product they run on (a code triage sketch follows the list):
1. Biometrics
Remote biometric identification systems. Note that real-time remote biometric identification in publicly accessible spaces for law enforcement is not merely high risk but banned outright, with narrow exceptions. Also systems that categorize people based on sensitive or protected attributes inferred from biometric data.
2. Critical infrastructure
AI managing or operating as safety components of: road traffic, water supply, gas, heating, electricity, and digital infrastructure. An AI system optimizing power grid load distribution falls here.
3. Education and vocational training
Systems determining access to education, evaluating learning outcomes, assessing appropriate education levels, or monitoring prohibited behavior during tests. An AI proctoring tool is high risk.
4. Employment and worker management
AI in recruitment (resume screening, interview assessment), promotion and termination decisions, task allocation based on individual traits, and performance monitoring. This is one of the most commercially relevant categories.
5. Access to essential services
AI evaluating creditworthiness, setting insurance premiums or assessing insurance claims, prioritizing emergency service dispatch, or assessing eligibility for public benefits and services.
For customer support teams: if your AI decides whether to grant or deny access to an essential service without human review, it lands here.
6. Law enforcement
Individual risk assessment systems, polygraphs and similar detection tools, evidence reliability evaluation, crime analytics, and profiling of natural persons.
7. Migration, asylum, and border control
AI assessing visa and asylum applications, border surveillance systems, and risk screening tools for irregular migration.
8. Administration of justice and democratic processes
AI assisting judicial authorities in researching and interpreting facts and the law. AI systems intended to influence the outcome of an election or referendum, or voting behavior.
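Encoded as data, the eight areas support a repeatable first-pass screen across an AI inventory. A minimal sketch, assuming our own paraphrased labels (these are not official identifiers from the Act):

```python
from enum import Enum

class AnnexIIIArea(Enum):
    # Paraphrased labels for the eight areas described above.
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION_VOCATIONAL_TRAINING = 3
    EMPLOYMENT_WORKER_MANAGEMENT = 4
    ACCESS_TO_ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION_ASYLUM_BORDER = 7
    JUSTICE_DEMOCRATIC_PROCESSES = 8

def provisionally_high_risk(area: AnnexIIIArea | None) -> bool:
    """First-pass screen: any system in an Annex III area is presumed
    high risk until the exception clause below is assessed."""
    return area is not None

# Example from the text: an AI proctoring tool falls under education.
print(provisionally_high_risk(AnnexIIIArea.EDUCATION_VOCATIONAL_TRAINING))  # True
```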
The Exception Clause
Not every Annex III system automatically stays high risk. The Act includes a carve-out (Article 6(3)): an Annex III system is not high risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, and it meets at least one of these conditions:
- Performs a narrow procedural task (data conversion, formatting)
- Improves the result of a previously completed human activity
- Detects decision-making patterns without replacing human assessment
- Performs a preparatory task for an assessment listed in Annex III
One hard limit: an Annex III system that performs profiling of natural persons is always high risk, no matter which condition it meets. And the exception requires documentation. You cannot just claim it applies; you must record the assessment and be able to demonstrate why it holds.
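The carve-out reads naturally as a boolean check. Here is a sketch of that logic with field names of our own invention; it structures the reasoning but does not replace the written assessment the Act expects:

```python
from dataclasses import dataclass

@dataclass
class ExceptionAssessment:
    # The four carve-out conditions from the list above.
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_replacing_humans: bool
    preparatory_task_only: bool
    # The overriding tests.
    poses_significant_risk: bool
    profiles_natural_persons: bool  # profiling always keeps a system high risk

def exception_applies(a: ExceptionAssessment) -> bool:
    """True if the Annex III system escapes high-risk classification."""
    if a.profiles_natural_persons or a.poses_significant_risk:
        return False
    return any([
        a.narrow_procedural_task,
        a.improves_completed_human_activity,
        a.detects_patterns_without_replacing_humans,
        a.preparatory_task_only,
    ])

# Example: a formatting-only preprocessor feeding a human reviewer.
tool = ExceptionAssessment(True, False, False, False, False, False)
print(exception_applies(tool))  # True
```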
Provider Obligations for High-Risk Systems
Providers carry the heaviest burden. Before placing a high-risk system on the market:
- Implement a risk management system (Article 9)
- Meet data governance requirements (Article 10)
- Create and maintain technical documentation (Article 11)
- Build in automatic logging (Article 12)
- Ensure transparency toward deployers (Article 13)
- Design for effective human oversight (Article 14)
- Achieve appropriate accuracy, robustness, and cybersecurity (Article 15)
- Complete conformity assessment and obtain CE marking
- Register in the EU database
Our compliance checklist breaks each of these into actionable steps.
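One way to keep that checklist auditable is to track it as data. A minimal sketch; the tracker structure and key names are hypothetical, while the article references come from the list above:

```python
# Hypothetical internal tracker for the provider obligations listed above.
PROVIDER_OBLIGATIONS = {
    "risk_management_system": "Article 9",
    "data_governance": "Article 10",
    "technical_documentation": "Article 11",
    "automatic_logging": "Article 12",
    "transparency_to_deployers": "Article 13",
    "human_oversight_design": "Article 14",
    "accuracy_robustness_cybersecurity": "Article 15",
    "conformity_assessment_ce_marking": None,  # procedure, not a single article
    "eu_database_registration": None,
}

def outstanding(completed: set[str]) -> list[str]:
    """Obligations not yet marked complete."""
    return [item for item in PROVIDER_OBLIGATIONS if item not in completed]

print(outstanding({"risk_management_system", "data_governance"}))
```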
Deployer Obligations
Deployers (organizations using high-risk AI) also have requirements:
- Use the system according to provider instructions
- Ensure human oversight by competent and trained individuals
- Monitor system operation and report malfunctions to providers
- Conduct a Data Protection Impact Assessment when required by GDPR (see AI Act vs GDPR)
- Inform affected individuals that they are subject to high-risk AI decision-making
- Retain logs generated by the system for at least 6 months
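The six-month log retention floor is the one deployer duty that is trivially machine-checkable. A sketch, assuming retention is configured as a single duration:

```python
from datetime import timedelta

# ~6 months; confirm the exact calendar interpretation with counsel.
MINIMUM_RETENTION = timedelta(days=183)

def retention_policy_compliant(retention: timedelta) -> bool:
    """True if configured log retention meets the six-month floor for
    deployers of high-risk systems (other law may require longer)."""
    return retention >= MINIMUM_RETENTION

print(retention_policy_compliant(timedelta(days=365)))  # True
print(retention_policy_compliant(timedelta(days=90)))   # False
```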
Conformity Assessment Process
Most Annex III systems undergo self-assessment following the procedure in Annex VI of the regulation. The provider evaluates its own compliance, prepares a declaration of conformity, and affixes the CE marking.
Exception: remote biometric identification systems (Annex III point 1) require third-party assessment by a notified body, unless the provider has fully applied harmonised standards covering the relevant requirements.
Annex I systems follow the conformity assessment procedure of their specific product legislation, now incorporating AI requirements.
After any substantial modification to a high-risk system, the conformity assessment must be repeated.
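The routing logic above collapses to a few branches. A simplified sketch under our own assumptions; real cases, such as partially applied harmonised standards, need legal review:

```python
def assessment_route(annex_i: bool, remote_biometric_id: bool,
                     harmonised_standards_fully_applied: bool = False) -> str:
    """Pick the conformity assessment route sketched above (simplified)."""
    if annex_i:
        return "sectoral product procedure, incorporating AI requirements"
    if remote_biometric_id and not harmonised_standards_fully_applied:
        return "third-party assessment by a notified body"
    return "internal control (self-assessment, Annex VI)"

def needs_reassessment(substantially_modified: bool) -> bool:
    """Any substantial modification triggers a repeat assessment."""
    return substantially_modified

print(assessment_route(annex_i=False, remote_biometric_id=False))
```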
Timeline
High-risk requirements for Annex III systems become enforceable August 2, 2026. For Annex I systems, August 2, 2027. See the full EU AI Act timeline for all dates.
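Those two dates can drive a simple status flag in an inventory tool. A sketch:

```python
from datetime import date

# Enforcement dates from the timeline above.
ENFORCEMENT = {
    "annex_iii": date(2026, 8, 2),
    "annex_i": date(2027, 8, 2),
}

def high_risk_rules_enforceable(path: str, today: date | None = None) -> bool:
    """True once high-risk obligations apply for the given classification path."""
    today = today or date.today()
    return today >= ENFORCEMENT[path]

print(high_risk_rules_enforceable("annex_iii", date(2026, 9, 1)))  # True
```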
For the overall regulatory summary, start with our EU AI Act overview.