
EU AI Act Compliance Checklist

GuruSup

Compliance with the EU AI Act is not a single action. It is a sequence of steps that vary based on your AI system's risk classification. This checklist gives you a concrete path from assessment to conformity, organized by what you need to do and when.

If you have not yet classified your systems, start with our risk classification guide.

Step 1: AI System Inventory

Before you can comply, you need to know what you have.

  • List every AI system your organization develops, deploys, or procures.
  • Document the purpose, input data types, output decisions, and affected user groups for each.
  • Identify whether you act as provider, deployer, importer, or distributor for each system.
  • Map each system to a risk tier (unacceptable, high, limited, minimal).

This inventory becomes your compliance backbone. Update it every time you add, modify, or retire an AI system.
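To make the inventory concrete, here is a minimal sketch of one inventory record as code. The schema, field names, and enums below are our own illustration; the Act prescribes the information, not the format.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    # The four actor roles the Act distinguishes.
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One row of the AI system inventory (illustrative schema)."""
    name: str
    purpose: str
    input_data_types: list[str]
    output_decisions: list[str]
    affected_user_groups: list[str]
    role: Role
    risk_tier: RiskTier
    last_reviewed: str  # ISO date; update on every add, modify, or retire

support_bot = AISystemRecord(
    name="support-chatbot",
    purpose="Answer customer support questions",
    input_data_types=["chat messages"],
    output_decisions=["suggested replies", "ticket routing"],
    affected_user_groups=["customers"],
    role=Role.DEPLOYER,
    risk_tier=RiskTier.LIMITED,
    last_reviewed="2025-02-01",
)
```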

Step 2: Eliminate Prohibited Systems

The ban on unacceptable-risk AI took effect February 2, 2025. If your inventory includes any prohibited system, decommission it immediately.

Check for: social scoring mechanisms, untargeted biometric scraping, subliminal manipulation techniques, emotion recognition in workplaces and educational institutions (unless for medical or safety reasons), and real-time remote biometric identification in public spaces (unless under the narrow law-enforcement exceptions). See the full EU AI Act summary for the complete prohibited list.

Step 3: High-Risk System Requirements

This is the bulk of compliance work. Each requirement below maps to a specific article in the regulation.

3.1 Risk Management System (Article 9)

  • Establish a risk management process that runs throughout the AI system's lifecycle.
  • Identify and analyze known and foreseeable risks.
  • Implement mitigation measures and test their effectiveness.
  • Document residual risks and communicate them to deployers.
  • Review and update the risk assessment when the system or its context changes (a minimal register sketch follows this list).
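One lightweight way to keep this process auditable is a version-controlled risk register. The sketch below is a hypothetical minimal structure with a simple severity-times-likelihood score; the Act does not prescribe any particular format or scoring.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a lifecycle risk register (illustrative, not mandated)."""
    risk: str            # known or foreseeable risk
    severity: int        # 1 (negligible) .. 5 (critical)
    likelihood: int      # 1 (rare) .. 5 (frequent)
    mitigation: str      # measure implemented and tested
    residual_risk: str   # what remains after mitigation; communicate to deployers
    last_reviewed: str   # re-assess when the system or its context changes

    def score(self) -> int:
        # Crude severity x likelihood priority score.
        return self.severity * self.likelihood

register = [
    RiskEntry(
        risk="Model gives incorrect answers on refund policy",
        severity=3,
        likelihood=4,
        mitigation="Ground answers in the policy knowledge base",
        residual_risk="Occasional wrong answers on edge-case policies",
        last_reviewed="2025-03-01",
    ),
]

# Surface the entries that need attention in the next review cycle.
high_priority = [entry for entry in register if entry.score() >= 12]
```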

3.2 Data Governance (Article 10)

  • Define criteria for training, validation, and testing datasets.
  • Ensure datasets are relevant, representative, and as free of errors as reasonably achievable.
  • Address potential biases, especially when processing special categories of personal data.
  • Document data provenance, collection methods, and preprocessing steps.
  • If personal data is involved, ensure GDPR compliance runs in parallel. See our AI Act vs GDPR comparison. (A dataset record sketch follows this list.)
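Dataset documentation can follow the same record-keeping pattern. The sketch below is loosely inspired by datasheets for datasets; every field name is our own choice, not wording from Article 10.

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """Governance metadata for one dataset split (illustrative schema)."""
    name: str
    split: str                    # "training", "validation", or "test"
    source: str                   # provenance: where the data came from
    collection_method: str        # how it was gathered
    preprocessing: list[str]      # cleaning and transformation steps applied
    known_biases: list[str]       # representativeness gaps found during review
    contains_personal_data: bool  # if True, GDPR obligations apply in parallel

tickets = DatasetRecord(
    name="support-tickets-2024",
    split="training",
    source="internal helpdesk exports",
    collection_method="sampled closed tickets, customer identifiers removed",
    preprocessing=["deduplication", "language filtering"],
    known_biases=["under-represents non-English customers"],
    contains_personal_data=False,
)
```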

3.3 Technical Documentation (Article 11)

  • Prepare documentation before the system enters the market.
  • Include: general system description, design specifications, development process, risk management measures, data governance practices, performance metrics, and known limitations.
  • Keep documentation up to date while the system is on the market, and retain it for 10 years after the system is placed on the market or put into service.

3.4 Record-Keeping and Logging (Article 12)

  • Implement automatic logging of system events.
  • Logs must enable tracing of operations back to specific inputs and decisions.
  • Retain logs for at least six months, or longer where the system's purpose or other applicable Union or national law requires it (a structured-log sketch follows this list).
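A minimal way to meet the traceability requirement is an append-only structured log in which every record ties a decision back to its input and model version. The sketch below uses JSON lines and only the Python standard library; the field set and file path are our assumptions.

```python
import json
import uuid
from datetime import datetime, timezone

LOG_PATH = "ai_events.jsonl"  # append-only event log (hypothetical path)

def log_decision(system_id: str, model_version: str,
                 input_summary: str, decision: str) -> str:
    """Append one traceable event record and return its ID."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,  # ties the decision to a model build
        "input_summary": input_summary,  # enough to trace the specific input
        "decision": decision,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
    return event["event_id"]

event_id = log_decision(
    system_id="support-chatbot",
    model_version="2025-02-rc1",
    input_summary="customer asked about refund eligibility",
    decision="routed to human agent",
)
```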

3.5 Transparency and Information (Article 13)

  • Provide deployers with clear instructions for use.
  • Document the system's intended purpose, level of accuracy, and known limitations.
  • Specify the human oversight measures needed for safe operation.
  • Include information about expected input data characteristics.

3.6 Human Oversight (Article 14)

  • Design systems so humans can effectively oversee their operation.
  • Enable the human overseer to understand the system's capabilities and limitations.
  • Provide tools to interpret system output and override or reverse decisions.
  • Include a stop function or the ability to intervene in real time where appropriate, as in the sketch below.
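In code, oversight typically appears as an explicit gate between the system's suggestion and the action taken, plus an override and a stop path. The pattern below is a generic sketch; `model_suggests` is a hypothetical stand-in for the AI system.

```python
def model_suggests(case: dict) -> str:
    # Stand-in for the AI system's proposed decision.
    return "approve_refund"

def decide(case: dict, reviewer_decision: str | None, stop_requested: bool) -> str:
    """Human-oversight gate: the system proposes, the human disposes."""
    if stop_requested:
        return "halted"           # stop function: take no automated action
    suggestion = model_suggests(case)
    if reviewer_decision is not None:
        return reviewer_decision  # human override beats the model output
    return suggestion

# The overseer reverses the system's suggestion:
print(decide({"id": 42}, reviewer_decision="deny_refund", stop_requested=False))
```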

3.7 Accuracy, Robustness, Cybersecurity (Article 15)

  • Declare accuracy levels and metrics in technical documentation.
  • Test resilience against errors, faults, and adversarial attacks.
  • Implement cybersecurity measures proportionate to the risk.
  • Address potential vulnerabilities from data poisoning, model manipulation, and input exploitation. (A regression check against declared accuracy is sketched below.)
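Declared accuracy is only meaningful if you keep testing against it. A minimal regression check might look like the sketch below, assuming a held-out evaluation set and a threshold copied from the technical documentation; the numbers are illustrative.

```python
DECLARED_ACCURACY = 0.92  # the level stated in the technical documentation (illustrative)

def accuracy(predictions: list[str], labels: list[str]) -> float:
    """Fraction of predictions matching the reference labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_against_declaration(predictions: list[str], labels: list[str]) -> None:
    measured = accuracy(predictions, labels)
    # Fail the release pipeline if performance drops below the declared level.
    assert measured >= DECLARED_ACCURACY, (
        f"measured accuracy {measured:.3f} is below declared {DECLARED_ACCURACY}"
    )
```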

Step 4: Conformity Assessment

Before placing a high-risk system on the market, you must complete a conformity assessment.

  • For most Annex III systems: self-assessment following Annex VI procedures.
  • For biometric identification systems: assessment by a notified body (third party).
  • For Annex I systems: follow the existing sectoral conformity assessment, updated for AI requirements.
  • Result: a declaration of conformity and CE marking.

Step 5: Limited-Risk Transparency Obligations

If your system is limited risk, compliance is lighter but still mandatory:

  • Chatbots and conversational AI: disclose AI nature to users before or at the start of interaction. Details in our chatbot requirements guide.
  • AI-generated content (text, audio, images, video): label it as artificially generated.
  • Emotion recognition: inform subjects that the system is operating.
  • For customer support teams, this typically means a disclosure banner or message at conversation start, as in the sketch below.
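A minimal sketch of the disclosure itself, with wording and function names of our own invention:

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "You can ask for a human agent at any time."
)

def start_conversation(send_message) -> None:
    """Send the AI-nature disclosure before any assistant reply."""
    send_message(AI_DISCLOSURE)

start_conversation(print)  # in production, send_message would post to the chat channel
```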

Step 6: Registration and Monitoring

  • Register high-risk AI systems in the EU database before market placement.
  • Implement post-market monitoring proportionate to the system's risk.
  • Report serious incidents to national authorities within 15 days of becoming aware; the most severe cases have shorter limits (10 days where a death is involved, 2 days for a widespread infringement). The sketch after this list turns these limits into dates.
  • Cooperate with market surveillance authorities upon request.
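Because the reporting clocks start when you become aware of an incident, it helps to compute deadlines mechanically. The sketch below encodes the outer limits listed above; the category names are our own shorthand.

```python
from datetime import date, timedelta

# Outer reporting limits (in days) after becoming aware of the incident.
REPORTING_DAYS = {
    "serious_incident": 15,
    "death": 10,
    "widespread_infringement": 2,
}

def report_deadline(aware_on: date, category: str) -> date:
    """Latest date to notify the national authority."""
    return aware_on + timedelta(days=REPORTING_DAYS[category])

print(report_deadline(date(2025, 3, 3), "serious_incident"))  # 2025-03-18
```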

Step 7: Ongoing Compliance

  • Schedule periodic reviews of risk assessments and technical documentation.
  • Monitor system performance against declared accuracy levels.
  • Update the system's conformity assessment after substantial modifications.
  • Track regulatory updates from the European AI Office and national authorities.
  • Train staff involved in AI system operation and oversight.

The enforcement timeline shows when each requirement starts to apply. The penalty structure shows the cost of non-compliance.
