
EU AI Act Summary: Everything You Need to Know

GuruSup

The EU AI Act entered into force on August 1, 2024. It is the first comprehensive legal framework for artificial intelligence anywhere in the world. If your company develops, deploys, or uses AI systems that touch EU citizens, this regulation applies to you regardless of where you are headquartered.

This summary covers what the Act regulates, who it affects, how the risk-based system works, and the deadlines you need to track.

What the EU AI Act Regulates

The regulation targets AI systems placed on the EU market or whose output affects people within the EU. It defines an AI system broadly: any machine-based system that generates predictions, content, decisions, or recommendations that can influence physical or virtual environments.

The Act does not regulate AI research that has no market application, purely military AI systems, or AI used exclusively for personal non-professional purposes. Everything else is in scope.

Who the Act Affects

Three categories of organizations fall under the regulation:

  • Providers — companies that develop or place AI systems on the market.
  • Deployers — organizations that use AI systems in a professional context.
  • Importers and distributors — entities that bring third-party AI into the EU market.

A US-based SaaS company selling an AI chatbot to European customers is a provider under this law. A Spanish bank using that chatbot for loan assessments is a deployer. Both have compliance obligations, though the provider carries the heavier burden.

The Risk-Based Classification System

The Act organizes AI systems into four risk tiers. Each tier carries different obligations. Read our detailed risk classification guide for a full breakdown.

  • Unacceptable risk — banned outright. Social scoring by governments, real-time biometric surveillance in public spaces (with narrow exceptions), manipulative AI targeting vulnerable groups.
  • High risk — allowed but heavily regulated. Includes AI in recruitment, credit scoring, education assessment, law enforcement, and critical infrastructure. See our high-risk AI systems guide.
  • Limited risk — transparency obligations only. Chatbots must disclose they are AI. Deepfakes must be labeled. Emotion recognition systems need user notification.
  • Minimal risk — no specific obligations. Spam filters, AI in video games, inventory management systems.
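
To make the tiers concrete, here is a hypothetical first-pass triage sketch in Python. The keyword buckets are lifted from the examples above purely for illustration; real classification requires legal analysis of the use case, not string matching, and the `classify` helper is our invention, not anything defined by the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # allowed but heavily regulated
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no specific obligations

# Illustrative keyword buckets drawn from the tier examples above.
PROHIBITED = {"social scoring", "real-time biometric surveillance"}
HIGH_RISK = {"recruitment", "credit scoring", "education assessment",
             "law enforcement", "critical infrastructure"}
LIMITED_RISK = {"chatbot", "deepfake", "emotion recognition"}

def classify(use_case: str) -> RiskTier:
    """Rough first-pass triage of an AI use case into a risk tier."""
    text = use_case.lower()
    if any(k in text for k in PROHIBITED):
        return RiskTier.UNACCEPTABLE
    if any(k in text for k in HIGH_RISK):
        return RiskTier.HIGH
    if any(k in text for k in LIMITED_RISK):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("customer support chatbot"))  # RiskTier.LIMITED
print(classify("credit scoring model"))      # RiskTier.HIGH
```

A triage like this is only useful for flagging systems that need closer review; anything that lands in the high or unacceptable buckets should go straight to counsel.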

Key Compliance Requirements

For high-risk systems, the requirements are substantial:

  • A risk management system covering the entire AI lifecycle
  • Data governance standards for training and validation datasets
  • Technical documentation sufficient for conformity assessment
  • Logging capabilities that enable traceability of system decisions
  • Transparency toward deployers about capabilities and limitations
  • Human oversight mechanisms proportionate to the risk level
  • Accuracy, robustness, and cybersecurity standards

Our compliance checklist breaks these down into actionable steps.

Enforcement and Penalties

Fines scale with violation severity. The maximum is 35 million euros or 7% of global annual turnover, whichever is higher; that ceiling applies to deploying banned AI systems. High-risk compliance violations cap at 15 million euros or 3%, and supplying incorrect information to authorities caps at 7.5 million euros or 1%.
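
The "whichever is higher" rule means the effective ceiling grows with company size. A quick sketch of the arithmetic, with the tier names and `max_fine` helper being our own labels, not terms from the Act:

```python
def max_fine(turnover_eur: float, violation: str) -> float:
    """Ceiling fine: the HIGHER of the fixed cap and the turnover percentage."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "high_risk_noncompliance": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed_cap, pct = tiers[violation]
    return max(fixed_cap, pct * turnover_eur)

# A firm with EUR 2 billion global turnover deploying a banned system:
# max(35M, 0.07 * 2B) = EUR 140 million ceiling.
print(f"{max_fine(2_000_000_000, 'prohibited_practice'):,.0f}")  # 140,000,000
```

For a small company the fixed cap binds; past roughly 500 million euros in turnover, the percentage takes over for the top tier.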

Each EU member state designates national competent authorities for enforcement. At the EU level, the European AI Office coordinates implementation and has direct enforcement power over general-purpose AI models. For the full penalty structure, see our penalties and enforcement guide.

Timeline: What Happens When

The Act uses a phased rollout:

  • February 2, 2025 — prohibitions on unacceptable-risk AI take effect.
  • August 2, 2025 — rules for general-purpose AI models apply. Governance structures must be operational.
  • August 2, 2026 — the bulk of obligations kick in, including all high-risk requirements for Annex III systems.
  • August 2, 2027 — rules for high-risk AI embedded in products regulated by existing EU legislation (Annex I) take effect.
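
The phased dates above are fixed, so they can be encoded directly in a compliance calendar. A minimal sketch (the milestone keys and `days_remaining` helper are illustrative names of our own):

```python
from datetime import date

# Key application dates from the phased rollout above.
DEADLINES = {
    "prohibitions": date(2025, 2, 2),
    "gpai_rules": date(2025, 8, 2),
    "high_risk_annex_iii": date(2026, 8, 2),
    "high_risk_annex_i": date(2027, 8, 2),
}

def days_remaining(milestone: str, today: date) -> int:
    """Days until a milestone applies; negative means it already applies."""
    return (DEADLINES[milestone] - today).days

print(days_remaining("high_risk_annex_iii", date(2025, 8, 1)))  # 366
```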

Track every deadline in our EU AI Act timeline.

How It Interacts with GDPR

The AI Act does not replace GDPR. It layers on top. If your AI system processes personal data, you comply with both. The data governance requirements in the AI Act explicitly reference GDPR principles. Our AI Act vs GDPR comparison maps out where the two regulations overlap and where they diverge.

What to Do Now

Start with an inventory of every AI system your organization uses or develops. Classify each by risk tier. For anything in the high-risk category, begin documenting your risk management approach, data practices, and human oversight mechanisms.
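
The inventory-and-classify step can be as simple as a structured register. A sketch, with the `AISystem` record and the example entries entirely hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_tier: str  # "unacceptable" | "high" | "limited" | "minimal"
    role: str       # "provider" or "deployer"
    docs: list = field(default_factory=list)  # compliance artifacts collected so far

inventory = [
    AISystem("support-bot", "customer support chatbot", "limited", "deployer"),
    AISystem("loan-scorer", "credit scoring", "high", "provider"),
]

# Surface the systems that need the heavy high-risk documentation first.
high_risk = [s for s in inventory if s.risk_tier == "high"]
for s in high_risk:
    print(f"{s.name}: start risk management + data governance docs")
```

Even a spreadsheet with these five columns is enough to start; the point is that every system has an owner, a tier, and a paper trail before the obligations bite.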

If you operate customer support AI, note that most chatbot deployments fall under limited risk with transparency obligations, though some may qualify as high risk depending on the decisions they influence.

The companies that treat this as a box-ticking exercise will struggle. The ones that build compliance into their development process now will have a structural advantage when the bulk of obligations becomes enforceable in August 2026.

For a broader view on building responsible AI practices, see our AI governance framework guide and the AI governance signal hub.
