
EU AI Act Chatbot Requirements

GuruSup

Every chatbot deployed in the EU market needs to comply with the AI Act's transparency requirements by August 2, 2026. The core obligation is simple: tell people they are talking to a machine. The practical implementation has more nuance than that one sentence suggests.

Article 50: The Transparency Foundation

Article 50 of the AI Act sets transparency obligations for AI systems that interact directly with people. For chatbots, the relevant provisions are:

  • Providers must ensure that AI systems designed to interact directly with natural persons are designed and developed so that the persons concerned are informed they are interacting with an AI system. This applies unless it is obvious from the circumstances and context of use.
  • Providers of AI systems that generate synthetic text, audio, images, or video must ensure the output is marked as artificially generated or manipulated in a machine-readable format.

The "unless obvious" exception is narrow. A clearly robotic voice assistant might qualify. A text-based chatbot that generates human-sounding responses almost certainly does not. When in doubt, disclose.

What Disclosure Looks Like in Practice

At conversation start

The most straightforward approach: display a clear message before or at the beginning of the interaction. Examples that work:

  • "You are chatting with an AI assistant. You can request a human agent at any time."
  • "This conversation is powered by AI. For human support, type 'agent'."
  • A persistent visual indicator (icon, badge, or label) that the conversation partner is AI.

What does not work: burying the disclosure in a terms-of-service page, using vague language like "smart assistant" without clarifying the AI nature, or only disclosing in a help center article.
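
As a concrete sketch, here is one way to guarantee the disclosure renders before any AI-generated reply. The ChatMessage shape and function names are illustrative assumptions for this example, not part of any particular chatbot SDK.

```typescript
// Minimal sketch: ensure an AI disclosure is the first thing a user sees.
// `ChatMessage` and the `send` callback are hypothetical names, not a
// specific platform's API.

interface ChatMessage {
  role: "system" | "assistant" | "user";
  text: string;
}

const AI_DISCLOSURE: ChatMessage = {
  role: "system",
  text: "You are chatting with an AI assistant. Type 'agent' at any time to reach a human.",
};

function startConversation(send: (msg: ChatMessage) => void): void {
  // Disclose before any assistant-generated content is shown (Article 50(1)).
  send(AI_DISCLOSURE);
}
```

Pairing the opening message with a persistent badge or label covers users who join or resume a conversation mid-thread.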

For AI-generated content within conversations

If your chatbot generates text, images, or other content that could be mistaken for human-created material, that content must be labeled. This applies to:

  • AI-written email drafts sent through the chatbot
  • AI-generated summaries of documents or conversations
  • Synthetic images or media shared in chat

The labeling must be machine-readable (metadata) in addition to any human-visible indicator.
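
The Act requires the marking to be machine-readable but does not prescribe a single format (standards such as C2PA content credentials are one option for media). A minimal sketch, assuming a simple JSON envelope around generated text:

```typescript
// Sketch of one way to attach a machine-readable "AI-generated" marker to
// chatbot output. The envelope shape is our assumption; the Act mandates a
// machine-readable marking, not this specific structure.

interface LabeledOutput {
  content: string;
  metadata: {
    aiGenerated: true;     // machine-readable flag (Article 50(2))
    model: string;         // which system produced the content
    generatedAt: string;   // ISO 8601 timestamp
  };
}

function labelOutput(content: string, model: string): LabeledOutput {
  return {
    content,
    metadata: {
      aiGenerated: true,
      model,
      generatedAt: new Date().toISOString(),
    },
  };
}
```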

Emotion recognition disclosure

If your chatbot or its underlying system detects emotions (sentiment analysis that goes beyond routing and actually identifies emotional states), you must inform the user. A chatbot that detects frustration and adjusts its tone needs to disclose this capability.
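
A minimal sketch of how that notice might be wired in, assuming a hypothetical detectEmotion classifier and notifyUser helper (both placeholder names for illustration):

```typescript
// Hypothetical sketch: if the pipeline classifies emotional state (not just
// routing intent), surface a one-time notice before acting on it.

type Emotion = "neutral" | "frustrated" | "satisfied";

let emotionNoticeShown = false;

function handleTurn(
  userText: string,
  detectEmotion: (text: string) => Emotion,
  notifyUser: (notice: string) => void,
): Emotion {
  if (!emotionNoticeShown) {
    // Inform the user that emotion recognition is in operation (Article 50(3)).
    notifyUser("This assistant analyzes messages to estimate emotional state and adapt its responses.");
    emotionNoticeShown = true;
  }
  return detectEmotion(userText);
}
```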

When Chatbots Become High Risk

Most chatbots fall under limited risk with transparency-only obligations. But certain deployments push chatbots into high-risk territory:

  • Chatbots that make or directly influence decisions about essential services: insurance claims processing, loan application assessment, benefit eligibility determination. These fall under Annex III category 5.
  • Recruitment chatbots that screen candidates, schedule interviews based on AI scoring, or make hiring recommendations. Annex III category 4.
  • Chatbots in healthcare that provide diagnostic suggestions or triage patients. May qualify as medical device AI under Annex I.
  • Educational chatbots that assess student performance or determine access to educational programs. Annex III category 3.

The determining factor is not the technology (it is still a chatbot) but the decision it influences. A chatbot that answers FAQs is limited risk. The same chatbot architecture deciding whether to approve a credit application is high risk.

Technical Implementation Requirements

For limited-risk chatbots

  • Disclosure mechanism at interaction start (UI element, message, or persistent indicator)
  • Machine-readable labels on AI-generated content (metadata tagging)
  • Emotion recognition notice if applicable
  • Documentation of the transparency measures implemented (a record sketch follows this list)
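
To make the documentation item concrete, here is a sketch of an internal compliance record. The field names are our assumptions for record-keeping, not a format the Act prescribes:

```typescript
// Sketch of a record documenting which transparency measures a deployment
// implements. Field names are illustrative, not mandated by the Act.

interface TransparencyRecord {
  deploymentId: string;
  disclosureAtStart: boolean;      // Article 50(1) notice shown
  disclosureText: string;          // exact wording displayed to users
  machineReadableLabels: boolean;  // Article 50(2) metadata on outputs
  emotionRecognition: boolean;     // whether Article 50(3) applies
  emotionNoticeText?: string;      // wording of the notice, if applicable
  lastReviewed: string;            // ISO 8601 date of last compliance review
}

const record: TransparencyRecord = {
  deploymentId: "support-widget-eu",
  disclosureAtStart: true,
  disclosureText: "You are chatting with an AI assistant. You can request a human agent at any time.",
  machineReadableLabels: true,
  emotionRecognition: false,
  lastReviewed: "2026-07-01",
};
```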

For high-risk chatbots (in addition to the above)

  • Full conformity assessment (see our compliance checklist)
  • Risk management system covering the chatbot's entire lifecycle
  • Data governance for training and fine-tuning datasets
  • Logging of all interactions sufficient for traceability (a log-entry sketch follows this list)
  • Human oversight mechanisms (escalation to human agents, override capabilities)
  • Registration in the EU database
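
For the logging item above, a sketch of a log-entry schema that would support traceability. The fields are assumptions; the Act requires logging adequate for traceability but does not fix a schema:

```typescript
// Sketch of a structured log entry sufficient to reconstruct an interaction
// later. The schema is illustrative, not prescribed by the Act.

interface InteractionLogEntry {
  conversationId: string;
  turn: number;
  timestamp: string;            // ISO 8601
  userInput: string;
  modelOutput: string;
  modelVersion: string;         // which model/config produced the output
  escalatedToHuman: boolean;    // whether human oversight was invoked
  decisionOutcome?: string;     // e.g. claim approved/denied, if applicable
}

function logTurn(entry: InteractionLogEntry, sink: (line: string) => void): void {
  // Append-only JSON lines are easy to retain and query when an auditor
  // asks how a specific decision was produced.
  sink(JSON.stringify(entry));
}
```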

Third-Party Chatbot Providers

If you use a chatbot platform built by someone else (and most companies do), the responsibility splits:

  • The chatbot provider must ensure the system is designed for transparency compliance (Article 50 obligations, CE marking for high-risk).
  • You as the deployer must actually implement the transparency disclosure in your deployment, use the system according to instructions, and maintain human oversight.

Check your vendor's AI Act compliance documentation. If they cannot provide a declaration of conformity or transparency guidance, that is a red flag.

Common Chatbot Scenarios

FAQ chatbot on a company website

Limited risk. Add a disclosure at conversation start. Label any AI-generated content. Done.

Customer support chatbot with ticket creation

Limited risk if it assists humans who make final decisions. Disclose AI nature. Ensure escalation path to human agents. See our customer support AI guide.

Insurance claims chatbot that approves or denies

High risk (Annex III, category 5). Full conformity assessment required. Human oversight mandatory for claim decisions.

HR chatbot screening job applicants

High risk (Annex III, category 4). Conformity assessment, bias testing, transparency to candidates about AI involvement in the process.

Enforcement Timeline

Chatbot transparency obligations become enforceable on August 2, 2026. High-risk requirements for Annex III chatbots apply from the same date.

For the full schedule, see our EU AI Act timeline. For how this interacts with data protection when chatbots process personal data, read AI Act vs GDPR.

Start with the EU AI Act summary for the full regulatory context.
