Up North AI

Is Intercom Compliant with the EU AI Act?

Intercom's Fin AI chatbot is limited-risk under the EU AI Act — but you still need to disclose that users are talking to AI. Here's what the law requires and how to get it right.

ai-act · compliance · customer-support

Your chatbot needs a simple disclosure — here's how

If you use Intercom for customer support, you've probably enabled Fin — their AI agent that resolves customer questions automatically. It's good at what it does. It reads your help center, understands customer questions, and provides accurate answers without a human in the loop.

The good news: Fin is not a high-risk AI system under the EU AI Act. It doesn't make decisions about people's employment, creditworthiness, or access to essential services. It answers customer support questions.

The compliance requirement is straightforward: you must tell users they're interacting with AI, not a human. That's it — in principle. In practice, getting this right involves a few nuances worth understanding.

What Intercom's Fin AI does

Intercom is a customer messaging platform. Fin is its AI-powered support agent, launched in 2023 and steadily improved since. Here's what it does:

  • Automated resolution — answers customer questions by drawing on your help center articles, past conversations, and custom content you provide
  • Conversational AI — maintains context across a conversation, asks clarifying questions, and handles follow-ups
  • Handoff to humans — escalates to a human agent when it can't resolve the issue or when the customer requests it
  • Custom actions — can perform tasks like checking order status or updating account details via API integrations
  • Multi-language support — responds in the customer's language

Fin uses large language models under the hood, but it's constrained to your content — it's designed to answer only from your knowledge base rather than invent information. When it doesn't know the answer, it says so and routes to a human.

How the EU AI Act classifies this

Intercom's Fin falls under Article 50 — Transparency obligations for certain AI systems. Specifically, Article 50(1):

Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.

This makes Fin a limited-risk system. It's not in Annex III (high-risk use cases). It doesn't fall under the prohibited practices in Article 5. It's simply an AI system that interacts directly with people — and those people have a right to know it's AI.

Why it's not high-risk:

Customer support chatbots are not listed in Annex III. The high-risk categories focus on AI used in critical domains: biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, immigration, and justice. A chatbot that helps someone reset their password or check their order status doesn't fall into any of these.

One exception to watch: If your Intercom Fin bot is used in a context where it influences access to essential private or public services — for example, if it's the primary way people apply for insurance claims, access healthcare information, or interact with government services — the classification could shift. For most B2B and B2C companies, this won't apply. But if you're in financial services, insurance, or healthcare, evaluate your specific use case carefully.

What obligations apply to you as a deployer

Your obligations come mainly from Article 50, plus the AI literacy duty in Article 4, and are considerably lighter than those for high-risk systems. Here's what you need to do:

1. Disclose that users are interacting with AI (Art. 50(1))

Users must know they're talking to an AI system, not a human. The disclosure must be:

  • Timely — before or at the start of the interaction, not after
  • Clear — unambiguous language, not buried in fine print
  • Accessible — visible to the user without requiring extra clicks

The "unless this is obvious from the circumstances" exception is narrow. Don't rely on it. Even if your chatbot is clearly labeled "Fin AI" in the interface, explicit disclosure is the safe approach.

2. Label AI-generated content (Art. 50(2))

Article 50(2) places the machine-readable marking duty on the provider — here, Intercom — but as the deployer you should still make AI-generated text recognizable wherever it could be mistaken for human-written content: detailed explanations, recommendations, or responses sent via email. In a live chat context, the initial disclosure usually covers this. In email or other asynchronous contexts, add a note.
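For asynchronous channels, a programmatic footer is one simple way to cover this. A minimal sketch — the footer wording and function name are illustrative, not part of Intercom's product:

```python
# Illustrative AI-generated label for outbound replies (wording is an example).
AI_FOOTER = "\n\n--\nThis reply was generated by Fin, our AI assistant."

def label_ai_reply(body: str) -> str:
    """Append the AI-generated label to an outbound reply, at most once."""
    return body if body.endswith(AI_FOOTER) else body + AI_FOOTER
```

Applying the label in one place, just before send, keeps chat and email disclosures consistent and makes the behavior easy to audit.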

3. AI literacy (Art. 4)

Everyone in your organization who operates, oversees, or makes decisions about your AI systems must have a sufficient level of AI literacy. For your support team, this means understanding:

  • What Fin can and can't do
  • When and how it escalates to humans
  • How to review and correct its responses
  • What data it uses to generate answers

This obligation has been in effect since February 2, 2025.

Practical steps to comply

1. Add a clear disclosure to your chat widget.

The simplest approach: make sure the first message in any Fin conversation identifies it as AI. Options include:

  • A system message: "You're chatting with Fin, our AI assistant. I'll do my best to help, and I can connect you with a human agent at any time."
  • A visual indicator in the chat header: "AI Assistant" with an AI icon
  • Both (recommended)

Intercom already provides some built-in labeling for Fin. Check your settings to ensure it's enabled and visible. Don't customize it away for branding reasons.
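If you serve customers in several languages, it helps to generate the disclosure copy in one place so every locale gets a timely, consistent first message. A minimal sketch — the translations and helper are illustrative, not an Intercom API:

```python
# Hypothetical helper that returns the AI disclosure shown as the first
# message in a conversation. Locales and translations are examples only.
DISCLOSURES = {
    "en": "You're chatting with Fin, our AI assistant. "
          "I can connect you with a human agent at any time.",
    "de": "Sie chatten mit Fin, unserem KI-Assistenten. "
          "Ich kann Sie jederzeit mit einem Mitarbeiter verbinden.",
    "fr": "Vous discutez avec Fin, notre assistant IA. "
          "Je peux vous mettre en relation avec un agent à tout moment.",
}

def disclosure_message(locale: str) -> str:
    """Return the AI disclosure in the user's language, falling back to English."""
    base_lang = locale.split("-")[0].lower()
    return DISCLOSURES.get(base_lang, DISCLOSURES["en"])
```

Falling back to English (or your primary language) ensures no user ever sees a conversation start without a disclosure, even for unsupported locales.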

2. Ensure human escalation is easy and visible.

While not strictly an AI Act requirement, making it easy for users to reach a human is best practice and reduces regulatory risk. If users feel trapped in an AI conversation, complaints follow — and regulators pay attention to complaints.

Configure Fin to:

  • Offer human handoff proactively when it can't resolve an issue
  • Respond to phrases like "talk to a human" or "speak to an agent" immediately
  • Never block or delay access to human support
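Fin's handoff behavior is configured inside Intercom, but if your messages pass through your own middleware, a simple phrase check is one way to guarantee explicit requests for a human are never swallowed. A rough sketch — the phrase list is an assumption, not Intercom's:

```python
import re

# Illustrative escalation triggers; extend per language and audience.
ESCALATION_PATTERNS = [
    r"\btalk to a human\b",
    r"\bspeak (to|with) (an? )?(agent|person|human)\b",
    r"\breal person\b",
    r"\bhuman support\b",
]

def wants_human(message: str) -> bool:
    """True if the customer is explicitly asking for a human agent."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in ESCALATION_PATTERNS)
```

A check like this can gate an immediate handoff before any AI response is generated, which also gives you a log of how often users opt out of the bot.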

3. Review Fin's knowledge base for accuracy.

The AI Act's transparency obligations are lighter, but you still have general obligations under consumer protection law and GDPR. If Fin gives incorrect information that causes harm — wrong refund policy, incorrect legal information, misleading product claims — you're liable. Regularly audit:

  • The content Fin draws from (help center articles, custom answers)
  • Fin's actual responses to common questions (spot-check regularly)
  • Customer feedback and complaint patterns
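Spot-checking can be lightweight. Conversation data can be exported via Intercom's Conversations REST API; the record shape below is a simplified stand-in for that payload, and the flagging rules are examples, not a standard:

```python
# Flag Fin-handled conversations that deserve a human quality review.
# The dict shape is a simplified stand-in for records pulled from Intercom's
# Conversations REST API; adapt field names to the real payload.

def flag_for_review(conversations: list[dict]) -> list[str]:
    """Return IDs of AI-resolved conversations with poor ratings or reopens."""
    flagged = []
    for convo in conversations:
        fin_handled = convo.get("resolved_by") == "fin"
        bad_rating = convo.get("rating") is not None and convo["rating"] <= 2
        reopened = convo.get("reopened", False)
        if fin_handled and (bad_rating or reopened):
            flagged.append(convo["id"])
    return flagged

sample = [
    {"id": "c1", "resolved_by": "fin", "rating": 1},
    {"id": "c2", "resolved_by": "fin", "rating": 5},
    {"id": "c3", "resolved_by": "human", "rating": 1},
    {"id": "c4", "resolved_by": "fin", "rating": None, "reopened": True},
]
```

Running a script like this weekly and reviewing only the flagged conversations keeps the audit burden proportional to the risk.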

4. Update your privacy policy.

Your privacy policy should mention that AI is used in customer interactions, what data is processed (conversation content, metadata), and for what purpose. This is a GDPR requirement as much as an AI Act one. Be specific: "We use Intercom's Fin AI to provide automated customer support. Fin processes your messages to generate relevant responses based on our help documentation."

5. Train your support team.

Ensure your human agents understand:

  • How Fin works and what it can resolve
  • How to review conversations Fin handled (for quality)
  • When and how to intervene if Fin gives incorrect information
  • Their responsibilities under the AI Act's literacy requirement

6. Document your compliance measures.

Keep a simple record of:

  • What AI systems you use for customer interaction (Fin)
  • What disclosures you've implemented
  • When you last reviewed Fin's accuracy
  • Your training program for support staff

This doesn't need to be elaborate. A one-page internal document updated quarterly is sufficient for a limited-risk system.
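As an illustration, the record can be a single version-controlled file; every field and value below is an example, not a mandated format:

```python
# Example compliance record for a limited-risk AI system (all values illustrative).
compliance_record = {
    "system": "Intercom Fin",
    "role": "customer support chatbot",
    "risk_class": "limited-risk (Art. 50 transparency)",
    "disclosures": [
        "first-message AI disclosure in chat widget",
        "'AI Assistant' label in chat header",
    ],
    "last_accuracy_review": "2026-01-15",          # example date
    "staff_training_last_session": "2026-01-10",   # example date
    "next_quarterly_review": "2026-04-15",         # example date
}
```

Keeping it in version control gives you the update history for free, which is exactly the kind of evidence a regulator would ask for.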

What about GDPR?

The AI Act sits alongside GDPR — it doesn't replace it. For Intercom Fin, your GDPR obligations include:

  • Lawful basis for processing conversation data (likely legitimate interest for customer support)
  • Data processing agreement with Intercom (you should already have one)
  • Data retention — don't keep conversation data longer than necessary
  • Data subject rights — users can request access to or deletion of their conversation data

If you're already GDPR-compliant with Intercom, the AI Act adds relatively little overhead. The main addition is the explicit AI disclosure requirement.

The timeline

The transparency obligations under Article 50 apply from August 2, 2026, when the bulk of the AI Act becomes applicable. The AI literacy obligation (Art. 4) has been in effect since February 2, 2025 — so the training duty for your team is already live.

Penalties for non-compliance with transparency obligations can reach 15 million EUR or 3% of global annual turnover. In practice, regulators are likely to start with warnings and guidance for limited-risk violations — but don't count on leniency indefinitely.

The bottom line

Intercom's Fin AI is a limited-risk system under the EU AI Act. Your compliance obligations are real but manageable: disclose AI to users, ensure your team has AI literacy, and keep your GDPR house in order. The most important action is the simplest — make sure every user who interacts with Fin knows they're talking to AI, right from the start. If you've already done that, you're most of the way there.

Take our free AI Act scan to see how Intercom and your other AI tools are classified → /ai-act-scan

See Intercom's full risk classification → /ai-act-scan/tools/intercom
