Up North AI

What the EU AI Act Means for SaaS Companies

SaaS companies face a unique challenge under the EU AI Act: you might be both a deployer of AI tools and a provider of AI systems. This guide explains the difference, why it matters, and what CTOs need to do now.


The dual identity problem

If you're a CTO or tech lead at a SaaS company, the EU AI Act puts you in a position that most other businesses don't face: you might be regulated from two directions at once.

On one side, your team uses AI tools — Copilot for code generation, ChatGPT for support drafts, Intercom's AI for customer conversations, Salesforce Einstein for lead scoring. That makes you a deployer.

On the other side, if your own product has AI features — recommendation engines, automated classification, predictive analytics, AI-powered search — you might be a provider. And the obligations for providers are dramatically heavier than for deployers.

Most SaaS companies we work with haven't thought about this distinction. They assume the AI Act is about ChatGPT and facial recognition — someone else's problem. It's not. If you ship software with AI capabilities to customers in the EU, this regulation directly affects your product roadmap, your architecture decisions, and potentially your go-to-market timeline.

Provider vs. deployer: the distinction that changes everything

This is the single most important concept in the AI Act for SaaS companies. Get this wrong and you'll either over-invest in compliance you don't need, or under-invest in compliance you definitely do.

Deployer (Article 3(4))

A deployer is anyone who uses an AI system under their authority. When your marketing team uses ChatGPT to draft emails, you're a deployer. When your engineers use Copilot, you're a deployer. When your sales team relies on Salesforce Einstein, you're a deployer.

Deployer obligations are manageable. For high-risk systems, you need human oversight, transparency with affected people, impact assessments, and record keeping. For general-purpose AI and lower-risk tools, the obligations are lighter — mainly transparency and AI literacy.

Provider (Article 3(3))

A provider is anyone who develops an AI system or has an AI system developed and places it on the market or puts it into service under their own name or trademark. This is where SaaS companies get caught.

If your product includes:

  • An AI-powered feature that makes predictions, recommendations, or decisions
  • A machine learning model you trained or fine-tuned
  • An automated classification or scoring system
  • A generative AI feature (text, image, code generation)

...and you sell or license that product to customers, you are likely a provider of an AI system.

Provider obligations are substantial: technical documentation, quality management systems, conformity assessments, post-market monitoring, incident reporting, and potentially registration in the EU database.

The "general-purpose AI" wrinkle

If you integrate a foundation model (GPT-4, Claude, Gemini) into your product, you're building on what the AI Act calls a general-purpose AI (GPAI) model. The GPAI provider (OpenAI, Anthropic, Google) has obligations under Articles 53-55. But when you wrap that model in your product and ship it to customers, you take on provider obligations for the resulting AI system. The GPAI provider's compliance doesn't cover yours.

This catches a lot of SaaS companies off guard. "We just use the OpenAI API" doesn't exempt you from being a provider of the AI system you built on top of it.
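A minimal illustration of where the boundary sits: even a thin wrapper around a hosted model is *your* AI system, and the Article 50 transparency obligation travels with it. The function names and payload shape below are hypothetical — a sketch of attaching machine-readable AI-generated labeling to model output, not any vendor's actual API.

```python
# Sketch: wrapping a third-party GPAI model while attaching the
# Article 50 transparency metadata that is YOUR obligation as the
# provider of the resulting AI system. `call_foundation_model` is a
# stand-in for whatever API client you actually use.

def call_foundation_model(prompt: str) -> str:
    # Placeholder for an OpenAI / Anthropic / Google API call.
    return f"(model output for: {prompt})"

def generate_labeled_response(prompt: str) -> dict:
    """Return model output wrapped with machine-readable provenance."""
    text = call_foundation_model(prompt)
    return {
        "content": text,
        # Machine-readable marker that the content is AI-generated.
        "ai_generated": True,
        # Human-readable disclosure for the UI layer to surface.
        "disclosure": "This response was generated by an AI system.",
    }

response = generate_labeled_response("Summarise this support ticket")
print(response["ai_generated"])  # True
```

The point of the sketch: the foundation model vendor never sees your users, so it cannot discharge the transparency duty for you — the labeling has to live in your layer.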

Common SaaS tools and their risk classification

Tools you deploy (your team uses internally)

General-purpose AI tools (GPAI transparency obligations)

  • GitHub Copilot — Code generation. Minimal risk for most use cases. Your obligation: ensure developers understand it's AI-generated code and review it. AI literacy training applies.
  • ChatGPT / Claude — Text generation. Minimal risk for internal use. Obligation: don't use it for prohibited practices, ensure AI literacy. If customer-facing, transparency obligations apply.
  • Notion AI, Grammarly — Writing assistance. Minimal risk.

Tools that could be high-risk depending on use

  • Intercom AI / Fin — Customer support chatbot. If it makes decisions that significantly affect customers (e.g., claim adjudication, access to services), it could qualify as high-risk. At minimum, you must disclose to users they're interacting with AI (Article 50).
  • Salesforce Einstein — Lead scoring and prediction. Generally not high-risk unless used for credit decisions or in a way that affects access to essential services.
  • HubSpot AI — Marketing automation and scoring. Similar to Einstein — risk level depends on the downstream decisions.

AI features you provide (in your product)

This is where it gets serious. Ask yourself these questions about every AI feature in your product:

  1. Does it fall under Annex III? If your AI feature is used by customers for recruitment, credit scoring, insurance pricing, access to education, law enforcement, or critical infrastructure management, it's high-risk regardless of what your product does.

  2. Is it a safety component? If your AI feature is embedded in a product covered by EU product safety legislation (medical devices, machinery, vehicles), it's high-risk.

  3. Does it interact with natural persons? If so, you must at minimum inform users they're interacting with AI (Article 50). If it generates synthetic content (deepfakes, AI text, AI images), you must ensure that content is marked as AI-generated in a machine-readable format.

  4. Did you fine-tune or substantially modify a GPAI model? If you took a foundation model and fine-tuned it for a specific purpose, you may be classified as a provider of a new AI system — not just someone using an API.
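The four questions above can be folded into a simple triage helper for a first pass over your feature list. This is an illustrative sketch, not legal logic — the flag strings and precedence are our assumptions, and the actual classification of any feature needs a proper legal assessment.

```python
# Sketch: first-pass triage of one AI feature against the four
# questions above. Output strings are illustrative assumptions.

def triage_ai_feature(annex_iii_use: bool,
                      safety_component: bool,
                      interacts_with_persons: bool,
                      fine_tuned_gpai: bool) -> list[str]:
    """Return the obligations flagged for one AI feature."""
    flags = []
    if annex_iii_use or safety_component:
        flags.append("high-risk: full provider compliance stack")
    if interacts_with_persons:
        flags.append("Article 50: transparency / content labeling")
    if fine_tuned_gpai:
        flags.append("likely provider of a new AI system")
    if not flags:
        flags.append("minimal risk: AI literacy and good practice")
    return flags

# Example: a customer-facing chatbot built on a fine-tuned model.
print(triage_ai_feature(False, False, True, True))
```

Running the triage over every feature gives you the raw material for the risk-classification step in the checklist further down.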

The provider compliance stack

If you determine that your SaaS product makes you a provider of a high-risk AI system, here's what you're looking at:

Technical documentation (Article 11, Annex IV)

You need comprehensive documentation covering:

  • System description and intended purpose
  • Design and development methodology
  • Data governance (training, validation, testing data)
  • Human oversight measures
  • Accuracy, robustness, and cybersecurity specifications
  • Known limitations and foreseeable misuse scenarios

This isn't a one-page summary. Annex IV is detailed and expects engineering-level documentation.

Quality management system (Article 17)

You must implement a QMS that covers:

  • Regulatory compliance strategy
  • Design and development procedures
  • Testing and validation
  • Data management
  • Post-market monitoring
  • Incident reporting
  • Communication with authorities

If you already have ISO 27001 or SOC 2, you have a foundation — but the AI Act QMS requirements go further, particularly around data governance and algorithmic testing.

Conformity assessment (Article 43)

For most high-risk AI systems, you can self-assess conformity (internal control, Annex VI). Some categories require third-party assessment by a notified body. Determine which applies to your product early — notified body capacity is limited and wait times are growing.

Post-market monitoring (Article 72)

You must actively monitor your AI system after deployment. This means tracking performance, collecting user feedback, monitoring for drift and bias, and acting on findings. Build this into your product analytics from the start.
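One way drift monitoring can hook into existing product analytics is a rolling accuracy check with an alert threshold. The window size and threshold below are illustrative assumptions — in practice they should come from the accuracy specification in your technical documentation.

```python
from collections import deque

# Sketch: rolling accuracy monitor for post-market monitoring.
# Window size and alert threshold are illustrative assumptions;
# tune them to your system's documented accuracy specification.

class DriftMonitor:
    def __init__(self, window: int = 100, min_accuracy: float = 0.9):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.min_accuracy = min_accuracy

    def record(self, prediction_correct: bool) -> None:
        self.outcomes.append(1 if prediction_correct else 0)

    def accuracy(self) -> float:
        if not self.outcomes:
            return 1.0
        return sum(self.outcomes) / len(self.outcomes)

    def needs_review(self) -> bool:
        # Flag for human review once rolling accuracy drops below spec.
        return len(self.outcomes) >= 10 and self.accuracy() < self.min_accuracy

monitor = DriftMonitor(window=50, min_accuracy=0.9)
for correct in [True] * 40 + [False] * 10:
    monitor.record(correct)
print(monitor.accuracy())      # 0.8
print(monitor.needs_review())  # True
```

The design choice worth noting: a rolling window forgets old outcomes, so a model that degrades slowly still trips the alert instead of being averaged away by months of good history.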

EU database registration (Article 49)

High-risk AI systems must be registered in the EU database before being placed on the market. This is a public registry — your competitors will be able to see it, and your customers will check it.

Step-by-step compliance checklist for SaaS companies

  • [ ] Audit internal AI tool usage — Catalog every AI tool your organization uses. Include shadow IT — developers using Copilot, marketers using ChatGPT, support staff using AI summarizers. Classify each by risk level.

  • [ ] Map AI features in your product — List every feature in your product that uses AI, ML, or automated decision-making. Be thorough: recommendation engines, search ranking, fraud detection, auto-categorization, predictive analytics, chatbots, content generation.

  • [ ] Determine your role for each — For each AI feature: are you a provider, deployer, or both? Document the reasoning. If you integrate a third-party AI API, determine where provider responsibility begins and ends.

  • [ ] Classify risk levels — Map each AI feature against Annex III (high-risk categories) and Annex I (EU legislation). Most SaaS features will be limited or minimal risk, but don't assume — check carefully.

  • [ ] For provider obligations: start technical documentation — If you're a provider of a high-risk system, begin Annex IV documentation now. This is the most time-consuming step and typically requires engineering, product, and legal collaboration.

  • [ ] Implement transparency requirements — At minimum, ensure users know when they're interacting with AI (Article 50). For AI-generated content, implement content labeling. Update your product documentation and terms of service.

  • [ ] Build or extend your QMS — If you need a quality management system, start building it. If you have SOC 2 or ISO 27001, extend them — don't start from scratch.

  • [ ] Plan conformity assessment — Determine whether you need self-assessment or third-party assessment. If the latter, contact notified bodies early. The process takes months.

  • [ ] Design post-market monitoring — Build monitoring into your product: performance tracking, bias detection, drift alerts, user feedback collection. This is easier to build in from the start than to retrofit.

  • [ ] Establish AI literacy training — Article 4 requires that all staff working with AI have sufficient AI literacy. For SaaS companies, this means engineering, product, support, and sales teams.

  • [ ] Review contracts with AI vendors — Ensure your agreements with AI providers (OpenAI, Anthropic, etc.) clearly allocate responsibilities under the AI Act. Get their compliance documentation.

  • [ ] Update customer contracts — If you're a provider, your customers (deployers) will need information from you to meet their own obligations. Build this into your contracts and documentation proactively.
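The first three checklist items amount to building an AI inventory: every tool and feature, your role for each, and the resulting risk level. A minimal sketch of what one record could look like — the field names and example entries are assumptions, to be adapted to whatever compliance tooling you use:

```python
from dataclasses import dataclass, field

# Sketch: one row of an AI inventory covering both internal tools
# and product features. Field names are illustrative assumptions.

@dataclass
class AIInventoryEntry:
    name: str                    # e.g. "GitHub Copilot", "Lead scoring"
    role: str                    # "deployer", "provider", or "both"
    risk_level: str              # "minimal", "limited", or "high"
    legal_basis: str = ""        # e.g. an Annex III point, if high-risk
    obligations: list[str] = field(default_factory=list)

inventory = [
    AIInventoryEntry("GitHub Copilot", "deployer", "minimal",
                     obligations=["AI literacy", "code review"]),
    AIInventoryEntry("Recommendation engine", "provider", "limited",
                     obligations=["Article 50 transparency"]),
]

# The inventory then answers the questions auditors and customers ask.
high_risk = [e.name for e in inventory if e.risk_level == "high"]
print(high_risk)  # []
```

Even a spreadsheet with these columns works; the value is in having one documented source of truth that the later checklist steps (documentation, QMS, contracts) can all reference.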

Timeline for SaaS companies

| Date | Impact on SaaS |
|------|----------------|
| February 2, 2025 | Prohibited practices ban. Audit your product for any prohibited use cases (social scoring, manipulation, emotion recognition in workplaces/schools). AI literacy requirements (Article 4) take effect. |
| August 2, 2025 | GPAI obligations apply. If you build on foundation models, understand your GPAI provider's compliance status. |
| August 2, 2026 | High-risk AI obligations apply. If your product contains high-risk AI features, full compliance is required: documentation, QMS, conformity assessment, monitoring, registration. |
| August 2, 2027 | Extended deadline for high-risk AI in products governed by other EU product legislation (medical devices, machinery, etc.). |

The strategic reality: If you're building AI features into your product right now, build compliance into the design. Retrofitting a quality management system and technical documentation onto an existing system is 3-5x more expensive than building it in from the start.

The competitive angle

Here's something most compliance guides won't tell you: early AI Act compliance is a competitive advantage for SaaS companies selling into Europe.

Enterprise buyers are already asking about AI Act readiness in procurement questionnaires. Being able to say "we're compliant, here's our documentation, here's our entry in the EU database" will win deals that less-prepared competitors will lose.

Conversely, if you can't answer AI Act questions during a sales cycle, you'll lose to someone who can. European enterprise procurement teams are treating AI Act compliance the same way they treated GDPR — as a qualifying criterion, not a nice-to-have.

Start with clarity

The worst position to be in is uncertain. You don't know if you're a provider, you haven't classified your features, and enforcement is approaching. The first step is always the same: understand exactly where you stand.

Our AI Act scan maps your entire tool stack and product features against the regulation. It tells you whether you're a provider, deployer, or both — and exactly which obligations apply. Free, takes 10 minutes, and gives you a clear starting point.

Take our free AI Act scan

Want to go deeper?

We explore the frontier of AI-built software by building it ourselves. See what we've been digging into.