Is Teamtailor Compliant with the EU AI Act?
Teamtailor is one of the most popular ATS platforms in the Nordics. If you use its AI features to screen or rank candidates, it's classified as high-risk under the EU AI Act. Here's what that means for you.
Your ATS is probably high-risk and you don't know it
If you're a Nordic company, there's a good chance you use Teamtailor. It's the default applicant tracking system for thousands of companies across Sweden, Norway, Denmark, and Finland. It's well-designed, easy to use, and increasingly powered by AI features that help you screen CVs, rank candidates, and surface the best applicants faster.
That last part is the problem — at least from a regulatory perspective.
The EU AI Act, which entered into force in August 2024 and whose obligations are phasing in through 2026, has a very specific view on AI systems used in recruitment: they are high-risk. Not "maybe high-risk." Not "depends on how you use them." If AI is involved in screening, filtering, or ranking job applicants, the system falls squarely into the high-risk category.
And here's the part that catches most companies off guard: the obligations don't just fall on Teamtailor as the provider. They fall on you, the deployer — the company that decided to turn on those AI features and point them at real candidates.
What Teamtailor does
Teamtailor is an applicant tracking system (ATS) built in Stockholm, used by over 10,000 companies. At its core, it manages job postings, applications, and the hiring pipeline. But like most modern ATS platforms, it has added AI-powered features on top:
- AI-assisted candidate screening — automatically evaluating and filtering incoming applications
- Candidate ranking — surfacing applicants the system considers a better match
- Smart recommendations — suggesting candidates from your talent pool for open roles
- AI-generated job descriptions and interview questions
The text generation features (job descriptions, interview questions) are relatively low-risk under the AI Act. But the moment AI is involved in deciding which candidates move forward and which don't, you've entered high-risk territory.
How the EU AI Act classifies this
The AI Act defines high-risk AI systems in Article 6 and lists specific use cases in Annex III. Recruitment and candidate evaluation are explicitly called out.
Annex III, Area 4 — Employment, workers management and access to self-employment:
AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates.
This isn't ambiguous. If Teamtailor's AI features are analyzing applications and evaluating candidates on your behalf, the system is high-risk by definition.
It doesn't matter that Teamtailor is a Swedish company, or that you're a 50-person startup, or that the AI is "just helping" your recruiters. The classification is based on the use case, not the company size or the sophistication of the model.
What obligations apply to you as a deployer
Under the AI Act, Teamtailor is the provider — they build and supply the AI system. You are the deployer — you use it in a professional context. Both have obligations, but yours are more operational than technical.
Article 26 — Obligations of deployers of high-risk AI systems:
- Human oversight (Art. 26(2)): You must assign oversight of the system to people with the necessary competence, training, and authority. Concretely: someone on your hiring team needs to understand what the AI is doing and have the ability to override its decisions. "The AI ranked them, so we went with that" is not compliant.
- Input data quality (Art. 26(4)): To the extent you control the input data, you must ensure it is relevant and sufficiently representative for the system's intended purpose. If your historical hiring data is biased (and most is), the AI will reproduce that bias, and you'll be responsible for the outcome.
- Monitoring (Art. 26(5)): You must monitor the system's operation and report any serious incidents to the provider (Teamtailor) and the relevant market surveillance authority.
- Fundamental rights impact assessment (Art. 27): Bodies governed by public law, private entities providing public services, and deployers of certain other Annex III systems (credit scoring, for example) must conduct a fundamental rights impact assessment before first use. It is similar in spirit to a DPIA under the GDPR, but focused on AI-specific risks like discrimination and fairness.
- Transparency to affected persons (Art. 26(11)): Candidates must be informed that they are subject to a high-risk AI system in the hiring process. This isn't optional, and a vague mention in your privacy policy probably isn't enough.
- Record-keeping (Art. 26(6)): You must keep the logs the system automatically generates for at least six months, longer where other EU or national law requires it. If a candidate challenges a hiring decision, you need to be able to show what the AI did and what role it played. A minimal sketch of such a log entry follows this list.
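What does a usable decision log look like? Neither the Act nor, to our knowledge, Teamtailor prescribes a schema, so here is a minimal, hypothetical sketch of the kind of record a deployer could keep alongside the ATS. Every field name and the `log_decision` helper are our own illustration, not anything from Teamtailor or the regulation:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical audit record for one AI-assisted screening decision.
# Field names are illustrative; Art. 26(6) prescribes log retention,
# not a specific schema.
@dataclass
class ScreeningDecisionLog:
    candidate_id: str          # internal reference, not the candidate's name
    job_posting_id: str
    ai_feature: str            # e.g. "candidate_ranking"
    ai_output: str             # what the system suggested
    human_reviewer: str        # who reviewed the suggestion
    human_decision: str        # "accepted", "overridden", ...
    override_reason: str = ""  # filled in when the reviewer disagrees
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(entry: ScreeningDecisionLog,
                 path: str = "ai_decisions.jsonl") -> None:
    """Append one record as a JSON line (keep at least six months, Art. 26(6))."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

log_decision(ScreeningDecisionLog(
    candidate_id="c-4821",
    job_posting_id="jp-107",
    ai_feature="candidate_ranking",
    ai_output="ranked 3 of 120",
    human_reviewer="recruiter@example.com",
    human_decision="overridden",
    override_reason="Relevant experience not captured by the CV parser",
))
```

The point isn't the format. The point is that when a candidate asks what role the AI played in their rejection, you can answer from a log rather than from memory.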
Practical steps to comply
Here's what this looks like in practice if you're using Teamtailor with AI features:
1. Audit which AI features you've actually enabled. Log into Teamtailor and check what's turned on. Are you using AI screening? Candidate ranking? Smart recommendations? Many companies enable features without realizing the regulatory implications. Make a list.
2. Designate a human reviewer for AI-assisted decisions. Every AI-generated ranking or screening decision should be reviewed by a person before it affects a candidate's outcome. Document this process. The reviewer should understand what criteria the AI uses and have clear authority to override it.
3. Inform candidates clearly. Update your job postings or application process to include a clear statement that AI is used in screening. Something like: "We use AI-assisted tools to help evaluate applications. All AI assessments are reviewed by a human recruiter before any decision is made." Put it where candidates will actually see it — not buried in a 40-page privacy policy.
4. Conduct a fundamental rights impact assessment. If you fall under Article 27 (for example, you're a public body or a private entity providing public services), do it before relying on the AI features. If you don't, do a lighter version anyway; it's good practice and demonstrates due diligence.
5. Ask Teamtailor for documentation. As a provider of a high-risk AI system, Teamtailor is required to give you instructions for use, information about the system's capabilities and limitations, and technical documentation. Request this explicitly. If they can't provide it, that's a red flag.
6. Check for bias. Review your hiring outcomes. Are candidates from certain demographics being filtered out disproportionately? If you don't measure it, you can't fix it, and the AI Act expects you to monitor for exactly this. A minimal sketch of one common check follows this list.
7. Keep records. Article 26(6) requires you to keep the system's automatically generated logs for at least six months. In practice, retain records of AI-assisted hiring decisions for as long as your legal team recommends, and certainly for the duration of any applicable limitation period for discrimination claims.
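For step 6, a common starting point is the "four-fifths rule" from US employment-testing practice: compare each group's selection rate against the group with the highest rate, and investigate anything below 0.8. The AI Act doesn't mandate this (or any) specific metric, and the numbers below are invented, but the mechanics look like this:

```python
# Adverse impact check. The 0.8 threshold is the US "four-fifths rule",
# a common heuristic rather than an AI Act requirement.
# All figures below are invented for illustration.
applicants = {"group_a": 400, "group_b": 250, "group_c": 150}
advanced   = {"group_a": 120, "group_b":  45, "group_c":  40}

rates = {g: advanced[g] / applicants[g] for g in applicants}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "  <-- investigate" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2%}, impact ratio {ratio:.2f}{flag}")
```

Run something like this per stage of your funnel, using real pass-through numbers from your ATS exports, and you have the beginnings of the bias check described in step 6.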
The timeline
The AI Act's obligations for high-risk systems apply from August 2, 2026. That's not far away. If you're using Teamtailor's AI features today, you should be working on compliance now — not waiting for enforcement actions to start.
Penalties for non-compliance can reach up to 15 million EUR or 3% of global annual turnover, whichever is higher. For most companies, the reputational risk of a discrimination finding is even more concerning.
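The "whichever is higher" structure means the ceiling scales with company size. A quick illustration (turnover figures invented):

```python
# Fine ceiling cited above: the higher of EUR 15 million
# or 3% of worldwide annual turnover.
def max_fine(turnover_eur: float) -> float:
    return max(15_000_000, 0.03 * turnover_eur)

for turnover in (50e6, 500e6, 5e9):
    print(f"Turnover EUR {turnover:,.0f} -> ceiling EUR {max_fine(turnover):,.0f}")
```

Below EUR 500 million in turnover, the EUR 15 million floor is the binding number; above it, the 3% figure takes over.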
The bottom line
Teamtailor is a great ATS. But if you're using its AI features for candidate screening or ranking, you're operating a high-risk AI system under the EU AI Act — and you have specific legal obligations as the deployer. The good news is that compliance is achievable. It mostly comes down to human oversight, transparency, and documentation. Start now.
Take our free AI Act scan to see how Teamtailor and your other AI tools are classified → /ai-act-scan
See Teamtailor's full risk classification → /ai-act-scan/tools/teamtailor
Want to go deeper?
We explore the frontier of AI-built software by actually building it. See what we're working on.