
EU AI Act Compliance for HR Teams: A Practical Guide

HR is one of the highest-risk areas under the EU AI Act, and most People teams don't know it yet. This guide covers what HR directors need to do — from candidate screening to performance reviews — before enforcement begins.


HR is ground zero for the AI Act

If you work in HR and you use any AI-powered tool for hiring, screening, or evaluating employees, you are operating in what the EU AI Act explicitly classifies as high-risk. Not medium. Not "it depends." High-risk — the same category as critical infrastructure and law enforcement.

This isn't a technicality. Article 6 and Annex III of the AI Act specifically call out "employment, workers' management and access to self-employment" as a high-risk domain. That means AI systems used for recruiting, candidate filtering, interview assessment, promotion decisions, task allocation, and performance monitoring all fall under the strictest compliance requirements.

Most HR teams we talk to have no idea. They adopted tools like Teamtailor's AI screening, HireVue video assessments, or Alva Labs psychometric testing because those tools save time — which they do. But the compliance obligations that come with deploying high-risk AI systems are significant, and ignorance is not a defense.

The clock is ticking. High-risk AI system obligations apply from August 2, 2026. If you're reading this in early 2026, you have months, not years.

What the AI Act actually requires from HR teams

Let's cut through the legal jargon. Here's what matters for People leaders:

You are a "deployer"

Under the AI Act, the company using the AI tool is called a deployer (Article 3(4)). Your vendor — Teamtailor, HireVue, Greenhouse — is the provider. Both have obligations, but yours don't disappear just because you bought a SaaS product.

Your specific obligations as a deployer of high-risk AI (Article 26)

1. Human oversight is mandatory

You must ensure that a qualified human being oversees every AI-assisted decision that affects a candidate or employee. This means:

  • A recruiter must review AI-generated candidate rankings before anyone is rejected
  • Automated screening cannot be the sole reason for eliminating a candidate
  • Performance scores generated by AI tools must be reviewed by a manager before action is taken

This is not "have a human click approve." Article 14 requires that the human overseer genuinely understands the system's capabilities and limitations, can interpret its output correctly, and can decide to disregard or override it.
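To make that concrete, here is a minimal sketch in Python of what such an oversight gate could look like inside an internal screening workflow. All names and fields (ScreeningRecommendation, finalize_screening, the reviewer id) are hypothetical, not part of any vendor's API; the point is simply that the AI output is advisory, and nothing is actioned until a named, trained reviewer records a decision, including an override.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ScreeningRecommendation:
    candidate_id: str
    ai_score: float           # score produced by the screening tool
    ai_recommendation: str    # e.g. "advance" or "reject"


@dataclass
class HumanReview:
    reviewer: str             # named, trained overseer (Article 26(2))
    decision: str             # "advance" or "reject", possibly overriding the AI
    rationale: str            # why the reviewer agreed with or overrode the AI
    reviewed_at: datetime


def finalize_screening(rec: ScreeningRecommendation,
                       review: Optional[HumanReview]) -> str:
    """Return the final decision. The AI output alone can never reject a candidate."""
    if review is None:
        raise ValueError(
            f"Candidate {rec.candidate_id}: no human review recorded; "
            "the AI recommendation cannot be actioned on its own."
        )
    # The human decision is authoritative; the AI recommendation is advisory input.
    return review.decision


# Example: the reviewer overrides a low AI score after reading the CV themselves.
rec = ScreeningRecommendation("cand-042", ai_score=0.31, ai_recommendation="reject")
review = HumanReview("j.hansen", "advance",
                     "Relevant experience not captured by the model",
                     datetime.now(timezone.utc))
print(finalize_screening(rec, review))  # -> "advance"
```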

2. You must inform people (Article 26(7))

Workers and their representatives must be informed that they are subject to AI systems. Candidates must be told when AI is used in the hiring process. This applies to:

  • AI-powered CV screening in your ATS
  • Video interview analysis (emotion, speech patterns, facial expressions)
  • Psychometric or cognitive assessments scored by AI
  • AI-generated interview questions or evaluation rubrics
  • Performance monitoring tools with AI components

3. Data Protection Impact Assessment (Article 26(9))

Before deploying a high-risk AI system, you must conduct a DPIA under GDPR Article 35. If you already did one for your ATS, review it — the AI Act adds new dimensions. If you never did one, start now.

4. Record keeping

You must keep logs of the AI system's operation for at least six months (Article 26(6)), or longer if required by other EU or national law. This means you need audit trails showing what the AI recommended and what the human decided.
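What that trail looks like depends on your tooling, but as a rough sketch (hypothetical field names, and assuming you can capture the vendor's recommendation from its exports or reports), each AI-assisted decision could be appended to a simple log recording what the system suggested and what the human decided:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Retain this log for at least six months (Article 26(6)), longer if other law requires.
LOG_FILE = Path("ai_decision_log.jsonl")


def log_ai_assisted_decision(tool: str, subject_id: str, ai_output: dict,
                             human_decision: str, reviewer: str) -> None:
    """Append one AI-assisted decision to an append-only JSON Lines audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                      # e.g. the ATS screening feature in use
        "subject_id": subject_id,          # candidate or employee reference, pseudonymised
        "ai_output": ai_output,            # what the system recommended
        "human_decision": human_decision,  # what the overseer actually decided
        "reviewer": reviewer,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


log_ai_assisted_decision(
    tool="ats-screening",
    subject_id="cand-042",
    ai_output={"score": 0.31, "recommendation": "reject"},
    human_decision="advance",
    reviewer="j.hansen",
)
```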

5. Fundamental rights impact assessment (Article 27)

Article 27 requires certain deployers of high-risk AI, namely public bodies, private entities providing public services, and deployers of a few specific Annex III systems such as credit scoring, to assess the impact on fundamental rights before putting the system into use. Even where it isn't strictly mandatory for a private employer, it is the clearest way to document the risks. For HR, this covers the right to non-discrimination, privacy, and fair working conditions.

Your tools and their risk levels

Here's a practical breakdown of common HR tools and where they likely fall:

High-risk (Annex III obligations apply)

  • Teamtailor AI Screening — Automated candidate ranking and filtering. High-risk under Annex III, point 4(a): AI used for recruitment or selection of candidates.
  • HireVue Video Assessments — Analyzes candidate video interviews. High-risk. Note: HireVue dropped facial analysis in 2021 after backlash, but their remaining AI assessment features still qualify.
  • Alva Labs — Psychometric and logic testing with AI scoring. High-risk: AI-based evaluation of candidates.
  • Greenhouse AI Features — If using AI-powered candidate scoring or automated screening rules. High-risk when the AI influences hiring decisions.
  • Gong (for performance monitoring) — If used to evaluate employee performance through call analysis. High-risk under Annex III, point 4(b): AI for making decisions affecting work-related relationships.
  • Culture Amp AI Insights — If AI generates performance assessments or flags employees. High-risk when outputs influence employment decisions.

Lower risk but still has obligations

  • ChatGPT/Copilot for writing job descriptions — Minimal risk, but check for bias in generated text
  • AI scheduling tools — Generally minimal risk unless they allocate tasks based on employee profiling
  • AI-powered L&D recommendations — Likely limited risk, but could be high-risk if tied to promotion criteria

The critical nuance

A tool isn't inherently high-risk — it depends on how you use it. Gong used for sales coaching tips is different from Gong used to generate performance scores that feed into promotion decisions. The latter is high-risk. Document your use cases clearly.
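One lightweight way to do that documentation, sketched below in Python with made-up entries, is a small register that records each tool, the concrete use case, the Annex III point you believe applies, and your reasoning. The entries are illustrative, not legal conclusions; your own classifications should be reviewed with legal or your DPO.

```python
# A minimal AI-tool register with illustrative entries.
ai_tool_register = [
    {
        "tool": "ATS AI screening",
        "use_case": "Ranks and filters inbound applications",
        "annex_iii": "4(a) - recruitment and selection",
        "risk_level": "high",
        "reasoning": "Output directly influences who is rejected",
    },
    {
        "tool": "Call-analysis platform",
        "use_case": "Coaching tips for sales reps only",
        "annex_iii": None,
        "risk_level": "limited",
        "reasoning": "Not used for performance scores or promotion decisions",
    },
]

high_risk = [t["tool"] for t in ai_tool_register if t["risk_level"] == "high"]
print("Systems requiring Article 26 measures:", high_risk)
```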

Step-by-step compliance checklist

Here is what your HR team should be working through right now:

  • [ ] Inventory all AI tools — List every tool in your HR stack that uses AI or machine learning. Include features within larger platforms (e.g., AI screening inside your ATS). Don't forget tools individual recruiters might be using informally, like ChatGPT for candidate assessment.

  • [ ] Classify each by risk level — Map each tool and its use case against Annex III. When in doubt, classify higher. Document your reasoning.

  • [ ] Request provider documentation — For high-risk tools, your vendor must provide you with instructions for use, information about the system's capabilities and limitations, and technical documentation. If they can't, that's a red flag. Ask for their AI Act compliance roadmap.

  • [ ] Assign human oversight roles — Designate specific people responsible for overseeing each high-risk AI system. These aren't just names on paper — they need training on how the system works, what its known limitations are, and when to override it.

  • [ ] Update candidate and employee notices — Add AI disclosure to your privacy notices, candidate communications, and employee handbooks. Be specific: which tools, what they do, what data they process.

  • [ ] Conduct or update DPIAs — Perform a Data Protection Impact Assessment for each high-risk AI system. If your DPO hasn't been involved, loop them in now.

  • [ ] Complete fundamental rights impact assessment — Assess impact on non-discrimination, privacy, dignity, and fair working conditions. Document it.

  • [ ] Establish monitoring procedures — Set up ongoing monitoring for accuracy, bias, and drift in your AI tools. Check outputs regularly. Log anomalies. A sketch of one such check follows this checklist.

  • [ ] Create an escalation process — Define what happens when the AI makes a clearly wrong recommendation, when a candidate or employee challenges an AI-assisted decision, or when you discover bias in the system.

  • [ ] Inform worker representatives — If you have a works council, trade union presence, or employee representatives, they must be informed about AI systems affecting workers. In the Nordics, this intersects with existing co-determination frameworks.

  • [ ] Set up record-keeping — Ensure you can retain logs of AI system operations for at least six months. Work with your vendors to understand what logging they provide and what you need to supplement.

  • [ ] Train your team — Everyone who interacts with high-risk AI systems needs AI literacy training (Article 4). This isn't optional and applies at all levels.
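As one concrete example of the monitoring item above, here is a rough sketch in Python, with fabricated numbers, of a common check: comparing selection rates across groups in your screening tool's output, along the lines of the four-fifths rule used in adverse-impact analysis. The AI Act doesn't prescribe this specific test; it is simply one practical way to spot bias worth investigating.

```python
# Selection rates per group at the AI screening stage (illustrative numbers):
# (candidates advanced, candidates screened).
screened = {"group_a": (120, 400), "group_b": (45, 210)}

rates = {group: advanced / total for group, (advanced, total) in screened.items()}
benchmark = max(rates.values())  # compare everyone against the best-treated group

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    flag = "INVESTIGATE" if impact_ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} -> {flag}")
```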

Timeline and deadlines

The AI Act entered into force on August 1, 2024. The obligations roll out in phases:

| Date | What happens |
|------|--------------|
| February 2, 2025 | Prohibited practices ban takes effect (e.g., social scoring, emotion recognition at work in certain contexts); AI literacy requirements apply (Article 4) |
| August 2, 2025 | GPAI model obligations apply; governance and penalty provisions take effect |
| August 2, 2026 | High-risk AI system obligations fully apply — this is the big one for HR |
| August 2, 2027 | Obligations for high-risk AI embedded in products regulated by other EU legislation |

For HR teams, August 2, 2026 is the date that matters. By then, you must have human oversight in place, candidates and employees informed, DPIAs completed, fundamental rights assessments done, and monitoring procedures running.

Don't wait for your vendor to tell you what to do. Many HR tech providers are still figuring out their own compliance. Your obligations as a deployer exist independently of whether your provider has their act together.

The cost of getting this wrong

The penalties under the AI Act are substantial:

  • Up to 35 million EUR or 7% of global turnover for prohibited practices
  • Up to 15 million EUR or 3% of global turnover for violating high-risk obligations
  • Up to 7.5 million EUR or 1% of global turnover for supplying incorrect information

But the real risk for HR teams isn't the fine — it's a candidate or employee challenge that reveals you had no human oversight, no documentation, and no impact assessment. That's a reputational and legal exposure that goes well beyond the AI Act.

What to do right now

If you've read this far and realized you have work to do, start with the inventory. You can't comply with rules you don't understand applied to tools you haven't mapped.

Most HR teams underestimate how many AI-powered tools they're already using. In our experience, a mid-size company typically runs 3-7 AI-enabled HR tools, many adopted without formal procurement review.

Want to know exactly where you stand? Take our free AI Act compliance scan. It maps your current tools against the regulation and tells you what needs attention — in plain language, not legal boilerplate.

