Is HireVue Compliant with the EU AI Act?
HireVue's AI-powered video interviews are the textbook example of high-risk AI under the EU AI Act. If you use them, you have serious compliance obligations. Here's exactly what they are.
AI video interviews are the EU's poster child for high-risk AI
When EU lawmakers were drafting the AI Act, they had a very specific type of AI system in mind for the high-risk category: AI that evaluates job candidates. And when regulators give examples of what that looks like in practice, AI-powered video interview assessment is usually the first thing they mention.
HireVue is the market leader in this space. Its platform records video interviews, then uses AI to assess candidates — analyzing their responses to predict job performance, cultural fit, and other hiring-relevant traits. It's used by some of the largest employers in the world, including many European companies.
Under the EU AI Act, this is about as clearly high-risk as it gets. There's no grey zone, no "it depends on how you use it." If you're using HireVue's AI assessment features, you're deploying a high-risk AI system, and you have specific obligations that you need to meet by August 2026.
What HireVue does
HireVue offers a video interviewing platform with several AI-powered features:
- Structured video interviews — candidates record responses to standardized questions on their own time
- AI assessment — the system evaluates candidate responses using natural language processing and machine learning models
- Game-based assessments — cognitive and behavioral games that AI scores
- Interview builder — AI-generated interview questions based on the role
- Candidate ranking — AI-generated scores that rank applicants
It's worth noting that HireVue retired its facial analysis feature in 2021 after significant backlash. The current system focuses on what candidates say (language, content) rather than how they look. But that distinction, while important ethically, doesn't change the AI Act classification. The assessment of candidates by AI — regardless of the specific signals used — is the trigger.
How the EU AI Act classifies this
HireVue's AI assessment falls directly under Annex III, Area 4 — Employment, workers management and access to self-employment:
AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates.
The key phrase is "evaluate candidates." That is HireVue's core product. The AI system takes in a candidate's video interview and produces an assessment — a score, a ranking, a recommendation. This is textbook high-risk.
Additionally, Article 6(2) establishes that AI systems listed in Annex III are high-risk. Article 6(3) provides a narrow derogation for Annex III systems that don't pose a significant risk of harm to health, safety, or fundamental rights — but that derogation never applies to systems that perform profiling of natural persons. HireVue's assessment is, by definition, profiling: it evaluates personal characteristics to make predictions about job performance. The derogation is unavailable.
In the case of recruitment AI, the fundamental right most obviously at stake is non-discrimination (Article 21 of the EU Charter of Fundamental Rights). The European Commission has been explicit that recruitment AI poses inherent discrimination risks — which is precisely why it's in Annex III.
Why this matters more than other HR tools
HireVue is different from, say, an ATS that uses AI to filter resumes. With HireVue:
- The AI is the primary evaluator. Unlike a resume screener where a human typically reviews the filtered list, HireVue's AI assessment can be the main (or only) data point a recruiter sees about a candidate's interview performance.
- The stakes are immediate and personal. A low HireVue score can mean a candidate is rejected before ever speaking to a human. The AI's assessment directly determines who gets the job and who doesn't.
- Bias is hard to detect. When AI evaluates speech patterns, vocabulary, and response structure, it can encode biases related to accent, socioeconomic background, neurodiversity, and cultural communication norms. These biases are subtle and systemic.
EU regulators understand this. The AI Act's recitals specifically note that AI systems used in recruitment "may perpetuate historical patterns of discrimination" — for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins — and may have a disproportionate impact on those groups.
What obligations apply to you as a deployer
As the company using HireVue, you are the deployer under the AI Act. HireVue is the provider. Your obligations under Article 26 are substantial:
1. Human oversight (Art. 26(1), Art. 14)
You must ensure meaningful human oversight of HireVue's AI assessments. This means:
- A qualified human must review AI-generated assessments before they determine a candidate's outcome
- That person must understand how the AI works, what its limitations are, and how to interpret its outputs
- They must have the authority and capability to override the AI's recommendation
- "Rubber stamping" the AI's score doesn't count as oversight
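In practice, some teams enforce this in their ATS integration as a hard gate: no candidate outcome can be finalized until a named reviewer has recorded a decision. A minimal sketch of that gate (all names are illustrative, not part of HireVue's API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Assessment:
    candidate_id: str
    ai_score: float                       # score produced by the AI system
    reviewer: Optional[str] = None        # who performed the human review
    human_decision: Optional[str] = None  # "advance" / "reject", set by the reviewer

def record_outcome(assessment: Assessment) -> str:
    """Refuse to finalize any outcome that a human has not reviewed."""
    if assessment.reviewer is None or assessment.human_decision is None:
        raise ValueError("Blocked: no human review on file (Art. 26(1))")
    # The human decision is authoritative; the AI score is only one input.
    return assessment.human_decision
```

The point of the gate is structural: the system makes rubber-stamping impossible by construction, rather than relying on policy alone.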
2. Transparency to candidates (Art. 26(11))
Candidates must be informed that AI is being used to assess them. This must happen:
- Before the assessment takes place (not after)
- In clear, accessible language (not legal boilerplate)
- With enough detail that they understand what the AI is evaluating
Best practice: tell candidates explicitly that their video interview will be assessed by an AI system, explain what the AI evaluates, and give them a point of contact if they have concerns.
3. Fundamental rights impact assessment (Art. 27)
Article 27 requires a fundamental rights impact assessment before deployment if you are a body governed by public law or a private entity providing public services. Even if your organization falls outside that scope, conducting one is best practice for recruitment AI — and it overlaps substantially with the data protection impact assessment you likely already owe under the GDPR. It should cover:
- Risks of discrimination across protected characteristics
- Impact on candidates' right to an effective remedy
- Measures to mitigate identified risks
- Your process for monitoring outcomes
4. Input data quality (Art. 26(4))
You're responsible for the relevance and quality of the data that goes into the system. For HireVue, this means:
- The interview questions you configure should be job-relevant and non-discriminatory
- You should review whether the AI model was validated on a population that's representative of your candidate pool
- If you notice the system performing poorly for certain demographics, you must act
5. Record-keeping and logging (Art. 26(6))
Keep the logs the system generates for at least six months (the minimum under Art. 26(6)), or longer where other EU or national law requires. For every role where HireVue is used, you should be able to show:
- Which candidates were assessed by AI
- What scores or rankings they received
- What human review occurred
- What the final hiring decision was and how the AI assessment influenced it
6. Inform workers' representatives (Art. 26(7))
If you have works councils or employee representatives, they must be informed about the deployment of high-risk AI systems in the workplace. Even though HireVue affects candidates (not current employees), works councils in several EU countries have broad consultation rights on hiring practices.
Practical steps to comply
1. Assess whether you actually need AI-scored interviews. This is the most important question. HireVue's video platform can be used without AI scoring — simply as a structured interview tool that humans evaluate. If the compliance burden of high-risk AI isn't justified by the hiring volume, consider turning off AI assessment and having humans review the interviews directly.
2. If you proceed, get HireVue's compliance documentation. Under Article 13, providers must supply deployers with instructions for use, information about accuracy and limitations, and technical documentation. Request this from HireVue specifically in the context of EU AI Act compliance. Ask for their bias testing methodology and results.
3. Build a human review process. Design a workflow where every AI assessment is reviewed by a trained recruiter before it affects a candidate. Document the review criteria and ensure reviewers have the tools to override the AI. Track override rates — if humans never override the AI, it suggests the review isn't meaningful.
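If reviews are logged, the override rate is trivial to compute. A sketch, assuming each review record captures the AI recommendation and the human decision (the record format is hypothetical):

```python
def override_rate(reviews: list[dict]) -> float:
    """Fraction of reviews where the human decision differed from the AI's.

    A rate near zero over a large number of reviews suggests reviewers
    are rubber-stamping rather than exercising meaningful oversight.
    """
    if not reviews:
        return 0.0
    overridden = sum(
        1 for r in reviews if r["human_decision"] != r["ai_recommendation"]
    )
    return overridden / len(reviews)

reviews = [
    {"ai_recommendation": "reject", "human_decision": "reject"},
    {"ai_recommendation": "reject", "human_decision": "advance"},  # override
    {"ai_recommendation": "advance", "human_decision": "advance"},
    {"ai_recommendation": "reject", "human_decision": "reject"},
]
print(override_rate(reviews))  # → 0.25
```

There is no "correct" override rate, but a rate stuck at zero is the signal to investigate whether reviews are genuine.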
4. Update candidate communications. Revise your application process to include clear, upfront disclosure about AI use. Don't bury it in terms and conditions. Put it on the interview invitation itself: "Your responses will be assessed by an AI system. A human recruiter will review all AI assessments before any hiring decision is made."
5. Conduct your fundamental rights impact assessment. Do this before deploying — or, if HireVue is already in use, before the August 2026 deadline. Include specific analysis of discrimination risks related to accent, language proficiency, neurodiversity, and cultural communication styles.
6. Monitor outcomes. Track hiring outcomes by demographic group (to the extent permitted by local law). If the AI consistently scores certain groups lower, investigate and address it. This isn't just an AI Act requirement — it's also a GDPR and employment law issue.
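A common starting point for this monitoring is comparing selection rates across groups. The "four-fifths" heuristic below comes from US employment practice, not the AI Act — it sets no legal threshold in the EU, but it is a useful red flag. A sketch:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total); returns selection rate per group."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items() if tot > 0}

def adverse_impact_flags(outcomes: dict[str, tuple[int, int]],
                         threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the four-fifths heuristic)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if rate < threshold * best}

# Hypothetical numbers: group_b's rate (0.25) is below 0.8 * 0.40 = 0.32
outcomes = {"group_a": (40, 100), "group_b": (25, 100)}
print(adverse_impact_flags(outcomes))  # → {'group_b': 0.25}
```

A flag here is a prompt to investigate, not proof of discrimination — small samples and confounders matter, and collecting the demographic data itself must comply with GDPR and local employment law.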
7. Establish an incident reporting process. Know who to contact at HireVue and at your national AI authority if you discover the system is producing discriminatory or erroneous results.
The timeline
High-risk system obligations apply from August 2, 2026. However, the AI literacy obligation under Article 4 is already in effect as of February 2, 2025 — everyone in your organization involved in deploying or overseeing HireVue should have sufficient AI literacy to understand what the system does and what its risks are.
Penalties for deployer non-compliance can reach 15 million EUR or 3% of worldwide annual turnover, whichever is higher (Art. 99(4)).
The bottom line
HireVue is the clearest possible example of high-risk AI under the EU AI Act. If you use its AI assessment features, your compliance obligations are significant — but also well-defined. The core requirements are human oversight, transparency to candidates, bias monitoring, and documentation. If you can't meet these, consider whether AI-scored video interviews are the right tool for your hiring process. Sometimes the most compliant solution is the simplest one: have humans watch the interviews.
Take our free AI Act scan to see how HireVue and your other AI tools are classified → /ai-act-scan
See HireVue's full risk classification → /ai-act-scan/tools/hirevue
Want to go deeper?
We explore the frontier of AI-built software by building it ourselves. See what we've been working on.