
5 AI Systems Your Company Probably Uses That Are High-Risk Under the AI Act

Most companies don't realise they're already deploying high-risk AI systems. Here are five common tools — from recruitment screening to customer eligibility chatbots — that trigger full AI Act compliance obligations.

ai-act · compliance · shadow-ai

The shadow AI problem

Ask a company's leadership how many AI systems they use, and you'll get a number. Ask the people who actually do the work, and you'll get a much bigger number.

This gap — between the AI systems management knows about and the ones actually running — is the single biggest compliance risk under the EU AI Act. Not because companies are being reckless, but because modern SaaS tools have quietly embedded AI features that now trigger regulatory obligations nobody anticipated when the purchase order was signed.

The AI Act classifies certain AI systems as "high-risk" based on their domain and function, not on how sophisticated the underlying model is. A simple scoring algorithm used in recruitment is high-risk. A cutting-edge language model used for internal brainstorming is not. What matters is what the system does and who it affects.

Here are five AI systems your company likely uses — or your teams have adopted without formal approval — that fall squarely into the high-risk category.

1. Applicant tracking systems with AI screening

Examples: HireVue, Pymetrics, LinkedIn Recruiter's AI ranking, Workday Recruiting's candidate scoring

Why it's high-risk: The AI Act's Annex III explicitly lists AI systems used in "recruitment or selection of natural persons, for advertising vacancies, screening or filtering applications, evaluating candidates" as high-risk. This isn't limited to video interview analysis or personality assessments. Any AI-driven ranking, filtering, or scoring of job candidates qualifies.

The catch: Most modern ATS platforms now include AI features by default. LinkedIn Recruiter uses AI to rank candidates. Workday scores applicant-job fit automatically. These features often ship switched on, or get enabled by an eager recruiting team without IT or legal review.

What you need to do: Conduct a conformity assessment covering risk management, data governance, human oversight, and technical documentation. Ensure a human meaningfully reviews AI-ranked candidates before decisions are made — rubber-stamping AI recommendations doesn't count. Document the system's purpose, logic, and limitations. Establish a process for candidates to request human review of automated decisions.
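
To make "meaningful review" concrete, here is a minimal sketch in Python of one way to structure it; the field and function names are hypothetical, not part of any ATS vendor's API. The AI score is stored alongside the reviewer's own judgement, and no candidate moves forward until a human decision is explicitly recorded.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ScreeningDecision:
    """One record per candidate: the AI ranking plus the human reviewer's own judgement."""
    candidate_id: str
    ai_score: float                        # score produced by the ATS ranking feature
    ai_rationale: str                      # whatever explanation the vendor exposes
    reviewer: str | None = None
    reviewer_decision: str | None = None   # "advance" or "reject", set only by a person
    reviewer_notes: str | None = None
    reviewed_at: datetime | None = None

def advance_candidate(decision: ScreeningDecision) -> bool:
    """The AI score alone can never move a candidate forward or out."""
    if decision.reviewer_decision is None:
        raise ValueError(f"Candidate {decision.candidate_id}: no human review recorded yet.")
    return decision.reviewer_decision == "advance"

# Usage: the reviewer records an independent judgement, not a rubber stamp.
record = ScreeningDecision("cand-042", ai_score=0.81, ai_rationale="keyword match on required skills")
record.reviewer = "j.doe"
record.reviewer_decision = "advance"
record.reviewer_notes = "Relevant experience confirmed in the CV itself, not just keyword overlap."
record.reviewed_at = datetime.now(timezone.utc)
print(advance_candidate(record))  # True
```

The point is structural: the human decision is a required step in the workflow, not an optional sign-off on the AI's output.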

2. Employee performance and productivity analytics

Examples: Microsoft Viva Insights (with AI features), Workday's performance analytics, Lattice AI, Time Doctor's productivity scoring

Why it's high-risk: Annex III covers AI systems used for "making decisions or materially influencing decisions affecting the initiation, continuation, or termination of work-related contractual relationships" and "monitoring and evaluating the performance and behaviour of persons in such relationships." If your AI tool generates performance scores, identifies "low performers," flags "disengagement risk," or recommends promotion candidates, it's high-risk.

The catch: These tools are often adopted by HR or department heads as productivity tools — not as AI systems. Microsoft Viva is bundled with Microsoft 365 subscriptions that millions of companies already pay for. Turning on its AI-powered insights features takes a few clicks. Nobody files a procurement request. Nobody does a risk assessment. But the AI Act doesn't care how the tool was adopted — only what it does.

What you need to do: Map every tool that generates insights about employee performance or productivity. Determine which ones use AI (many now do by default). For those that qualify, implement the full high-risk framework: risk management, human oversight, transparency to employees, data quality controls, and comprehensive documentation. Critically, employees must be informed that AI is being used to assess their performance.

3. Credit scoring and BNPL decisioning

Examples: Klarna's AI underwriting, Affirm's risk models, internal credit scoring tools using ML, third-party scoring APIs from Experian or TransUnion

Why it's high-risk: The AI Act explicitly lists AI systems used to "evaluate the creditworthiness of natural persons or establish their credit score" as high-risk. This covers every flavour of automated credit decisioning — traditional ML models, newer LLM-based approaches, and the buy-now-pay-later (BNPL) platforms that have exploded across European e-commerce.

The catch: If your company offers any form of financing, instalment payments, or credit — even through a third-party BNPL provider integrated into your checkout flow — you may have obligations as a "deployer" of a high-risk AI system. The AI Act's obligations don't only fall on the provider who built the system. Deployers (the companies that use the system in practice) have their own set of requirements, including human oversight, input data quality, and record-keeping.

What you need to do: If you integrate BNPL or credit scoring into your products, understand your role under the AI Act. Request documentation from your provider about their AI system's conformity. Ensure meaningful human oversight of automated credit decisions. Maintain logs. And if customers are denied credit by an AI system, they have a right to an explanation — make sure you can provide one.
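
On record-keeping, a simple append-only decision log is often enough to start with. The sketch below is a hedged illustration, assuming a JSONL file and hypothetical field names rather than anything prescribed by the Act or by a specific provider; the point is that each automated decision is stored with an explanation you could hand back to the customer.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class CreditDecisionRecord:
    """Minimal log entry a deployer keeps for each automated credit decision."""
    applicant_ref: str           # internal reference only; keep raw personal data out of the log
    provider_system: str         # e.g. the BNPL provider's decisioning service
    inputs_summary: dict         # which input fields were sent, not their values
    outcome: str                 # "approved", "declined", or "referred_to_human"
    explanation: str             # the reason an applicant can be given on request
    human_reviewer: str | None   # set when a person confirmed or overrode the outcome
    timestamp: str

def log_decision(record: CreditDecisionRecord, path: str = "credit_decisions.jsonl") -> None:
    """Append-only log, so a declined applicant can later be given an explanation."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(CreditDecisionRecord(
    applicant_ref="order-7731",
    provider_system="bnpl-provider-risk-api",
    inputs_summary={"fields": ["basket_value", "payment_history", "country"]},
    outcome="declined",
    explanation="Declined by the provider's risk model; manual review available on request.",
    human_reviewer=None,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```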

4. AI-powered exam proctoring and assessment tools

Examples: Proctorio, ExamSoft, Respondus Monitor, Mercer Mettl's AI proctoring

Why it's high-risk: The AI Act lists AI systems used in education for "determining access to education and vocational training institutions" and "evaluating learning outcomes" as high-risk. AI proctoring goes further — many of these tools use behavioural analysis, gaze tracking, and anomaly detection to flag potential cheating, which directly affects students' academic outcomes.

The catch: This one is relevant beyond universities. Corporate training programs, professional certifications, and internal assessments increasingly use AI-powered proctoring. If your company uses AI-proctored assessments for employee certification, compliance training evaluation, or skills testing that affects career progression, you may be in high-risk territory.

What you need to do: Audit any assessment or examination tool used in your organisation — including those used by your L&D or training departments. If the tool uses AI to evaluate, score, or monitor test-takers, conduct the required conformity assessment. Ensure human review of any AI-flagged incidents before consequences are imposed. Document the system and communicate its use to the people being assessed.

5. Customer eligibility chatbots and automated triage

Examples: Insurance quote chatbots, benefits eligibility screeners, loan pre-qualification bots, automated claims triage systems

Why it's high-risk: This is the category most companies miss entirely. The AI Act covers AI systems used to "evaluate the eligibility of natural persons for essential private services" — which includes insurance, healthcare access, and financial products. If your chatbot asks customers a series of questions and then determines their eligibility for a product, recommends a coverage level, or triages their claim, it may be operating as a high-risk AI system.

The catch: Many companies built these chatbots using general-purpose tools — Intercom's AI, Zendesk's Answer Bot, custom GPT-powered flows — without considering them as "AI systems" in the regulatory sense. They were built as customer service improvements, not as decision-making systems. But if the chatbot's output materially affects what products a customer can access or what service they receive, the AI Act's high-risk requirements likely apply.

What you need to do: Review every customer-facing chatbot and automated workflow. Map the decisions these systems make or influence. If any of those decisions affect access to services, insurance, credit, or benefits, treat the system as potentially high-risk. Implement human fallback for consequential decisions. Ensure customers know they're interacting with AI (this is also an Article 50 requirement). Document everything.
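
The last two points, human fallback and AI disclosure, can be wired in at the routing layer. The sketch below is illustrative only; the intent names and response shape are assumptions, not any particular chatbot platform's API. Every turn carries a disclosure that the customer is talking to an AI, and any intent that would determine eligibility is handed to a human queue instead of being resolved by the model.

```python
# Intents whose outcome would affect what the customer can access; the names are illustrative.
CONSEQUENTIAL_INTENTS = {"insurance_eligibility", "loan_prequalification", "claim_triage"}

def handle_turn(intent: str, user_message: str) -> dict:
    """Route one chatbot turn: eligibility outcomes go to a human, everything else stays automated."""
    # Disclose on every turn that the customer is interacting with an AI system.
    disclosure = "You are chatting with an automated assistant."
    if intent in CONSEQUENTIAL_INTENTS:
        # The bot can gather details, but the eligibility outcome itself is decided by a person.
        return {
            "disclosure": disclosure,
            "reply": "Thanks, a member of our team will review your details and confirm your eligibility.",
            "route_to": "human_review_queue",
        }
    return {
        "disclosure": disclosure,
        "reply": f"Here is some general information about: {user_message}",
        "route_to": "automated_flow",
    }

print(handle_turn("insurance_eligibility", "Can I get cover for my apartment?"))
```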

The bigger picture: you can't govern what you can't see

The common thread across all five examples is visibility. These aren't rogue AI experiments built by a skunkworks team. They're mainstream SaaS tools, adopted through normal procurement channels (or sometimes without any procurement at all), that happen to use AI in ways that trigger regulatory obligations.

The first step in addressing this isn't compliance documentation or risk assessments — it's knowing what you have. A comprehensive AI systems inventory is the foundation for everything else. Without it, you're managing compliance blindly.

This is harder than it sounds. AI capabilities are being added to existing tools constantly. A CRM that was pure rules-based last year may now use ML for lead scoring. A customer service platform that was keyword-matching six months ago now runs on an LLM. Your inventory needs to be a living process, not a one-time audit.
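
A living inventory doesn't need to start as anything more elaborate than a structured list that gets re-checked on a schedule. Here's a minimal sketch; the fields and the quarterly review cadence are illustrative assumptions, not a prescribed template.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemEntry:
    """One row in a living AI systems inventory."""
    name: str
    vendor: str
    business_function: str             # what the tool is actually used for in practice
    uses_ai: bool                      # re-checked periodically; vendors add AI features over time
    likely_annex_iii_area: str | None  # e.g. "employment", "creditworthiness"; None if not high-risk
    our_role: str                      # "deployer" or "provider"
    owner: str                         # accountable team or person
    last_reviewed: date

inventory = [
    AISystemEntry(
        name="Recruiting ATS",
        vendor="ExampleVendor",
        business_function="candidate screening and ranking",
        uses_ai=True,
        likely_annex_iii_area="employment",
        our_role="deployer",
        owner="HR",
        last_reviewed=date(2025, 1, 15),
    ),
]

# Flag anything that looks high-risk and hasn't been re-checked in the last quarter.
stale = [e for e in inventory if e.likely_annex_iii_area and (date.today() - e.last_reviewed).days > 90]
for entry in stale:
    print(f"Review overdue: {entry.name} ({entry.likely_annex_iii_area})")
```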

Start with what you actually use

Shadow AI is not a hypothetical risk — it's the default state of most organisations. The tools listed above are in widespread use across European companies. Many of them will require formal compliance programs under the AI Act, with real consequences for non-compliance.

The good news is that the first step is straightforward: find out what AI systems your organisation actually uses. Not what leadership thinks you use. Not what IT has approved. What people actually use, every day, across every department.

Run a free AI systems scan to get a comprehensive picture of your organisation's AI footprint — including the tools you didn't know you had. It's the fastest way to move from uncertainty to a concrete compliance roadmap.
