Risk Classification by Use Case
The same tool can have different risk levels depending on how it is used. Below are the common use cases and their EU AI Act classification.
Emergency call guidance
High: When AI guidance influences emergency response decisions.
Clinical documentation
Limited
Deployer Obligations for High-Risk Use Cases
If you use Corti for any of the high-risk use cases above, these obligations apply to you as a deployer under the EU AI Act.
Risk management system
Art. 9: Establish and maintain a risk management system that identifies, analyzes, and mitigates risks throughout the system lifecycle.
Human oversight measures
Art. 14: Ensure the AI system can be effectively overseen by natural persons, including the ability to intervene in or override its decisions.
Transparency and information
Art. 13: Provide clear information to affected persons that they are subject to an AI system, and explain how decisions are made.
Data governance
Art. 10: Ensure training, validation, and testing data sets are relevant, sufficiently representative, and free of errors.
Automatic logging and record-keeping
Art. 12: Keep logs of the AI system's operation for traceability and audit purposes.
EU database registration
Art. 49: Register the high-risk AI system in the EU database before putting it into service.
Fundamental rights impact assessment
Art. 27: Conduct an assessment of the impact on fundamental rights before deploying the system.
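The AI Act does not prescribe a log format for the record-keeping obligation above. As a purely illustrative sketch (the function name, field names, and JSON-lines format are our own assumptions, not requirements of the Act), an append-only audit log of each AI-assisted decision might look like:

```python
import json
import time
import uuid

def log_inference(log_path, model_version, input_summary, output_summary, operator_id):
    """Append one audit record per AI system decision to a JSON-lines file.

    Illustrative only: the Act requires traceable logs (Art. 12) but does not
    mandate this structure. All field names here are hypothetical.
    """
    record = {
        "event_id": str(uuid.uuid4()),          # unique ID for traceability
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,          # which system version produced the output
        "input_summary": input_summary,          # reference to the input, not raw patient data
        "output_summary": output_summary,        # what the system suggested
        "operator_id": operator_id,              # the human overseeing the decision
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")       # append-only: one line per event
    return record
```

Storing a reference to the input rather than raw content keeps the audit trail useful for traceability without duplicating sensitive health data into the log.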
How does this affect your company?
Take our free 3-minute scan to find out which of your AI tools carry obligations under the EU AI Act and get a personalized compliance roadmap.
This classification is based on the EU AI Act (Regulation 2024/1689) and general guidance. It does not constitute legal advice. Risk levels depend on specific deployment context. Consult a qualified professional for complex situations.