EU AI Act: Compliance Obligations for High-Risk AI Systems
A comprehensive guide to the EU AI Act's risk-based classification, high-risk AI system requirements, conformity assessment procedures, and the compliance timeline through 2027.
The EU AI Act is the first comprehensive legislation regulating artificial intelligence by risk level. It entered into force in August 2024, with obligations phasing in through 2027. For organisations developing, deploying, or importing AI systems in the EU, the Act creates a new compliance domain — one that intersects with existing product safety, data protection, and sector-specific regulations but adds AI-specific requirements that are without precedent.
This guide focuses on the obligations that apply to high-risk AI systems, where the regulatory burden is heaviest and the compliance challenges most complex.
The risk-based classification framework
The AI Act classifies AI systems into four risk tiers, each carrying different regulatory obligations:
Unacceptable risk (prohibited): AI systems that pose a clear threat to fundamental rights are banned outright. Prohibitions took effect in February 2025 and include:
- Social scoring systems by public authorities
- Real-time remote biometric identification in public spaces for law enforcement (with limited exceptions)
- AI that exploits vulnerabilities of specific groups (age, disability, social or economic situation)
- AI that uses subliminal techniques to materially distort behaviour causing harm
- Emotion recognition in workplace and educational settings (with limited exceptions)
- Untargeted scraping of facial images for facial recognition databases
High risk: AI systems that pose significant risks to health, safety, or fundamental rights. These face the most extensive compliance requirements — detailed below.
Limited risk: AI systems with specific transparency obligations. This includes chatbots (must disclose AI interaction), deepfakes (must be labelled), and emotion recognition systems (must inform subjects). These transparency obligations apply from August 2026, alongside the Act's general application date.
Minimal risk: All other AI systems, which can be used without additional regulatory requirements beyond existing law. The vast majority of AI systems fall here.
The classification isn’t self-assessed in isolation. High-risk AI systems are defined in two ways:
- Annex I products: AI systems that are safety components of products covered by EU harmonised legislation (medical devices, machinery, toys, vehicles, aviation, marine equipment, rail, and others). These follow the conformity assessment procedures of their sector-specific legislation.
- Annex III use cases: AI systems used in specific high-risk domains, regardless of the product they’re part of. These eight areas cover the use cases the EU considers most sensitive.
Annex III high-risk use cases
The eight areas designated as high-risk in Annex III are:
- Biometric identification and categorisation: Remote biometric identification systems, biometric categorisation systems based on sensitive attributes
- Management and operation of critical infrastructure: AI systems used as safety components in the management of road traffic, water, gas, heating, and electricity supply
- Education and vocational training: AI used to determine access to educational institutions, evaluate learning outcomes, assess appropriate education levels, and monitor prohibited behaviour during exams
- Employment, workers’ management, and access to self-employment: AI for recruitment screening, hiring decisions, task allocation, performance monitoring, and termination decisions
- Access to and enjoyment of essential services: AI used in credit scoring, insurance pricing, emergency service dispatch, and public assistance benefit eligibility assessment
- Law enforcement: AI for risk assessment of individuals, polygraph and emotion detection, evidence evaluation, profiling in criminal investigations, and crime prediction
- Migration, asylum, and border control: AI for risk assessment of individuals, document authenticity verification, and asylum application assessment
- Administration of justice and democratic processes: AI used to assist judicial authorities in applying the law to facts and circumstances
Each category contains specific sub-cases. Not every AI system used in employment, for example, is high-risk — only those used for specific functions like recruitment filtering or performance evaluation that affect employment decisions.
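A first-pass triage of a system against these areas can be sketched as a simple overlap check. This is a hypothetical illustration only: the area names and sub-case keywords below are illustrative assumptions, and real classification requires legal analysis of the specific sub-cases in Annex III, not keyword matching.

```python
# Hypothetical first-pass triage against the Annex III high-risk areas.
# Keywords are illustrative assumptions, not the Act's legal wording.
ANNEX_III_AREAS = {
    "biometrics": {"remote biometric identification", "biometric categorisation"},
    "critical_infrastructure": {"road traffic safety", "water supply", "electricity supply"},
    "education": {"admission decisions", "exam proctoring", "learning evaluation"},
    "employment": {"recruitment screening", "performance monitoring", "termination"},
    "essential_services": {"credit scoring", "insurance pricing", "benefit eligibility"},
    "law_enforcement": {"individual risk assessment", "crime prediction"},
    "migration": {"asylum assessment", "document verification"},
    "justice": {"judicial decision support"},
}

def triage(use_cases: set[str]) -> list[str]:
    """Return the Annex III areas whose listed sub-cases overlap the system's use cases."""
    return [area for area, cases in ANNEX_III_AREAS.items() if cases & use_cases]

# A recruitment tool is flagged; a summarisation feature is not.
hits = triage({"recruitment screening", "chat summarisation"})
```

A result like this would only flag candidates for legal review, not settle classification.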
Obligations for providers of high-risk AI systems
Providers (developers) of high-risk AI systems bear the primary compliance burden. The obligations are extensive and prescriptive:
Risk management system (Article 9): A continuous, iterative process throughout the AI system’s lifecycle that must:
- Identify and analyse known and reasonably foreseeable risks
- Estimate and evaluate risks from intended use and reasonably foreseeable misuse
- Adopt risk management measures based on the state of the art
- Test the system to identify the most appropriate risk management measures
- Consider whether the system is likely to be accessed by or have an impact on children
The risk management system must be documented, regularly updated, and integrated into the provider’s quality management system.
Data and data governance (Article 10): Training, validation, and testing datasets must meet specific quality criteria:
- Relevant, sufficiently representative, and, to the best extent possible, free of errors and complete
- Appropriate statistical properties with respect to the intended purpose
- Subject to appropriate data governance and management practices including bias examination and detection
For systems using techniques involving training on data, the Act specifies requirements for data preparation, design choices, collection processes, and bias monitoring. These requirements interact directly with GDPR data protection requirements, creating overlapping obligations.
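One concrete piece of an Article 10-style data governance process is checking whether training data is representative of the population the system will serve. The sketch below is a minimal, assumed illustration: the group labels, reference shares, and tolerance threshold are not taken from the Act.

```python
# Minimal representativeness check for a training set, a sketch of one
# Article 10-style data governance control. Thresholds are assumptions.
from collections import Counter

def representation_gaps(train_groups: list[str],
                        reference_share: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Return groups whose share of the training data deviates from the
    reference population share by more than `tolerance` (absolute)."""
    n = len(train_groups)
    counts = Counter(train_groups)
    gaps = {}
    for group, expected in reference_share.items():
        actual = counts.get(group, 0) / n
        if abs(actual - expected) > tolerance:
            gaps[group] = actual - expected
    return gaps

# Group "b" makes up 10% of training data but 30% of the reference population.
gaps = representation_gaps(
    train_groups=["a"] * 90 + ["b"] * 10,
    reference_share={"a": 0.7, "b": 0.3},
)
```

Checks like this only cover representativeness; bias examination also requires evaluating model behaviour across groups, not just input proportions.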
Technical documentation (Article 11): Comprehensive documentation must be drawn up before the system is placed on the market, including:
- General description of the AI system and its intended purpose
- Detailed description of system elements and development process
- Information about monitoring, functioning, and control
- Risk management process documentation
- Description of changes through the system’s lifecycle
- Performance metrics and evaluation results
- Detailed description of the data requirements and datasets used
The documentation requirements are extensive — Annex IV of the AI Act specifies the minimum content across multiple pages of regulatory text.
Record-keeping (Article 12): High-risk AI systems must be designed with automatic logging capabilities that:
- Record events relevant to identifying risks and substantial modifications
- Enable monitoring of the system’s operation
- Facilitate post-market monitoring
- Ensure traceability of the system’s functioning throughout its lifecycle
Logs must be retained for an appropriate period, at minimum six months unless provided otherwise in other EU or national law.
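The logging requirement amounts to structured, timestamped, append-only event records with a minimum retention period. The following is a hedged sketch of what such a logger might look like; the field names and event vocabulary are illustrative assumptions, not prescribed by the Act.

```python
# Illustrative Article 12-style automatic logging: timestamped,
# structured events supporting traceability and post-market monitoring.
# Field names and event types are assumptions for illustration.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # at least six months

def log_event(log: list[dict], system_id: str, event_type: str, detail: dict) -> dict:
    """Append a structured, timestamped event to the system log."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event_type,  # e.g. inference, anomaly, substantial_modification
        "detail": detail,
    }
    log.append(event)
    return event

def purgeable(event: dict, now: datetime) -> bool:
    """True once an event is older than the minimum retention period."""
    return now - datetime.fromisoformat(event["ts"]) > RETENTION

log: list[dict] = []
log_event(log, "crs-001", "inference", {"input_ref": "app-42", "score": 0.81})
```

In practice these records would go to durable, tamper-evident storage rather than an in-memory list.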
Transparency and provision of information (Article 13): Systems must be designed to enable deployers to interpret outputs and use the system appropriately. Instructions for use must include:
- Provider identity and contact details
- System characteristics, capabilities, and limitations
- Intended purpose and foreseeable misuse scenarios
- Performance metrics including accuracy, robustness, and cybersecurity
- Known circumstances that may impact performance
- Input data specifications
- Human oversight measures
Human oversight (Article 14): High-risk AI systems must be designed to allow effective human oversight during use. The system must enable the human overseer to:
- Fully understand the system’s capacities and limitations
- Monitor operation and detect anomalies, dysfunctions, and unexpected performance
- Correctly interpret the system’s output, taking into account the tools and methods of interpretation
- Decide not to use the system or disregard, override, or reverse the system’s output
- Intervene in or interrupt the system’s operation
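The oversight requirements above imply an architectural pattern: the system's output is not acted on until a human reviewer accepts, overrides, or rejects it. This is a hedged sketch of such a hook; the decision vocabulary ("accept", "override", "reject") is an illustrative assumption.

```python
# Sketch of an Article 14-style oversight hook: model output passes
# through a human reviewer before any downstream use.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Reviewed:
    model_output: str
    decision: str            # "accept", "override", or "reject"
    final: Optional[str]     # value actually used downstream, if any

def with_oversight(model_output: str,
                   reviewer: Callable[[str], Tuple[str, Optional[str]]]) -> Reviewed:
    """Route the model output through a human reviewer before use."""
    decision, replacement = reviewer(model_output)
    if decision == "accept":
        return Reviewed(model_output, "accept", model_output)
    if decision == "override":
        return Reviewed(model_output, "override", replacement)
    return Reviewed(model_output, "reject", None)  # output is discarded

# Example: the reviewer overrides the system's recommendation.
result = with_oversight("reject_application",
                        lambda out: ("override", "refer_to_case_worker"))
```

The key design point is that the human decision, not the model output, determines what reaches downstream systems, which also supports the Act's requirement to disregard or reverse the output.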
Accuracy, robustness, and cybersecurity (Article 15): High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle. This includes resilience against errors, faults, inconsistencies, and attempts at manipulation by malicious third parties.
Obligations for deployers of high-risk AI systems
Deployers (organisations using high-risk AI systems) have their own set of obligations:
- Use in accordance with instructions: Deploy the system following the provider’s instructions for use
- Human oversight: Assign human oversight to individuals with appropriate competence, training, and authority
- Input data quality: Ensure input data is relevant and sufficiently representative for the system’s intended purpose
- Monitoring: Monitor the system’s operation and report serious incidents or malfunctions to the provider and relevant authorities
- Data protection impact assessment: Conduct a DPIA under GDPR where applicable
- Fundamental rights impact assessment: For public bodies and certain private entities, assess the impact on fundamental rights before deployment
- Transparency to affected persons: Inform natural persons that they are subject to a high-risk AI system, unless this is apparent from the context
Conformity assessment
Before placing a high-risk AI system on the EU market, providers must conduct a conformity assessment. The procedure depends on the system’s classification:
Annex I systems (safety components of regulated products): Follow the conformity assessment procedure of the relevant sector-specific legislation. This typically involves notified body assessment.
Annex III systems: Most can use provider self-assessment (internal control procedure under Annex VI). The exception is biometric systems under Annex III, point 1: where harmonised standards or common specifications have not been applied, or only partially, the provider must undergo third-party conformity assessment by a notified body.
The conformity assessment must verify compliance with all applicable requirements. Successful assessment results in:
- An EU declaration of conformity
- CE marking on the AI system
- Registration in the EU AI database before the system is placed on the market
Timeline
The AI Act’s obligations phase in over three years:
- February 2025: Prohibitions on unacceptable-risk AI systems
- August 2025: Obligations for general-purpose AI models; governance and penalties framework
- August 2026: General application of the Act, including all obligations for high-risk AI systems classified under Annex III; deployer obligations; conformity assessment requirements; transparency obligations for limited-risk systems
- August 2027: Obligations for high-risk AI systems that are safety components of products under Annex I
For high-risk Annex III systems, August 2026 is the critical deadline. Providers must have their risk management systems, technical documentation, data governance practices, logging capabilities, and conformity assessments completed by this date.
Interaction with existing regulations
The AI Act doesn’t replace existing law — it adds to it. High-risk AI systems in healthcare must comply with both the AI Act and the Medical Device Regulation. AI systems processing personal data must comply with both the AI Act and the GDPR. AI used in financial services must meet both AI Act requirements and sector-specific regulations (MiFID II, Solvency II, etc.).
These overlapping obligations create compliance complexity. Data governance requirements under the AI Act must be reconciled with GDPR data minimisation principles. Risk management under the AI Act must integrate with existing product safety risk management. Technical documentation must satisfy multiple regulatory frameworks simultaneously.
Preparing for compliance
For organisations developing or deploying high-risk AI systems, the priority actions in 2026 are:
- Classify your AI systems against both Annex I and Annex III to determine which are high-risk
- Implement risk management systems as a continuous process integrated into AI development and deployment
- Audit data governance practices against Article 10 requirements — bias detection, representativeness, and quality
- Prepare technical documentation meeting Annex IV specifications
- Design logging capabilities into systems that don’t currently have them
- Establish human oversight procedures with qualified personnel
- Plan conformity assessment — self-assessment or notified body engagement depending on classification
AuditDSS covers the EU AI Act with obligation-level decomposition — every requirement from risk management through conformity assessment broken into individual, testable obligations with dependency mapping to GDPR, product safety, and sector-specific regulations. Explore AuditDSS.