AIUC-1 × EU AI Act

The EU Artificial Intelligence Act (EU AI Act) is an EU regulation that classifies AI systems into four risk levels (minimal, limited, high, and unacceptable), each carrying corresponding compliance obligations.

AIUC-1 operationalizes the EU AI Act by aligning its controls with the Act's requirements. Certification against AIUC-1 is a strong step toward compliance with the EU AI Act because it:

- Enables compliance for minimal- and limited-risk systems
- Enables compliance for high-risk systems, provided specific control activities are met (AIUC can help guide AI companies through this process)
- Provides documentation for the internal conformity assessments required for high-risk systems under Annex VI

EU AI Act crosswalks by article

Article 9: Risk Management System

Summary*: The EU AI Act requires a risk management system for high-risk AI systems. This system should be a continuous process throughout the AI's lifecycle, regularly reviewed and updated. It should identify and analyze potential risks to health, safety, or fundamental rights, estimate and evaluate these risks, and adopt measures to manage them. These measures should balance minimizing risks and fulfilling requirements. The system should also ensure that any remaining risk is acceptable and that measures are in place to eliminate or reduce risks as much as possible. High-risk AI systems should be tested to identify the best risk management measures and to ensure they work as intended. The system should also consider whether the AI could negatively impact people under 18 or other vulnerable groups.

Article 10: Data and Data Governance

Summary*: This article states that high-risk AI systems must be developed using high-quality data sets for training, validation, and testing. These data sets should be managed properly, considering factors like data collection processes, data preparation, potential biases, and data gaps. The data sets should be relevant, representative, and, to the extent possible, free of errors and complete. They should also reflect the specific context in which the AI system will be used. In some cases, providers may process special categories of personal data to detect and correct biases, but they must follow strict conditions to protect individuals' rights and freedoms.

Article 11: Technical Documentation

Summary*: This EU law states that before a high-risk AI system is launched, detailed technical documentation must be prepared and kept updated. This documentation should prove that the AI system meets the law's requirements and provide clear information for authorities to check compliance. It should include certain key elements. Small businesses, including start-ups, can provide this information in a simpler way. The EU will create a simplified form for this purpose. If a high-risk AI system is related to a product covered by other EU laws, one set of documentation should include all necessary information. The EU can update these requirements as technology advances.

Article 12: Record-Keeping

Summary*: This article states that high-risk AI systems must have the ability to automatically record events throughout their lifespan. This is to ensure that the system's actions can be traced back, especially in situations where the AI might pose a risk or undergo significant changes. The system should also record details like when it was used, the database it checked data against, the data that matched, and who verified the results. This is to ensure accountability and safety in the use of high-risk AI systems.
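
As a rough illustration of what such automatic recording might look like in practice, here is a minimal sketch. The schema, field names, and append-only JSON-lines format are assumptions for illustration, not a format the Act prescribes.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class UsageEvent:
    """One automatically recorded event over a high-risk AI system's lifetime.

    Fields mirror the kinds of details the article names (period of use,
    reference database, matched input data, verifying person); the exact
    schema here is an assumption for illustration.
    """
    timestamp: str           # when the system was used
    reference_database: str  # database the input data was checked against
    matched_data: str        # input data that resulted in a match
    verified_by: str         # natural person who verified the results

def record_event(log_path: str, event: UsageEvent) -> None:
    """Append the event as one JSON line so actions remain traceable."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

record_event("usage_events.log", UsageEvent(
    timestamp=datetime.now(timezone.utc).isoformat(),
    reference_database="reference-db-v7",
    matched_data="record-4821",
    verified_by="reviewer-002",
))
```

An append-only, machine-readable format like this keeps each recorded event independently traceable, which is the property the article is after.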

Article 13: Transparency and Provision of Information to Deployers

Summary*: This article states that high-risk AI systems must be designed to be transparent, so that those using them can understand and use them correctly. They must come with clear instructions, including information about the provider, the system's capabilities and limitations, and any potential risks. The instructions should also explain how to interpret the system's output, any pre-determined changes to the system, and how to maintain it. If relevant, they should also describe how to collect, store and interpret data logs.

Article 14: Human Oversight

Summary*: This article states that high-risk AI systems must be designed in a way that allows humans to effectively oversee them. The goal of human oversight is to prevent or minimize risks to health, safety, or fundamental rights that may arise from using these systems. The oversight measures should match the risks and context of the AI system's use. These measures could be built into the system by the provider or implemented by the user. The AI system should be provided in a way that allows the overseer to understand its capabilities and limitations, detect and address issues, avoid over-reliance on the system, interpret its output, decide not to use it, or stop its operation. For certain high-risk AI systems, any action or decision based on the system's identification must be verified by at least two competent individuals.
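
The two-person verification rule at the end of the summary lends itself to a simple programmatic gate. A minimal sketch, where the function name and inputs are hypothetical:

```python
def identification_confirmed(verifier_ids: set[str]) -> bool:
    """Apply the separate-verification rule for certain high-risk systems:
    an action or decision based on the system's identification requires
    verification by at least two distinct competent individuals.
    """
    return len(verifier_ids) >= 2

# A decision verified by only one person is not yet actionable.
assert not identification_confirmed({"reviewer-001"})
assert identification_confirmed({"reviewer-001", "reviewer-002"})
```

Using a set of reviewer identities makes the "distinct individuals" condition explicit: the same person confirming twice does not satisfy the rule.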

Article 15: Accuracy, Robustness and Cybersecurity

Summary*: The EU AI Act states that high-risk AI systems must be designed to be accurate, robust, and secure. They should perform consistently throughout their lifecycle. The Commission will work with relevant stakeholders to develop ways to measure these qualities. The accuracy of these AI systems should be declared in their instructions. These systems should be resilient to errors and faults, and should have backup plans in place. They should also be designed to reduce the risk of biased outputs. Finally, these systems should be secure against unauthorized third parties trying to exploit their vulnerabilities.

Article 16: Obligations of Providers of High-Risk AI Systems

Summary*: This article states that companies providing high-risk AI systems must follow certain rules. They must make sure their AI systems meet specific standards and display their contact information on the product or its packaging. They need to have a quality management system and keep certain documents and logs. Before selling or using the AI system, they must have it checked for compliance with regulations. They also need to mark the product with a CE marking to show it meets EU standards. They must register the product, fix any issues, provide necessary information, and prove it meets standards if asked by authorities. The AI system must also be accessible according to EU directives.

Article 17: Quality Management System

Summary*: The EU AI Act requires providers of high-risk AI systems to establish a quality management system. This system must be documented and include strategies for regulatory compliance, design and development procedures, testing and validation processes, technical specifications, data management systems, risk management, post-market monitoring, incident reporting, communication procedures, record-keeping, resource management, and an accountability framework. The implementation of these aspects should be proportionate to the size of the provider's organization. Providers already subject to quality management obligations under relevant Union law can include these aspects in their existing systems. Financial institutions can meet these requirements by complying with Union financial services law.

Article 18: Documentation Keeping

Summary*: This article states that providers of high-risk AI systems must keep certain documents for 10 years after the system is released. These documents include technical documentation, quality management system documentation, approved changes, decisions from notified bodies, and the EU declaration of conformity. If the provider goes bankrupt or stops its activities before the 10 years are up, each EU member state will decide how to keep these documents available. Financial institutions that are providers must keep the technical documentation as part of their documentation under Union financial services law.

Article 19: Automatically Generated Logs

Summary*: This article states that companies providing high-risk AI systems must keep automatically generated logs of these systems, as long as they have control over these logs. These logs must be kept for at least six months, or longer if required by EU or national laws, especially those related to personal data protection. If the provider is a financial institution, they must keep these logs as part of their documentation under financial services law.
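
As a rough illustration of the retention floor, a sketch of a purge routine that keeps logs for at least six months; the directory layout, file naming, and 183-day approximation of six months are all assumptions, and any longer statutory retention period would take precedence.

```python
import time
from pathlib import Path

# Article 19 sets a floor of at least six months; Union or national law
# (notably on personal data protection) may require longer. The 183-day
# approximation of six months is an assumption for illustration.
RETENTION_DAYS = 183

def purge_expired_logs(log_dir: str, retention_days: int = RETENTION_DAYS) -> None:
    """Delete log files older than the retention window; keep the rest."""
    cutoff = time.time() - retention_days * 86400
    for path in Path(log_dir).glob("*.log"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
```

In practice the retention window would be a reviewed configuration value rather than a hard-coded constant, since different jurisdictions may extend it.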

Article 20: Corrective Actions and Duty of Information

Summary*: This article states that if a company providing a high-risk AI system realizes that the system does not comply with the EU's rules, it must immediately bring the system into conformity, withdraw it, disable it, or recall it. It must also inform anyone involved in distributing or using the system. If the system poses a risk, the company must investigate the cause and inform the authorities responsible for monitoring the AI market, as well as any organization that certified the system, detailing what went wrong and what was done to fix it.

Article 21: Cooperation with Competent Authorities

Summary*: This article states that companies providing high-risk AI systems must, when asked by a relevant authority, provide all the information needed to show that their AI system meets the required standards. This information must be in a language that the authority can easily understand. If asked, these companies must also allow the authority to access automatically generated logs of the AI system, as long as they control these logs. Any information obtained by the authority must be treated confidentially.

Article 22: Authorised Representatives of Providers of High-Risk AI Systems

Summary*: This article states that before launching high-risk AI systems in the EU, providers from non-EU countries must appoint a representative within the EU. This representative is responsible for ensuring that the AI system complies with EU regulations, keeping records for 10 years, providing necessary information to authorities, cooperating with authorities in managing risks, and ensuring registration obligations are met. If the provider fails to meet its obligations, the representative can terminate the agreement and must inform the relevant authorities.

Article 23: Obligations of Importers

Summary*: This article states that before selling a high-risk AI system, importers must ensure it meets all regulations. This includes checking that the system has passed the necessary assessments, has the correct documentation, and is marked with the CE symbol. If the importer suspects the system doesn't meet regulations or has fake documentation, they must not sell it until it complies. Importers must also provide their contact details on the system or its packaging, ensure it's stored and transported safely, and keep a record of its certification and instructions for 10 years. They must also cooperate with authorities if needed.

Article 24: Obligations of Distributors

Summary*: This article states that before selling a high-risk AI system, distributors must ensure it meets certain standards, including having a CE marking and a copy of the EU declaration of conformity. If the distributor believes the AI system doesn't meet these standards, they can't sell it until it does. They also need to ensure that the AI system remains compliant during storage or transport. If a sold AI system is found to be non-compliant, the distributor must correct it, withdraw it, or recall it. They must also cooperate with authorities and provide any requested information about the AI system.

Article 25: Responsibilities Along the AI Value Chain

Summary*: This article states that a distributor, importer, deployer, or other third party is considered the provider of a high-risk AI system, with all the corresponding obligations, if they put their name or trademark on an existing system, make a substantial modification to it, or change its intended purpose in a way that makes it high-risk. The original provider of the system must cooperate with the new provider and supply the necessary information and technical access. However, this doesn't apply if the original provider specified that their system shouldn't be changed into a high-risk system. The article also states that the manufacturer of a product that includes a high-risk AI system is considered the provider of that system. Finally, the provider of a high-risk AI system and any third party that supplies components for that system must agree in writing on the necessary information and technical access. This doesn't apply to third parties that provide tools or components under a free and open-source license. The AI Office may develop voluntary contract terms for these situations. The article also emphasizes the need to protect intellectual property rights and trade secrets.

Article 26: Obligations of Deployers of High-Risk AI Systems

Summary*: This article outlines the responsibilities of those who deploy high-risk AI systems. These include using the systems according to their instructions, assigning human oversight, ensuring input data is relevant, and monitoring the system's operation. If a risk is identified, the provider and relevant authorities must be informed immediately. Deployers must also keep logs generated by the AI system for at least six months. Before deploying a high-risk AI system in the workplace, employers must inform affected workers and their representatives. If the system is not registered in the EU database, it must not be used. Deployers must also carry out any required data protection impact assessments and cooperate with relevant authorities.

Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems

Summary*: Before using a high-risk AI system, public bodies and private entities providing public services must assess how the system could impact people's fundamental rights. This includes describing how and when the system will be used, who it might affect, and what risks it might pose. They also need to outline how humans will oversee the system and what steps will be taken if risks materialize. This assessment must be done for the first use of the system, but can be updated if necessary. The results must be reported to the market surveillance authority, unless exempt. The AI Office will provide a template to help with this process.

Article 43: Conformity Assessment

Summary*: This article discusses the process of assessing whether high-risk AI systems meet certain standards. If a provider has used certain standards in creating their AI system, they must choose one of two assessment procedures. If the provider hasn't used these standards, or only used part of them, they must follow a specific assessment procedure. If the AI system is for law enforcement or immigration authorities, a market surveillance authority will act as the assessment body. The article also states that if an AI system is significantly modified, it must undergo a new assessment. The Commission can update these procedures as technology advances.

Article 44: Certificates

Summary*: This article states that certificates for AI systems must be written in a language that can be easily understood by the relevant authorities in the country where the certificate is issued. These certificates are valid for up to five years for certain AI systems, and four years for others. They can be extended upon request, but only after a re-assessment. If an AI system no longer meets the necessary requirements, the certificate can be suspended or withdrawn, unless the provider takes corrective action. There must also be an appeal process for decisions made about these certificates.

Article 46: Derogation from Conformity Assessment Procedure

Summary*: This article allows for exceptions to the usual approval process for high-risk AI systems. If there's a justified reason, such as public safety or environmental protection, a market surveillance authority can allow these systems to be used for a limited time while they're being assessed. In urgent situations, law enforcement or civil protection authorities can use these systems without prior approval, as long as they request it soon after. If approval is denied, the system must be stopped and its results discarded. The EU Commission and other member states must be informed of any approvals, and if no objections are raised within 15 days, the approval is considered justified. If objections are raised, the Commission will consult with the relevant parties and decide if the approval is justified. If it's not, the approval must be withdrawn.

Article 47: EU Declaration of Conformity

Summary*: The EU AI Act requires providers of high-risk AI systems to create a written declaration of conformity for each system. This document, which can be physical or electronic, must be kept available for national authorities for 10 years after the system is launched. The declaration must identify the AI system and confirm that it meets certain requirements. It must also be translated into a language that the authorities can understand. If the AI system is subject to other EU laws, a single declaration covering all applicable laws is required. The provider is responsible for ensuring the declaration is kept up-to-date. The Commission can update the declaration's content as needed.

Article 48: CE Marking

Summary*: This article states that the CE marking, which shows a product meets EU safety standards, must be clearly visible on high-risk AI systems. If it can't be physically placed on the system, it should be on the packaging or documentation. For digital AI systems, a digital CE marking should be easily accessible. If a notified body (an organization that checks the product meets the standards) is involved, their identification number should be included next to the CE marking. If the AI system is also subject to other EU laws requiring a CE marking, the marking indicates it meets those requirements too.

Article 49: Registration

Summary*: This article states that before any high-risk AI system is launched or used, the provider or their representative must register themselves and their system in the EU database. This also applies to AI systems that the provider has determined are not high-risk. Public authorities or institutions using high-risk AI systems must also register themselves and the system's use in the EU database. Certain high-risk AI systems used in law enforcement, migration, and border control must be registered in a secure, non-public section of the EU database. Only the Commission and national authorities can access this section.

Article 50: Transparency Obligations for Providers and Deployers of Certain AI Systems

Summary*: This article states that companies must inform users when they are interacting with an AI system, unless it's obvious or the AI is used for legal purposes like crime detection. AI systems that create synthetic content (like deepfakes) must mark their outputs as artificially generated. Companies must also inform users when they use AI for emotion recognition or biometric categorisation, unless it's for legal purposes. If an AI system creates or alters content, the company must disclose this, unless it's for legal purposes or the content is artistic or satirical. The AI Office will help create guidelines for detecting and labelling artificially generated content.
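
As a sketch of the marking obligation for synthetic content, one way a provider might attach a machine-readable disclosure to generated output; the metadata keys are assumptions, and a production system would more likely follow a provenance standard such as C2PA.

```python
def mark_as_ai_generated(content: str, generator: str) -> dict:
    """Bundle generated content with a machine-readable disclosure that it
    was artificially generated, as Article 50 requires for synthetic media.
    """
    return {
        "content": content,
        "metadata": {
            "ai_generated": True,   # the disclosure itself
            "generator": generator, # which system produced the content
        },
    }

labeled = mark_as_ai_generated("A rainy street at dusk...", "caption-model-v2")
```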

Article 72: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems

Summary*: The EU AI Act requires providers of high-risk AI systems to establish a post-market monitoring system. This system should collect and analyze data on the performance of these AI systems throughout their lifetime, ensuring they continue to comply with regulations. The monitoring system should be based on a plan that is part of the technical documentation. The EU Commission will provide a template for this plan. If a monitoring system is already in place under other legislation, providers can integrate the necessary elements from this new requirement, as long as it provides the same level of protection.

Article 73: Reporting of Serious Incidents

Summary*: This article states that companies that provide high-risk AI systems must report any serious incidents to the authorities in the country where the incident occurred. They must do this as soon as they establish a link between their AI system and the incident, or a reasonable likelihood of one, and no later than 15 days after becoming aware of the incident. If the incident is very serious or widespread, they must report it within two days; if someone dies, within 10 days. They can submit an initial incomplete report if needed, but must follow up with a full report. They must also investigate the incident and cooperate with the authorities. The authorities will then take appropriate action within seven days.
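
The reporting windows above translate directly into deadlines counted from the moment the provider becomes aware of the incident. A minimal sketch; the category labels are chosen here for illustration and are not terms defined by the Act.

```python
from datetime import datetime, timedelta, timezone

# Windows paraphrased from the summary above; labels are illustrative.
REPORTING_WINDOWS = {
    "serious_incident": timedelta(days=15),       # general rule
    "widespread_or_critical": timedelta(days=2),  # very serious or widespread
    "death": timedelta(days=10),                  # incident involving a death
}

def reporting_deadline(aware_at: datetime, category: str) -> datetime:
    """Latest time to notify the authority, counted from awareness."""
    return aware_at + REPORTING_WINDOWS[category]

print(reporting_deadline(datetime.now(timezone.utc), "widespread_or_critical"))
```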

* Summaries are created by a third party (artificialintelligenceact.eu). Refer to the official text of the EU AI Act for the most up-to-date information.

Last updated July 22, 2025.
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.