ISO 42001 is an international standard for AI management systems (AIMS) covering responsible AI development and deployment.
AIUC-1 aligns with ISO 42001. Certification against AIUC-1:
Incorporates the majority of controls from ISO 42001
Translates ISO's management system approach into concrete, auditable requirements
Extends ISO 42001 with third-party testing requirements for risks such as hallucinations and jailbreak attempts
Addresses additional key concerns such as AI failure plans and AI-specific system security
4.1: Understanding the organization and its context
The organization shall determine external and internal issues relevant to the AI management system’s purpose and ability to achieve intended results.
4.2: Understanding the needs and expectations of interested parties
The organization shall determine interested parties and their requirements relevant to the AI management system.
4.3: Determining the scope of the AI management system
The organization shall define and document the scope of the AI management system, including applicability and boundaries.
4.4: AI management system
The organization shall establish, implement, maintain, and continually improve the AI management system.
5.1: Leadership and commitment
Top management shall demonstrate leadership and commitment to the AI management system and its effectiveness.
5.2: AI policy
Top management shall establish an AI policy appropriate to the organization and supportive of AI objectives.
5.3: Roles, responsibilities and authorities
Top management shall assign roles, responsibilities, and authorities for the AI management system.
6.1.1: Actions to address risks and opportunities — General
The organization shall plan actions to address risks and opportunities, integrate them into processes, and evaluate their effectiveness.
6.1.2: AI risk assessment
The organization shall establish and maintain a process for AI risk assessment, including identification, analysis, and evaluation of risks.
6.1.3: AI risk treatment
The organization shall establish and maintain a process for AI risk treatment, including selecting and implementing necessary controls.
6.1.4: AI system impact assessment
The organization shall conduct AI system impact assessments covering potential effects on individuals, groups, and society.
6.2: AI objectives and planning to achieve them
The organization shall establish AI objectives at relevant functions and levels, consistent with the AI policy, and maintain plans to achieve them.
6.3: Planning of changes
The organization shall carry out changes to the AI management system in a planned manner.
7.1: Resources
The organization shall determine and provide necessary resources for the AI management system.
7.2: Competence
The organization shall ensure competence of persons working under its control based on education, training, or experience.
7.3: Awareness
Persons under the organization’s control shall be aware of the AI policy, objectives, and their contribution to the AI management system.
7.4: Communication
The organization shall determine internal and external communications relevant to the AI management system.
7.5.1: Documented information — General
The organization shall document information required by the AI management system and by ISO 42001.
7.5.2: Creating and updating documented information
The organization shall ensure documented information is properly created, updated, and controlled for suitability and adequacy.
7.5.3: Control of documented information
The organization shall control documented information required by the AI management system and ISO 42001.
8.1: Operational planning and control
The organization shall plan, implement, and control processes needed for the AI management system, ensuring outputs meet requirements.
8.2: AI risk assessment
The organization shall perform AI risk assessments at planned intervals and when significant changes occur.
8.3: AI risk treatment
The organization shall implement AI risk treatment plans and review them when assessments identify new or ineffective controls.
8.4: AI system impact assessment
The organization shall perform AI system impact assessments at planned intervals and when significant changes are proposed.
9.1: Monitoring, measurement, analysis and evaluation
The organization shall determine monitoring, measurement, analysis, and evaluation needed to ensure conformity and effectiveness.
9.2.1: Internal audit — General
The organization shall conduct internal audits at planned intervals to provide information on the AI management system.
9.2.2: Internal audit programme
The organization shall plan, establish, implement, and maintain an audit programme, including the frequency, methods, responsibilities, and reporting of audit results.
9.3.1: Management review — General
The organization shall review the AI management system at planned intervals to ensure its suitability, adequacy, and effectiveness.
9.3.2: Management review inputs
Management review inputs shall include audits, performance trends, nonconformities, feedback, risks, changes, and resources.
9.3.3: Management review results
Management review results shall include decisions on improvements, policy/objectives, resources, and follow-up actions.
10.1: Continual improvement
The organization shall continually improve the AI management system’s suitability, adequacy, and effectiveness.
10.2: Nonconformity and corrective action
The organization shall address nonconformities by correcting them, dealing with consequences, and preventing recurrence.
A.2.2: AI policy
The organization shall document a policy for the development or use of AI systems.
A.2.3: Alignment with other organizational policies
The organization shall determine where other policies can be affected by or apply to the organization's objectives with respect to AI systems.
A.2.4: Review of the AI policy
The AI policy shall be reviewed at planned intervals or additionally as needed to ensure its continuing suitability, adequacy and effectiveness.
A.3.2: AI roles and responsibilities
Roles and responsibilities for AI shall be defined and allocated according to the needs of the organization.
A.3.3: Reporting of concerns
The organization shall define and put in place a process to report concerns about the organization's role with respect to an AI system throughout its life cycle.
A.4.2: Resource documentation
The organization shall identify and document relevant resources required for all activities at given AI system life cycle stages and other AI-related activities relevant for the organization.
A.4.3: Data resources
As part of resource identification, the organization shall document information about the data resources utilized for the AI system.
A.4.4: Tooling resources
As part of resource identification, the organization shall document information about the tooling resources utilized for the AI system.
A.4.5: System and computing resources
As part of resource identification, the organization shall document information about the system and computing resources utilized for the AI system.
A.4.6: Human resources
As part of resource identification, the organization shall document information about the human resources and their competences utilized for the development, deployment, operation, change management, maintenance, transfer and decommissioning, as well as verification and integration of the AI system.
A.5.2: AI system impact assessment process
The organization shall establish a process to assess the potential consequences for individuals or groups of individuals, or both, and societies that can result from the AI system throughout its life cycle.
A.5.3: Documentation of AI system impact assessments
The organization shall document the results of AI system impact assessments and retain results for a defined period.
A.5.4: Assessing AI system impact on individuals or groups of individuals
The organization shall assess and document the potential impacts of AI systems on individuals or groups of individuals throughout the system's life cycle.
A.5.5: Assessing societal impacts of AI systems
The organization shall assess and document the potential societal impacts of its AI systems throughout their life cycles.
A.6.1.2: Objectives for responsible development of AI system
The organization shall identify and document objectives to guide the responsible development of AI systems, take those objectives into account, and integrate measures to achieve them in the development life cycle.
A.6.1.3: Processes for responsible AI system design and development
The organization shall define and document the specific processes for the responsible design and development of the AI system.
A.6.2.2: AI system requirements and specification
The organization shall specify and document requirements for new AI systems or material enhancements to existing systems.
A.6.2.3: Documentation of AI system design and development
The organization shall document the AI system design and development based on organizational objectives, documented requirements and specification criteria.
A.6.2.4: AI system verification and validation
The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.
A.6.2.5: AI system deployment
The organization shall document a deployment plan and ensure that appropriate requirements are met prior to deployment.
A.6.2.6: AI system operation and monitoring
The organization shall define and document the necessary elements for the ongoing operation of the AI system. At a minimum, this should include system and performance monitoring, repairs, updates, and support.
A.6.2.7: AI system technical documentation
The organization shall determine what AI system technical documentation is needed for each relevant category of interested parties, such as users, partners, supervisory authorities, and provide the technical documentation to them in the appropriate form.
A.6.2.8: AI system recording of event logs
The organization shall determine the phases of the AI system life cycle at which record keeping of event logs should be enabled, at a minimum when the AI system is in use.
A.7.2: Data for development and enhancement of AI system
The organization shall define, document and implement data management processes related to the development of AI systems.
A.7.3: Acquisition of data
The organization shall determine and document details about the acquisition and selection of the data used in AI systems.
A.7.4: Quality of data for AI systems
The organization shall define and document requirements for data quality and ensure that data used to develop and operate the AI system meet those requirements.
A.7.5: Data provenance
The organization shall define and document a process for recording the provenance of data used in its AI systems over the life cycles of the data and the AI system.
A.7.6: Data preparation
The organization shall define and document its criteria for selecting data preparations and the data preparation methods to be used.
A.8.2: System documentation and information for users
The organization shall determine and provide the necessary information to users of the AI system.
A.8.3: External reporting
The organization shall provide capabilities for interested parties to report adverse impacts of the AI system.
A.8.4: Communication of incidents
The organization shall determine and document a plan for communicating incidents to users of the AI system.
A.8.5: Information for interested parties
The organization shall determine and document its obligations for reporting information about the AI system to interested parties.
A.9.2: Processes for responsible use of AI systems
The organization shall define and document the processes for the responsible use of AI systems.
A.9.3: Objectives for responsible use of AI system
The organization shall identify and document objectives to guide the responsible use of AI systems.
A.9.4: Intended use of the AI system
The organization shall ensure that the AI system is used according to its intended uses and accompanying documentation.
A.10.2: Allocating responsibilities
The organization shall ensure that responsibilities within its AI system life cycle are allocated between the organization, its partners, suppliers, customers, and third parties.
A.10.3: Suppliers
The organization shall establish a process to ensure that its usage of services, products or materials provided by suppliers aligns with the organization's approach to the responsible development and use of AI systems.
A.10.4: Customers
The organization shall ensure that its responsible approach to the development and use of AI systems considers its customers' expectations and needs.