AIUC-1
Crosswalks

AIUC-1 operationalizes emerging AI frameworks

AIUC-1 is designed to be:

Customer-focused. We prioritize requirements that enterprise customers demand and vendors can pragmatically meet, increasing confidence without adding unnecessary compliance burden.

Adaptable. We update AIUC-1 as regulation, AI progress, and real-world deployment experience evolve.

Transparent. We keep a public changelog and share our lessons.

Forward-looking. We require AI vendors to conduct testing and review their systems at least quarterly so that an AIUC-1 certificate stays relevant.

Insurance-enabling. We emphasize the risks that lead to direct harms and financial losses.

Predictable. We review the standard in partnership with our technical contributors and push updates on January 1, April 1, July 1, and October 1 of each year.

In practice, this means that AIUC-1 builds on other AI frameworks, including the EU AI Act, the NIST AI RMF, ISO 42001, and MITRE ATLAS. The regular update cadence means that AIUC-1 updates also reflect changes to these frameworks as they evolve.

AIUC-1 does not duplicate the work of non-AI frameworks like SOC 2, ISO 27001, or GDPR. Companies should ensure compliance with these frameworks, where applicable, independently of AIUC-1.

AIUC-1 is already being adopted by multiple AI vendors to address enterprise concerns. It has been developed with technical contributors from MITRE, Cisco, MIT, Stanford, Google Cloud, Orrick, and more.

AIUC-1 is built on

EU AI Act

NIST AI RMF

ISO 42001

MITRE ATLAS

OWASP Top 10 for LLM Applications

Regional AI legislation (e.g. the Colorado AI Act)

Sector-specific AI legislation

OECD AI Principles

AIUC-1 does not duplicate

AICPA SOC 2

EU GDPR

Canada Artificial Intelligence and Data Act (AIDA)

ISO 27001

CSA AI Controls Matrix

More detail on how each of these frameworks is addressed by AIUC-1 is available in the "crosswalk" section of each requirement.

AIUC-1 operationalizes emerging AI legislation and best practices

Framework

EU AI Act

Description

EU regulation classifying AI systems by risk levels (minimal, limited, high, unacceptable) with corresponding compliance obligations

How AIUC-1 compares

Operationalizes the EU AI Act by aligning with its requirements. Certification against AIUC-1 is a strong step towards compliance with the EU AI Act as it:

Enables compliance for minimal- and limited-risk systems

Enables compliance for high-risk systems, provided specific control activities are met (AIUC can help guide AI companies through this process)

Provides documentation for the internal conformity assessments required for high-risk systems under Annex VI

EU AI Act crosswalks by article →

Framework

NIST AI RMF

Description

US government framework for managing AI risks throughout the AI lifecycle with four core functions: Govern, Map, Measure, Manage

How AIUC-1 compares

Operationalizes the NIST AI RMF. Certification against AIUC-1:

Translates NIST's high-level actions into specific, auditable controls

Provides concrete implementation guidance for key areas such as harmful output prevention, third-party testing, and risk management practices

NIST AI RMF crosswalks by function →

Framework

ISO 42001

Description

International standard for AI management systems (AIMS) covering responsible AI development and deployment

How AIUC-1 compares

Aligns with ISO 42001. Certification against AIUC-1:

Incorporates the majority of controls from ISO 42001

Translates ISO's management system approach into concrete, auditable requirements

Extends ISO 42001 with third-party testing requirements for, e.g., hallucinations and jailbreak attempts

Addresses additional key concerns such as AI failure plans and AI-specific system security

ISO 42001 crosswalks by clause →

Framework

MITRE ATLAS

Description

Knowledge base of adversarial tactics, techniques, and mitigation strategies for machine learning systems, similar to MITRE ATT&CK for cybersecurity

How AIUC-1 compares

Integrates MITRE ATLAS; MITRE is a technical contributor to AIUC-1. Certification against AIUC-1:

Incorporates ATLAS mitigation strategies in requirements and controls

Strengthens robustness against the adversarial tactics and techniques identified in ATLAS

Goes beyond ATLAS's focus on security alone

MITRE ATLAS crosswalks by mitigation strategy →

Framework

OWASP Top 10 for LLM Applications

Description

Curated list of the most critical security threats to LLM and generative AI systems

How AIUC-1 compares

Integrates OWASP's Top 10 for LLM Applications. Certification against AIUC-1:

Addresses the Top 10 threats in its requirements and controls

Strengthens robustness against the identified threats through concrete requirements and controls

Goes beyond OWASP's focus on security alone

OWASP Top 10 crosswalks by threat →

Framework

Regional US regulation

Description

e.g. California SB 1001, New York City Local Law 144, Colorado AI Act

How AIUC-1 compares

Simplifies compliance with regional regulation. AIUC can help guide AI companies through the process of meeting California SB 1001, the Colorado AI Act, and New York City Local Law 144 via optional requirements.

AIUC-1 already addresses top concerns in emerging regional regulations, such as discrimination and bias, human-in-the-loop oversight, and data handling.

Framework

Sector-specific regulation

Description

e.g. HIPAA, Fair Credit Reporting Act, Fair Housing Act, FTC guidance on AI & algorithms

How AIUC-1 compares

Simplifies compliance with AI requirements in sector-specific regulation. Certification against AIUC-1:

Prepares organizations to comply with, e.g., FTC guidance on AI & algorithms

Addresses, in its base requirements, top concerns in sector-specific regulations such as discrimination and bias, human-in-the-loop oversight, monitoring and logging, third-party interactions, and data handling

Offers AI companies optional add-on requirements for relevant use cases (e.g., for financial transactions, PII handling)

Framework

OECD AI Principles

Description

First intergovernmental AI standard (2019, updated 2024) with five principles for trustworthy AI, adopted by 47+ countries

How AIUC-1 compares

Operationalizes OECD's AI Principles. Certification against AIUC-1:

Translates OECD's five principles into concrete, auditable requirements

Addresses additional key areas such as third-party testing, AI failure plans, and adversarial resilience

Frameworks outside the scope of AIUC-1

Framework

AICPA SOC 2

Description

Leading cybersecurity attestation standard covering controls for security, availability, processing integrity, confidentiality, and privacy

How AIUC-1 compares

Certification against AIUC-1:

Extends SOC 2 Security controls specifically for AI systems (e.g., jailbreak attempts)

Extends SOC 2 Privacy controls specifically for AI systems (e.g., data used for model training)

Extends SOC 2 Availability controls specifically for AI systems (e.g., system reliability/hallucinations)

Avoids duplicating existing SOC 2 requirements on general cybersecurity best practices

Framework

Additional ISO standards including ISO 27001 and ISO 42006

Description

International standards for, e.g., information security management systems (ISMS)

How AIUC-1 compares

Certification against AIUC-1:

Focuses on ISO 42001, which is specific to AI systems

Extends several ISO 27001 controls into the AI domain, including the Confidentiality-Integrity-Availability triad

AIUC is following the newly introduced ISO 42006 standard closely.

Framework

EU GDPR

Description

European data protection regulation with AI-relevant provisions on automated decision-making, profiling, and data subject rights

How AIUC-1 compares

AIUC-1 does not duplicate GDPR. Companies should ensure GDPR compliance independently of AIUC-1.

Framework

Canada Artificial Intelligence and Data Act (AIDA)

Description

Canada's proposed Artificial Intelligence and Data Act regulating AI systems based on impact assessments and risk mitigation

How AIUC-1 compares

AIDA has not yet been passed into law.

AIUC can help with guidance on how to meet AIDA once it is passed, as AIUC-1 incorporates similar principles of risk mitigation, risk assessment, transparency, and incident notification.

Framework

CSA AI Controls Matrix

Description

Cloud Security Alliance's AI Controls Matrix providing security controls framework specifically designed for AI/ML systems

How AIUC-1 compares

Certification against AIUC-1:

Addresses key controls for AI vendors from the AICM, such as adversarial robustness, system transparency, and documentation of criteria for cloud and on-prem processing

Imposes a significantly lower compliance burden than CSA's AICM due to its targeted focus on top enterprise AI concerns

Avoids duplicating controls in areas where CSA is industry-leading, such as data center infrastructure, physical server security, and other domains outside of the AIUC-1 scope

AIUC-1 is continuously updated as new legislation, frameworks, threat patterns, and best practices emerge, in collaboration with our network of technical contributors and experts from leading institutions in AI safety, security, and reliability. This ensures that the standard stays current and comprehensive and enables straightforward compliance with applicable frameworks.

Last updated July 22, 2025.
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.