AIUC-1 is designed to be:
Customer-focused. We prioritize requirements that enterprise customers demand and vendors can pragmatically meet, increasing confidence without adding unnecessary compliance burden.
Adaptable. We update AIUC-1 as regulation, AI progress, and real-world deployment experience evolve.
Transparent. We keep a public changelog and share our lessons.
Forward-looking. We require AI vendors to test and review their systems at least quarterly so that an AIUC-1 certificate stays relevant.
Insurance-enabling. We emphasize the risks that lead to direct harms and financial losses.
Predictable. We review the standard in partnership with our technical contributors and push updates on January 1, April 1, July 1, and October 1 of each year.
In practice, this means that AIUC-1 builds on other AI frameworks, including the EU AI Act, the NIST AI RMF, ISO 42001, and MITRE ATLAS. The regular update cadence means that AIUC-1 also reflects changes to these frameworks as they evolve.
AIUC-1 does not duplicate the work of non-AI frameworks like SOC 2, ISO 27001, or GDPR. Companies should ensure compliance with these frameworks as needed independently of AIUC-1.
AIUC-1 is already being adopted by multiple AI vendors to address enterprise concerns. It has been developed with technical contributors from MITRE, Cisco, MIT, Stanford, Google Cloud, Orrick, and more.
Regional AI legislation (e.g. the Colorado AI Act)
Sector-specific AI legislation
OECD AI Principles
AICPA SOC 2
EU GDPR
Canada Artificial Intelligence and Data Act (AIDA)
ISO 27001
CSA AI Controls Matrix
More detail on how each of these frameworks is addressed by AIUC-1 is available in the "crosswalk" section of each requirement.
EU AI Act
EU regulation classifying AI systems by risk levels (minimal, limited, high, unacceptable) with corresponding compliance obligations
Operationalizes the EU AI Act by aligning with its requirements. Certification against AIUC-1 is a strong step towards compliance with the EU AI Act as it:
Enables compliance for minimal and limited risk systems
Enables compliance for high risk systems only if specific control activities are met (AIUC can help guide AI companies through this process)
Provides documentation for internal conformity assessments for high risk systems as required in Annex VI
NIST AI RMF
US government framework for managing AI risks throughout the AI lifecycle with four core functions: Govern, Map, Measure, Manage
Operationalizes the NIST AI RMF. Certification against AIUC-1:
Translates NIST's high-level actions into specific, auditable controls
Provides concrete implementation guidance for key areas such as harmful output prevention, third-party testing, and risk management practices
ISO 42001
International standard for AI management systems (AIMS) covering responsible AI development and deployment
Aligns with ISO 42001. Certification against AIUC-1:
Incorporates the majority of controls from ISO 42001
Translates ISO's management system approach into concrete, auditable requirements
Extends ISO 42001 with third-party testing requirements for, e.g., hallucinations and jailbreak attempts
Addresses additional key concerns such as AI failure plans and AI-specific system security
MITRE ATLAS
Knowledge base of adversarial tactics, techniques, and mitigation strategies for machine learning systems, similar to MITRE ATT&CK for cybersecurity
Integrates MITRE ATLAS; MITRE is a technical contributor to AIUC-1. Certification against AIUC-1:
Incorporates ATLAS mitigation strategies in requirements and controls
Strengthens robustness against the adversarial tactics and techniques identified in ATLAS
Goes beyond ATLAS's focus on security alone
OWASP Top 10 for LLM Applications
Curated list of the most critical security threats to LLM and generative AI systems
Integrates OWASP's Top 10 for LLM Applications. Certification against AIUC-1:
Addresses Top 10 threats in requirements and controls
Strengthens robustness against the identified threats with concrete requirements and controls
Goes beyond OWASP's focus on security alone
Regional US regulation
e.g. California SB 1001, New York City Local Law 144, Colorado AI Act
Simplifies compliance with regional regulation. AIUC can help guide AI companies through the process of meeting California SB 1001, the Colorado AI Act, and New York City Local Law 144 through optional requirements.
AIUC-1 already addresses top concerns in emerging regional regulations such as discrimination and bias, human-in-the-loop, and data handling.
Sector-specific regulation
e.g. HIPAA, Fair Credit Reporting Act, Fair Housing Act, FTC guidance on AI & algorithms
Simplifies compliance with AI requirements in sector-specific regulation. Certification against AIUC-1:
Prepares organizations to comply with, e.g., FTC guidance on AI & algorithms
Addresses top concerns in sector-specific regulations such as discrimination and bias, human-in-the-loop, monitoring and logging, third-party interactions, and data handling in base requirements
Offers AI companies optional add-on requirements for relevant use cases (e.g., for financial transactions, PII handling)
OECD AI Principles
First inter-governmental AI standard (2019, updated 2024) with five principles for trustworthy AI adopted by 47+ countries
Operationalizes OECD's AI Principles. Certification against AIUC-1:
Translates OECD's five principles into concrete, auditable requirements
Addresses additional key areas such as third-party testing, AI failure plans, and adversarial resilience
AICPA SOC 2
Leading cybersecurity attestation framework
Certification against AIUC-1:
Extends SOC 2 Security controls specifically for AI systems (e.g., jailbreak attempts)
Extends SOC 2 Privacy controls specifically for AI systems (e.g., data used for model training)
Extends SOC 2 Availability controls specifically for AI systems (e.g., system reliability/hallucinations)
Avoids duplicating existing SOC 2 requirements on general cybersecurity best practices
Additional ISO standards including ISO 27001 and ISO 42006
International standards for, e.g., information security management systems (ISMS)
Certification against AIUC-1:
Focuses on ISO 42001, which is specific to AI systems
Extends several ISO 27001 controls into the AI domain, including the confidentiality-integrity-availability (CIA) triad
AIUC is closely following the newly introduced ISO 42006 standard.
EU GDPR
European data protection regulation with AI-relevant provisions on automated decision-making, profiling, and data subject rights
AIUC-1 does not duplicate GDPR.
Canada Artificial Intelligence and Data Act (AIDA)
Canada's proposed Artificial Intelligence and Data Act regulating AI systems based on impact assessments and risk mitigation
AIDA has not yet been passed into law. Once it is, AIUC can provide guidance on meeting it, as AIUC-1 incorporates similar principles of risk mitigation, risk assessment, transparency, and incident notification.
CSA AI Controls Matrix
Cloud Security Alliance framework providing security controls specifically designed for AI/ML systems
Certification against AIUC-1:
Addresses key controls for AI vendors from the AICM such as adversarial robustness, system transparency, and documentation of criteria for cloud & on-prem processing
Carries a significantly lower compliance burden than CSA's AICM due to its targeted focus on top enterprise AI concerns
Avoids duplicating controls in areas where CSA is industry-leading, such as data center infrastructure, physical server security, and other domains outside of the AIUC-1 scope
AIUC-1 is continuously updated as new legislation, frameworks, threat patterns, and best practices emerge, in collaboration with our network of technical contributors and experts from leading institutions in AI safety, security, and reliability. This ensures that the standard stays current and comprehensive and enables straightforward compliance with applicable frameworks.