AIUC-1 is formally updated each quarter so that the standard evolves as technology, risk, and regulation evolve.
The most recent version of AIUC-1 was released on October 1, 2025.
These tenets guide how we update the standard:
Customer-focused. We prioritize requirements that enterprise customers demand and vendors can pragmatically meet, increasing confidence without adding unnecessary compliance burden.
AI-focused. We do not cover non-AI risks already addressed by frameworks or regulations such as SOC 2, ISO 27001, or GDPR.
Insurance-enabling. We prioritize risks that lead to direct harms and financial losses.
Adapts to regulation. We update AIUC-1 to make it easier to comply with new regulations.
Adapts to AI progress. We update AIUC-1 to keep pace with new capabilities, such as reasoning and new modalities.
Adapts to the threat landscape. We update AIUC-1 in response to real-world incidents.
Continuous improvement. We regularly update the standard based on real-world deployment experience and stakeholder feedback.
Predictability. We review the standard and push updates quarterly, on January 1, April 1, July 1, and October 1 of each year.
Transparency. We keep a public changelog and share our lessons.
Backward compatibility. Existing certifications remain valid during transition periods.
We welcome feedback, ideas, suggestions, and criticism: provide input on AIUC-1.
This is the first quarterly update of AIUC-1. This update focuses on clarifying and specifying requirements to ensure a clear audit process and avoid ambiguity. In addition, feedback from technical contributors, customers, and audit processes has motivated a stronger adversarial testing requirement and additional detail on how AIUC-1 compares to ISO 42001.
Clarified 13 requirements based on audit experience, customer feedback, and input from technical contributors.
Strengthened adversarial testing requirement to mandate independent third-party testing.
Expanded ISO 42001 crosswalk with gap analysis and descriptive notes to support organizations comparing AIUC-1 and ISO 42001.
2025-10-01
A001: Establish input data policy, A002: Establish output data policy
Clarified the separation of A001 and A002: A001 now covers input data; A002 covers output data.
Control activity language adjusted to sharpen the distinction
2025-10-01
A003: Limit AI agent data collection
Requirement title clarified to emphasize focus on AI agent configuration
2025-10-01
A005: Prevent cross-customer data exposure
Specified that cross-customer data safeguards apply beyond AI model training purposes alone
2025-10-01
B001: Third-party testing of adversarial robustness
Specified that adversarial testing of system robustness must be conducted by a third party
2025-10-01
B003: Manage public release of technical details, B009: Limit output over-exposure
Clarified the separation of B003 and B009: B003 covers managing public release of technical details; B009 covers limiting output over-exposure
Requirement text clarified to highlight the distinction
2025-10-01
B006: Limit AI agent system access
Requirement title clarified to emphasize focus on AI agent configuration
2025-10-01
B007: Enforce user access privileges to AI systems
Requirement title clarified to emphasize focus on user access privileges
2025-10-01
C005: Prevent customer-defined high risk outputs
Requirement title clarified to highlight that additional risk areas are defined by the customer
2025-10-01
C009: Enable real-time feedback and intervention
Requirement title updated to emphasize the human intervention capability
2025-10-01
C012: Third-party testing for customer-defined risk
Requirement title clarified to highlight that additional risk areas are defined by the customer
2025-10-01
E013: Implement quality management system
Removed the 'high-risk' qualifier so that the quality management system requirement applies to the entire AI system
2025-10-01
ISO 42001 crosswalk
Expanded the AIUC-1 to ISO 42001 mapping with a gap analysis and descriptive notes to enable easy comparison
2025-10-01
Technical testing passing criteria
Specified that companies must pass AIUC-1 technical tests with no P0 or P1 vulnerabilities identified to qualify for an AIUC-1 certificate
Reflected on the Certificate overview page
A001
Previous: A001: Establish data use policy
Updated: A001: Establish input data policy

A002
Previous: A002: Define output rights
Updated: A002: Establish output data policy

A003
Previous: A003: Implement contextual data safeguards
Updated: A003: Limit AI agent data collection

A005
Previous: Implement safeguards to prevent cross-customer data exposure when combining customer data from multiple sources for AI model training
Updated: Implement safeguards to prevent cross-customer data exposure when combining customer data from multiple sources for AI model training

B003
Previous: B003: Limit technical over-disclosure
Updated: B003: Manage public release of technical details

B001
Previous: B001: Test adversarial robustness
Updated: B001: Third-party testing of adversarial robustness

B006
Previous: B006: Enforce contextual access controls
Updated: B006: Limit AI agent system access

B007
Previous title: B007: Enforce AI access privileges
Previous text: Establish and maintain access controls and admin privileges for AI systems in line with policy
Updated title: B007: Enforce user access privileges to AI systems
Updated text: Establish and maintain user access controls and admin privileges for AI systems in line with policy

B009
Previous: Implement output limitations and obfuscation techniques to reduce information leakage
Updated: Implement output limitations and obfuscation techniques to safeguard against information leakage

C005
Previous: C005: Prevent other high risk outputs
Updated: C005: Prevent customer-defined high risk outputs

C009
Previous: C009: Collect real-time feedback
Updated: C009: Enable real-time feedback and intervention

C012
Previous: C012: Third-party testing for other risk
Updated: C012: Third-party testing for customer-defined risk

E013
Previous: Establish a quality management system for high-risk AI systems proportionate to the size of the organization
Updated: Establish a quality management system for AI systems proportionate to the size of the organization