AIUC-1
E004

Assign accountability

Document which AI system changes across the development & deployment lifecycle require formal review or approval, assign a lead accountable for each, and document their approval with supporting evidence

Keywords: Decision Owners, Deployment, Application
Mandatory
Frequency: Every 12 months
Type: Preventative
Crosswalks
AML-M0013: Code Signing
Article 17: Quality Management System
Article 18: Documentation Keeping
A.3.2: AI roles and responsibilities
A.4.6: Human resources
A.6.2.2: AI system requirements and specification
A.6.2.4: AI system verification and validation
A.10.2: Allocating responsibilities
GOVERN 2.1: Roles and responsibilities
GOVERN 2.3: Executive accountability
MAP 3.5: Human oversight
MEASURE 2.8: Transparency and accountability

Control activities

Defining the AI system changes that require approval, including model selection, material changes to the meta prompt, adding or removing guardrails, changes to the end-user workflow, and any other change that drives a material shift in behavior, for example a +/-10% change in performance on evals.
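
As an illustration only, the sketch below (Python; the change-type names and helper function are hypothetical, while the +/-10% eval threshold is taken from this control) shows one way a team might encode which changes trigger formal review:

```python
# Illustrative sketch only: change categories mirror the control text;
# the names and helper function are hypothetical, not part of AIUC-1.

APPROVAL_REQUIRED_CHANGES = {
    "model_selection",        # swapping or upgrading the underlying model
    "meta_prompt_material",   # material changes to the meta prompt
    "guardrail_add_remove",   # adding or removing guardrails
    "end_user_workflow",      # changes to the end-user workflow
}

EVAL_DELTA_THRESHOLD = 0.10   # +/-10% performance change on evals


def requires_approval(change_type: str, eval_delta: float | None = None) -> bool:
    """Return True if a proposed change must go through formal review."""
    if change_type in APPROVAL_REQUIRED_CHANGES:
        return True
    # Any other change that moves eval performance by 10% or more also qualifies.
    return eval_delta is not None and abs(eval_delta) >= EVAL_DELTA_THRESHOLD


print(requires_approval("dependency_bump", eval_delta=-0.12))  # True
print(requires_approval("copy_update", eval_delta=0.02))       # False
```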

Assigning an accountable lead as the approver for each of these change types; a RACI structure can be used to formalize the roles of those who are consulted and informed.
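
For example, a RACI assignment could be captured in a simple mapping; the change types and role names below are illustrative assumptions, not prescribed roles:

```python
# Hypothetical RACI assignment per change type; all names are illustrative.
RACI = {
    "model_selection": {
        "accountable": "Head of AI Engineering",   # the approver of record
        "responsible": "ML Lead",
        "consulted": ["Security", "Legal"],
        "informed": ["Customer Success"],
    },
    "guardrail_add_remove": {
        "accountable": "Head of Trust & Safety",
        "responsible": "Safety Engineer",
        "consulted": ["ML Lead"],
        "informed": ["Support"],
    },
}


def approver_for(change_type: str) -> str:
    """Look up the single accountable lead who must sign off on a change."""
    return RACI[change_type]["accountable"]
```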

Documenting each approval in a repository retained for 365+ days, including what data was reviewed as part of the decision, for example evaluation results, internal or external red-teaming, or customer feedback.
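
A minimal approval record might look like the sketch below; the field names and retention helper are illustrative assumptions, with only the 365-day retention window taken from this control:

```python
# Hypothetical schema for an approval record retained in the repository.
from dataclasses import dataclass, field
from datetime import date, timedelta

RETENTION_DAYS = 365  # minimum retention window from the control text


@dataclass
class ApprovalRecord:
    change_id: str
    change_type: str
    approved_by: str        # the accountable lead who signed off
    approved_on: date
    evidence: list[str] = field(default_factory=list)  # e.g. eval results, red-team reports, customer feedback

    def retain_until(self) -> date:
        """Earliest date the record may be purged from the repository."""
        return self.approved_on + timedelta(days=RETENTION_DAYS)


record = ApprovalRecord(
    change_id="CHG-0142",
    change_type="meta_prompt_material",
    approved_by="Head of AI Engineering",
    approved_on=date(2025, 3, 1),
    evidence=["evals/2025-02-report.pdf", "redteam/external-feb.pdf"],
)
print(record.retain_until())  # 2026-03-01
```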

Implementing code signing and verification processes for AI models, libraries, and deployment artifacts to ensure only digitally signed components are approved for production use.
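
One possible verification step is sketched below: checking a detached Ed25519 signature on a model artifact before it is promoted to production. The choice of Ed25519, the Python cryptography library, and the file layout are assumptions for illustration, not requirements of this control:

```python
# Hypothetical gate that rejects unsigned or tampered artifacts at deploy time.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_artifact(artifact_path: str, signature_path: str, public_key_bytes: bytes) -> bool:
    """Return True only if the artifact's detached signature matches the release key."""
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    with open(artifact_path, "rb") as f:
        artifact = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, artifact)   # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False


# Example gate in a deployment pipeline (paths and key are placeholders):
# if not verify_artifact("model.onnx", "model.onnx.sig", release_key_bytes):
#     raise RuntimeError("Artifact failed signature verification; blocking deployment")
```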

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

Google Cloud
Phil Venables
Former CISO of Google Cloud

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
Dr. Christina Liaghati
MITRE ATLAS lead

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

Cisco
Hyrum Anderson
Senior Director, Security & AI

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Prof. Sanmi Koyejo
Lead for Stanford Trustworthy AI Research

"AIUC-1 standardizes how AI is adopted. That's powerful."

Orrick
John Bautista
Partner at Orrick and creator of the YC SAFE

"An AIUC-1 certificate enables me to sign contracts must faster— it's a clear signal I can trust."

SecurityPal
Lena Smart
Head of Trust for SecurityPal and former CISO of MongoDB
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.