AIUC-1
Principles

E. Accountability

Assign accountability, enforce oversight, create emergency responses, and vet suppliers

Requirements

AI failure plan for security breaches

E001 · Mandatory

Document an AI failure plan for AI privacy and security breaches, assigning accountable owners and establishing notification and remediation processes with third-party support as needed (e.g. legal, PR, insurers)

Keywords
Incident Response
Security
Privacy
Regulatory Deadlines

AI failure plan for harmful outputs

E002 · Mandatory

Document an AI failure plan for harmful AI outputs that cause significant customer harm, assigning accountable owners and establishing remediation processes with third-party support as needed (e.g. legal, PR, insurers)

Keywords
Incident Response
Emergency Response
Harmful Outputs
Hallucinations
Vendors

AI failure plan for hallucinations

E003 · Mandatory

Document an AI failure plan for hallucinated AI outputs that cause substantial customer financial loss, assigning accountable owners and establishing remediation processes with third-party support as needed (e.g. legal, PR, insurers)

Keywords
Hallucinations
Incident Response
Customer Loss

Assign accountability

E004 · Mandatory

Document which AI system changes across the development & deployment lifecycle require formal review or approval, assign a lead accountable for each, and document their approval with supporting evidence

Keywords
Decision Owners
Deployment

Assess cloud vs on-prem processing

E005 · Mandatory

Establish criteria for selecting a cloud provider and circumstances for on-premises processing, considering data sensitivity, regulatory requirements, security controls, and operational needs

Keywords
Deployment
Cloud Security
On-Premise Security
Data Residency

Conduct vendor due diligence

E006 · Mandatory

Establish AI vendor due diligence processes for foundation and upstream model providers, covering data handling, PII controls, security, and compliance

Keywords
Vendor Due Diligence
Open-Source
Foundation Models
Upstream Models

Document system change approvals

E007 · Optional

Define approval processes for material changes to AI systems (e.g. model versions, access controls, data sources) requiring formal review and sign-off

Keywords
Approvals
Workflows

Review internal processes

E008 · Mandatory

Establish regular internal reviews of key processes and document review records and approvals

Keywords
Internal Reviews
Documentation

Monitor 3rd-party access

E009 · Optional

Implement systems to monitor third-party access

Keywords
Access
Logins
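
To illustrate one way E009 can be met, the sketch below records each third-party access attempt as a structured audit event in Python. The vendor name, field names, and log destination are illustrative assumptions, not part of the requirement.

```python
import json
import logging
from datetime import datetime, timezone

# Audit logger for third-party access events; the log destination is an assumption.
audit_log = logging.getLogger("third_party_access")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("third_party_access.log"))


def record_third_party_access(vendor: str, principal: str, resource: str,
                              action: str, allowed: bool) -> None:
    """Append a structured record of a third-party access attempt to the audit log."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "vendor": vendor,        # e.g. an upstream model or tooling provider
        "principal": principal,  # the credential or service account used
        "resource": resource,    # what was accessed
        "action": action,        # read / write / invoke
        "allowed": allowed,      # whether the request was permitted
    }
    audit_log.info(json.dumps(event))


# Example: record an upstream model provider reading conversation transcripts.
record_third_party_access(
    vendor="example-model-provider",
    principal="svc-inference@example.com",
    resource="transcripts/2025-06",
    action="read",
    allowed=True,
)
```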

Establish AI acceptable use policy

E010 · Mandatory

Establish and implement an AI acceptable use policy

Keywords
Acceptable Use
Breach

Record processing locations

E011 · Mandatory

Document AI data processing locations

Keywords
Data Processing
Storage Location
Data Protections
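
As an illustration of E011, the sketch below keeps a simple register of where AI data is processed and exports it for compliance documentation. The fields and example entries are assumptions, not a schema mandated by the standard.

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class ProcessingLocation:
    """One entry in a register of where AI data is processed; fields are illustrative."""
    system: str           # internal name of the AI system or component
    provider: str         # cloud provider or on-premises environment
    region: str           # physical / legal jurisdiction of processing
    data_categories: str  # e.g. "customer transcripts, PII"
    stores_data: bool     # whether data is persisted in this location


register = [
    ProcessingLocation("support-assistant", "example-cloud", "eu-west-1",
                       "customer transcripts, PII", True),
    ProcessingLocation("eval-pipeline", "on-premises", "DE datacenter",
                       "anonymized transcripts", False),
]

# Export the register so it can be attached to compliance documentation.
print(json.dumps([asdict(entry) for entry in register], indent=2))
```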

Document regulatory compliance

E012 · Mandatory

Document applicable AI laws and standards, required data protections, and strategies for compliance

Keywords
Regulatory
EU
NY
NIST
ISO
GDPR

Implement quality management system

E013 · Optional

Establish a quality management system for high-risk AI systems proportionate to the size of the organization

Keywords
EU
Quality Management
Regulatory

Share transparency reports

E014 · Optional

Establish policies for sharing transparency reports with relevant stakeholders including regulators and customers

Keywords
Transparency

Log model activity

E015 · Mandatory

Maintain logs of AI system processes, actions, and model outputs, where permitted, to support incident investigation, auditing, and explanation of AI system behavior

Keywords
Explainability
Logs
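
One possible implementation of E015 is to wrap model calls so that each request and output is appended to a structured activity log. The sketch below is minimal; the field names and retention choices are assumptions and should follow what logging policy permits.

```python
import json
import logging
import time
import uuid

# Structured activity log for model calls; the schema is an assumption, not mandated by E015.
activity_log = logging.getLogger("model_activity")
activity_log.setLevel(logging.INFO)
activity_log.addHandler(logging.FileHandler("model_activity.jsonl"))


def logged_model_call(model_fn, prompt: str, *, model_name: str, user_id: str) -> str:
    """Invoke a model function and record the call to support investigation and audit."""
    request_id = str(uuid.uuid4())
    started = time.time()
    output = model_fn(prompt)
    activity_log.info(json.dumps({
        "request_id": request_id,
        "model": model_name,
        "user_id": user_id,           # pseudonymous ID; log only what policy permits
        "prompt_chars": len(prompt),  # store lengths or hashes if full text may not be retained
        "output": output,
        "latency_s": round(time.time() - started, 3),
    }))
    return output


# Example with a stand-in model function.
reply = logged_model_call(lambda p: f"Echo: {p}", "What is my order status?",
                          model_name="demo-model", user_id="user-123")
```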

Implement AI disclosure mechanisms

E016 · Mandatory

Implement clear disclosure mechanisms to inform users when they are interacting with AI systems rather than human operators

Keywords
Labelling
Transparency
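
A minimal sketch of a disclosure mechanism for E016: every chat response carries a machine-readable AI label, and a human-readable notice is shown at the start of a conversation. The wording and placement are assumptions; the requirement only asks that users are clearly informed they are interacting with an AI system.

```python
# The notice wording and placement below are assumptions; E016 only requires that
# users are clearly informed they are interacting with an AI system.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human operator."


def build_chat_response(model_output: str, first_turn: bool) -> dict:
    """Attach an explicit AI disclosure to a chat response payload."""
    response = {
        "message": model_output,
        "is_ai_generated": True,  # machine-readable label for downstream clients
    }
    if first_turn:
        # Show the human-readable notice at the start of the conversation.
        response["notice"] = AI_DISCLOSURE
    return response


print(build_chat_response("Your order ships tomorrow.", first_turn=True))
```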

Document system transparency policy

E017 · Optional

Establish a system transparency policy and maintain a repository of model cards, datasheets, and interpretability reports for major systems

Keywords
Transparency
System Cards
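
To make the repository in E017 concrete, the sketch below models one model-card entry per major system. The fields follow common model-card practice and are assumptions, not a schema defined by AIUC-1.

```python
from dataclasses import dataclass, field


@dataclass
class ModelCardEntry:
    """One repository entry per major system; fields follow common model-card practice."""
    system_name: str
    model_version: str
    intended_use: str
    known_limitations: list[str] = field(default_factory=list)
    evaluation_summary: str = ""
    datasheet_uri: str = ""                 # link to the training-data datasheet
    interpretability_report_uri: str = ""   # link to the interpretability report


repository = {
    "support-assistant": ModelCardEntry(
        system_name="support-assistant",
        model_version="2025-06-01",
        intended_use="Answering customer support questions about orders",
        known_limitations=["May hallucinate policy details", "English only"],
        evaluation_summary="Passed internal harmful-output and hallucination evals",
        datasheet_uri="https://example.internal/datasheets/support-assistant",
    ),
}
```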

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

Google Cloud
Phil Venables
Former CISO of Google Cloud

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
Dr. Christina Liaghati
MITRE ATLAS lead

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

Cisco
Hyrum Anderson
Senior Director, Security & AI

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Prof. Sanmi Koyejo
Lead for Stanford Trustworthy AI Research

"AIUC-1 standardizes how AI is adopted. That's powerful."

Orrick
John Bautista
Partner at Orrick and creator of the YC SAFE

"An AIUC-1 certificate enables me to sign contracts must faster— it's a clear signal I can trust."

SecurityPal
Lena Smart
Head of Trust for SecurityPal and former CISO of MongoDB
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.