AIUC-1
Principles

C. Safety

Prevent harmful AI outputs and brand risk through testing, monitoring and safeguards

Requirements

Define AI risk taxonomy  →

C001
·
Mandatory

Establish a risk taxonomy that categorizes risks within harmful, out-of-scope, and hallucinated outputs, tool calls, and other risks based on application-specific usage

Keywords
Risk Taxonomy
Severity Rating

Conduct pre-deployment testing  →

C002
·
Mandatory

Conduct internal testing of AI systems prior to deployment across risk categories (including high-risk, harmful, hallucinated, and out-of-scope outputs and tool calls) for system changes requiring formal review or approval

Keywords
Internal Testing
Pre-Deployment Testing

Prevent harmful outputs  →

C003
·
Mandatory

Implement safeguards or technical controls to prevent harmful outputs including distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception

Keywords
Harmful Outputs
Distressed
Angry
Advice
Offensive
Bias

Prevent out-of-scope outputs  →

C004
·
Mandatory

Implement safeguards or technical controls to prevent out-of-scope outputs (e.g. political discussion, healthcare advice)

Keywords
Out-of-Scope
Political Discussion
Technical Controls

Prevent other high risk outputs  →

C005
·
Mandatory

Implement safeguards or technical controls to prevent additional high-risk outputs as defined in risk taxonomy

Keywords
High-Risk Outputs
Risk Taxonomy
Technical Controls

Prevent output vulnerabilities  →

C006
·
Mandatory

Implement safeguards to prevent security vulnerabilities in outputs from impacting users

Keywords
Harmful Outputs
Code Injection
Data Exfiltration

Flag high risk recommendations  →

C007
·
Optional

Implement an alerting system that flags high-risk recommendations for human review

Keywords
Human Review
Escalation

Monitor AI risk categories  →

C008
·
Optional

Implement monitoring of AI systems across risk categories

Keywords
Monitoring
High-Risk Outputs

Collect real-time feedback  →

C009
·
Optional

Implement mechanisms to enable real-time user feedback collection and intervention mechanisms

Keywords
Feedback
Intervention
User Control
Transparency

3rd-party testing for harmful outputs  →

C010
·
Mandatory

Appoint expert third-parties to evaluate system robustness to harmful outputs including distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception at least every 3 months

Keywords
Harmful Outputs
Distressed
Angry
Advice
Offensive
Bias
Risk Severity
Toxigen
Third-Party Testing

3rd-party testing for out-of-scope outputs  →

C011
·
Mandatory

Appoint expert third-parties to evaluate system robustness to out-of-scope outputs at least every 3 months (e.g. political discussion, healthcare advice)

Keywords
Out-of-Scope
Political Discussion
Third-Party Testing

3rd-party testing for other risk  →

C012
·
Mandatory

Appoint expert third-parties to evaluate system robustness to additional high-risk outputs as defined in risk taxonomy at least every 3 months

Keywords
High-Risk Outputs
Risk Taxonomy
Third-Party Testing

AIUC-1 is built with industry leaders

Phil Venables

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

Google Cloud
Phil Venables
Former CISO of Google Cloud
Dr. Christina Liaghati

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
Dr. Christina Liaghati
MITRE ATLAS lead
Hyrum Anderson

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

Cisco
Hyrum Anderson
Senior Director, Security & AI
Prof. Sanmi Koyejo

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Prof. Sanmi Koyejo
Lead for Stanford Trustworthy AI Research
John Bautista

"AIUC-1 standardizes how AI is adopted. That's powerful."

Orrick
John Bautista
Partner at Orrick and creator of the YC SAFE
Lena Smart

"An AIUC-1 certificate enables me to sign contracts must faster— it's a clear signal I can trust."

SecurityPal
Lena Smart
Head of Trust for SecurityPal and former CISO of MongoDB
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.