AIUC-1
C001

Define AI risk taxonomy

Establish a risk taxonomy that categorizes harmful, out-of-scope, and hallucinated outputs, tool calls, and other risks based on application-specific usage

Keywords: Risk Taxonomy, Severity Rating
Application: Mandatory
Frequency: Every 3 months
Type: Preventative
Crosswalks
EU AI Act
Article 9: Risk Management System
ISO/IEC 42001
A.5.2: AI system impact assessment process
A.5.3: Documentation of AI system impact assessments
A.5.4: Assessing AI system impact on individuals or groups of individuals
A.5.5: Assessing societal impacts of AI systems
NIST AI RMF
GOVERN 1.3: Risk management processes
GOVERN 1.4: Risk management governance
GOVERN 4.2: Risk documentation
GOVERN 6.1: Third-party risk policies
MANAGE 1.2: Risk prioritization
MANAGE 1.3: Risk response planning
MANAGE 1.4: Residual risk documentation
MAP 1.5: Risk tolerance
MAP 5.1: Impact assessment
MEASURE 1.1: Risk metrics selection
MEASURE 2.10: Privacy risk assessment
MEASURE 2.11: Fairness and bias
MEASURE 3.1: Emergent risk tracking

Control activities

Defining risk categories with severity levels and examples based on industry and deployment context. For example, classifying harmful outputs such as distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception, and identifying other high-risk use cases such as safety-critical instructions, legal recommendations, and financial advice.
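
As an illustrative sketch (not part of the AIUC-1 requirement), a taxonomy along these lines can be recorded as structured data so it can be versioned and reviewed. The category names, severity labels, and examples below are assumptions for illustration only.

# Illustrative sketch only: category names, severity labels, and examples are
# assumptions, not values prescribed by AIUC-1.
RISK_TAXONOMY = {
    "harmful_output": {
        "severity": "high",
        "examples": ["distressed or angry responses", "offensive content",
                     "bias", "deception", "high-risk advice"],
    },
    "out_of_scope_output": {
        "severity": "medium",
        "examples": ["legal recommendations", "financial advice",
                     "safety-critical instructions"],
    },
    "hallucination": {
        "severity": "medium",
        "examples": ["fabricated policy details", "invented citations"],
    },
    "tool_call_risk": {
        "severity": "high",
        "examples": ["unauthorized actions", "destructive or irreversible calls"],
    },
}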

Aligning risk taxonomy with external frameworks and standards. For example, NIST AI RMF functions, EU AI Act Article 9, ISO/IEC 42001 controls.
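
One possible way to keep such an alignment auditable is to store the crosswalk alongside each internal category. The sketch below is a hypothetical schema; the specific category-to-control pairings are illustrative assumptions, not mappings defined by the standard.

# Hypothetical crosswalk record mapping internal categories to external frameworks.
# Framework identifiers mirror the crosswalks listed above; the schema and the
# specific pairings are illustrative assumptions.
CROSSWALK = {
    "harmful_output": {
        "EU AI Act": ["Article 9"],
        "ISO/IEC 42001": ["A.5.4", "A.5.5"],
        "NIST AI RMF": ["MAP 5.1", "MEASURE 2.11"],
    },
    "hallucination": {
        "EU AI Act": ["Article 9"],
        "ISO/IEC 42001": ["A.5.2"],
        "NIST AI RMF": ["MEASURE 1.1", "MEASURE 3.1"],
    },
}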

Establishing severity grading appropriate to organizational context and risk tolerance. For example, implementing a consistent scoring methodology across risk categories and defining thresholds for flagging and human review.
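
A minimal sketch of a consistent scoring methodology, assuming a simple likelihood-times-impact score; the scale and thresholds below are placeholder assumptions rather than values prescribed by AIUC-1.

# Placeholder scoring sketch: scale and thresholds are assumptions chosen for
# illustration, not requirements of the standard.
FLAG_THRESHOLD = 6          # scores at or above this are flagged for monitoring
HUMAN_REVIEW_THRESHOLD = 9  # scores at or above this require human review

def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood (1-4) and impact (1-4) into a single score."""
    return likelihood * impact

def triage(likelihood: int, impact: int) -> str:
    """Map a score to an action consistent with the defined thresholds."""
    score = risk_score(likelihood, impact)
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    if score >= FLAG_THRESHOLD:
        return "flag"
    return "accept"

# Example: a likely, high-impact hallucination is routed to human review.
assert triage(likelihood=3, impact=4) == "human_review"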

Maintaining taxonomy currency with documented change management. For example, reviewing and updating risk categories quarterly or when new threat patterns emerge, adjusting risk thresholds, and incorporating lessons from incident response and industry benchmarks.

Identifying additional risk categories that are considered harmful given the nature of operations. For example, hallucinations or out-of-scope content.

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."
Phil Venables, former CISO of Google Cloud

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."
Dr. Christina Liaghati, MITRE ATLAS lead

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."
Hyrum Anderson, Senior Director, Security & AI at Cisco

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
Prof. Sanmi Koyejo, Lead for Stanford Trustworthy AI Research

"AIUC-1 standardizes how AI is adopted. That's powerful."
John Bautista, Partner at Orrick and creator of the YC SAFE

"An AIUC-1 certificate enables me to sign contracts much faster— it's a clear signal I can trust."
Lena Smart, Head of Trust for SecurityPal and former CISO of MongoDB
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.