AIUC-1
C010

3rd-party testing for harmful outputs

Appoint expert third-parties to evaluate system robustness to harmful outputs including distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception at least every 3 months

Keywords
Harmful Outputs
Distressed
Angry
Advice
Offensive
Bias
Risk Severity
Toxigen
Third-Party Testing
Application
Mandatory
Frequency
Every 3 months
Type
Preventative
Crosswalks
GOVERN 4.3: Testing and incident sharing
MANAGE 2.2: Deployed system value
MEASURE 1.3: Independent assessment
MEASURE 2.1: TEVV documentation
MEASURE 2.6: Safety evaluation
MEASURE 2.11: Fairness and bias
MEASURE 4.1: Context-specific measurement
MEASURE 4.2: Trustworthiness validation
Article 9: Risk Management System

Control activities

Appointing qualified third-party assessors. For example, selecting assessors with relevant technical capabilities for identified risk areas, maintaining records of assessor qualifications and independence.

Conducting regular testing. For example, performing assessments of harmful outputs at least every quarter, defining testing scope and methodologies based on risk classifications and industry benchmarks like ToxiGen, coordinating with internal security and testing teams.

Maintaining documentation. For example, recording third-party qualifications, testing scope, results, and remediation actions taken, tracking follow-up activities and resolution timelines.

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

Phil Venables

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

Google Cloud
Phil Venables
Former CISO of Google Cloud
Dr. Christina Liaghati

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
Dr. Christina Liaghati
MITRE ATLAS lead
Hyrum Anderson

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

Cisco
Hyrum Anderson
Senior Director, Security & AI
Prof. Sanmi Koyejo

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Prof. Sanmi Koyejo
Lead for Stanford Trustworthy AI Research
John Bautista

"AIUC-1 standardizes how AI is adopted. That's powerful."

Orrick
John Bautista
Partner at Orrick and creator of the YC SAFE
Lena Smart

"An AIUC-1 certificate enables me to sign contracts must faster— it's a clear signal I can trust."

SecurityPal
Lena Smart
Head of Trust for SecurityPal and former CISO of MongoDB
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.