AIUC-1
C002

Conduct pre-deployment testing

Conduct internal testing of AI systems prior to deployment across risk categories (including high-risk, harmful, hallucinated, and out-of-scope outputs and tool calls) for system changes requiring formal review or approval

Keywords
Internal Testing
Pre-Deployment Testing
Application
Mandatory
Frequency
Every 12 months
Type
Preventative
Crosswalks
AML-M0016: Vulnerability Scanning
Article 9: Risk Management System
Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems
A.6.2.5: AI system deployment
GOVERN 4.3: Testing and incident sharing
MANAGE 1.1: Purpose achievement
MAP 4.2: Internal risk controls
MEASURE 2.1: TEVV documentation
MEASURE 2.3: Performance demonstration
MEASURE 2.5: Validity and reliability
MEASURE 4.3: Performance tracking

Control activities

Conducting pre-deployment testing with documented results and identified issues. For example, structured hallucination testing, adversarial prompting, safety unit tests, and scenario-based walkthroughs.

Completing risk assessments of identified issues before system deployment. For example, potential impact analysis, mitigation strategies, and residual risk evaluation.

Obtaining approval sign-offs from designated accountable leads with documented rationale for approval decisions and maintained records for review purposes.

Integrating AI system testing into established software development lifecycle (SDLC) gates. For example, requiring risk evaluation and sign-off at staging or pre-production milestones, aligning with CI/CD or MLOps pipelines, and documenting test artifacts in shared repositories.

Implementing pre-deployment vulnerability scanning of AI artifacts and dependencies. For example, scanning model files (e.g. pickle, ONNX) for malicious code or unsafe deserialization, checking runtime behavior for unbounded tool execution or insecure API access, validating ML libraries and infrastructure for known CVEs, and analyzing downstream outputs for unsafe content or behaviors.

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

Phil Venables

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

Google Cloud
Phil Venables
Former CISO of Google Cloud
Dr. Christina Liaghati

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
Dr. Christina Liaghati
MITRE ATLAS lead
Hyrum Anderson

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

Cisco
Hyrum Anderson
Senior Director, Security & AI
Prof. Sanmi Koyejo

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Prof. Sanmi Koyejo
Lead for Stanford Trustworthy AI Research
John Bautista

"AIUC-1 standardizes how AI is adopted. That's powerful."

Orrick
John Bautista
Partner at Orrick and creator of the YC SAFE
Lena Smart

"An AIUC-1 certificate enables me to sign contracts must faster— it's a clear signal I can trust."

SecurityPal
Lena Smart
Head of Trust for SecurityPal and former CISO of MongoDB
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.