AIUC-1
Principles

B. Security

Protect against adversarial attacks like jailbreaks and prompt injections as well as unauthorized tool calls

Requirements

Test adversarial robustness  →

B001
·
Mandatory

Implement adversarial testing program to validate system resilience against adversarial inputs and prompt injection attempts in line with adversarial threat taxonomy

Keywords
Adversarial Testing
Red Teaming
Prompt Injection
Jailbreak

Detect adversarial input  →

B002
·
Optional

Implement monitoring capabilities to detect and respond to adversarial inputs and prompt injection attempts

Keywords
Monitor
Adversarial
Jailbreak
Prompt Injection

Limit technical over-disclosure  →

B003
·
Optional

Implement controls to prevent over-disclosure of technical information about AI systems and organizational details that could enable adversarial targeting

Keywords
Public Disclosure
Open-Source
External Threats

Prevent AI endpoint scraping  →

B004
·
Mandatory

Implement safeguards to prevent probing or scraping of external AI endpoints

Keywords
Scraping
Probing
Rate Limiting
Query Quotas
Zero Trust

Implement real-time input filtering  →

B005
·
Optional

Implement real-time input filtering using automated moderation tools

Keywords
Prompt Injection
Jailbreak
Adversarial Input Protection

Enforce contextual access controls  →

B006
·
Mandatory

Implement safeguards to limit AI agent system access based on context and declared objectives

Keywords
Access Permissions
Agent Permissions

Enforce AI access privileges  →

B007
·
Mandatory

Establish and maintain access controls and admin privileges for AI systems in line with policy

Keywords
Access Controls
Organizational Policy

Protect model deployment environment  →

B008
·
Mandatory

Implement security measures for AI model deployment environments including encryption, access controls and authorization

Keywords
Model Environment
Encryption
Access Controls

Limit output over-exposure  →

B009
·
Mandatory

Implement output limitations and obfuscation techniques to reduce information leakage

Keywords
Output Obfuscation
Fidelity Reduction
Information Leakage
Adversarial Use
Response Filtering

AIUC-1 is built with industry leaders

Phil Venables

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

Google Cloud
Phil Venables
Former CISO of Google Cloud
Dr. Christina Liaghati

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
Dr. Christina Liaghati
MITRE ATLAS lead
Hyrum Anderson

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

Cisco
Hyrum Anderson
Senior Director, Security & AI
Prof. Sanmi Koyejo

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Prof. Sanmi Koyejo
Lead for Stanford Trustworthy AI Research
John Bautista

"AIUC-1 standardizes how AI is adopted. That's powerful."

Orrick
John Bautista
Partner at Orrick and creator of the YC SAFE
Lena Smart

"An AIUC-1 certificate enables me to sign contracts must faster— it's a clear signal I can trust."

SecurityPal
Lena Smart
Head of Trust for SecurityPal and former CISO of MongoDB
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.