AIUC-1
C004

Prevent out-of-scope outputs

Implement safeguards or technical controls to prevent out-of-scope outputs (e.g. political discussion, healthcare advice)

Keywords: Out-of-Scope, Political Discussion, Technical Controls
Application: Mandatory
Frequency: Every 12 months
Type: Preventative
Crosswalks:
EU AI Act Article 72: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems
NIST AI RMF MAP 2.2: Knowledge limits
NIST AI RMF MAP 3.4: Operator proficiency
OWASP LLM05:2025 - Improper Output Handling

Control activities

Implementing topic boundary enforcement. For example, detecting and redirecting conversations that fall outside the intended use cases defined in the AI acceptable use policy, and blocking prohibited discussion areas such as political topics or out-of-scope advice.
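As an illustration, the minimal Python sketch below gates messages against a static pattern list. The topic names, patterns, and redirect text are hypothetical; production systems would more likely use a trained classifier or an LLM-based moderation call than regexes.

```python
import re

# Hypothetical blocked-topic patterns, for illustration only.
BLOCKED_TOPICS = {
    "political_discussion": re.compile(
        r"\b(election|ballot|political party|who should I vote for)\b", re.I),
    "healthcare_advice": re.compile(
        r"\b(diagnos\w*|prescri\w*|dosage|symptom)\b", re.I),
}

REDIRECT_MESSAGE = (
    "I can only help with questions about this product. "
    "I'm not able to discuss that topic."
)

def check_scope(message: str) -> str | None:
    """Return the violated topic name, or None if the message is in scope."""
    for topic, pattern in BLOCKED_TOPICS.items():
        if pattern.search(message):
            return topic
    return None
```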

Establishing scope violation response procedures. For example, sending automated redirection messages and escalating persistent attempts.
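A sketch of such escalation logic, assuming a per-session violation counter and a hypothetical escalate_to_human_review hook (stubbed here); the threshold and message wording are illustrative:

```python
from collections import defaultdict

ESCALATION_THRESHOLD = 3  # assumed policy: escalate after three violations

_violations: dict[str, int] = defaultdict(int)

def escalate_to_human_review(session_id: str, topic: str) -> None:
    # Placeholder hook: a real deployment would open a ticket or page an operator.
    print(f"[escalation] session={session_id} topic={topic}")

def respond_to_violation(session_id: str, topic: str) -> str:
    """Return an automated redirection message, escalating persistent attempts."""
    _violations[session_id] += 1
    if _violations[session_id] >= ESCALATION_THRESHOLD:
        escalate_to_human_review(session_id, topic)
        return ("I can't continue with this topic. "
                "This conversation has been flagged for review.")
    return ("That's outside what I can help with. "
            "Is there something about the product I can assist with?")
```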

Maintaining scope monitoring and adjustment capabilities. For example, tracking boundary violations and updating restrictions as new issues emerge.
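One way the tracking side could look, assuming a simple JSONL log (the file path and event fields are illustrative; a real deployment would emit to a metrics pipeline):

```python
import json
import time
from collections import Counter
from pathlib import Path

LOG_PATH = "scope_violations.jsonl"  # assumed local log file

def log_violation(session_id: str, topic: str) -> None:
    """Append a structured violation event for later review."""
    event = {"timestamp": time.time(), "session": session_id, "topic": topic}
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def violations_by_topic() -> Counter:
    """Aggregate logged violations so emerging issues can drive rule updates."""
    counts: Counter = Counter()
    path = Path(LOG_PATH)
    if not path.exists():
        return counts
    for line in path.read_text(encoding="utf-8").splitlines():
        counts[json.loads(line)["topic"]] += 1
    return counts
```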

Implementing user education on system scope and limitations. For example, displaying onboarding tooltips, publishing usage guidelines or FAQs, embedding in-context hints to clarify intended capabilities, and highlighting unsupported domains to reduce misuse.
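A minimal illustration of an in-context scope notice; the supported and unsupported lists are placeholders an operator would tailor to their own deployment:

```python
SUPPORTED = ["order status", "billing", "product setup"]             # illustrative
UNSUPPORTED = ["medical advice", "legal advice", "political topics"]  # illustrative

def onboarding_notice() -> str:
    """First-turn message clarifying scope before the user starts typing."""
    return (
        f"I can help with: {', '.join(SUPPORTED)}. "
        f"I can't help with: {', '.join(UNSUPPORTED)}."
    )
```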

Organizations can submit alternative evidence demonstrating how they meet the requirement.
