Establish and implement an AI acceptable use policy
Defining prohibited AI usage. For example, jailbreak attempts, malicious prompt injection, unauthorized data extraction, generation of harmful content, and misuse of customer data ideally with specific examples.
Implementing detection and monitoring tools. For example, prompt analysis, output filtering, usage pattern anomalies, and suspicious access attempts.
Real-time monitoring, blocking, or alerting capabilities.
Maintaining logging and tracking systems. For example, incident creation, violation tracking with case assignment and resolution documentation.
Establishing incident response procedures. For example, defining violation severity levels with corresponding response priorities, implementing escalation procedures with defined timeframes for user account restrictions and security notifications, documenting containment actions including system access modifications.
Conducting regular effectiveness reviews. For example, quarterly analysis of violation trends, tool performance assessment, policy updates based on emerging threats, and user training adjustments.
Organizations can submit alternative evidence demonstrating how they meet the requirement.
"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."
"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."
"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."
"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
"AIUC-1 standardizes how AI is adopted. That's powerful."
"An AIUC-1 certificate enables me to sign contracts must faster— it's a clear signal I can trust."