Protect against data leakage, IP leakage, and training on user data without consent
Establish and communicate AI input data policies covering how customer data is used for model training and inference processing, data retention periods, and customer data rights
Establish policies for AI output ownership, usage, opt-out, and deletion, and communicate these policies to customers
Implement safeguards to limit AI agent data access to task-relevant information based on user roles and context (see the first sketch after this list)
Implement safeguards or technical controls to prevent AI systems from leaking company intellectual property or confidential information
Implement safeguards to prevent cross-customer data exposure when combining customer data from multiple sources for AI model training
Establish safeguards to prevent personal data leakage through AI outputs (see the second sketch after this list)
Implement safeguards and technical controls to prevent AI outputs from violating copyrights, trademarks, or other third-party intellectual property rights
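One way to approach the requirement on scoping AI agent data access (and the related requirement on cross-customer data exposure) is a filter applied before any record reaches the agent. The following is a minimal Python sketch only, not part of AIUC-1: the Document, UserContext, and ROLE_CLEARANCE names are illustrative assumptions about how tenant ownership, data classification, and role clearances might be represented in a given system.

```python
# Minimal sketch of a role- and tenant-scoped retrieval filter.
# Document, UserContext, and ROLE_CLEARANCE are illustrative assumptions,
# not names defined by AIUC-1 or any particular framework.

from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    customer_id: str      # tenant that owns the record
    classification: str   # e.g. "public", "internal", "confidential"

@dataclass
class UserContext:
    user_id: str
    customer_id: str
    roles: set[str] = field(default_factory=set)

# Which data classifications each role is cleared to expose to the agent.
ROLE_CLEARANCE = {
    "support_agent": {"public", "internal"},
    "account_admin": {"public", "internal", "confidential"},
}

def scope_documents(user: UserContext, candidates: list[Document]) -> list[Document]:
    """Return only the documents the agent may use for this user's task."""
    allowed: set[str] = set()
    for role in user.roles:
        allowed |= ROLE_CLEARANCE.get(role, set())
    return [
        doc for doc in candidates
        if doc.customer_id == user.customer_id   # block cross-customer exposure
        and doc.classification in allowed        # respect role clearance
    ]
```

The key design choice in this sketch is that the filter runs before any record enters the agent's context, so the model never sees data the requesting user is not cleared for, regardless of how the prompt is phrased.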
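For the requirement on preventing personal data leakage through AI outputs, a common control is an output guardrail that screens responses before they reach the user. The sketch below is an illustration under simplifying assumptions: the regular expressions are deliberately basic placeholders, and production deployments typically pair pattern matching with a dedicated PII-detection service and audit logging of redactions.

```python
# Minimal sketch of an output guardrail that screens model responses for
# personal data before returning them. The patterns below are simplistic
# placeholders for illustration only.

import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace suspected personal data with placeholders; report what matched."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

if __name__ == "__main__":
    raw = "Reach Jane at jane.doe@example.com; her SSN is 123-45-6789."
    safe, hits = redact_pii(raw)
    print(safe)   # personal data replaced with [REDACTED ...] placeholders
    print(hits)   # ['email', 'us_ssn']
```

Whether a flagged response is redacted, blocked, or escalated is a policy decision; the point of the sketch is that the check sits between the model and the user, so nothing leaves the system unscreened.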
"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."
"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."
"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."
"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
"AIUC-1 standardizes how AI is adopted. That's powerful."
"An AIUC-1 certificate enables me to sign contracts must faster— it's a clear signal I can trust."