Document which AI system changes across the development and deployment lifecycle require formal review or approval, assign an accountable lead for each, and record the approval with supporting evidence
Defining AI system changes that require approval, including model selection, material changes to the meta prompt, adding or removing guardrails, changes to end-user workflows, and other changes that drive material impact on performance. For example, a +/-10% change in performance on evaluations.
Assigning an accountable lead as the approver for each of these changes. This can follow a RACI (Responsible, Accountable, Consulted, Informed) structure to formalize the roles of those consulted and informed.
Documenting approval in a repository retained for 365+ days, including what data was reviewed as part of the decision. For example, results of evaluations, internal or external red-teaming, or customer feedback.
Implementing code signing and verification processes for AI models, libraries, and deployment artifacts to ensure only digitally signed components are approved for production use.
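The signing and verification step above can be sketched in miniature. This example uses a symmetric HMAC signature over a SHA-256 digest of an artifact purely for illustration; a production pipeline would use asymmetric signatures (for example Sigstore/cosign or GPG) so that verifiers never hold the signing key. The key and artifact names here are hypothetical, not part of the standard.

```python
import hashlib
import hmac

# Hypothetical signing key; in production this would be an asymmetric
# private key held in a KMS/HSM, not a shared secret in source code.
SIGNING_KEY = b"example-release-key"

def sign_artifact(artifact: bytes) -> str:
    """Return a hex signature over the artifact's SHA-256 digest."""
    digest = hashlib.sha256(artifact).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_artifact(artifact: bytes, signature: str) -> bool:
    """Approve an artifact for production only if its signature matches."""
    expected = sign_artifact(artifact)
    return hmac.compare_digest(expected, signature)

# Usage: sign model weights at release time, verify before deployment.
model_weights = b"...model bytes..."
sig = sign_artifact(model_weights)
assert verify_artifact(model_weights, sig)              # untampered: deploy
assert not verify_artifact(model_weights + b"x", sig)   # tampered: reject
```

The same gate would apply to libraries and other deployment artifacts: anything without a valid signature is blocked from production.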
Organizations can submit alternative evidence demonstrating how they meet the requirement.
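The "material change" trigger and the retained approval record described above could be encoded along these lines. The field names, the JSON-lines store, and the way the +/-10% threshold is computed are illustrative assumptions, not mandated by the standard.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

MATERIAL_DELTA = 0.10  # +/-10% eval-performance change triggers review

def is_material_change(baseline_score: float, new_score: float) -> bool:
    """Flag a change whose eval performance moves by 10% or more."""
    if baseline_score == 0:
        return True  # hypothetical policy: no baseline means review it
    return abs(new_score - baseline_score) / baseline_score >= MATERIAL_DELTA

@dataclass
class ApprovalRecord:
    change: str
    accountable_lead: str  # the "A" in a RACI assignment
    evidence: list         # e.g. eval results, red-team findings, feedback
    approved_at: str

record = ApprovalRecord(
    change="meta prompt update v2.3",
    accountable_lead="head-of-ml",
    evidence=["eval run #118: 0.81 -> 0.69"],
    approved_at=datetime.now(timezone.utc).isoformat(),
)
# Appending JSON lines to a store retained for 365+ days gives a
# simple, auditable approval log.
line = json.dumps(asdict(record))

assert is_material_change(0.81, 0.69)      # -14.8%: requires approval
assert not is_material_change(0.81, 0.78)  # -3.7%: below threshold
```

An auditor, or an organization submitting alternative evidence, would then point at these records to show who approved each change and on what basis.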
"We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."
"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."
"Today, enterprises can't reliably assess the security of their AI vendors — we need a standard to address this gap."
"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
"AIUC-1 standardizes how AI is adopted. That's powerful."
"An AIUC-1 certificate enables me to sign contracts much faster — it's a clear signal I can trust."