Assign accountability, enforce oversight, create emergency responses, and vet suppliers
Document an AI failure plan for AI privacy and security breaches, assigning accountable owners and establishing notification and remediation procedures with third-party support as needed (e.g. legal, PR, insurers)
Document an AI failure plan for harmful AI outputs that cause significant customer harm, assigning accountable owners and establishing remediation procedures with third-party support as needed (e.g. legal, PR, insurers)
Document an AI failure plan for hallucinated AI outputs that cause substantial customer financial loss, assigning accountable owners and establishing remediation procedures with third-party support as needed (e.g. legal, PR, insurers)
Document which AI system changes across the development and deployment lifecycle require formal review or approval, assign a lead accountable for each, and record their approval with supporting evidence
Establish criteria for selecting a cloud provider, and the circumstances that warrant on-premises processing, considering data sensitivity, regulatory requirements, security controls, and operational needs
Establish AI vendor due diligence processes for foundation and upstream model providers, covering data handling, PII controls, security, and compliance
Define approval processes for material changes to AI systems (e.g. model versions, access controls, data sources) requiring formal review and sign-off
Establish regular internal reviews of key processes and document review records and approvals
Implement systems to monitor third-party access
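A control like this can be prototyped as a simple scan over structured access logs. The sketch below is a minimal illustration, not a prescribed implementation: the `actor` and `resource` fields and the vendor account names are hypothetical, and a real deployment would feed alerts into an existing SIEM.

```python
import json

# Hypothetical list of known third-party service accounts to watch.
THIRD_PARTY_ACTORS = {"vendor-llm-svc", "vendor-analytics-svc"}

def flag_third_party_access(log_lines):
    """Return access-log entries performed by known third-party actors."""
    flagged = []
    for line in log_lines:
        entry = json.loads(line)
        if entry.get("actor") in THIRD_PARTY_ACTORS:
            flagged.append(entry)
    return flagged

# Example: two log entries, one from a third-party vendor account.
logs = [
    '{"actor": "alice", "resource": "model-weights"}',
    '{"actor": "vendor-llm-svc", "resource": "customer-pii"}',
]
flagged = flag_third_party_access(logs)
```

In practice the flagged entries would be reviewed against the vendor's contracted scope of access rather than simply collected.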
Establish and implement an AI acceptable use policy
Document AI data processing locations
Document applicable AI laws and standards, required data protections, and strategies for compliance
Establish a quality management system for high-risk AI systems proportionate to the size of the organization
Establish policies for sharing transparency reports with relevant stakeholders including regulators and customers
Maintain logs of AI system processes, actions, and model outputs where permitted to support incident investigation, auditing, and explanation of AI system behavior
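One common shape for such logs is an append-only stream of structured, timestamped records per model interaction. The following is a hedged sketch under assumed field names (`model_id`, `action`, `prompt`, `output`); real systems would also handle retention limits and any redaction required where logging outputs is not permitted.

```python
import json
import time
import uuid

def log_model_event(log, model_id, prompt, output, action="generate"):
    """Append a structured, timestamped record of one model interaction."""
    record = {
        "event_id": str(uuid.uuid4()),   # unique id for incident investigation
        "timestamp": time.time(),        # when the interaction occurred
        "model_id": model_id,            # which model/version produced the output
        "action": action,                # what the system did
        "prompt": prompt,
        "output": output,
    }
    log.append(json.dumps(record))       # serialize for an append-only store
    return record

# Example: record a single customer-facing generation.
audit_log = []
rec = log_model_event(
    audit_log,
    model_id="support-model-v2",
    prompt="What is your refund policy?",
    output="Refunds are available within 30 days.",
)
```

Keeping records serialized and append-only makes them usable later for auditing and for explaining specific AI system behavior.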
Implement clear disclosure mechanisms to inform users when they are interacting with AI systems rather than human operators
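At the API level, one way to make this disclosure hard to omit is to attach it to every outgoing message by construction. The sketch below assumes a hypothetical response envelope; the field names and disclosure wording are illustrative only.

```python
def with_ai_disclosure(message):
    """Wrap an outgoing message in an envelope that discloses AI involvement."""
    return {
        "message": message,
        "is_ai_generated": True,  # machine-readable flag for client UIs
        "disclosure": "You are chatting with an AI assistant, not a human agent.",
    }

# Example: every response passes through the wrapper before delivery.
resp = with_ai_disclosure("Your order has shipped.")
```

Centralizing the disclosure in one wrapper, rather than relying on each handler to add it, makes the control easier to test and audit.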
Establish a system transparency policy and maintain a repository of model cards, datasheets, and interpretability reports for major systems
"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."
"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."
"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."
"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
"AIUC-1 standardizes how AI is adopted. That's powerful."
"An AIUC-1 certificate enables me to sign contracts much faster— it's a clear signal I can trust."