Share your details and let us know how you hope to use AIUC-1
I am interested in...
“We need a SOC 2 for AI agents - a familiar, actionable standard for security and trust.”
Former CISO
Executive briefing on AI security & latest research, presented by Sanmi Koyejo (Stanford Trustworthy AI Research Lab) & the AIUC-1 Consortium
Demand for voice agents is growing rapidly – but they come with unique risks that must be managed appropriately. This is how we evolved AIUC-1 to cover voice agent security, safety and reliability, working with Technical Contributors from Stanford University, ElevenLabs and Gray Swan.
As AI evolves, new standards are needed to ensure security, safety, and reliability. Two standards have emerged with distinct approaches: ISO 42001 focuses on establishing AI governance frameworks and management systems, while AIUC-1 focuses on validating the robustness of safeguards through independent technical testing.
AIUC-1 is formally updated each quarter to ensure that the standard evolves as technology, risk, and regulation evolve. The standard is updated in collaboration with AIUC-1 Technical Contributors, the AIUC-1 Consortium and external peer reviewers.
Dr. Keri Pearlson at MIT Sloan and Rajiv Dattani at AIUC have written a paper on AI-proofing the Board and the C-Suite. The focus is on developing a new framework to enable at-scale enterprise AI adoption.
MITRE is now a technical contributor to AIUC-1. MITRE creates and maintains MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), a leading AI security framework.
The partnership enables organizations to use AIUC-1 to quickly identify risks associated with AI agent deployments, quantify those risks, and prioritize mitigations.
We are excited to announce Cisco as a technical contributor to AIUC-1. The standard will operationalize Cisco's Integrated AI Security and Safety Framework (AI Security Framework), enabling more secure AI adoption.