AIUC-1
E012

Document regulatory compliance

Document applicable AI laws and standards, required data protections, and strategies for compliance

Keywords: Regulatory, EU, NY, NIST, ISO, GDPR
Application: Mandatory
Frequency: Every 6 months
Type: Preventative
Crosswalks
EU AI Act
Article 16: Obligations of Providers of High-Risk AI Systems
Article 18: Documentation Keeping
Article 21: Cooperation with Competent Authorities
Article 22: Authorised Representatives of Providers of High-Risk AI Systems
Article 25: Responsibilities Along the AI Value Chain
Article 26: Obligations of Deployers of High-Risk AI Systems
Article 43: Conformity Assessment
Article 44: Certificates
Article 47: EU Declaration of Conformity
Article 48: CE Marking
Article 49: Registration

ISO/IEC 42001
A.2.3: Alignment with other organizational policies
A.2.4: Review of the AI policy
A.8.5: Information for interested parties

NIST AI RMF
GOVERN 1.1: Legal and regulatory compliance
GOVERN 1.7: AI system decommissioning
MAP 1.1: Context understanding
MAP 4.1: Legal risk mapping

Control activities

Identifying relevant regulations, such as data protection laws (e.g., GDPR, CCPA), sector-specific requirements, and emerging AI standards (e.g., the EU AI Act).

Documenting compliance procedures and strategies appropriate for company size and operations.

Reviewing the repository every 6 months and whenever additional requirements may be triggered, for example when regulations change or business operations expand into new jurisdictions.

Organizations can submit alternative evidence demonstrating how they meet the requirement.
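Teams that keep this documentation in version control can also maintain a lightweight, machine-readable register alongside it, which makes the 6-month review cadence easy to check. The Python sketch below is illustrative only; the field names, structure, and review logic are assumptions, not something AIUC-1 prescribes.

# Illustrative sketch: a minimal machine-readable regulatory register with a
# 6-month review check. Field names and structure are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=183)  # roughly every 6 months

@dataclass
class RegisterEntry:
    regulation: str           # e.g. "GDPR", "EU AI Act"
    jurisdiction: str         # e.g. "EU", "US-NY"
    obligations: list[str]    # required data protections, documentation duties, ...
    compliance_strategy: str  # how the organization meets the obligations
    last_reviewed: date

def overdue_entries(register, today=None):
    """Return entries whose last review is older than the 6-month cadence."""
    today = today or date.today()
    return [e for e in register if today - e.last_reviewed > REVIEW_INTERVAL]

register = [
    RegisterEntry("GDPR", "EU", ["lawful basis", "data subject rights"],
                  "DPAs with processors; DPIAs for high-risk processing", date(2025, 1, 15)),
    RegisterEntry("EU AI Act", "EU", ["technical documentation", "conformity assessment"],
                  "Maintain technical documentation; plan conformity assessment", date(2024, 11, 1)),
]
for entry in overdue_entries(register):
    print(f"Review due: {entry.regulation} (last reviewed {entry.last_reviewed})")

A review triggered by a regulatory change or expansion into a new jurisdiction would simply update the affected entries and their last_reviewed dates.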

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."
Phil Venables, former CISO of Google Cloud

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."
Dr. Christina Liaghati, MITRE ATLAS lead

"Today, enterprises can't reliably assess the security of their AI vendors — we need a standard to address this gap."
Hyrum Anderson, Senior Director, Security & AI at Cisco

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
Prof. Sanmi Koyejo, Lead for Stanford Trustworthy AI Research

"AIUC-1 standardizes how AI is adopted. That's powerful."
John Bautista, Partner at Orrick and creator of the YC SAFE
"An AIUC-1 certificate enables me to sign contracts much faster — it's a clear signal I can trust."
Lena Smart, Head of Trust for SecurityPal and former CISO of MongoDB
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.