AIUC-1
E017

Document system transparency policy

Establish a system transparency policy and maintain a repository of model cards, datasheets, and interpretability reports for major systems

Keywords: Transparency, System Cards
Application: Optional
Frequency: Every 12 months
Type: Preventative
Crosswalks
AML-M0023: AI Bill of Materials
AML-M0025: Maintain AI Dataset Provenance
Article 11: Technical Documentation
A.4.2: Resource documentation
A.4.3: Data resources
A.4.4: Tooling resources
A.4.5: System and computing resources
A.6.1.3: Processes for responsible AI system design and development
A.6.2.3: Documentation of AI system design and development
GOVERN 1.2: Trustworthy AI policies
GOVERN 1.6: AI system inventory
MAP 1.6: System requirements
MEASURE 2.8: Transparency and accountability
MEASURE 2.9: Model explanation
MEASURE 4.3: Performance tracking

Control activities

Establishing a transparency policy defining documentation requirements for major AI systems, such as model capabilities, limitations, and intended use cases.
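A policy of this kind usually enumerates the fields each system record must carry. Below is a minimal sketch of such a record in Python; the field names are illustrative assumptions, not a schema prescribed by AIUC-1.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative model card record; field names are assumptions, not an AIUC-1 schema."""
    system_name: str
    owner: str
    capabilities: list[str]            # what the system is designed to do
    limitations: list[str]             # known failure modes and constraints
    intended_use_cases: list[str]      # approved applications
    out_of_scope_uses: list[str] = field(default_factory=list)

card = ModelCard(
    system_name="support-assistant",
    owner="ml-platform",
    capabilities=["answers billing questions from a curated knowledge base"],
    limitations=["no access to live account data", "English-only evaluation coverage"],
    intended_use_cases=["tier-1 customer support triage"],
)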

Maintaining a centralized repository of system documentation, such as model cards, datasheets, and interpretability reports, with appropriate access controls for internal stakeholders.
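One way to realize such a repository is a registry keyed by system name that checks the caller's role before returning a record. The sketch below is a minimal in-memory version that stores records like the hypothetical ModelCard above; the role names and PermissionError behavior are assumptions, and a real deployment would delegate to existing identity and access management.

class DocumentationRepository:
    """Illustrative in-memory registry with a simple role allow-list (assumed roles)."""
    READ_ROLES = {"engineering", "security", "compliance"}

    def __init__(self) -> None:
        self._cards = {}  # system_name -> documentation record (e.g. the ModelCard above)

    def register(self, card) -> None:
        # Store or overwrite the documentation record for a system.
        self._cards[card.system_name] = card

    def get(self, system_name: str, caller_role: str):
        # Enforce access control before releasing internal documentation.
        if caller_role not in self.READ_ROLES:
            raise PermissionError(f"role '{caller_role}' may not read system documentation")
        return self._cards[system_name]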

Implementing updates to documentation when systems are modified or when new information about model performance or risks becomes available.
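Updates can be kept auditable by appending a dated revision entry each time a record changes, rather than editing documentation in place. The following sketch shows one possible revision log; the names and fields are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    """One documentation update, recorded when a system changes or new risk data emerges."""
    system_name: str
    summary: str        # what changed in the documentation and why
    author: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

revision_log: list[Revision] = []

def record_revision(system_name: str, summary: str, author: str) -> None:
    """Append an entry so the documentation trail tracks every system modification."""
    revision_log.append(Revision(system_name, summary, author))

record_revision("support-assistant",
                "documented new limitation: degraded accuracy on non-English inputs",
                "ml-platform")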

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

Google Cloud
Phil Venables
Former CISO of Google Cloud

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
Dr. Christina Liaghati
MITRE ATLAS lead

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

Cisco
Hyrum Anderson
Senior Director, Security & AI

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Prof. Sanmi Koyejo
Lead for Stanford Trustworthy AI Research

"AIUC-1 standardizes how AI is adopted. That's powerful."

Orrick
John Bautista
Partner at Orrick and creator of the YC SAFE

"An AIUC-1 certificate enables me to sign contracts must faster— it's a clear signal I can trust."

SecurityPal
Lena Smart
Head of Trust for SecurityPal and former CISO of MongoDB
© 2025 Artificial Intelligence Underwriting Company. All rights reserved.