Implement safeguards or technical controls to prevent AI systems from leaking company intellectual property or confidential information
Documenting foundation model provider safeguards that may serve as the primary layer of IP protection. For example, reviewing contractual data handling terms, assessing model retention and fine-tuning behaviors, verifying confidentiality commitments, and identifying any limitations or gaps against organizational IP protection criteria.
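Where such assessments are performed, recording them as structured data can make gaps auditable. The sketch below is a minimal illustration; the class and field names (ProviderIPAssessment, retains_prompts, trains_on_customer_data) are assumptions for illustration, not terms defined by the standard.

```python
from dataclasses import dataclass, field

@dataclass
class ProviderIPAssessment:
    """Record of a foundation model provider's IP safeguards,
    evaluated against internal protection criteria (illustrative)."""
    provider: str
    contractual_data_handling_reviewed: bool
    retains_prompts: bool            # does the provider store inputs?
    trains_on_customer_data: bool    # are inputs used for fine-tuning?
    confidentiality_commitment: str  # e.g. reference to a DPA clause
    gaps: list[str] = field(default_factory=list)

    def meets_criteria(self) -> bool:
        # Provider protections count as sufficient only if data is
        # neither retained nor used for training and no gaps remain open.
        return (self.contractual_data_handling_reviewed
                and not self.retains_prompts
                and not self.trains_on_customer_data
                and not self.gaps)
```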
Establishing supplementary data access controls where provider protections are insufficient. For example, limiting AI exposure to proprietary information, implementing role-based access for confidential data, restricting training on internal documents, and applying differentiated controls for open-source versus proprietary assets.
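One simple way to enforce differentiated controls is a gate check that compares a document's classification against the requesting user's role before the content reaches a model. The sketch below is illustrative only; the roles and classification tiers are hypothetical.

```python
from enum import Enum

class Classification(Enum):
    OPEN_SOURCE = 1   # may be shared with any model
    INTERNAL = 2      # proprietary, lower sensitivity
    CONFIDENTIAL = 3  # requires an elevated role

# Hypothetical mapping of roles to the highest classification
# each role may expose to an AI system.
ROLE_CEILING = {
    "engineer": Classification.INTERNAL,
    "legal": Classification.CONFIDENTIAL,
    "contractor": Classification.OPEN_SOURCE,
}

def may_send_to_ai(role: str, doc_class: Classification) -> bool:
    """Gate check run before a document is included in a prompt."""
    ceiling = ROLE_CEILING.get(role, Classification.OPEN_SOURCE)
    return doc_class.value <= ceiling.value
```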
Implementing output monitoring procedures with automated review processes for high-risk scenarios. For example, scanning responses for proprietary code or business information, and flagging disclosures of internal data for review.
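An automated review process of this kind often starts with pattern scanning over model responses. The sketch below shows one minimal approach; the patterns are placeholders and would need tuning to an organization's actual secret formats and document markings.

```python
import re

# Illustrative patterns only; real deployments would tailor these to
# the organization's naming conventions and secret formats.
LEAK_PATTERNS = {
    "api_key": re.compile(r"\b(sk|pk)_(live|test)_[A-Za-z0-9]{16,}\b"),
    "confidential_marking": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b",
                                       re.IGNORECASE),
}

def scan_response(text: str) -> list[str]:
    """Return the names of any leak patterns present in a model response."""
    return [name for name, pattern in LEAK_PATTERNS.items()
            if pattern.search(text)]

def release(text: str) -> str:
    """Hold flagged responses for review instead of returning them."""
    findings = scan_response(text)
    if findings:
        raise PermissionError(f"Response held for review: {findings}")
    return text
```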
Maintaining internal IP incident response and escalation procedures as part of the AI failure plan for data breaches. For example, documenting incidents and containment actions, and specifying clear escalation timelines and responsible parties based on severity.
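Escalation timelines and responsible parties can be encoded directly in response tooling so they are applied consistently. The sketch below is illustrative; the severity tiers, owners, and deadlines are hypothetical examples, not values prescribed by the standard.

```python
from datetime import timedelta

# Hypothetical escalation matrix: severity -> (responsible party, deadline).
ESCALATION = {
    "low":      ("ai-platform-team",   timedelta(days=3)),
    "medium":   ("security-oncall",    timedelta(hours=24)),
    "high":     ("ciso-office",        timedelta(hours=4)),
    "critical": ("incident-commander", timedelta(hours=1)),
}

def log_ip_incident(severity: str, summary: str, containment: str) -> dict:
    """Record an IP-leak incident, its containment action, and the
    escalation path required by the response plan."""
    owner, deadline = ESCALATION[severity]
    return {
        "severity": severity,
        "summary": summary,
        "containment": containment,
        "escalate_to": owner,
        "respond_within": str(deadline),
    }
```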
Organizations can submit alternative evidence demonstrating how they meet the requirement.
"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."
"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."
"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."
"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
"AIUC-1 standardizes how AI is adopted. That's powerful."
"An AIUC-1 certificate enables me to sign contracts must faster— it's a clear signal I can trust."